Operating System Debugging

Debugging is the activity of finding and fixing errors, or bugs, in a system. Debugging seeks to find and fix errors in both hardware and software. Performance problems are considered bugs, so debugging can also include performance tuning, which seeks to improve performance by removing bottlenecks in the processing taking place within a system. A discussion of hardware debugging is outside the scope of this text. In this section, we explore debugging kernel and process errors and performance problems.

Failure Analysis

If a process fails, most operating systems write the error information to a log file to alert system operators or users that the problem occurred. The operating system can also take a core dump, a capture of the memory of the process (memory was referred to as the “core” in the early days of computing). This core image is stored in a file for later analysis. Running programs and core dumps can be probed by a debugger, a tool designed to allow a programmer to explore the code and memory of a process.
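
As a small, concrete illustration (not part of the original text), the C program below dereferences a null pointer and crashes with a segmentation fault. On a system configured to allow core dumps (for example, after ulimit -c unlimited on Linux), the operating system writes a core file that a debugger such as gdb can load together with the executable to examine the dead process's stack and memory.

    #include <stdio.h>

    /* A deliberately buggy function: dereferencing a null pointer
     * raises SIGSEGV, and the operating system (if configured to do so)
     * writes the process's memory image to a core file. */
    static int read_value(int *p)
    {
        return *p;                      /* crash: p is NULL */
    }

    int main(void)
    {
        int *p = NULL;
        printf("value = %d\n", read_value(p));
        return 0;
    }

Loading the resulting core file into gdb along with the executable and asking for a backtrace points at the faulting line, which is exactly the kind of post-mortem failure analysis described above.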

Debugging user-level process code is a challenge. Operating system kernel debugging is even more complex because of the size and complexity of the kernel, its control of the hardware, and the lack of user-level debugging tools. A kernel failure is called a crash. As with a process failure, error information is saved to a log file, and the memory state is saved to a crash dump.

Operating system debugging frequently uses different tools and techniques than process debugging due to the very different nature of these two tasks. Consider that a kernel failure in the file-system code would make it risky for the kernel to try to save its state to a file on the file system before rebooting. A common technique is instead to save the kernel’s memory state to a section of disk set aside for this purpose that contains no file system. If the kernel detects an unrecoverable error, it writes the entire contents of memory, or at least the kernel-owned parts of the system memory, to that disk area. When the system reboots, a process runs to gather the data from that area and write it to a crash dump file within a file system for analysis.
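
The following C fragment is a highly simplified sketch of that idea; the function and device names are hypothetical, not an actual kernel interface. On an unrecoverable error, the panic path copies kernel memory, sector by sector, to a reserved raw disk region, so that no file-system code has to run; a boot-time helper would later copy that region into a crash dump file.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical interfaces; a real kernel exposes its own
     * equivalents for raw disk writes and memory ranges. */
    extern int    raw_disk_write(uint64_t sector, const void *buf, size_t len);
    extern void  *kernel_region_start(void);
    extern size_t kernel_region_size(void);

    #define DUMP_START_SECTOR 4096ULL   /* reserved area, no file system */
    #define SECTOR_SIZE       512U

    /* Called from the panic path: copy kernel-owned memory to the
     * reserved dump area without touching the file system. */
    void dump_kernel_memory(void)
    {
        const uint8_t *mem = kernel_region_start();
        size_t remaining   = kernel_region_size();
        uint64_t sector    = DUMP_START_SECTOR;

        while (remaining > 0) {
            size_t chunk = remaining < SECTOR_SIZE ? remaining : SECTOR_SIZE;
            if (raw_disk_write(sector, mem, chunk) != 0)
                break;                  /* best effort: stop on I/O error */
            mem       += chunk;
            remaining -= chunk;
            sector    += 1;
        }
    }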

 

Performance Tuning

To identify bottlenecks, we must be able to monitor system performance. Code must be added to compute and display measures of system behaviour. In a number of systems, the operating system does this task by producing trace listings of system behaviour. All interesting events are logged with their time and important parameters and are written to a file. Later, an analysis program can process the log file to determine system performance and identify bottlenecks and inefficiencies. These same traces can be run as input for a simulation of a suggested improved system. Traces can also help people find errors in operating-system behaviour.
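
As a rough illustration of such tracing (the record layout and event identifiers here are invented for the sketch), a trace facility can append fixed-format records, each carrying a timestamp, an event identifier, and a few parameters, to a log file that an offline analysis program later replays.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* One fixed-size trace record: timestamp, event type, parameters. */
    struct trace_event {
        uint64_t timestamp_ns;   /* when the event occurred */
        uint32_t event_id;       /* e.g., context switch, disk request */
        uint32_t pid;            /* process involved */
        uint64_t arg;            /* event-specific parameter */
    };

    /* Append one record to the trace file in binary form. */
    static void trace_log(FILE *log, uint32_t event_id, uint32_t pid, uint64_t arg)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);

        struct trace_event ev = {
            .timestamp_ns = (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec,
            .event_id     = event_id,
            .pid          = pid,
            .arg          = arg,
        };
        fwrite(&ev, sizeof ev, 1, log);
    }

    int main(void)
    {
        FILE *log = fopen("trace.bin", "wb");
        if (!log)
            return 1;
        trace_log(log, 1 /* hypothetical: context switch */, 42, 0);
        trace_log(log, 2 /* hypothetical: disk request   */, 42, 4096);
        fclose(log);
        return 0;
    }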

Another approach to performance tuning is to include with the system interactive tools that allow users and administrators to question the state of various components of the system and look for bottlenecks. The UNIX command top displays resources used on the system, as well as a sorted list of the “top” resource-using processes. Other tools display the state of disk I/O, memory allocation, and network traffic. The authors of these single-purpose tools try to guess what a user would want to see while analyzing a system and to provide that information.

Making running operating systems easier to understand, debug, and tune is an active area of operating-system research and implementation. The cycle of enabling tracing as system problems occur and analyzing the traces later is being broken by a new generation of kernel-enabled performance analysis tools. Further, these tools are not single-purpose or limited to sections of code that were written to emit debugging data. The Solaris 10 DTrace dynamic tracing facility is a leading example of such a tool.

 

DTrace

DTrace is a facility that dynamically adds probes to a running system, both in user processes and in the kernel. These probes can be queried via the D programming language to determine an astonishing amount about the kernel, the system state, and process activities.

Debugging the interactions between user-level and kernel code is nearly impossible without a toolset that understands both sets of code and can instrument the interactions. For that toolset to be truly useful, it must be able to debug any area of a system, including areas that were not written with debugging in mind, and do so without affecting system reliability. The tool must also have a minimal performance impact: ideally, it should have no impact when not in use and a proportional impact during use. The DTrace tool meets these requirements and provides a dynamic, safe, low-impact debugging environment.

Until the DTrace framework and tools became available with Solaris 10, kernel debugging was usually shrouded in mystery and accomplished via happenstance and archaic code and tools. For example, CPUs have a breakpoint feature that will halt execution and allow a debugger to examine the state of the system. Then execution can continue until the next breakpoint or termination. This method cannot be used in a multiuser operating-system kernel without negatively affecting all of the users on the system. Profiling, which periodically samples the instruction pointer to determine which code is being executed, can show statistical trends but not individual activities.
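
To make the profiling idea concrete, here is a toy, user-level sketch in C (the regions and timer interval are invented for illustration, and this is not how a kernel profiler is built). A periodic SIGPROF timer interrupts the program, and the handler records what was running at each sample; a real profiler would read the interrupted instruction pointer from the signal context rather than a flag set by the program, but the statistical principle is the same.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>

    /* Which "region" of code is currently executing; stands in for the
     * instruction pointer that a real profiler would sample. */
    static volatile sig_atomic_t current_region;
    static volatile long samples[2];

    static void on_profile_tick(int sig)
    {
        (void)sig;
        samples[current_region]++;      /* one statistical sample */
    }

    static void busy_loop(long iterations)
    {
        volatile long x = 0;
        for (long i = 0; i < iterations; i++)
            x += i;
    }

    int main(void)
    {
        signal(SIGPROF, on_profile_tick);

        /* Fire SIGPROF every 10 ms of CPU time consumed. */
        struct itimerval it = { {0, 10000}, {0, 10000} };
        setitimer(ITIMER_PROF, &it, NULL);

        current_region = 0; busy_loop(200000000L);   /* region 0: heavy   */
        current_region = 1; busy_loop(50000000L);    /* region 1: lighter */

        printf("samples: region0=%ld region1=%ld\n", samples[0], samples[1]);
        return 0;
    }

Over many samples, region 0 accumulates roughly four times as many hits as region 1, showing the statistical trend while revealing nothing about any individual activity.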

DTrace runs on production systems, that is, systems running important or critical applications, and causes no harm to the system. It slows activities while enabled, but after execution it resets the system to its pre-debugging state. It is also a broad and deep tool. It can broadly debug everything happening in the system (both at the user and kernel levels and between the user and kernel layers), and it can delve deeply into code, down to individual CPU instructions or kernel subroutine activities.

DTrace is composed of a compiler, a framework, providers of probes written within that framework, and consumers of those probes. DTrace providers create probes. Kernel structures exist to keep track of all probes that the providers have created. The probes are stored in a hash table data structure that is hashed by name and indexed according to unique probe identifiers. When a probe is enabled, a bit of code in the area to be probed is rewritten to call dtrace_probe(probe identifier) and then continue with the code’s original operation.
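
The C fragment below sketches that bookkeeping in a very simplified form; the structure names and hash function are invented for illustration, and only the dtrace_probe entry point is named in the text. Probes are kept in a table hashed by name and also reachable by their unique identifier, and an enabled probe site simply calls into the framework with its identifier before continuing with its original work.

    #include <stdint.h>
    #include <stdio.h>

    #define PROBE_BUCKETS 64
    #define MAX_PROBES    1024

    /* Simplified probe record: a unique id and a provider-supplied name. */
    struct probe {
        uint32_t      id;
        const char   *name;
        struct probe *next_by_name;    /* hash-chain link */
    };

    static struct probe *name_hash[PROBE_BUCKETS]; /* hashed by probe name */
    static struct probe *by_id[MAX_PROBES];        /* indexed by probe id  */

    static unsigned hash_name(const char *s)
    {
        unsigned h = 0;
        while (*s)
            h = h * 31 + (unsigned char)*s++;
        return h % PROBE_BUCKETS;
    }

    /* A provider registers a probe: insert into both lookup structures. */
    static void probe_register(struct probe *p)
    {
        unsigned b = hash_name(p->name);
        p->next_by_name = name_hash[b];
        name_hash[b]    = p;
        by_id[p->id]    = p;
    }

    /* Called from instrumented code when an enabled probe site fires. */
    void dtrace_probe(uint32_t id)
    {
        struct probe *p = by_id[id];
        if (p)
            printf("probe %u (%s) fired\n", p->id, p->name);
    }

    int main(void)
    {
        static struct probe p = { .id = 7, .name = "syscall-entry" };
        probe_register(&p);
        dtrace_probe(7);    /* what the rewritten probe site would call */
        return 0;
    }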
