Entry tags: an OS question
While waiting for assorted software updates to install today I found myself wondering... Mac OS and Windows usually need to reboot your machine to install updates. Yet I have, several times, seen Unix machines that I believe were being maintained with uptimes of more than a year. What's the deal? Is Unix just better able to support hot-fixes, or are Unix updates that rare? (Or am I wrong about the maintenance of those machines?) And if it's that Unix is better at updating, why does Mac OS, which is Unix-based, need to reboot so often? Mind, it's definitely better in this regard than when I was running Windows; this is a puzzle, not a rant.
Edit: Thanks for the comments thus far. I now understand more about how Unix is put together, and why Windows is different. Still not sure about Mac OS but comments suggest it could be UI-related (that is, the GUI might be more tied into the OS than is the case on Unix).
no subject
The Pretty - it costs.
no subject
no subject
Re: UI -- all you really have to do for these points, unless you've replaced the kernel-level video driver, is restart X. Part of why some Linux distributions have you reboot is that they don't include a "restart X" capability, but many Linux sysadmins know about it and how to trigger it.
Without having worked on Macs, I suspect the above, coupled with how tightly Darwin (the Unix core of Mac OS X) is tied to the UI, is why Macs reboot so often. But yes, it's mostly about loosely coupled pieces, and what that can buy you.
no subject
no subject
Many Linux/Unix admins don't want to reboot, hoping that the occasional (or not-so-occasional) kernel-based security hole won't affect them. I tend to consider that a bad idea these days.
That said, there's a for-pay technology called Ksplice (although IIRC some distribution -- Ubuntu? -- has a reduced-functionality version available to all registered users) that can patch a running kernel in many cases -- most, perhaps; I haven't looked closely, but I think "all" is impossible for logistical reasons. It's been around for long enough that there could well be some Linux boxes with uptime > 1 year that use it to keep the kernel up to date on patches.
no subject
With Windows, I suspect that certain things are still very closely coupled to the operating system kernel. If I were a little more paranoid/cynical, I might think it is also a sneaky way to mask memory leaks and garbage collection problems.... 8^) To be honest, I do get a lot of Windows "hot-fixes" at work, but I suspect they are simply patches and upgrades to non-core components of the system (or what passes for TSRs these days).
no subject
no subject
We have updates to various software packages all the time. Mostly we test these, batch them up, and install during maintenance windows. It is very rare to require a reboot, because pretty much the only thing that requires a reboot to replace is the kernel itself. Now, kernels get updates -- but a non-public, no-shell-users machine that is hidden behind a firewall may see a required update rather less than once a year.
That said, our boxes tend to be up for less than a year at a time simply because not all of them have redundant power supplies, and so our annual re-cabling and box-moving day causes them to be shut down. We have a utility box with dual power supplies which has an uptime nearing four years... it may be shut down only when it has a hardware failure or we need beefier hardware.
no subject
To expand slightly: it was part of the original design theory of the UNIX kernel that the kernel be responsible, in largest part, only for the management of memory, the running of processes, and the calling of device drivers. As many of the standard operating system services as possible are run outside the kernel. That's why so many of the comments here treat "patching the kernel" as something rare -- it genuinely is very rare.
In older days, replacing device drivers required compiling them into the kernel and rebooting. Some 25 or so years ago (maybe a touch more), the standard kernels were reconfigured and rewritten to allow the dynamic loading and unloading of device drivers.
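For the curious, here is a minimal sketch of what a dynamically loadable driver looks like on a modern Linux box -- a toy module with made-up names, built against the kernel headers with the usual kbuild makefile, shown only to illustrate that driver code can be added and removed while the kernel keeps running:

    /* hello_driver.c -- toy loadable kernel module (names invented) */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Toy module: load and unload without a reboot");

    static int __init hello_driver_init(void)
    {
        pr_info("hello_driver: loaded into the running kernel\n");
        return 0;               /* a nonzero return would abort the load */
    }

    static void __exit hello_driver_exit(void)
    {
        pr_info("hello_driver: unloaded; the kernel keeps running\n");
    }

    module_init(hello_driver_init);
    module_exit(hello_driver_exit);

Loading it with insmod and removing it with rmmod never takes the machine down; it's only replacing the running kernel image itself that still calls for a reboot.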
Some years ago, the Mach project (a UNIX off-shoot) tried to make the kernel as small as possible, and many of its design ideas were absorbed back into the modern UNIX/Linux family of systems.
Windows is megalithic. UNIX (and its children, primarily Linux) are small and intended to be so. I could not tell you with authority why it is that the MacOS system (which is based upon a Berkeley UNIX system) requires reboots. I suspect it is either that the kernel is often patched -OR- that there is something running alongside the kernel that requires a reboot, and is very proprietary.
no subject
In Unix filesystems, there is a level of indirection between a file’s name and its inode, which contains things like the ownership, permissions, and pointers to the blocks storing the actual data on the disk. Because of this indirection, one process can open a file, another process can delete it, and the storage the file uses will not actually be freed up until the first process closes it.
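A small demonstration of that indirection, assuming a Unix-ish box with a C compiler (the file name here is invented):

    /* open a file, unlink it, and keep using it: the inode and data
       blocks stay allocated until the last open descriptor is closed */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        const char *msg = "still here after unlink\n";
        char buf[64] = {0};

        int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, msg, strlen(msg));

        /* remove the directory entry; the storage is not freed yet,
           because this process still holds the file open */
        unlink("demo.txt");

        lseek(fd, 0, SEEK_SET);
        read(fd, buf, sizeof buf - 1);
        printf("read back: %s", buf);   /* the data is still readable */

        close(fd);                      /* now the blocks are actually freed */
        return 0;
    }

Package managers rely on exactly this: they can unlink and replace a library or binary that running processes still hold open, and those processes keep using the old copy until they exit or restart.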
In Windows, back in the day, such was not the case: if one process had a file open, another could not delete it. (I am using the past tense here because I just tried doing that with Windows 7 and succeeded. Maybe NTFS has finally caught up with the 1970s, or maybe the user interface just removes the icon from the desktop without actually deleting anything.) So if an upgrade had to modify files the OS needed to keep open, the only way to accomplish it was to shut down the computer and swap in the new versions before the OS reopened them.
no subject
I'm currently on XP at work, though I understand that Windows 7 will be rolling out next year. I should be good until October, when the lease on my current machine expires, assuming nothing melts down that would require earlier replacement. (My goal is to be among the last to get it, not among the first. That's not specific to Windows 7; for any expensive transition, I want to be able to benefit from what others have learned, 'cause my deadlines aren't going to get pushed out just because I now have to figure out accessibility, security, and just plain usability in a new environment.)
no subject
The alternative, hot-patching, is pretty risky and takes a lot of QA resources, which is why Microsoft has started using it heavily but everybody else is a bit gun-shy. There's a fun attribute (__hotpatch, I think) the MS compilers let you pepper onto functions; it requests that the function be laid out in memory in a particular way which permits the first instruction to be atomically overwritten with a near jump to a scratch buffer containing a far jump to some replacement code. If there's a security problem in a single function, that makes it pretty easy to fix a DLL in memory, and then it gets fixed properly on reboot.
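The core trick is easy enough to sketch in user space. This is not Microsoft's mechanism and the names are invented; it's just a toy Linux/x86-64 demonstration of redirecting a function by overwriting its entry with a jump (the write here isn't atomic, and a real patcher would be much more careful about W^X and threads):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* the "buggy" routine and its replacement; noinline so calls really
       go through the patched entry point */
    __attribute__((noinline)) static int old_version(int x) { return x + 1; }
    __attribute__((noinline)) static int new_version(int x) { return x + 2; }

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        uintptr_t addr = (uintptr_t)old_version;
        uintptr_t page = addr & ~((uintptr_t)pagesize - 1);

        /* make the code pages writable; a real patcher would restore W^X */
        if (mprotect((void *)page, (size_t)pagesize * 2,
                     PROT_READ | PROT_WRITE | PROT_EXEC) != 0) {
            perror("mprotect");
            return 1;
        }

        /* E9 rel32: jmp to new_version, relative to the end of the 5-byte jmp */
        int32_t rel = (int32_t)((uintptr_t)new_version - (addr + 5));
        unsigned char jmp[5] = { 0xE9 };
        memcpy(jmp + 1, &rel, sizeof rel);
        memcpy((void *)addr, jmp, sizeof jmp);   /* the "patch" itself */

        int (*volatile fn)(int) = old_version;   /* defeat constant folding */
        printf("old_version(1) now returns %d\n", fn(1));   /* prints 3 */
        return 0;
    }

The compiler-assisted padding described above exists mainly so that this kind of redirection can be done safely on a live, multithreaded process instead of with a blunt overwrite like this one.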
no subject
It's a hassle, and it can bite you later during the "see if it reboots cleanly" stage of your next maintenance window, but it can be useful if availability really matters.
(This can mean, in some environments, that development or QA servers are hotpatched, too, so that they mirror the production servers as closely as possible.)
no subject