Monica ([personal profile] cellio) wrote 2010-11-14 02:04 pm

an OS question

While waiting for assorted software updates to install today I found myself wondering... Mac OS and Windows usually need to reboot your machine to install updates. Yet I have, several times, seen Unix machines that I believe were being maintained with uptimes of more than a year. What's the deal? Is Unix just better able to support hot-fixes, or are Unix updates that rare? (Or am I wrong about the maintenance of those machines?) And if it's that Unix is better at updating, why does Mac OS, which is Unix-based, need to reboot so often? Mind, it's definitely better in this regard than when I was running Windows; this is a puzzle, not a rant.

Edit: Thanks for the comments thus far. I now understand more about how Unix is put together, and why Windows is different. Still not sure about Mac OS but comments suggest it could be UI-related (that is, the GUI might be more tied into the OS than is the case on Unix).

[personal profile] richardf8 2010-11-14 07:37 pm (UTC)(link)
I think a lot of it has to do with upkeep of the User Interface. Unix machines with long uptimes tend to be servers, and the GUI, if it is running at all, sees little interaction. Stopping a service, updating its executable, and restarting it is trivial. However, even on Unix boxes, once GUI packages enter the picture, things get messy.

The Pretty - it costs.

[identity profile] asim.livejournal.com 2010-11-14 08:27 pm (UTC)(link)
Servers like that aren't (usually) updating their kernels. The Unix OS design ties very little to the kernel in the ways Windows does, so unless you're running one of a handful of apps, you rarely need to reboot to install. There's no hot-fix capacity in Linux; it's just that the kernel does so much less than the equivalent in Windows does. And even then, there's a new project to allow kernel updates w/o a reboot, but that's in early days yet.

Re: UI -- unless you've replaced the kernel-level video driver, all you really have to do is restart X. Part of why some Linux distributions have you reboot is that they don't include a "restart X" capability, but many Linux sysadmins know how to trigger it anyway.

Without having worked on Macs, I suspect the above, coupled with the tight coupling between Darwin (the Mac OS X kernel) and the UI, is why Macs reboot so often. But yes, it's mostly about loosely-coupled pieces, and what that can buy you.

[identity profile] dragonazure.livejournal.com 2010-11-14 10:48 pm (UTC)(link)
It's been a long time since I did any operating systems work, but from what I remember, it largely depends on the type of update. In UNIX, I haven't seen/been involved in a major upgrade in ages, but usually only services and device drivers get updated, and that doesn't require restarting the entire system--just "refreshing" the services. A serious upgrade to the OS kernel will generally require restarting the system. If you have to reconfigure your system settings, that also usually requires a restart, but I don't think that is what you are asking....

With Windows, I suspect that certain things are still very closely coupled to the operating system kernel. If I were a little more paranoid/cynical, I might think it is also a sneaky way to mask memory leaks and garbage collection problems.... 8^) To be honest, I do get a lot of Windows "hot-fixes" at work, but I suspect they are simply patches and upgrades to non-core components of the system (or what passes for TSRs these days).

[personal profile] dsrtao 2010-11-14 11:39 pm (UTC)(link)
Hi, I run Linux and NetBSD servers for a financial software-as-a-service company that you have not heard of (unless you are a bank, brokerage or RIA). I also manage the desktop infrastructure, which includes Windows and MacOS boxen.

We have updates to various software packages all the time. Mostly we test these, batch them up, and install during maintenance windows. It is very rare to require a reboot, because pretty much the only thing that requires a reboot to replace is the kernel itself. Now, kernels get updates -- but a non-public, no-shell-users machine that is hidden behind a firewall may see a required update rather less than once a year.
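The reason most package updates don't need a reboot is the standard Unix update idiom: write the new file under a temporary name, then atomically rename it over the old path. A process that already has the old file open keeps its old inode undisturbed. A minimal Python sketch (all filenames made up for illustration):

```python
import os
import tempfile

d = tempfile.mkdtemp()
binary = os.path.join(d, "daemon")

with open(binary, "w") as f:
    f.write("version 1")

old = open(binary)   # a running daemon holds the old inode open

# The updater writes the new version to a temp name, then atomically
# renames it over the old path.
with open(binary + ".new", "w") as f:
    f.write("version 2")
os.rename(binary + ".new", binary)

running = old.read()        # the daemon still sees its original code
old.close()
with open(binary) as f:     # new opens get the new inode
    on_disk = f.read()
print(running, "|", on_disk)
```

The daemon then picks up the new version the next time it is restarted, which is why "stop, upgrade, start" during a maintenance window suffices.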

That said, our boxes tend to be up for less than a year at a time simply because not all of them have redundant power supplies, and so our annual re-cabling and box-moving day causes them to be shut down. We have a utility box with dual power supplies which has an uptime nearing four years... it may be shut down only when it has a hardware failure or we need beefier hardware.


[personal profile] sethg 2010-11-15 01:19 am (UTC)(link)
I don’t know about Mac OS, but aside from what others have said above, there is a specific difference between Windows and Unix-family filesystems that is relevant here.

In Unix filesystems, there is a level of indirection between a file’s name and its inode, which contains things like the ownership, permissions, and pointers to the blocks storing the actual data on the disk. Because of this indirection, one process can open a file, another process can delete it, and the storage the file uses will not actually be freed up until the first process closes it.
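The name/inode indirection described here is easy to demonstrate: delete a file while another handle still has it open, and the data remains readable until that handle closes. A small Python sketch:

```python
import os
import tempfile

# Create a file, then open it for reading.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("old contents\n")

reader = open(path)   # one process holds the file open...
os.unlink(path)       # ...while another deletes it: only the name goes away

name_gone = not os.path.exists(path)   # the directory entry is gone
data = reader.read()                   # but the inode's data is still readable
reader.close()                         # the storage is freed only now
print(name_gone, repr(data))
```

This is exactly what lets a Unix updater replace a library on disk while running programs continue executing the old, now-nameless copy.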

In Windows, back in the day, such was not the case: if one process had a file open another could not delete it. (I am using the past tense here because I just tried doing that with Windows 7 and I succeeded. Maybe NTFS has finally caught up with the 1970s, or maybe the user interface just removes the icon from the desktop without actually deleting anything.) So if an upgrade had to modify some file that the OS needed to keep open, then the only way to accomplish the upgrade was to shut down the computer and swap in the new versions before the OS reopened them.

[identity profile] rjmccall.livejournal.com 2010-11-15 09:31 am (UTC)(link)
Almost all software updates require changes to code. That code might be in the kernel, in a dynamic library, or in a program binary. The update either has to hot-patch currently running code — more about this later — or it has to shut down everything which has that code loaded. That's impossible for the kernel, of course, and relatively painless for programs unless they're system daemons. Thus the major issue is dynamic libraries. Command-line programs tend to have relatively few dylib dependencies (other than the C/C++ standard libraries), so a box which doesn't run a GUI (or can drop out of its GUI) can usually patch most of the system without needing to technically reboot. A GUI program, on the other hand, tends to have dozens of different dynamic libraries loaded at once — many more on Mac OS, which takes this to extremes — and so it's much easier to just reboot the GUI, which usually means the system.
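On Linux you can see how many dynamic libraries a process has loaded by reading its `/proc/<pid>/maps` file; each mapped `.so` appears there with its path. A minimal parser over a made-up sample listing (the paths and addresses are invented for illustration):

```python
# A fabricated excerpt in /proc/<pid>/maps format.
sample_maps = """\
00400000-0040b000 r-xp 00000000 08:01 131 /bin/cat
7f1e00000000-7f1e0001c000 r-xp 00000000 08:01 200 /lib/x86_64-linux-gnu/ld-2.11.so
7f1e00200000-7f1e0038b000 r-xp 00000000 08:01 201 /lib/x86_64-linux-gnu/libc-2.11.so
7f1e00200000-7f1e0038b000 r--p 0018b000 08:01 201 /lib/x86_64-linux-gnu/libc-2.11.so
"""

def shared_libs(maps_text):
    """Return the distinct shared-library paths mapped by a process."""
    libs = set()
    for line in maps_text.splitlines():
        fields = line.split()
        # Field 6 (index 5), when present, is the backing file's path.
        if len(fields) >= 6 and ".so" in fields[5]:
            libs.add(fields[5])
    return sorted(libs)

print(shared_libs(sample_maps))
```

Run against a real GUI process instead of the sample text, the count for a typical desktop application easily reaches dozens, which is why patching its libraries in place is so much harder than patching a command-line tool's.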

The alternative, hot-patching, is pretty risky and takes a lot of QA resources, which is why Microsoft has started using it heavily but everybody else is a bit gun-shy. There's a fun attribute (__hotpatch, I think) the MS compilers let you pepper onto functions; it requests that the function be laid out in memory in a particular way which permits the first instruction to be atomically overwritten with a near jump to a scratch buffer containing a far jump to some replacement code. If there's a security problem in a single function, that makes it pretty easy to fix a DLL in memory, and then it gets fixed properly on reboot.