Leaving Dell Dimension 8300 running 24/7 ...?

Discussion in 'Dell' started by Thomas G. Marshall, Feb 12, 2005.

  1. Ok, but Steve was pointing out what he saw as a bottom line, which was that
    for whatever reason the systems left on seemed to have far fewer crashes.
    /Regardless/ of the underlying reasons, Steve makes a cogent argument when
    armed with a sampling of several computers, no?

    IOW, are you saying that there /must/ be some other explanation for Steve's
    observations? If so, what might it be?


    w_tom coughed up:
     
    Thomas G. Marshall, Feb 13, 2005
    #21

  2. Thomas G. Marshall

    User N Guest

    I was mostly generalizing. I have met some (technically clueful) people
    who own an 8300 and leave it up and running full time. But I can't recall
    the specifics of their config/environment, and I don't know how that
    would compare to yours.

    I'm not aware of any fundamental cooling problems in the Dimension
    8300 line, but that doesn't mean much. A google search of the web and
    newsgroups, and a search of the Dell forums, should turn up numerous
    complaints/discussions if there is some inherent design problem. I'm
    not sure what capabilities your box has in terms of reporting temp for
    key components, but if you are that concerned you could gather some
    readings one way or another. If your box is capable of coping with
    heavy use during the warmest periods in your home/business, it should
    be able to cope during less demanding times. I'd be surprised if active
    cooling is inappropriately curtailed in any power savings mode. But
    there again, if you are that concerned you could do measurements/tests.
     
    User N, Feb 13, 2005
    #22

  3. Thomas G. Marshall

    Paul Knudsen Guest

    My wife leaves her 8300 on all the time. No problems so far. I can
    see the back of it from here and the network activity light flashes
    occasionally!

    Seriously, they go into standby mode and use very little power so
    cooling should not be a problem.
     
    Paul Knudsen, Feb 14, 2005
    #23
  4. Thomas G. Marshall

    Julian Guest

    Why leave a computer on 24/7 if it isn't a server? We all have to start
    to look for ways to curb excessive energy use and prevent global
    warming, and one of the most painless ways to do this, it seems to me,
    is to turn equipment off when you aren't using it.
     
    Julian, Feb 14, 2005
    #24
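Julian's "painless savings" point is easy to put numbers on. A rough back-of-envelope sketch (the 80 W idle draw is an assumed figure for a 2005-era desktop plus CRT, not a measured one):

```python
# Rough estimate of energy saved by switching off instead of idling 24/7.
idle_watts = 80          # assumed idle draw of desktop + CRT (illustrative guess)
hours_off_per_day = 14   # e.g. machine on 10 h/day instead of 24
kwh_saved_per_year = idle_watts * hours_off_per_day * 365 / 1000
print(round(kwh_saved_per_year), "kWh/year")   # about 409 kWh/year
```

At typical residential electricity rates that is a noticeable line item on the annual bill, which is the substance of Julian's argument.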
  5. Thomas G. Marshall

    Julian Guest

    Same here, also for environmental reasons. I have a 7 year old Dell
    Dimension XP 400 which was in daily use by me until a few months ago,
    when it became my wife's computer. It has been used on average 10 hours
    a day 5 days a week for all that time, always switched off at night, and
    has never suffered a single hardware failure.
     
    Julian, Feb 14, 2005
    #25
  6. Thomas G. Marshall

    Julian Guest

    They still use more power than when actually shut down. With power
    saving enabled the HD and monitor power off when the system isn't being
    used, so the wear and tear aspect of starting and stopping is still present.
     
    Julian, Feb 14, 2005
    #26
  7. Thomas G. Marshall

    Sparky Guest

    This "leave the computer ON" discussion goes back much farther than Bill
    Gates dreaming of being a billionaire. I worked for IBM 1965-70, and it
    was common wisdom then that the most likely time for a [mainframe]
    system failure was when being powered up. Many shops of that era
    that didn't run 24/7 left their S/360's on all weekend to avoid a
    power-on situation Monday morning.
     
    Sparky, Feb 14, 2005
    #27
  8. Thomas G. Marshall

    Ben Myers Guest

    And we powered down our GE-225 computer every night back in the '60s, unless I
    was working around the clock doing some programming. Of course, it DID consume
    a bit of electricity. Less powerful than an original IBM XT, but with a
    processing unit the size of 4 large refrigerators, a hard disk the size of a
    pizza oven, and a bank of tape drives, all on maybe 1000 square feet of raised
    flooring. Of course, there were no power saving options back then... Ben Myers

     
    Ben Myers, Feb 14, 2005
    #28
  9. Julian coughed up:

    Because otherwise it's too freaking cold in my computer room.

    No, seriously, the reason for leaving it on 24/7 is that I *don't* want the
    AV scan to run while I'm actually using it. Thus it seems that the best and
    easiest way to handle that is to run the AV scan late or at least when I'm
    done with it.

    Unfortunately, it then becomes out of sight and mind, which means that I'll
    never get around to checking for its completion and turning off the system.
    And checking for that is a pain in the ass anyway.

    Which brings me back to this equation:

    Scanning every day == System always on

    I don't like the notion very much, but it is reality. At least in my puny
    universe.

    --
    "This creature is called a vampire. To kill it requires a stake
    through its heart." "I shall drive my staff deep into its rump."
    "No no, this creature is from a dimension where the heart is in the
    chest." "....Disgusting."

    Demons discussing "Angel", a good vampire from our dimension visiting
    theirs.
     
    Thomas G. Marshall, Feb 14, 2005
    #29
  10. Thomas G. Marshall

    Julian Guest

    You could write a bit of VB script to do the scan and then shut down the
    computer when it has finished.
     
    Julian, Feb 14, 2005
    #30
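Julian's suggestion (scan, then shut down when finished) can be sketched in a few lines. This is a Python sketch of the same idea, not VB script; the scanner command line is hypothetical, so check your AV vendor's documentation for the real invocation. `shutdown /s /t 60` is the standard Windows shutdown command with a 60-second grace period.

```python
import subprocess

def scan_then_shutdown(scan_cmd, shutdown_cmd=("shutdown", "/s", "/t", "60"),
                       run=subprocess.call):
    """Run the AV scan; if it exits cleanly, trigger a shutdown."""
    # scan_cmd is whatever launches your scanner from the command line
    # (hypothetical here -- check your AV vendor's documentation).
    status = run(list(scan_cmd))
    if status == 0:              # scan finished without errors
        run(list(shutdown_cmd))  # Windows: shutdown /s /t 60 (60 s grace)
    return status
```

Scheduled nightly (e.g. via the Windows Task Scheduler), this would give the daily sweep without leaving the box on 24/7.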
  11. Thomas G. Marshall

    w_tom Guest

    The computer left on 24/7 was finally shut off. Then it
    would not power up. This computer was then counted as 'failed
    due to power cycling'. But it should have remained a prime
    example of why 24/7 operation is so destructive. Once we made
    that distinction, suddenly the computers left on 24/7 were
    failing more often.

    We see this with computers, TVs, radios, and so many other
    electronics (not to be confused with high-tech, custom
    designed circuits designed to maximize other parameters - ie
    Deep Space Network). If leaving computers on 24/7 was so much
    better, then we must also leave on TVs and radios 24/7. One
    cannot have it both ways. Either Steve must also recommend
    leaving TV and radios always on, or he has misrepresented the
    data by doing subjective observations.

    Bottom line: once we confronted how these people were
    'collecting' the data, then suddenly numerous computers listed
    as 'failed due to power cycling' were transferred to the
    column that said 'failed due to 24/7 wear'.

    Provided is how one must first learn what caused the damage
    - forensic analysis often at the electronic component level.
    Provided are numbers taken from manufacturer data sheets that
    demonstrate why power cycling is not so destructive. Provided
    is how those who recommend 24/7 fail to properly collect
    their data - and then make subjective conclusions.

    And then the contrarian example. If leaving computer on
    24/7 is better, then leave your TVs and radios on also.

    As a parallel example, many will also declare that power
    cycling is destructive to incandescent lamps. They will use
    the same subjective analysis. For example, light bulb burns
    out most often when powered on. Again, they failed to first
    learn why light bulbs burn out. Hours of operation and some
    other parameters. Power cycling does not appear on a list of
    reasons for light bulb failure. And yet so many using the
    same subjective data will say otherwise. Those same reasons
    used to promote 24/7 computer operation will also declare
    power cycling destructive to light bulbs.

    In one example, the fan failed prematurely. Then when
    powered up, the fan inside that power supply did not start
    spinning. Therefore a PSU component failed. The repairman
    then blamed power supply failure on power up surge. In
    reality, the failure was due to a fan with too many hours of
    operation and a mislocated hall effect sensor. Where do you
    put this failure? Due to power up? Or due to too many hours
    of operation? Or due to manufacturing defect? Without the
    details, one might immediately cite this as an example of why
    power cycling is so destructive - because that one did not
    first learn the facts. Failure was due to a manufacturing
    defect in combination with too many hours of operation.

    Don't fall for the myths about 24/7 operation. Use the
    machine as designed. Power off, sleep it, or hibernate it
    when done. Demonstrated by fundamental theory, experience,
    contrarian logic, and real world examples is how so many fall
    for 24/7 myths.

    BTW, I have over there an IBM AT-PC, vintage 1984. The
    motherboard was upgraded, but most of it, including the power
    supply, is original. It gets powered up whenever the network
    requires it to function. No failures. None. Another machine is
    a 1994 Compaq DX2-66 MHz that is used for Internet
    access. Again, never a failure. All are power cycled daily
    if not more often - without any failures. BTW this Compaq is
    also used during every thunderstorm. Again no failure, because
    I first learned why failures occur and don't worry about
    lightning damage either. It's called first learning how things
    work.
     
    w_tom, Feb 14, 2005
    #31
  12. Thomas G. Marshall

    RRR_News Guest

    Tom,
    I have two (2) desktop PC's, a '98 Packard Bell MM955 (333MHz AMD) and a '03
    Dell Dim 4600 (2.4GHz P4). Except for reboots for software installation,
    hardware installation, when vacuuming is done in the room, and the rare power
    outages in my area, the Packard Bell has been running 24/7 since OCT '98, and
    the Dell, for the same reasons mentioned before, since JUL '03.

    What I would consider doing is turning off the attached CRT monitor when
    not in use. I've gone through two (2) generic Proview ones on my PBell over
    the same time period. I have no clue about flat screen monitors, but the
    current ones should be better made.

    Just make sure that the room is kept at a reasonable temperature, and the PC
    is kept dusted for proper ventilation.

    --

    Rich/rerat

    (RRR News) <message rule>
    <<Previous Text Snipped to Save Bandwidth When Appropriate>>



    "Thomas G. Marshall" <>
    wrote in message
    (XP SP1 / Dim 8300 / 3.0 GHz / 800 MHz FSB / 512 meg / bla bla...)

    I seem to be getting a virus here and there found by NAV2003 that makes its
    way in through the auto-protect. In this case I got a couple of circa 2003
    Trojan.ByteVerifies. Don't know how the heck such a simple file can land on
    my system, particularly since it's such a well understood virus.

    I am considering leaving the system on 24/7 and establishing a daily viral
    sweep.

    Questions:

    1. Is the 8300 cooled enough or otherwise built for staying on? I'll of
    course use power options to shutdown unnecessary things and brown down
    perhaps the motherboard or something. I'll have to learn more about
    this---I'm half ignorant on all things power control except for hibernation.

    2. Am I leaving myself open statistically to more infection simply by
    staying on? I'm running SP1's firewall. SP2 is not an option currently
    because of software incompatibilities.

    3. Any thoughts on what I might have to worry about, in general and/or
    specifically to the 8300?

    Thanks!
     
    RRR_News, Feb 14, 2005
    #32
  13. Thomas G. Marshall

    steve Guest

    I agree entirely. The 24/7 systems fail more often when switched on,
    but they only needed to be switched on once or twice a year. Ours
    rarely failed during the time they were switched on. The occasional
    break was usually due to an HD failure. Fans did not fail as often as I
    expected they would.
    Not at all, if it's not being used, switch it off. However, if it will
    need to be switched back on a few minutes later, leave it on!
    The development systems I mentioned were left on all the time because
    they were in use all the time running real development work round the
    clock. I was one of the people using them. The admin machines were
    switched off when they were not required. That was every night, all
    night. It's not a random sample of systems. These are two sets of
    machines. The admin systems are in the office environment, with almost
    no regular maintenance. The development systems are in large computer
    rooms and share the mainframe environment to some extent. They are
    also kept clean, that may explain some of the differences.

    The data was collected from the logging system kept by the maintenance
    teams that serviced our computers. I wrote the logging system about
    twenty years ago.
     
    steve, Feb 14, 2005
    #33
  14. Thomas G. Marshall

    User N Guest

    You seem to want AV scans to be performed every day. Specifically,
    what AV software are you referring to? Is it something that lacks the
    ability to automatically scan on an as-needed basis? For example, when
    the filesystem opens [or closes] files?
     
    User N, Feb 14, 2005
    #34
  15. Thomas,
    If you're knowledgeable about hibernation then you're ahead of Microsoft. :)
    Paul


    Thomas G. Marshall wrote:
    snipped

    ---I'm half ignorant on all things power control except for hibernation.

    snipped
     
    Paul Schilter, Feb 14, 2005
    #35
  16. w_tom,
    If the fan failed due to bearings that would be very easy to observe
    when you remove it. If it spins freely and smoothly, the bearings are
    fine and the motor failed. If it doesn't rotate well then the bearings
    failed and could have then caused the motor to fail.
    Paul
     
    Paul Schilter, Feb 14, 2005
    #36
  17. Thomas G. Marshall

    w_tom Guest

    So you recommend running the system and consuming all that
    power to only get the same reliability as when a system is
    powered down at the end of the day?

    24/7 operation provides nothing significant to system
    reliability, causes increased component wear, and consumes
    electricity to no purpose. Wear from hours of operation is
    significant on the parts that fail most often. Power up does
    not cause wear despite the many myths to the contrary. Those
    who promote power up as destructive do not tell us which
    electrical parts failed or why they are damaged. IOW a powerup
    surge is mostly wild speculation. Why consume electricity to
    no purpose?

    A computer should work just as well in a 100 degree room as it
    does in a 70 degree room. In fact, I don't use air
    conditioning. Hate the stuff. These computers of 10+ years
    in constant use get run even when temperature is 100 degrees.
    They still don't fail. Same is true of dust problems. Dust
    problems are greatest when the naive start adding fans to
    solve the mythical heat problem. Too many fans don't cause
    any appreciable cooling but can cause excessive internal dust
    balls. Problems created by heat and dust are common myths.

    Run a new computer one day in a 100 degree room. If the
    computer has marginal or intermittent components, those
    defective components are best identified before the warranty
    expires. A 100 degree room is not destructive to a computer
    despite so many myths to the contrary. Heat is an excellent
    diagnostic to find defectives before hardware fails obviously.

    I've been doing computers and aerospace electronics for too
    many decades to fall for these well promoted myths about 24/7
    operation, heat, and dust created problems. Too many years
    and too much asking questions at the electronic component
    level.
     
    w_tom, Feb 14, 2005
    #37
  18. Thomas G. Marshall

    w_tom Guest

    Fan failure and bearing wear, as described in the earlier
    post, could not be easily identified by restricted spin.
    Let's get down to component level analysis. That Hall Effect
    sensor may not be sitting flat on a PC board inside that fan.
    Torque is significantly reduced. With but slight bearing
    wear, now that fan would not have sufficient torque to startup
    every time. Using a soldering iron, I once reseated a Hall
    effect sensor so that it was not tilted more than 15 degrees.
    A flat sitting sensor increased fan torque significantly so
    that it could overcome an undetectable amount of bearing wear.

    Just because a bearing still feels free and smooth does not
    mean the hall effect sensor is properly seated or that a fan
    has not suffered some bearing wear. Only slight bearing wear
    could make the fan intermittent (another example of why QC
    inspection does not create component and therefore system
    reliability). Fan failed due to a combination of too many
    hours of operation AND a manufacturing defect. Others may
    erroneously assume fan failed due to power on surge.

    BTW, fan tends to be a more frequent failing component. So
    we install two fans working in series. One blows in. The
    other blows out. Then when one fan fails, the other will
    maintain airflow until a human finally discovers the problem.
    One fan provides sufficient cooling. Two may be installed
    only because fans fail so often with mechanical wear.

    Yes, a fan with massively worn sleeve bearings can be
    apparent. But that is not the only reason why bearings can
    cause fan failure. Notice the point repeatedly made. One
    does not know why failures occur only because they observe.
    Observation can report when a failure has happened. One must
    also understand reasons why that observed failure happens to
    learn something useful. The devil is in those technical
    details. Many who recommend 24/7 operation for reliable
    operation don't first learn underlying theory. Therefore they
    can be deceived by what they have observed.

    Another way to improve reliability (which is why those
    Hondas and Toyotas also had higher reliability)? Keep those
    human hands out. Humans are another major source of failure
    when we demand the reliability I call minimally acceptable
    even for household appliances. We once wanted to clean out
    the massive dust inside the software development system (in my
    naive and younger days). Manager of software development
    would not let us - and for good reason. Major dust balls were
    not a problem. But human hands do create new and intermittent
    failures. Cleaning can even create problems despite a human
    emotion that a clean computer is better. Don't let human
    emotions or human hands reduce system reliability.

    One more interesting fact. If a vacuum cleaner causes
    computer failure, then the computer has internal hardware
    problems. Again, too many will blame the vacuum rather than
    first learn how (or why) failure happens. Therefore one may
    blame the vacuum rather than a defect inside the computer.
    Just another example of how observation alone leads to
    erroneous conclusions. Just another example of why we want
    everyone to study junior high school science. To know
    something, one must first learn both the underlying theory and
    obtain supporting experimental evidence. Anything less is
    called junk science reasoning or wild speculation.
     
    w_tom, Feb 14, 2005
    #38
  19. [top posting fixed into bottom posting, so I could more sensibly snip]

    w_tom coughed up:

    Yes, I followed this reasoning already from your prior posts.

    First, Steve was not talking about a sampling of one. He was talking about
    several computers in two distinct groups: turned off, and 24/7.

    Second, he reports that over a *period of time* he saw the turned off
    machines fail much more often than the 24/7 ones.

    Your logic addresses what category someone might place a machine at the
    precise moment a failure is discovered. That is not what Steve is doing at
    all.

    I still don't see you address the bottom line issue: The ones that were
    left on 24/7 failed far less often. Yes, it depends upon what category the
    failed machines are placed in. But when you *start* with the category and
    *then* count the failures for each, as steve did, he ends up with the
    following:

    For any given time span of your choice:
    24/7 machines: X repairs
    on/off machines: much more than X repairs

    ....which I believe is a fair assessment of his observations. I still don't
    see you counter this directly. I can only assume that you believe that
    steve has made a fundamental error here.

    But it's /not/ this error:

    Steve sees a machine fail at power on
    Steve then assumes it's because of power on

    It is /this observation/ (without error):

    Steve sees two groups of machines: on/off and 24/7
    Steve sees the 24/7 with far fewer failures.

    ....[rip]...
     
    Thomas G. Marshall, Feb 15, 2005
    #39
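Thomas's point is that the grouping comes first and the counting second. A minimal sketch of that method, with made-up repair counts purely for illustration:

```python
# Start with the two groups, then count repairs per group over the
# same time span. The numbers are invented; only the method matters.
failures = {
    "24/7":   [0, 1, 0, 0, 1],   # repairs per machine over the span
    "on/off": [2, 3, 1, 4, 2],
}
rates = {group: sum(repairs) / len(repairs) for group, repairs in failures.items()}
print(rates)   # {'24/7': 0.4, 'on/off': 2.4}
```

Because each machine is assigned to its group before any failure is observed, the comparison cannot be skewed by deciding the category at the moment a failure appears, which is the error w_tom describes.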
  20. User N coughed up:

    As I said in the original post, it is NAV2003.
     
    Thomas G. Marshall, Feb 15, 2005
    #40
