
Wednesday, 03 October 2012

Comments


Couldn't corrupted image files also be evidence of a dud memory card? That seems to be a common enough problem.

That sounds frustrating. With every possible shred of respect for the OP's experience (which isn't clear from the post), I wonder if it makes sense to revisit some basic rules of thumb with regard to transferring files from camera to computer.

Sometimes the simplest place to start is at the beginning of the process. So if it were me, I'd check my card reader and cards. Then I'd be sure that I am neither ejecting the cards from the reader improperly nor (gasp!) erasing them from the computer while mounted instead of formatting them back in the camera. Just a thought.

I'd suggest a full-frame camera. You'll get more bokeh, which is easier on hard drives. And make sure that you - not some program or some assistant - are pressing the shutter button.

Seriously, if the photos that are corrupt are important to you, I'd suggest removing the drive from its case and trying it in a hard drive docking unit like this one from Thermaltake - http://www.thermaltakeusa.com/Product.aspx?C=1346&ID=1895

Ctein is right that controllers rarely go bad, but sometimes they do.

If that doesn't work, I've heard good things about data recovery services. Expensive, but they work.

Hello,
in my case, corrupt files were due to moving them from one drive to another by "drag-and-drop" instead of with dedicated backup software. In all cases, the backup of the original was fine, and the corrupted file appeared corrupt in every photo editing program that I checked. The original transfer from the memory card to the computer (via a card reader and Lightroom) has not caused any problem so far.
I would strongly suggest copying or moving files only with software that verifies what it is moving/copying. Moving from within Lightroom was fine, for example. Tools like "FolderSynchronizer" or "ChronoSync" are as well (I am sure many others are similar). If you want, there is a short post about the problem that I had here: http://www.blog.floriansphotos.com/2012/07/magenta-file-corruptions.html
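
For anyone who wants to roll their own, here is a minimal sketch of that verify-after-copy idea in Python (the file names are hypothetical, and real sync tools do more, e.g. retries and metadata handling):

import hashlib
import shutil

def sha256(path, bufsize=1 << 20):
    # Hash the file in chunks so large raw files need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_verified(src, dst):
    # Copy, then re-read BOTH files and compare checksums.
    shutil.copy2(src, dst)  # copy2 preserves timestamps too
    if sha256(src) != sha256(dst):
        raise IOError("verification failed: %s -> %s" % (src, dst))

copy_verified("DSC_0042.NEF", "/Volumes/Backup/DSC_0042.NEF")

Note that this only proves the copy matched the source at the moment it was made; it says nothing about whether the source was healthy to begin with.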

You should know better than to write
"Seagate drives, in my experience, are not reliable".
That's absolutely misleading unless you have statistically relevant data.
I myself have had failures on three Western Digital drives, two Samsungs and one Hitachi in my IT-support career. No Seagate failures despite having owned and used about two hundred, roughly twice the number of all the other brands combined. Not statistically significant at all, but good enough karma that I buy from that brand only now.

While the tests Ctein suggests are all valid and appropriate, the first thing I would recommend is running a complete anti-virus scan. Especially if there is any indication that something other than a photo file may have gone bad.

No question. Duplicate all data to another drive first and foremost. DriveImage XML will do the job on a PC, Carbon Copy Cloner on a Mac. Exercising or verifying a drive might be its swan song. You wouldn't do a 100-meter dash to confirm a heart condition, would you?
I don't know file recovery programs; disk recovery, yes. Your damaged files are likely gone (sometimes the look can be quite artistic, as I have experienced myself).

Once backed up, test to your heart's content. From my experience, everything should be tested: memory, drive, CPU. Google "hardware test."

I had a similar issue that turned out to be bad memory (RAM) in the computer. Use Memtest to check for that. I was seeing other weird things besides just the image corruption, though.

I haven't had any trouble since I switched to a Rolleiflex 3.5F with black-and-white Tri-X film. I am able to transfer them to a different (manila) folder by hand. That works fine. When retrieving them, sometimes I have to stretch to reach the folder the larger-than-full-size-sensor 6x6 negatives are stored in. I find they are accessible 100% of the time without fail. :-)

For anyone on Windows Vista or Windows 7, Robocopy is a very reliable built-in command-line utility.

Open a DOS window and type something like:


robocopy "D:\Photos" "f:\Photos" *.* /s /sec /xo

to update all folders on F: with new/updated files from D:. Here /s recurses into subdirectories, /sec copies security info along with the files, and /xo skips source files that are older than the copy already at the destination; adding /L first does a harmless dry run that just lists what would be copied. Lots of other switch options exist for excluding files/folders, etc.

In my lifetime I've had 20 physical hard drives whose brand I know: 10 Western Digital, 2 IBM, and 8 Seagate. The failure tally so far: 2 WD, 1 IBM, 0 Seagate. This is just anecdotal of course, but I think you'd better assume that every drive is unreliable, not just Seagate.

I had problems recently with a Western Digital "Black" hard drive, which has, I think, a five-year warranty, so it's one of their better drives. I found the problem using a program called "Hard Disk Sentinel". There is a freeware version and a pro paid version. It runs in the background, and you can see the current temperature of all your hard drives and a real-time "health" check, plus all sorts of other info. The drive was replaced under warranty. It had some bad sectors. It caused me all sorts of intermittent file errors, and Windows spent a lot of time recreating 'repaired' files in the affected folders.

This was one of the innovations of ZFS: http://en.wikipedia.org/wiki/ZFS#Data_Integrity (the idea is probably older; that's just when I became aware of the issue of bit rot: http://en.wikipedia.org/wiki/Bit_rot)

I haven't experienced it (or ... haven't noticed it if it happened) but if you have images that you HAVE to have protected, you might need to look into the file system as well as hardware causes.
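
Short of running ZFS, you can approximate the detection half of this at the user level. A rough sketch in Python (the paths and the manifest file name are made up for illustration): record a checksum for every file once, then re-run later and flag anything that changed without your having edited it.

import hashlib
import json
import os

def build_manifest(root):
    # Map every file under root to its SHA-256 digest.
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[path] = h.hexdigest()
    return manifest

# First run: save build_manifest("/photos") to manifest.json.
# Later runs: a file whose hash changed, but which you never
# deliberately edited, is a bit-rot candidate.
with open("manifest.json") as f:
    old = json.load(f)
for path, digest in build_manifest("/photos").items():
    if path in old and old[path] != digest:
        print("CHANGED:", path)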

Every incidence of corrupt-file problems I have dealt with has involved cheap no-name or counterfeit name-brand memory cards. Once for myself, when I got a no-name CF card free with a camera and it corrupted files the second time I used it. The others were with photogs and clients I do retouching for who had cheaped out on cards. Sometimes images would open in one program but not in another. Just sayin'.

If the problem is memory cards, they may be counterfeit. One manufacturer says that 3/4 of those sold on eBay are fakes, usually made from reject chips, which can cause data problems.

For a moment there, I thought this was a column on ethics.

Dear folks,

Just about anything that can go wrong in a computer can cause data corruption. So all the possibilities that people mention are, indeed, possibilities. But the absence of general software or system crashes argues against it being bad RAM, and the fact that it's happening to both the image files and the backups indicates that it likely has nothing to do with how memory cards are being handled or transferred or how files are being copied.

Robert E, though, is right; my first advice should have been that, if the computer is openable, it get a thorough cleaning and all the cables be unseated and re-seated. That fixes a remarkable number of misbehaving computers.

John's suggestion of getting an external drive docking unit is also a good one. They are generally useful and not horribly expensive.

Marc, three of my five Seagate drives have died extremely premature deaths. That makes my statement factually correct and in no way misleading. I'm thrilled it is not representative of your experience, but I have no reason to retract it. As the saying goes, once is happenstance, twice is coincidence…


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

Here are some further notes about the things that can cause "corruption" of various kinds, as I think Ctein's response covered only some of the possibilities, and I have a certain amount of experience in this area[*].

There is generally a hierarchy of possible causes of damage to files and filesystems. The following list is not in strict order of likelihood or severity, since such orderings depend on many factors. I'm also not going to talk about remedies here, as this is already going to be too long, other than to say that the remedies individuals need are very different in nature from those that large organisations need (if anyone cares about remedies I could dig up some stuff I have written on that).

1. Application-level damage. Pretty much every application you use is written in an inherently unsafe language by people who understand neither data integrity nor the language they are using very well, working to a deadline in a culture where shiny new features are more important than pretty much anything. Your favourite photo-editing tool is maintaining some huge data structure, partly in memory but mostly on disk, and you had better hope that every time it falls over it doesn't leave the remaining on-disk part of that structure in some bad state, or, if it does, that when it starts next time it detects and fixes this.

But you pretty much know that, in fact, it will sometimes leave the on-disk structure in a bad way, and it will not reliably detect this. At some point later your photo collection will turn into ten thousand blurred pictures of an elk.

2. Conventional disk-system failure. This is where something in the I/O path went wrong, but the system knew it went wrong. Some such failures are innocuous and are generally dealt with by the system transparently (anyone using any kind of network-attached storage experiences such a failure every time a packet gets lost in transit, for instance). The majority of such failures, however, are *not* innocuous and indicate some imminent failure. Of these, most are failures of the disks themselves, which, like anything with moving parts, suffer wear and eventual failure. Of disk failures, in turn, the most common is abrupt catastrophic failure of a disk. You know when this has happened (but watch out if you are using some kind of mirrored disk system: you don't want the first thing you notice to be the failure of the *second* disk in the pair). However, disks do occasionally complain for a while before failing: if this starts happening, *buy another disk*: they do not get better from this.

3. OS & filesystem-level failures & non-disk-related hardware problems. Like your application, the OS is maintaining a complicated data structure which is partly on disk and partly in memory. The OS can suffer abrupt failure for lots of reasons - loss of power, hardware problems unrelated to the disk system, and plain bugs. In that case, as with your application, you had better be sure that it either leaves the on-disk part of the data structure in a good state or notices if it did not.

The good news is that the situation here is much better than it used to be. It has been understood for quite a long time how to design these complicated data structures – filesystems – in such a way that the state on the disk is (almost) always consistent. In the last 15 years or so these developments have made it into almost all end-user systems. I won't specify how you check that you have a good filesystem because I am not familiar with all of them, but in general you want to make sure that the filesystem has "logging" or "journalling" *and that it is turned on* (which it should be by default).

Additionally, the people who write filesystems are generally smarter than the people who write applications, and they do a lot of work to ensure that the filesystem actually behaves properly in the case of abrupt termination of the OS. Mostly they get this right, though I can't resist mentioning a well-known major "desktop" OS vendor (no, the other one), who have recently shipped a version of their filesystem which can eat itself alive. Come the revolution, they will hang from the lampposts around my palace.

Filesystem-level damage can still occur (especially if you were foolish enough to buy from the vendor hinted at above) and, worse, can occur and go undetected: if the system trusts its "filesystem state on disk is OK" flag then it will not check for damage and so may never find it. It is, therefore, worthwhile running your system's filesystem checker every once in a while, especially if you think there may be damage. Use the one that *came with the OS*, not some third-party one, and expect it to take a *long* time: if it completes rapidly, all it did was check the "state on disk is good" flag, and that is no use to you. How long it takes generally depends on the number of files, not how big they are, but on a disk of a TB or so you might expect it to take from 15 minutes to many hours.

And finally you need to understand what "filesystem is always consistent on disk" means. It does *not* mean that all the data you think you have written to the disk is there, it just means that the missing data will not cause damage to the filesystem. Consider, for instance, your photo-editing software. Let's say it writes a new photo like this: first write the image file, and then update the index. If the system crashes between writing the image file and writing the index, then the filesystem is just fine, but your application has probably lost track of that photo. A well-written application will deal with this problem by having some kind of transactional integrity, but your photo software is not in that class. Even worse, the application may not be doing what it needs to do to verify with the OS that the data it just wrote is *actually on non-volatile storage*, rather than on the way there, to be written at some later, possibly much later, time, before telling you it is: it is astonishing how many programmers do not understand that this is something they need to check.
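
For the programmers in the audience, the usual defensive pattern is worth a sketch (Python here, assuming a POSIX filesystem): write to a temporary file, force it to disk, and only then rename it over the old copy, so a crash leaves either the old version or the new one, never half of each.

import os

def write_durably(path, data):
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()             # push Python's buffer out to the OS
        os.fsync(f.fileno())  # ask the OS to push it to the platter
    os.replace(tmp, path)     # atomic on POSIX: old file intact until here

(A really careful program also fsyncs the containing directory after the rename; most applications, as noted above, do none of this.)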

4. Silent failure. This is what happens when some part of the system says "oh yes, I did that, it's all fine" but in fact it did not do that at all, but something else, or nothing. This has been a known problem for a long time: for instance computer memory ("RAM") can undergo occasional bit-flips which can go undetected. The solution to that problem is also the solution to the general problem: you add one or more extra bits to each memory word in such a way that these flips can be at least detected (parity) and sometimes corrected (ECC memory). Serious machines have at least parity memory, and if your machine does not, it should (I don't know what the current state of play is with regard to parity memory on end-user systems: I hope it is pervasive, but fear it is not).
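
To make the parity idea concrete, here is a toy illustration in Python: one extra bit per word makes any single bit-flip detectable, though not correctable (ECC adds more bits so errors can also be corrected).

word = 0b10110100
parity = bin(word).count("1") % 2    # the extra bit, stored with the word

flipped = word ^ 0b00000100          # simulate a single bit-flip
assert bin(flipped).count("1") % 2 != parity  # the flip is detected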

In the last five or ten years the phenomenon of silent failure of disk systems has become interesting. This is because, as the amount of storage increases mostly exponentially and the complexity of the storage systems increases (I hope less than exponentially, but I fear not), the likelihood of even a statistically rather rare problem happening becomes significant. Worse, the cost of recovering from such a problem becomes significant as well.

That does not make these silent failures common: they are almost certainly the least common of the problems mentioned here.

The solution to these silent failures is generally a new kind of filesystem: one that keeps cryptographically good checksums of the data it writes (and keeps those checksums somewhere distant from the data). Such a filesystem can, when reading data, verify that the checksum is good, at the very least signal an error if it is not, and often repair the data by using another copy.

I am not sure how common such filesystems are on end-user systems yet (Sun's ZFS was the first filesystem to make a big point of this property), though I expect them to become common over the next few years. However I would stress once again that silent failures are *not* that common.

---

[*] This was how I earned my living, in other words: dealing with the kind of systems where, if this kind of thing happens, it gets in the news.

I also had this problem and traced it to bad RAM. Replacing the RAM fixed the issue.

The underlying problem here is that consumer filesystems, such as those implemented in Windows and OS X, count on the hardware to report errors on both read and write. The hardware isn't actually up to the job.

The only consumer-level solution I know is to use something with the ZFS filesystem -- NexentaStore and FreeNAS both support it. ZFS keeps its own block checksums, using much better checksum algorithms than the hardware uses. It verifies the checksum on every read. Also, you can perform a weekly "scrub" on your data store (or backup data store), where it goes and reads each copy of each data block (multiple copies exist if you use mirroring or RAID for redundancy) and verifies that the checksum of the actual data matches the stored checksum. Thus you will see signs of a disk decaying BEFORE you lose data permanently, and can replace it and "resilver" the data from the redundant copy (or, worst case, restore from backup).

It's the only system I know for home use that keeps and verifies block checksums. (You can of course buy enterprise storage units for home, if you have the budget to afford them AND the expertise to manage them.)

For static storage, the PAR2 application will create checksums and chunks of redundancy within whatever filesystem, and verify and recover from them. It doesn't automatically track changes, though, hence my suggesting it only for static storage.

If a person is experiencing data corruption and has isolated the hard drive as the problem, replace the hard drive; easy. If you don't have two backups, one offsite, I hope this scare teaches a good lesson. If you have files that are not backed up, and assuming you have access to a PC, buy and run SpinRite. It can take a few days on the strongest setting, but it's your best bet to recover a hard drive for under $1,000. It runs on any spinning drive, Mac or PC, even TiVo; you just need to plug the drive into a PC to run it. Even if your drive runs well after running SpinRite, replace it.

You should have a system that constantly and automatically backs up your data for when the time comes. In my setup, my hard drive could die, and within two hours of buying a new drive I can be running like nothing happened.

This problem of corrupted jpeg files on hard disks showed up in my universe about six years ago, and at the time, it seemed that I was the only person in the world who had the problem, since searches of the web for any related information came up blank.

In desperation, I put up my own web page to describe the problem, and what I understood of it. Since then, I've been receiving a steady stream of enquiries from other people who now have the same problem. Here's my web page:

http://www.alkiracamera.com/index_topic.php?did=104654&didpath=/104654

My current solution involves periodic use of "ImageVerifier" software from Marc Rochkind to detect whether the problem is occurring, and then replacing the hard disks with new ones when the problem shows up. Having a duplicate of all images on other hard disks is necessary in order to recover from the loss of a corrupted jpeg.

I can tell you they don't make hard drives like they used to. I am a computer tech for a living. Just my opinion: back up your hard drives or get cloud storage for any important files. I find Western Digital to be the least likely to fail. Seagate used to have a very good name, but not so much anymore; IBM and Samsung seem to do OK. But I only buy Western Digital.

I've had clusters of failures with every major hard drive brand, and I'm not even running hundreds of drives in commercial service. There's random statistical variation, and there's an ebb and flow of which models are good, and so forth.

What this does, for me, is emphasize the importance of NOT buying a batch of drives of the same make and model all at once and putting your entire backup system onto them!

Re. different brands of drives: I work as a programmer for a company that has a few petabytes of storage. We generally see one or two drives fail each day - as you might expect when you have thousands of consumer drives running 24/7 and being worked fairly hard. There doesn't seem to be a huge difference between brands in the long term, although at different times it has felt like Seagate or WD or Samsung were overrepresented in the failures.

My digital photo archive scares the hell out of me. That's after 35 years in IT and a consequently paranoid attitude towards data housekeeping procedures. My film archive is also of course accident prone, but somehow the risks seem to be more within my control. As the years advance, I'm seriously considering dropping digital altogether.

I did some analysis a few years ago on the probability of data loss with two replicas of each file (different drives) versus three replicas of each file. The added reliability provided by a third replica is quite large, and I recommend it to anyone even marginally serious about data preservation. Drives fail - all drives, all manufacturers. It's a fact of life. Even the best manufacturer has an occasional dud. What you have to do is ensure that a failure doesn't cost you a lifetime of images.
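
The arithmetic behind that is easy to sketch (toy numbers, and assuming failures are independent, which real drives only approximate):

p = 0.05       # assumed chance a given drive dies in a given year
print(p ** 2)  # both of 2 replicas die:  0.0025   (1 in 400)
print(p ** 3)  # all of 3 replicas die:   0.000125 (1 in 8,000)

In practice what matters most is the window between a failure and the restoration of redundancy, but the third copy still buys you roughly another factor of 1/p.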

If you are NOT one of the technically minded people - as most of the above posters plainly are - I would suggest a Drobo for onsite storage (and back it up offsite/cloud). They are simple to use: buy some disks, shove them in, and plug it in. They can be set to tolerate two disk failures at one time and are self-healing - just toss in a new drive to replace the dead one, and it rebuilds itself.

It won't help you if the data is corrupt, though; you'll just have lovely copies of corrupt data.

The first priority with problems like you describe is to narrow the possible sources.
I'm assuming you are on a PC here. My experience comes from working for Microsoft and writing data recovery software.

- First check event logs.
Open Control Panel, go to Administrative Tools and choose Event Viewer
- In Event Viewer, select Microsoft > Windows > CorruptedFileRecovery Client. The only folder under that is Operational. If there is a system problem, there will be log entries describing what is happening. If the number of events = 0, it's likely not an operating system or hardware problem; the problem would be with the software you are using to download or view your images.

Another tactic is to open your Start Menu and, in the right column, choose Help & Support. A dialog will come up; at the top, type in CHKDSK.
- The instructions for using ChkDsk will come up. Follow them.
Back up all important data before running ChkDsk!

The resulting logs will tell you if you have a system problem. If only your photo files have problems, the problem is with the software you are using to download, store or view your image files. If ChkDsk shows problems with other files, the problem is with the operating system or, more likely, hardware. But at least you will know where the problem is.

Gene

Paraphrasing Kenneth Wajda here (he got to it first)

- Yes, I was having some trouble with corrupted files too, but troubleshooting revealed it was the storage medium, and never the original files themselves.

The little tabs on the hangers were getting worn out and the files kept falling to the bottom of the cabinet drawer. My solution was to replace the hanging files with three-ring binders. Very elegant solution it turns out.

The Print-File negative preservers are already, conveniently, pre-punched (i.e. "coded") to be fully compatible with these binders. Since this configuration change, which amounted to a complete overhaul of my filing system (and which cost me $16), I haven't had a single file-access problem. Fortunately, NONE of my files was ever actually corrupted at all. Whew!

My philosophy has been multiple backups on multiple media types (along with a tiered approach for more expensive media). In theory, everything gets backed up to an external hard drive and also burned to a Blu-ray disc. For the tiering, I select my best/most important images and back them up online. Finally, I try not to delete anything from a flash card until I've backed it up.

In practice, I'm behind on everything, and the online backups rely on my having selected my best/most important images in Lightroom which I have done somewhat haphazardly.

Dear Richard,

I agree. It's much easier to control the situation with film. The fact that, by its nature, it's a single point failure situation means you know where to devote your efforts. The majority of my negatives are in airtight pouches in a deep freeze in the garage. Not only are they well stabilized, but they're well protected against a major earthquake (digging the freezer out of the rubble could be a major undertaking, but the film is going to be intact) and many fires, so long as the whole structure doesn't burn to the very ground (it takes quite a while for the internal temperature of the deep freeze to rise high enough to damage the film).

It's a set it and forget it situation. As you well know, digital is not.

Of course, in both situations, if you don't engage in good practices your photographs are at risk. The couple of commenters talking about their file cabinets and manila folders are living in fools' paradises. They're working with no protections against single-point failures (burglary, vandalism, fire, flood, earthquake, tornado…) and while such failures aren't likely, if one happens they're going to lose everything.

Apropos your (dis-) comfort level, I think the most telling point is that I could relate my film preservation techniques in 16 words. I thought about summarizing my digital preservation techniques and realized that would take a whole column (which I shall probably write at some point).

That kind of says it all. It's not at all hard to preserve digital information, but it sure is a lot more involved.


pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
-- Ctein's Online Gallery http://ctein.com 
-- Digital Restorations http://photo-repair.com 
======================================

....oh, I may have neglected to mention, I really meant my file cabinetS, since there are actually five, not just the one, as I may have implied in a previous comment, which I'm afraid greatly simplified my storage methods. Please allow me to elaborate: each of these "cabinets" is in reality a titanium-reinforced capsule containing that group of negatives (which are stored in archivally safe polypropylene sleeves, BTW), cryogenically preserved in a series of concrete and steel vaults equipped with "Nucoguard" (TM) insulation, Halon fire-protection systems and offsite surveillance and control. As an added measure of safety, the five cabinets have been strategically located and buried, to a depth of one-half mile (I thought a full mile to be a tad excessive), in five different mountains, in different ranges on separate continents. In the event of a catastrophe, this is to ensure that should something happen to one of the cabinets, only one-fifth of my archive would be risked.

So you see, Ctein, it may appear that I am living in a "Fools' Paradise" indeed, but at any given time only one-fifth of my negatives are actually at risk from any single failure, which, painful as it may be, would be a tolerable loss.

[Hey, Maus, I'll handle the satire around here.[g] —Ed.]

Ctein

My point was merely that risk = likelihood * vulnerability * value.

With digital, you can mitigate vulnerability, but likelihood appears to be high. That means lots of work and attentiveness, in my experience.

With film there are fewer ways to mitigate vulnerability, your freezer notwithstanding, but the risks you itemise are pretty unlikely where I live. So I sleep easier, while I know the digital gremlins are chewing away all the time.

Dear Phil,

You're making light of a serious situation, and I am NOT joking. Back in the days of film-only, those of us in the know spent many hours and articles trying to educate film photographers about proper film storage, which is not what you and Ken are doing. Your practices are, unfortunately, typical ones.

The historical facts are these-- most (as in almost all) photographers will lose the use of some of their film over their lifetimes. Some (more than an insignificant number) will lose the use of most of their film. The theoretical archival life of a film image is like the theoretical life span of a human being-- only a very small fraction of humans achieve it.

That's not a projection nor supposition, like the future durability of digital images. That's known.

Not making real efforts to preserve your photographs is like not having auto or health or fire insurance. You may go through life never needing it. That doesn't make it a smart bet.

Smugly crowing about how much better off film photographers are than digital photographers, even if true (arguable), doesn't make it one bit a smarter bet. It's a gamble that's still stacked against you.

There's plenty of literature out there on film preservation. All film photographers can educate themselves on how to do this properly and it's not beyond their technical or economic means.

Most won't. And that's no joke.

pax / Ctein

Ctein - I wish you wouldn't take my attempt at a little humor too seriously. I am well aware that proper, safe archiving and storage of media, whether film or digital, is a serious matter and I realize that's what this post was attempting to address.

Not that I don't take at least a tiny bit of pleasure in poking fun at some of the problems digital photographers encounter; film definitely had - and has - its own shortfalls and risks, if only of a slightly different (though not by much) nature. In fact, I myself have many, many digitized files of my negatives and prints, carefully stored on redundant onsite and offsite drives, as it would be a terrible loss if something were to happen to them.

Phil

Controlled-humidity cold storage of my film photos is, if not beyond my means, at least beyond my energy levels. It also makes getting at anything to do anything with it a much bigger project.

It's interesting that that's true for me, because buying a big chest freezer and a bunch of Ziploc bags, finding floor space for it, and keeping the power on is actually not that much money or effort.

I guess I'm just a computer geek at heart; spending hundreds of hours scanning and editing film images, and the time and effort to keep my fileserver and backup system working, doesn't seem like nearly as much trouble, but by any objective measure it's hugely more time and money. Then again, it's not just for photographic images, either, though by byte count that's certainly the majority of the content. And the scanning isn't just for preservation, it makes the images much more useful to me today.

My film, too, is in plastic protectors in paper packaging (except for a small minority in sealed plastic binders). But I *have* experienced considerable bit-rot in the film collection, in the form of surface scratches and fading colors. I'm also now starting to see some yellowing of negatives from early processing batches I did myself (bad fixing, I believe).

I just would like to point out that if the corruption happened on the primary work drive, then the defect will of course be transferred to all backups, be they on- or offsite. Cloud-based services won't prevent that. You will only have one more corrupted copy.

One can use a tool such as HDDHealth (freeware, http://www.panterasoft.com/) to keep watch on the general health status of a disk drive (I'm Windows-only, so I don't know about Mac OS software, but I guess equivalent tools exist). Another article, on CNet: http://howto.cnet.com/8301-11310_39-57498712-285/how-to-monitor-hard-drive-health-with-diskcheckup-for-windows/

Also, keep in mind that almost all backup software is not able to test file integrity; at least none that I know of can. For them, change is change, whether it's user-generated (text deleted from a Word document, a filter run on an image) or due to hardware or other failures. They can only verify that the backup process itself went correctly, i.e., that source file and backup file are identical. If the source is already corrupted, you're out of luck.

Everything else would require the backup software to open the files and interpret them. Simple errors such as wrong file headers (the header usually tells the opening software what kind of file to expect) can be detected, but you could have a perfectly fine JPEG from a file-format point of view whose content is completely scrambled, or half the image could be missing. It would still open without errors. Imagine copying a text written in a language you don't understand: you could make a perfect copy of the original given to you, but you would not know if paragraphs were missing or if it were full of spelling mistakes or other errors, because you do not understand the language.
Of course, such a feature would slow down the backup process considerably.
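
A structural check is still better than nothing, though. A minimal sketch using the Python Imaging Library (the file name is hypothetical): it catches truncated or malformed files, but, per the language analogy above, not a scrambled-yet-well-formed image.

from PIL import Image

def looks_intact(path):
    # True if the file parses as an image; says nothing about its content.
    try:
        im = Image.open(path)
        im.verify()  # checks the file's structure without decoding all pixels
        return True
    except Exception:
        return False

print(looks_intact("IMG_1234.jpg"))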

Averting such errors is difficult. Monitoring the health of your drives is a good way to begin (you might need to check your BIOS settings to see whether S.M.A.R.T. is enabled; it's a monitoring system built into almost all HDDs from the last 10 years that records defects, disk temperature, etc.). If you get SMART errors, it's time to go shopping for a new HDD and have a look at your data. These error messages are usually early warnings: bad sectors are being remapped to good ones by the drive's firmware, and data loss at that stage usually has not occurred yet. But nevertheless it's a good idea to swap out the drive.

For everything else, the only way of keeping loss minimal in case of corruption is an archive of several file versions. So even if the corruption has made it into a newer backup, you at least have an older version to fall back to. Of course, this means having a lot of storage at your disposal.

Oh, and RAID will NOT directly prevent corruption. It will only multiply it across the disks (yes, this is a generalization and not entirely true; it depends mostly on the RAID controller, the source of the corruption and so on, but just assume it doesn't).

By the way, it's always a good idea to eject flash drives (USB sticks, SD cards, etc.) via the respective OS functions instead of just pulling them out, so that the operating system isn't accessing the drive as it's removed. Pulling a drive while the OS is still writing to or reading from it can have severe impacts on file integrity.

Regards, Marcus G.
