
Google Chromebooks: You don’t own your data and can’t recover it if the laptop dies

A user came into my computer repair shop with an Acer laptop that happened to be a Google Chromebook. This laptop was dead. It simply didn’t work. The hard drive was fine, and under Linux I could mount the filesystems on it, but the rest of the laptop was shot. We wanted to move the user’s drive into an external hard drive enclosure so that he could at least retrieve his family photos and other data stored on the computer. The data would need to be copied off of the Linux filesystems, the drive reformatted to Windows’ NTFS so that it could be read on a Windows PC, and the data copied back onto the newly formatted drive. The user gets an external hard drive plus all of his data, and everyone is happy.
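Under normal circumstances, that procedure is routine. Here’s a minimal sketch of how it usually goes, assuming a hypothetical /dev/sdb1 for the rescued drive in a USB enclosure and enough scratch space to stage the files:

---- cut here ----

#!/bin/bash
# Hypothetical layout: /dev/sdb1 is the rescued drive in a USB enclosure.
mkdir -p /mnt/rescue /tmp/staging
mount -o ro /dev/sdb1 /mnt/rescue     # mount the old filesystem read-only
rsync -a /mnt/rescue/ /tmp/staging/   # stage every file somewhere safe
umount /mnt/rescue

# Reformat as NTFS so a Windows PC can read it, then copy the data back.
mkfs.ntfs -f -L RESCUE /dev/sdb1
mount -t ntfs-3g /dev/sdb1 /mnt/rescue
rsync -a /tmp/staging/ /mnt/rescue/
umount /mnt/rescue

---- cut here ----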

Except for one tiny little problem.

Google Chromebooks encrypt all of the user’s data.

With a key stored in the computer’s Trusted Platform Module (TPM).

If the computer were stolen, this would be a good thing, because the thief wouldn’t have access to the user’s private files. That’s what encryption is supposed to be for, after all…but this laptop wasn’t stolen. The owner had it in his possession and knew the login password, and that should mean he can get into the computer and retrieve his data.

Except the key for that data is stored away in a chip that won’t hand it out unless the computer works and Google’s Chrome OS is what asks for it.
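You can see the problem directly by mounting the drive on a Linux box. As I understand it, Chrome OS of that era kept each user’s files in an eCryptfs vault on the stateful partition; the device name and paths below are assumptions, but the result is the same everywhere: filenames and contents that are pure ciphertext without the TPM’s key.

---- cut here ----

# Hypothetical device: /dev/sda1 is the Chromebook's stateful partition.
mount -o ro /dev/sda1 /mnt/chromeos
# Assuming an eCryptfs-era Chrome OS, the user vaults live under
# /home/.shadow. Every entry comes back as encrypted gibberish along the
# lines of ECRYPTFS_FNEK_ENCRYPTED.FWa... with no key to decrypt it.
ls /mnt/chromeos/home/.shadow/*/vault

---- cut here ----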

Where does that leave my customer? Simple! With absolutely nothing. A failure of the computer in this case has become equivalent to a total hard drive failure. All of his data is lost forever. There is simply no way I can retrieve it for him without the encryption key, which is locked away in a chip I can’t extract it from. And because the encryption key is never made available to the user, the user can’t give it to me to decrypt his information.

Thus, you simply don’t own your own data when it’s on a Chromebook. The maker of the computer and the writer of the operating system do. Please don’t waste your money on a Chromebook…but if you do, back up your stuff.

(To a real external hard drive, not “the cloud.”)

Manually copying a RAID-0 striped array to a single drive for data recovery

This question was posed on a forum:

I have a customer who has a computer with 2 SATA disks (striped in a RAID config). Windows won’t load. Diag reports a bad hard drive. When I disconnect one, it kills the stripe and the computer appears to not have a hard drive at all. Seems kind of silly to have it set up this way as it increases the risk of failure. Other than putting each hard drive in another computer, I’d like to determine which of the disks is bad.

Also, I’m not quite sure how to attack data recovery as they are a stripe set, and plugging into a SATA-to-USB adapter does not appear to be a valid method. If I put a third hard drive in as a boot drive, do I have to reconfig the stripe set, and if I do, will it kill the data?

I have reassembled two RAID-0 “striped” drives onto a single larger drive by hand before. It’s actually a programmatically simple operation, but it requires a lot of low-level knowledge and some guesswork. The specific pair I had appeared to store its RAID metadata somewhere other than the start of the disk, and with a hex editor I was able to discover that the array used a 64KB stripe size. I could also tell which drive came first in the array, because a partition table only appears on the drive that contains the first stripe.
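As an illustration of that detective work, here is one way to spot which disk holds the first stripe (the device names are assumptions; adjust to your system). The first 512-byte sector of that disk is the MBR, which ends in the 55 AA boot signature:

---- cut here ----

# Dump the tail of sector 0 on each disk. The disk holding the first
# stripe has an MBR there, ending in the 55 AA boot signature; the other
# disk shows whatever striped data happened to land at that offset.
dd if=/dev/sda bs=512 count=1 2>/dev/null | xxd | tail -2
dd if=/dev/sdb bs=512 count=1 2>/dev/null | xxd | tail -2

---- cut here ----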

At a Linux command prompt, with the two RAID-0 disks (which were NOT being detected properly by the Linux LVM2 “FakeRAID” algorithms, by the way) and a destination disk of twice their size connected, I wrote a very simple script that looked something like this (sda/sdb as the RAID-0 pair, sdc as the destination disk; this might work under ash or similar shells as well):

---- cut here ----

#!/bin/bash

# X=sda position, Y=sdb position, Z=sdc position, STRIPE=stripe size in bytes
X=0; Y=0; Z=0; STRIPE=65536

# Retrieve the size of one RAID-0 disk so we can terminate at the end.
# /proc/partitions reports sizes in 1K blocks, so convert to bytes first.
SIZE=$(grep 'sda$' /proc/partitions | awk '{print $3}')
SIZE=$(( SIZE * 1024 ))
# Divide size by stripe size, rounding up to include any tail blocks.
SIZE=$(( (SIZE + STRIPE - 1) / STRIPE ))
# Each pass copies one stripe from each source disk, so loop until the
# source position (not the destination position) runs off the end.
while [ "$X" -lt "$SIZE" ]
do
dd if=/dev/sda of=/dev/sdc seek=$Z skip=$X bs=$STRIPE count=1
Z=$(( Z + 1 ))
dd if=/dev/sdb of=/dev/sdc seek=$Z skip=$Y bs=$STRIPE count=1
Z=$(( Z + 1 ))
X=$(( X + 1 ))
Y=$(( Y + 1 ))
done

---- cut here ----

Note that all it does is load 64KB at a time from each disk in alternation and save the stripes to the third disk in sequential order. This is untested, requires modification to suit your scenario, and is only written here as an example. It does not stop when a ‘dd’ command fails, so it will still limp through a recovery; you will lose any stripe that contains a bad block, though. The algorithm could be improved to use dd_rescue (if you have it) or to copy smaller units within each stripe so that a bad block only costs a partial stripe.
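For the smaller-units idea, each ‘dd’ line in the loop could be swapped for something like the following (untested, same hypothetical devices as above; the second line would use sdb and Y in place of sda and X). The ‘conv=noerror,sync’ flags make dd zero-fill unreadable blocks instead of aborting, so the output stays aligned:

---- cut here ----

# Copy one 64K stripe as sixteen 4K pieces; a bad block then costs only
# 4K instead of the whole stripe. conv=noerror,sync pads each unreadable
# block with zeros so later data keeps its correct position.
CHUNK=4096
PER=$(( STRIPE / CHUNK ))
dd if=/dev/sda of=/dev/sdc bs=$CHUNK count=$PER \
   seek=$(( Z * PER )) skip=$(( X * PER )) conv=noerror,sync

---- cut here ----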