Windows EC2 INACCESSIBLE BOOT DEVICE

So I have a Windows Server 2016 Datacenter EC2 instance that was used for SQL Server, and it’s killing me today with the INACCESSIBLE_BOOT_DEVICE blue screen.

Here’s me documenting what I’ve tried so far to fix it, and the things that failed:

[Preparations]

  • Take a snapshot of the instance
  • Stop the instance
  • Detach the root volume
  • Create a secondary Windows instance to use for recovery purposes, and log on to it via RDP.
  • Steps 1 through 6 below are run with the volume attached to the recovery instance, after which I detach it, reattach it to the damaged instance, and boot it up to see if it works.
  • Between each failed step, I detach the volume from the damaged instance and attach it back to the rescue instance to try the next thing. Make sure you attach it as “/dev/sda1”, as that’s the root mount.
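Doing that detach/reattach cycle by hand gets tedious, so here is a rough sketch of one cycle with the AWS CLI. The instance and volume IDs are made-up placeholders, and the function only echoes the commands (a dry run) so nothing is executed until you review and run them yourself:

```shell
# Placeholder IDs -- substitute your own.
VOLUME_ID="vol-0123456789abcdef0"
DAMAGED_ID="i-0aaaaaaaaaaaaaaaa"
RESCUE_ID="i-0bbbbbbbbbbbbbbbb"

# Dry run: print the AWS CLI calls for one detach/reattach cycle
# instead of executing them.
swap_root_volume() {
  from="$1"
  to="$2"
  echo "aws ec2 detach-volume --volume-id $VOLUME_ID --instance-id $from"
  echo "aws ec2 wait volume-available --volume-ids $VOLUME_ID"
  # /dev/sda1 is the root mount, as noted above.
  echo "aws ec2 attach-volume --volume-id $VOLUME_ID --instance-id $to --device /dev/sda1"
}

# Move the volume from the rescue instance back to the damaged one:
swap_root_volume "$RESCUE_ID" "$DAMAGED_ID"
```

Drop the echoes once you’re happy with what it prints; the `wait volume-available` step matters because an attach will fail while the detach is still in flight.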

[Things that have failed so far]

1 – Attach it to the recovery instance, open an elevated CMD prompt, and run the command below (it didn’t boot):

bcdedit /store D:\boot\bcd /set {default} bootstatuspolicy ignoreallfailures

2 – Use DISM to figure out if an automated Windows update had installed some patch that bricked it (it hadn’t):

dism /Image:D:\ /Get-Packages

3 – Installed the EC2Rescue tool and tried its offline mode, both to “fix boot issues” and to “restore last known registry config” (neither worked)

4 – Ran an SFC scan to check for integrity issues (didn’t work):

sfc /scannow /offwindir=d:\windows /offbootdir=d:\

5 – Attempted to rewrite the boot records with bootsect, bcdboot and bcdedit:

bootsect /nt60 D: /mbr
D:\windows\system32\bcdboot.exe D:\Windows /s D:
bcdedit /store D:\Boot\BCD /set {default} device partition=D:
bcdedit /store D:\Boot\BCD /set {default} osdevice partition=D:
bcdedit /store D:\Boot\BCD /set {bootmgr} device partition=D:

6 – Re-create the boot files on the volume:

bcdboot C:\Windows /S D:

Format / mount an EC2 volume

So you added a new volume to a server and want to mount it quickly.

  • Use lsblk to spot the newly added volume:

[root@devserver ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 8G 0 disk /var/www/html
xvdf 202:80 0 16G 0 disk /var/lib/mysql
xvdg 202:96 0 16G 0 disk /home
xvdh 202:112 0 32G 0 disk

  • Use mkfs to create a filesystem on the volume:

[root@devserver ~]# mkfs -t ext4 /dev/xvdf
mke2fs 1.42.9 (28-Dec-2013)
/dev/xvdf is mounted; will not make a filesystem here!
[root@devserver ~]# mkfs -t ext4 /dev/xvdh
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
2097152 inodes, 8388608 blocks
419430 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2155872256
256 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

  • Create a folder and mount the newly created filesystem:

[root@devserver ~]# mkdir /misc
[root@devserver ~]# mount /dev/xvdh /misc
[root@devserver ~]# df -h |grep misc
Filesystem Size Used Avail Use% Mounted on
/dev/xvdh 32G 49M 30G 1% /misc

  • Edit /etc/fstab so the filesystem is mounted at boot:

/dev/xvdh /misc ext4 defaults,nofail 0 2
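One caveat: device names like /dev/xvdh aren’t guaranteed to stay stable across reboots on every EC2 instance type, so it’s safer to reference the filesystem by UUID in /etc/fstab. A rough sketch (the UUID below is a made-up placeholder, use whatever blkid prints for your volume):

```shell
# Look up the UUID of the new filesystem:
blkid /dev/xvdh

# Then use UUID= in /etc/fstab instead of the device name
# (placeholder UUID -- substitute the one blkid printed):
# UUID=aebf131c-6957-451e-8d34-ec978d9581ae  /misc  ext4  defaults,nofail  0  2

# Sanity-check the new fstab entry before rebooting:
mount -a
```

If `mount -a` comes back clean, the entry is good; the `nofail` option keeps the boot from hanging if the volume is ever detached.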

Done!

Amazon EC2 change hostname – OpenSUSE

I am just getting to know OpenSUSE, and it’s been driving me insane: the hostname is not persistent even if you change it via “hostnamectl” or put the new hostname in “/etc/hostname”.

In EC2, you have to open up YaST and untick the “Change Hostname via DHCP” box in the network card settings.
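If you’d rather skip the YaST UI, that tick box (to the best of my knowledge) maps to a sysconfig variable, so the same fix from a root shell would look roughly like this:

```shell
# Tell the DHCP client not to overwrite the hostname with the one
# handed out by DHCP -- this is the setting the YaST tick box toggles:
sed -i 's/^DHCLIENT_SET_HOSTNAME=.*/DHCLIENT_SET_HOSTNAME="no"/' /etc/sysconfig/network/dhcp

# Then set the hostname as usual, and it should survive a reboot
# ("devserver01" is just an example name):
hostnamectl set-hostname devserver01
```

Worth double-checking /etc/sysconfig/network/dhcp afterwards to confirm the sed actually matched the line.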