Life is Pretty Good Right Now

I’m sitting in my basement with my wife and dog, having a few beers and watching NASCAR and tracking how our fantasy teams are going.

I’m looking forward to going on vacation in a week and can’t wait to get back to the race track this summer and drive fast again.

Life is good. I’m so thankful to the world that things have worked out the way they have. I feel like I’m living my dream and finding happiness all along the way.

I like my life. I love my friends and family. I feel content. I feel like this is the most I could hope for. I hope it lasts.

I especially hope that all of you are having a great weekend as well. Enjoy the small things. Today is a beautiful day. ❤️🙂

Top of the Fantasy Leaderboard!

We’ve been playing NASCAR fantasy games for what feels like decades now. It’s not often that all three of us are at the top of the standings like this!

It’s early in the season and I have no idea if it will stick, but this makes me very happy. There are 107 active players in this league!

Edited to show the final standings after yesterday’s race. We are the top 3!

Battlestar Galactica

I’ve just finished watching the first season of Battlestar Galactica.

For years I’ve heard about how great it was.

If I’m honest… I’m not that impressed. It’s not bad. It’s just not great and I was expecting greatness.

I’m starting the second season and it seems to be picking up the pace a bit so maybe it gets better as it goes. Maybe Gaius will be killed off. That would improve things.

Missing Cotton

I miss having that little hellhound around. He’s been back to visit a couple times so I can take him for walks. Now that my finger is mostly healed, I’m looking forward to the next round of “WHY WON’T YOU LOVE ME???”

The Virtual Machine That Crashes Hyper-V on AMD

Last weekend I upgraded my server. It was supposed to be easy. I talked about the network adapter problems here:

It was 4 am Sunday morning when that problem got solved. I thought it would be smooth sailing from there.

Now connected to the network, I imported my VMs and started up my web server. Everything was peachy. My websites were back online and the system was stable.

I started up my file server. Then I started my Windows 10 VM, which had been slow as molasses before the upgrade, and I was happy to find it working well.

I started up a Windows 7 VM, just to start pushing the new hardware a bit. Then this happened:

When it happened, the whole system froze and had to be hard-rebooted with the power switch.

That little display is supposed to make troubleshooting easy. You look up the code and it tells you what the problem is. Unless it’s an 8.

If it’s an 8, you start Googling it and find a bunch of different things that MIGHT be causing it but nothing conclusive.

The first one I found said it was insufficient CPU power.

Normally I wouldn’t think there was a power issue. It’s got a 650 watt power supply. Thing is, the power supply doesn’t have a 4-pin ATX CPU power connector. I found an old Molex to ATX adapter and then couldn’t find the pack of modular wires for the PSU.

I found one that fit but there wasn’t any branding on it to say if it was meant for the current PSU. I used it anyway and figured it would be fine.

When the error came up, that cable became the primary suspect. I wasn’t sure about the PSU cable or about how the power is supplied. Knowing that two wires were feeding four pins, I thought maybe I’d made a mistake there.

I used a multimeter to make sure that the PSU cable was correct and I was getting 12 volts instead of 5 or something else. That was fine. I still wasn’t sure if there might be a reason the ATX connector uses two wires instead of just one for 12 volts. Maybe the PSU limits the amps on that channel or something. I have no idea.

I looked into buying a new power supply. I was gutted to see how expensive they are. I expected around $80 for a good one but they’re double that now.

I did surgery instead. I chucked out the cable with the Molex connectors and took apart the adapter and one of the PCIe 6-pin connectors that wasn’t being used. After some cutting and taping and poking at the connectors with bent staples I ended up with a 4-pin ATX connector that was definitely getting enough power.

It didn’t work. Well… it DID work, but it didn’t solve the problem. The system would still boot up and then crash within a minute with the code 8.

More searching made the situation sound more and more dire. It seemed like something was broken.

Bad CPU? Bad motherboard? Bad memory?

Maybe just some bad BIOS settings?

I updated the BIOS to the latest version, which also wipes out any custom settings. Rebooted. Same error.

More bad memory?

I swapped the memory between the server and my desktop. Same error.

It was now around 6 or 7 am. This was supposed to be easy.

I didn’t want to do this anymore. I wanted to sleep. I remembered that it was working when it was only running my web server. I thought of ways to get back to that.

I started it up and manically kept refreshing the Hyper-V manager on my workstation until it responded, then immediately killed all the VMs except the web server.

It worked. It didn’t crash. I went to bed.

When I woke up later on Sunday, I thought about what might cause the problem and how I could narrow it down.

I started the rest of the VMs. It crashed.

I turned them off using the same method as earlier: frantically refreshing the manager until it responded and then killing them. With just the web server and file server running, it was stable.

What caused it? Too much memory usage? Too much CPU demand?

I changed the settings on the Windows 10 VM to give it all of the available memory on startup. It started. It ran fine. I opened up 5 different YouTube videos and played them all simultaneously. I could see the CPU usage going up.

It ran fine.

I started up the Windows 7 VM again. It crashed almost immediately. It makes no sense to me. How does a VM crash the hardware?

I did more experiments and everything pointed to the VM being the issue. Nothing else I tried caused a problem. The system was running well unless I started that one VM. Then it crashed within seconds.

I deleted the VM and created a new one using the existing virtual hard drive. It started up and worked fine. I let it run like that for days. It was flawless.

Yesterday I re-imported the original VM and started it up. It crashed.

With this new knowledge, I did more searching and found I’m not the first one to have this happen. There’s a detailed story here about someone in a similar situation, migrating a VM from an Intel-based server to an AMD one and then having random crashes:

So there you have it. I’ve got a VM that can crash my server’s hardware and throw a code on the motherboard. I have no idea how that’s possible, but it is.

Hopefully this will help someone else with this very specific problem in the future. The solution for me was to create a new VM using the existing VHD.
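If you hit the same thing, the whole fix can be done from PowerShell on the Hyper-V host. This is only a sketch of the idea, not my exact commands — the VM names, memory size, generation, and VHD path are placeholders you’d swap for your own. It’s written as a here-doc that just prints the commands, so nothing runs until you paste them in yourself:

```shell
# Sketch of the fix as Hyper-V PowerShell commands (names and paths are
# placeholders, not from my setup). Printed rather than executed so you can
# review everything before running it on the host.
cat <<'EOF'
Stop-VM -Name "Win7" -TurnOff
Remove-VM -Name "Win7" -Force    # removes the VM's configuration, keeps the VHD
New-VM -Name "Win7-rebuilt" -Generation 1 -MemoryStartupBytes 2GB -VHDPath "D:\VMs\Win7.vhdx"
Start-VM -Name "Win7-rebuilt"
EOF
```

The key detail is that Remove-VM deletes the VM’s configuration but leaves the virtual hard disk alone, which is what lets the rebuilt VM reuse it.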

Cotton’s Parting Gift

Cotton went home but drew blood one more time before he did.

Here’s the current situation:

He’ll be back tonight for a visit. I can’t wait!!

Also worth noting that I fell off a chair while installing a light fixture and nearly broke my thumb on the same hand on Saturday. My left hand is having a bad month.

Installing Network Drivers on Hyper-V with Intel I211-AT Network Adapter

This is nerd stuff. Stop reading and wait for my next post if that’s not immediately interesting to you.

I’m writing this as a how-to for anyone else that has this problem. I know I’m not the first, but I might be the first to get it resolved.

For reference, I’m installing Hyper-V Server 2019 on an Asus ROG Crosshair VII Hero (WiFi) motherboard with an AMD Ryzen 2700X processor.

This past weekend was an adventure. I’ve been running a web server for decades now. It’s a personal playground for me to mess around with new technologies when I find the time. It’s always run on nearly obsolete hardware. Whenever I upgrade my desktop, the scraps go to the server.

In the past few weeks, I’ve been trying to use a Windows 10 VM on it and it’s been slow as molasses. When I ran the PC Health Check app to see if I could upgrade it to Windows 11, it didn’t even offer suggestions. It just did this:

So for the first time ever, I decided to upgrade my desktop while it’s still sort of current and finally give my server some modern guts.

It was supposed to be simple. Swap the parts in, install a fresh copy of Hyper-V Server, fire up my VMs, and call it done. An hour. Maybe two, tops.

I wasn’t expecting the built-in network card to be unsupported, leaving the server unable to connect to the network. Connecting to the network is kind of important for a server. That was a problem.

The first thing I looked for was how to install a device driver. I found this, which was very helpful:

After that, I went looking for the driver to install and found I had a bigger problem. There wasn’t one available.

Figuring out what to do next, I came across this guy from a few years back with the same problem:

He didn’t seem to resolve it, but pointed me to this guy, who got Intel drivers working for a different unsupported network adapter:

It took a bit of trial and error, but I got the gist of what he’d done well enough to adapt it to my system.

I downloaded the Intel drivers from here:

Note that the I211-AT isn’t on the list there.

Going back to the first link, there’s a utility that lets you see devices similar to what the Device Manager shows on a regular Windows installation. You can get it here:

Using that, I found my ethernet adapter listed and could see what the system saw it as. It was this:

The important bit is the Device Instance ID.
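The vendor and device portion of that ID (PCI\VEN_8086&DEV_1539) is what to search the driver package for. Here’s a sketch of that search using POSIX grep — the NDIS65 folder name matches the package, but the sample file below is just a stand-in for its real contents, and on the server itself findstr does the same job:

```shell
# Sketch: find which .inf in the extracted Intel driver package covers the
# I211's base hardware ID. The sample file stands in for the real package.
mkdir -p NDIS65
printf '%s\n' '%E1539NC.DeviceDesc% = E1539.10.0.1, PCI\VEN_8086&DEV_1539' > NDIS65/e1r65x64.inf

# The actual search (on Windows, from inside the folder: findstr /s /m "DEV_1539" *.inf)
grep -rl 'VEN_8086&DEV_1539' NDIS65/
```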

I searched through the pile of stuff in the Intel driver package and found a very similar entry in e1r65x64.inf, in the [Intel.NTamd64.10.0.1] section.

%E1539NC.DeviceDesc% = E1539.10.0.1, PCI\VEN_8086&DEV_1539

Looking at what the guy with the NUC did, I added a line below it to match what mine showed up as:

%E1539NC.DeviceDesc% = E1539.10.0.1, PCI\VEN_8086&DEV_1539&SUBSYS_85F01043

I then copied both lines down to the next section as suggested. So at the bottom of the list for [Intel.NTamd64.10.0] I added these:

%E1539NC.DeviceDesc% = E1539.10.0.1, PCI\VEN_8086&DEV_1539
%E1539NC.DeviceDesc% = E1539.10.0.1, PCI\VEN_8086&DEV_1539&SUBSYS_85F01043

Here’s what it looks like. Note that I kept the 10.0.1 bit in the middle. On my first attempt I thought I was clever and removed it to match the rest of the lines in this section. It didn’t work.
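The same edit can be scripted. Below is a sketch in POSIX awk against a two-line sample of the section — the awk one-liner is mine, not from the original instructions, so diff the output against the untouched inf before installing. It only adds the SUBSYS copy after each matching base line; copying both lines into the [Intel.NTamd64.10.0] section is still the manual step described above.

```shell
# Sketch: after each base DEV_1539 line, emit a copy with the board-specific
# SUBSYS suffix taken from the Device Instance ID. The sample input stands in
# for the real e1r65x64.inf.
printf '%s\n' \
  '[Intel.NTamd64.10.0.1]' \
  '%E1539NC.DeviceDesc% = E1539.10.0.1, PCI\VEN_8086&DEV_1539' > sample.inf

awk '{ print; if ($0 ~ /&DEV_1539$/) print $0 "&SUBSYS_85F01043" }' sample.inf
```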

Then I followed the rest of the instructions as follows:

I copied the whole NDIS65 folder, with my modified inf file, to the server and ran the following commands:

bcdedit /set TESTSIGNING ON
bcdedit /set nointegritychecks ON

Then I rebooted the server.

shutdown /r /t 0

When it came back up, I installed the driver using this command:

pnputil -i -a C:\NDIS65\e1r65x64.inf

You’ll get a warning about it being potentially tampered with. Well… yeah. I just tampered with it.

Choose to install it anyway.

Hopefully it will be successful this time. It was for me.

Now run these commands to turn driver signature enforcement back on:

bcdedit /set TESTSIGNING OFF
bcdedit /set nointegritychecks OFF

Now reboot again, and it should be good to go!

Drop a comment if this helped you. I hope at least someone is saved a lot of frustration with this.