So, I have a confession to make.

Jeff, my former daily-driver laptop, is still alive today. The self-rebooting problem still exists, but apparently it now boots back up by itself. And since I configured GRUB to always boot into Arch, it's now just an Arch box I can SSH into. I even made a systemd service that runs a server for an internal project of mine, and it still works to this day.
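
For the curious, a service like that is just a small unit file; something along these lines would do the trick (the unit name, user, and paths here are made up for illustration, not the actual ones):

```ini
# /etc/systemd/system/internal-project.service -- hypothetical names and paths
[Unit]
Description=Server for an internal project
After=network-online.target
Wants=network-online.target

[Service]
# Replace with whatever command actually starts the server
ExecStart=/usr/bin/python /home/me/internal-project/server.py
Restart=on-failure
User=me

[Install]
# Start at boot, which matters on a machine that reboots itself
WantedBy=multi-user.target
```

Enable it once with `systemctl enable --now internal-project.service`, and it should come back up on every boot, self-inflicted or otherwise.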

I don't want to jinx myself and say that Jeff will always be like this. Maybe there will be a time when Jeff refuses to come back on by itself and I have to turn it back on manually. Maybe Jeff will refuse to turn on at all, for one reason or another. I don't know.

I'm still on the fence about retiring Jeff. Seeing its worsening physical condition, I think letting Jeff rest for now seems like a good idea. If the battery with its 0 mAh capacity isn't reason enough, the screen wasn't exactly in prime condition even before the battery died. But then again, it's currently my only always-on machine. Maybe letting Jeff keep working right up until its retirement isn't a bad idea. I also don't like seeing a machine that still works to some extent left unused, gathering dust (though that seems to be what Jeff is experiencing haha).

All of this is to say that I might implement or restart my projects that require an always-on machine on Jeff. However, ChanSpec and Corkboard aren't coming back. If #MineralFish:libera.chat ever becomes more active than it currently is, maybe I'll reconsider. On the other hand, my other projects, like Corkboard on wasteof, might happen (when exactly is not certain).

I probably won't start a Mastodon instance on Jeff. From what I learned when I tried the same thing on my phone, the streaming server (not even the web server) absolutely siphons the battery. It also takes a good chunk of my phone's 8 GB of RAM (Jeff has twice that). I'm pretty sure Jeff can handle it, but I'd have to test it first to see if it's a viable idea.

The same goes for @imgru. For some reason the NVIDIA driver just quit working on Jeff, and I can't seem to fix it. Besides, I don't think PyTorch still supports the CUDA version Jeff had (though I recall it once did), nor do I think the GPU has enough memory to fit the model. Also, I don't feel like having Jeff keep a large model loaded in its RAM continuously.
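
If I ever do revisit that, the first sanity check would be something like this (assuming PyTorch is installed; nothing here is specific to @imgru or its model):

```python
import torch

# Is there a usable GPU at all? This stays False if the NVIDIA driver is broken.
print("CUDA available:", torch.cuda.is_available())

# Which CUDA version this PyTorch build was compiled against (None for CPU-only builds).
print("PyTorch built for CUDA:", torch.version.cuda)

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # Rough idea of whether the model could fit in GPU memory.
    print("GPU:", props.name)
    print("GPU memory (GiB):", props.total_memory / 2**30)
```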
