Running Backup Scripts on Boot and Wake with systemd

If you’re running automated backups on a laptop, traditional schedulers like cron or fcron might not be ideal. Your machine isn’t always on, and you might want to trigger backups specifically when the system starts up or wakes from sleep. Here’s how to use systemd to run your backup script at these key moments.

The systemd Service File

Create /etc/systemd/system/backup.service:

[Unit]
Description=Run backup script at boot and wake-up
After=multi-user.target suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target

[Service]
Type=oneshot
User=yourusername
Group=yourgroup
ExecStart=/path/to/backup-script.sh

[Install]
WantedBy=multi-user.target suspend.target hibernate.target hybrid-sleep.target suspend-then-hibernate.target

The User and Group directives ensure the script runs as your user instead of root. This is important for: ...
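As a minimal sketch of how a unit like this is typically activated (assuming the file is saved as /etc/systemd/system/backup.service as shown above), you would reload systemd and enable it so the targets listed under WantedBy pull it in:

# Make systemd aware of the new unit file
sudo systemctl daemon-reload
# Enable the unit so the targets in WantedBy start it
sudo systemctl enable backup.service
# Optionally trigger it once now and inspect the result
sudo systemctl start backup.service
systemctl status backup.service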

July 31, 2024 · 2 min · 301 words

GPU Comparison Guide: Running LLMs on RTX 4070, 3090, and 4090

As more developers and enthusiasts venture into running Large Language Models (LLMs) locally, one question keeps coming up: Which GPU should you choose? In this post, we’ll compare three popular NVIDIA options, the RTX 4070, 3090, and 4090, breaking down the technical jargon into practical terms.

Understanding the Key Terms

Before diving into the comparison, let’s decode what these specifications mean in real-world usage:

VRAM (Video RAM)

Think of VRAM as your GPU’s short-term memory: ...
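Since the comparison hinges on VRAM, here is one quick way to check how much memory a card actually has free before loading a model (a small sketch, assuming the NVIDIA driver and its bundled nvidia-smi tool are installed):

# List each GPU with its total, used, and free VRAM in MiB
nvidia-smi --query-gpu=name,memory.total,memory.used,memory.free --format=csv

In practice the free column is what matters, since model weights and the rest of your desktop session share the same pool.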

June 16, 2024 · 3 min · 554 words

Mistral vs Llama2

When it comes to large language models, Mistral and Llama2 are two notable entries in the field, each with its unique attributes:

Model Architecture

Mistral: Known for its innovative approach, Mistral uses a sparse mixture-of-experts architecture, which allows for more efficient computation by activating only a subset of the model’s parameters for any given input. This leads to faster inference times and potentially lower computational costs.

Llama2: Developed by Meta AI, Llama2 follows a more traditional transformer architecture but with significant optimizations for performance and efficiency. It focuses on scaling up the model size to improve capabilities. ...

November 9, 2023 · 2 min · 345 words