Listening to Podcasts While Walking

Recently I have been listening to plenty of Late Night Linux podcasts. I like them because they’re half an hour long, the adverts come halfway through the show, and in general I don’t feel that they’re padding things out to fill an hour and a half of podcast time.

Plenty of other podcasts last for an hour and a half or more, which is fine if you listen to one episode a week, but I don’t do that. I find a podcast that I like, listen to the most recent episodes, and then work through the back catalogue in chronological order. This takes a lot of time, but it also gives me an appreciation of how things have evolved.

I want to listen to them in chronological order because I feel that they provide me with a timeline of what changed, when, and how people reacted to those challenges, as well as how this made them feel.

There are occasions when I skip episodes because I don’t like the topic, especially with Linux Extra. In one case I got annoyed because they spoke about “we weren’t taught A in school” or “we weren’t taught B in school”. I was never taught how to use a computer. I learned by trial and error, after trial and error, after trial and error, and by experimenting; I only RTFM when I get stuck.

Online Learning

I have paid for Lynda.com, which then became LinkedIn Learning, and I have paid for courses on Udemy and Coursera, but those are courses that I chose to study, rather than formal tuition.

Self Taught Editing

I was never taught how to use Final Cut Pro, or Adobe Premiere, or Avid Media Composer. I learned by having Adobe Premiere on my computer at home. I learned to edit tape to tape on a DHR-1000 editing deck. I learned to edit with FCP because that’s what we had at uni. I don’t remember whether we had it in Weymouth and Harrow, or just Harrow. That was two decades ago.

With Avid I spent half a day trying to figure out how to do a simple edit. I eventually figured out that you mark in where you want a video to end and out where you want the next one to start, cut, and then your edit is done. It’s the type of editor where you do everything with keyboard shortcuts so it’s very fast, once you learn how to use it.

That’s why I hate the notion of “I wasn’t taught it at school so I don’t know how it works”. I experimented with Linux in the 90s because Windows was constantly getting infected with viruses, so I eventually switched to Linux, then back to Windows, then Linux, then Mac, and then back to Linux.

Sunset MacBook Pro

In the autumn of this year my Mac will no longer be supported by Apple. It is already not supported by the latest Final Cut Pro X; I can use an older version, but not the latest releases.

NixOS and PhotoPrism

Recently I heard plenty of mentions of NixOS in one podcast, so I installed and experimented with the OS. It took a while for me to be able to do anything, but eventually, yesterday, I got PhotoPrism to work. I installed MySQL/MariaDB and PhotoPrism, but I had to set up MariaDB to play nicely with PhotoPrism, and I had to do that part manually. Eventually an old HP laptop running NixOS was running PhotoPrism, and I was able to transfer photos from my mobile phone to the laptop. It felt extremely fast compared to the same experiment on a Pi 4 and Pi 5.
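
The manual part was mostly about giving PhotoPrism a database and a user it could log in with. Here is a minimal sketch of that step, assuming a database and user both called photoprism; the names and the password are placeholders rather than my exact values.

sudo mysql -e "CREATE DATABASE IF NOT EXISTS photoprism CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
sudo mysql -e "CREATE USER IF NOT EXISTS 'photoprism'@'localhost' IDENTIFIED BY 'change-me';"
sudo mysql -e "GRANT ALL PRIVILEGES ON photoprism.* TO 'photoprism'@'localhost'; FLUSH PRIVILEGES;"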

And Finally

With other podcasts I have found that the older episodes are more enjoyable. They’re shorter, they’re edited, and they have fewer adverts. As you get up to date with a podcast the adverts become longer and more intrusive. I stopped listening to a few podcasts just because I got so tired of listening to adverts. I could skip them, but when you’re walking you don’t want to be fiddling with your phone to skip the first minute or two.

At the moment I find that I learn a lot from a variety of podcasts about Linux, and if they’re half an hour long then that’s perfect. My walks are one and a half hours long, and shorter podcasts leave less time for off-topic chatter.

The Nicest Pi Setup Yet

There are several types of people. There are YouTubers who try and fail until they succeed, and then there are people like me, who also try and fail until they succeed. The difference is that the YouTuber probably gets millions of views and earns enough to waste hundreds of dollars per video on microtransactions, while people like me experiment with Pis because, once you know what you’re doing, it’s cheaper than getting a Synology box.

Over a few weeks I have experimented with installing Ubuntu and Raspberry Pi OS on several Pis, adding Docker containers, and installing straight onto the system. In the process I have iterated and iterated until I developed an effective workflow. Yesterday I spent an hour or two preparing an Ubuntu SD card, snap installing Nextcloud, then Docker, and then PhotoPrism, Immich, Home Assistant and maybe one or two other apps. I also set PhotoPrism to start automatically at boot. When I tried to do the same with Immich it failed, so in the end I settled for a shell script, with help from ChatGPT.

I kept a copy of the 48 commands it took to set up the system, but left out the trial and error part, for now. Ideally I should write a script that does this configuration automatically: install Ubuntu, boot it up, and then run the shell script to install what I want, so that a new system is quick and easy to set up.
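
As a sketch, the script I have in mind would look something like this. It assumes a fresh Ubuntu Server install and one docker-compose.yml per app; the directory layout is an illustration, not my exact 48 commands.

#!/usr/bin/env bash
# Rough outline of the post-install routine, not the exact commands I kept.
set -euo pipefail

sudo apt update && sudo apt full-upgrade -y
sudo snap install nextcloud
sudo apt install -y docker.io docker-compose
sudo usermod -aG docker "$USER"

# Each app keeps its own docker-compose.yml in ~/containers/<app>.
for app in photoprism immich homeassistant; do
  (cd "$HOME/containers/$app" && sudo docker-compose up -d)
done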

Centralised

Initially I had one Pi per service. This gave me the freedom to experiment with one service without destroying everything else. As I began to understand how the apps and services work, I was able to move them onto the same machine and have them run side by side. I go from needing several Pis with dedicated roles to a single Pi that can do it all, if I feel like centralising everything. Before I centralise everything, though, I want to be able to migrate the logs and data from the various apps to a central point.

I like that Home Assistant has weather data for several weeks. Part of the learning process is learning to move data between systems without losing their history.

And Finally

By installing a system, and then re-installing it over and over, I learn with each iteration, and with each iteration I see something that could be improved, so I improve it. Eventually I get a workflow that is fluid and does what I want with relative ease. I kept those 48 lines of commands so that when I do this again I can refer to my “notes” rather than several pages from two or three sites, and ChatGPT. That I managed to install Immich and Home Assistant counts as a success, because I had tried and failed to install both of them recently.

PhotoPrism self-boot

This morning I made PhotoPrism self-booting. I am not certain that this is the right term, so I will specify what I mean. PhotoPrism, when run via Docker, starts when we tell it to start, like any other app on our laptop; I wanted it to start by itself when the machine boots. This morning, after a little time spent with AI, I found the solution.

I used ChatGPT for this, but it should give you an idea of how to get Docker containers to start automatically at boot rather than manually. It’s by a little trial and error that I succeeded in doing what I wanted to do.

Boiled down, ChatGPT gave this overview:

To start a service: sudo systemctl start servicename.service
To stop a service: sudo systemctl stop servicename.service
To restart a service: sudo systemctl restart servicename.service
To enable a service to start on boot: sudo systemctl enable servicename.service
To disable a service from starting on boot: sudo systemctl disable servicename.service

In concrete terms you need to run “sudo nano /etc/systemd/system/photoprism.service” and add:

[Unit]
Description=PhotoPrism Docker Compose Service
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/docker-compose -f /path/to/your/docker-compose.yml up -d
ExecStop=/usr/local/bin/docker-compose -f /path/to/your/docker-compose.yml down

[Install]
WantedBy=default.target

In my case it was /usr/bin rather than /usr/local/bin. That’s a little thing to look out for. To double check, use “which docker-compose” and you will see what path to use for the ExecStart line.

If you are using an external volume, double check that the mount point is static. I rebooted twice and got three mount points, as well as an “original picture folder empty” message, because the photo drive was mounted in the wrong place. To fix this I used:

sudo blkid

to locate the uuid of the hard drive before personalising this line:

UUID=<uuid-from-blkid> /path/to/mountpoint ext4 defaults 0 2

I left the default behaviours. The 0 is the dump backup flag and the 2 is the fsck pass order, so the drive is checked after the root filesystem.

This is added via:

sudo nano /etc/fstab

Once you have ensured that the drive mount point will remain the same, boot after boot you can run the next lines.

Reload Systemd

sudo systemctl daemon-reload

Enable PhotoPrism to launch at boot

sudo systemctl enable photoprism.service

To start the service

sudo systemctl start photoprism.service

And finally you can run

sudo systemctl status photoprism.service
to check service status.

And Finally

When I set up a server for photoprism or other services I want it to boot automatically as soon as the computer is booted. I don’t want to have to start services manually. With this workflow I was able to setup PhotoPrism to boot automatically, as well as to make sure that the photo drive would mount to the right place each time I booted the system.

The NixOS Learning Curve

While walking and listening to podcasts I kept hearing about NixOS and how good it is for instantiating environments over and over again. What I didn’t hear about, so much, is that there is a steep learning curve to get started.

Installing the OS is easy. Download NixOS, flash it to a USB stick, reboot a computer onto the OS on that USB stick and begin installation of the OS.

Changing Desktop Environments

So far, so easy. I chose to install NixOS with XFCE out of curiosity, but once I booted into the environment I felt confused. I struggled to find the way to go from XFCE to GNOME as an environment. Eventually I managed: you mark the old desktop environment as false, add the new one as true, and then run the rebuild command. When you reboot you have the option of booting into the old build, or the new one.
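
As a sketch of what that looks like in practice; the option names are from memory, so treat them as an assumption rather than gospel.

sudo nano /etc/nixos/configuration.nix
# in configuration.nix, flip the desktop environment options:
#   services.xserver.desktopManager.xfce.enable = false;
#   services.xserver.desktopManager.gnome.enable = true;
sudo nixos-rebuild switch   # builds a new generation; the old one stays in the boot menu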

Learning How NixOS works

I tried to install Nextcloud and PhotoPrism, but that’s where I met resistance. The first challenge is understanding how the configuration.nix file is laid out. In the end I managed to get Nextcloud to boot, but failed to get MySQL to work properly.

I expected that by switching to NixOS I would find tutorials and it would be easy to transfer from Ubuntu/Debian to NixOS but it isn’t.

The Nix package manager expects familiarity with NixOS, which I don’t have. I spent time trying and failing until, after a few searches, I found some instructions on how to install Nextcloud on NixOS, but I’m only part of the way through the guide.

As I progress I’m running each step of the script, checking that there are no errors before moving on to the next part. I am now on the learning curve, but not very far along.

The Desired Goal

From what I have heard in podcasts, the advantage of NixOS is that it makes replicating a setup easy. If, and when, I get Nextcloud and PhotoPrism to work on an instance of NixOS, I can duplicate the system on as many devices as I have. Instead of installing Nextcloud, then PhotoPrism, CUPS and other apps by hand, I can prepare a configuration.nix file and duplicate that system, over and over.
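
In shell terms the replication step is almost embarrassingly short. This assumes the second machine already runs NixOS and is reachable over SSH; the hostname is a placeholder.

scp /etc/nixos/configuration.nix newbox:/etc/nixos/configuration.nix   # needs write access to /etc/nixos on the target
ssh newbox sudo nixos-rebuild switch                                   # rebuild the target from the copied configuration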

Easy to Recover

With some OSes if you make changes and break something it could take hours to recover. With NixOS you make changes but you don’t have to learn to live with them. You can revert to a version that you liked, or move on to a new version that you prefer. If you experiment and fail with something, you can easily iterate, until it works.

Now I have one build with XFCE, another with Gnome, and a third with VIM and Thunderbird installed. There are more versions where I am trying to get Nextcloud to work. Last night I stopped iterating through experiments because it was time to prepare dinner.

And Finally

Nix Learn exists for those who prefer to take a methodical approach to learning. It has three parts: “Install Nix”, “First Steps with Nix”, and “How Nix Works”. I thought documentation would be easier to find, and the OS more intuitive, so I tried to sprint before I had learned to stand up.

Due to the learning curve I know that I need to devote time to learning to use NixOS properly. I should know most of the concepts from playing with Node, Ruby on Rails and other solutions. It’s a matter of applying that understanding to this context.

I want to spend more time on this in the near future.

Shifting from Tutorials to Practical Experience

Since getting the Raspberry Pi devices I have shifted from following tutorial after tutorial to experimenting with setting up a number of server projects, from Home Assistant to Nextcloud to PhotoPrism and Immich, to name a few. In the process I have instantiated and then pulled down plenty of instances before finally deciding to keep certain instances up and running for longer, to see whether I can use them. I also experimented with CUPS and set up a printer/scanner for remote use.

Tools That Work

Nextcloud, PhotoPrism, Home Assistant and Pi-hole work well, so I can now use them without thinking about them too much. Both AdGuard and Immich were slightly less successful, but this might be due to fatigue, and running two things at once. The issue with AdGuard is that it seemed to block 192.168.1.1, which I need to be able to access. I also noticed that when I switched things around, Windows stopped being able to see local devices.

Immich Potential

I see a lot of potential in Immich but there is one flaw. If your library is too large it loses the ability to crawl all the files and upload them before timing out. I got to 14,000 images backed up, and the last 4,000 are currently in limbo, waiting for the app to recognise that they can be moved from the phone to the Immich server. With Nextcloud and PhotoPrism I was able to transfer all the files from the phone to both services, and now I can add the latest images with ease.

An Experimental Close Call

This morning I had a close call. I plugged a one terabyte drive into the Pi running PhotoPrism and Immich. My plan was to mount the external hard drive, move the files from the SD card library to the external drive library, and then change the .env file to read from the external hard drive rather than the local one. This didn’t work.

After some quick searches, and some practice with my AI skills, I had set things up as I thought I wanted them and started to copy from one location to the other. When I checked on progress a few minutes later I noticed that I was copying between two partitions rather than between two drives. I aborted the copy. No data was lost.
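
What I should have done before starting the copy is confirm that the source and destination really sit on different drives. Something like this, with example paths rather than my real ones:

lsblk -o NAME,SIZE,MOUNTPOINT        # is the external drive (e.g. sda1) mounted where I think it is?
findmnt /mnt/photos                  # which device does the destination actually live on?
sudo rsync -a --info=progress2 /home/pi/photoprism/originals/ /mnt/photos/originals/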

That’s why you experiment in a context where it doesn’t matter, and why you don’t assume that data is safe until it is backed up off site once, and locally twice.

In the worst case scenario I would have lost my Photoprism and Immich instances.

The Postponed Experiment

Before the setback described above, my idea was to see whether the Android app would allow me to back up photos from Google Photos to Immich. It can’t. It only sees what is on the phone locally, not online. For the second part of the experiment I wanted to copy images from Google Takeout to Immich using Immich Go.

If Immich detects duplicates it will mark them as duplicates and throw them away. The reason I want to experiment with the tool is that I have eight hundred one-gigabyte files to get through, and doing this manually would take more than five minutes. I also need at least a terabyte of free space on another drive, so that when the files are unzipped and added to Immich there is enough space for the task to complete successfully.

I want to experiment with the Uploads folder that I noticed in Immich, as well as Immich go.

The Concern

My concern with unzipped Google Photos files is that they will have lost EXIF and other data, so Immich will import them as being created tomorrow or the day after rather than the day they were actually created, and without the EXIF data for location, camera used, and more.

Immich Go is meant to read the image file and the matching JSON file to ensure that the right metadata is attached to each image. iPhoto and Google Photos tend to strip this data to make migrations more difficult.

And Finally

Tutorials are great for building a good foundation, but it’s by playing with practical examples that we learn to think about things in the right way. We consolidate what we learn through repetition, as I did with Nextcloud, or by finding the right packages for Docker and a specific OS, before changing OS and getting Docker to work on that instance. It’s about encountering an error and knowing how to fix it.

My next challenge will be to install Immich, Nextcloud and Photoprism on a single system.

Playing with Nextcloud Continued

Setting up a drive to be available via Samba is a relatively simple thing to do. The drawback is that the files are only as organised as the person managing them, and it can be quite chaotic unless you have someone trained as a media asset manager, archivist or similar to help order photos, videos and more. To some degree Nextcloud is just as disorganised, initially.

I have spent more than five minutes experimenting with Nextcloud, through several iterations, and I have finally set things up as I want them. I have Nextcloud running on an 8GB Raspberry Pi. I chose this device because it’s the highest spec Pi available at the moment without months of waiting. I could have used an HP EliteBook from several years ago, but I want a machine that can be on permanently.

The first sync is slow. Twenty hours ago I started with over 19,000 photographs and videos, and I still have 6,400 remaining. I activated Recognize, an AI solution that recognises music genre, objects, human movement in video, people, bodies of water and more. I also have a tool running that maps photographs to show where they were taken.

The beauty of Nextcloud for photo storage is that it allows you to sync from your phone via the app, but it also allows you to upload photos via a web interface or, if you’re so inclined, via file transfers on the back end. I have yet to test the latter. The idea is simple: if you have terabyte drives filled with photos that you have already organised by year, month, day, location, and topic, then that file structure should be recognised by Nextcloud. The work that you have done to organise media assets is already done. It’s just a matter of letting Nextcloud see the files, and it will take care of mapping, and recognising objects, monuments, images with people and more.
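
For the back end route, which I have yet to test, the sketch below assumes the snap install of Nextcloud; the paths and the USER placeholder are examples. The idea is to copy the already organised folders into the data directory and then ask Nextcloud to index them.

sudo cp -r /media/archive/2019 /var/snap/nextcloud/common/nextcloud/data/USER/files/Photos/
sudo nextcloud.occ files:scan --all   # re-index so the copied folders appear in the web interface; ownership may need adjusting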

Facial Recognition

With time it recognises faces, and each face is just given a number. You can then provide them with a name; I added my name to the collection of photos of me. It needs 120 faces before it starts to recognise individuals. As the model is self-hosted, this data stays local to your system.

Object and Landscape Recognition

I noticed that it recognises water, alpine landscapes, signs, boat, bridge, flower, furniture, historic, information and more. The Pi is still working hard to ingest the remaining 5,600 photos, but when that is done it will have plenty of time to recognise what is in the pictures.

When you work as a media asset manager it takes time to tag images, and to add location data. If AI can provide some of this information automatically then it saves a lot of human time. Time that humans can spend adding images to the right folders.

Folder Structure

As a best practice you should always use folders named year-date-country-event-name-photographer-initials. If Nextcloud is up and running you can rely on Nextcloud, but if for some reason Nextcloud crashes, or you can’t use a web browser or app, you want to be able to find things by year, month, date, photographer, and topic. Nextcloud should be an embellishment; good media asset management practices should be prioritised.

To some degree the iOS app can help with this, as long as you set things up properly ahead of ingesting all the photographs. I haven’t seen how to set it up yet, but for now I’m still testing the proof of concept, for mobile phone image backup.

Using an Intel Machine

What I am doing with the Pi is experimenting with a Google Photos and iCloud replacement. What I plan to do with the Linux laptop is use the full power of a normal computer as a media asset manager, on a machine that can be turned off and on when not in use. The aim of Nextcloud on the laptop will be to provide me with a one terabyte NAS where I can experiment with what Nextcloud really has to offer.

TensorFlow WASM Mode
WASM mode was activated automatically, because your machine does not support native TensorFlow operation:
Your server does not support AVX instructions
Your server does not have an x86 64-bit CPU

When you use the Pi it does not have the required x86 64-bit CPU. For that I need to use the Intel machine, which also has GPU acceleration that I cannot use on the Pi. The Pi is good because it can be on 24 hours a day as a quick backup tool for your phone, but an Intel NUC machine can be a Nextcloud server with the hardware required to do things much faster.

And Finally

iCloud and Google Photos are great for backing up when you’re out and about. They’re less great when you want to recover your photos. This is because if you remove photos from iCloud they are removed from everywhere so it’s dangerous to clear photos to make space.

With Google Photos the issue is that their cloud backup solution costs 34 CHF more than Infomaniak’s cloud storage solution. It is for this reason that I wanted to have a local backup of my Google Photos and iCloud photos. When I set up the Intel machine and ensure that all my photos are backed up from Google Photos, I will be able to purge Google Photos and downgrade my Google One plan.

My aim is not to eliminate Google Photos, but to reduce the plan I’m using. I have access to two terabytes but I never use it, and Infomaniak is cheaper, so I prefer to have a single plan. The Intel machine will be the main backup, and kDrive will be the offsite backup.

In the time it took to write this blog post I went from 6,400 images to back up, down to 3,800.

The Subtle Art of Trial and Error

For 40 CHF you can buy a Tapo or Xiaomi webcam and it is almost ready to use out of the box. You take it out of the box, plug it in, add an SD card, download the app, pair it with the phone, and let the phone connect it to wifi, and then it detects motion, takes video, photos and more, with ease. In such an environment it’s easy to forget about what we called “plug and pray” back in the day.

Back in the geeky old days of computing there was a lot of trial and error to get things to work. You would try one thing, and see if it worked, and then another, and then a third, and then a fourth, and eventually you would either find a solution, or give up. One of the reasons I switched to Apple, rather than Linux, in 2003, is that I wanted to be able to connect to the university’s wifi with ease. I expected that if I used a linux machine I would struggle with wifi.

Apple is the leader in making everything work so flawlessly, as long as you do things the way they want, that trial and error feels like part of history. Apple controls everything, to ensure that it works “flawlessly”. I put “flawlessly” in quotation marks because my phone crashes or hangs on almost every one of my walks. I rebooted it today and yesterday, while walking. If I take photos during a walk the phone acts up, freezes, and stops the podcast I’m listening to.

I’m getting distracted. The point is that Apple, until recently, was known for producing reliable devices. Windows is also known for dumbing down its devices more and more; they try to make it so that users just click install and the computer does the rest. Usually webcams, printers and more are plug and play.

With Linux you’re using a tinkerer’s OS so things can be simple, if you buy a generic webcam and plug it in. I tried to set an android phone up as a webcam and it worked within minutes. Integration with Home Assistant was smooth and efficient.

With a Raspberry Pi 3B and a Raspberry Pi Zero 2 W I have struggled for three or four hours trying to get the camera to work. You have to do A, and then you need to do B, and then you need to do C. You also need to wire the camera into the board the right way.

As you’re doing this from a CLI you’re not seeing whether the webcam is giving a picture or not. I tried to take pictures and it appeared to take them, but when I tried to get Motion to work with the camera, to stream to a device with a web browser, I see nothing. I get an error message about the camera not being available.

I know that the right camera is detected because I see it in the output. I just haven’t taken the time to see if the images generated correspond to what I expect them to be. The subtle art of trial and error is about having a goal and tweaking and experimenting until you get the result you want.

The first error is that I wired the camera the wrong way. The second error is that I don’t need to use the legacy camera option with this camera. The third error is that I’m trying to get a Pi and camera module to work as a webcam before I get it to work within its own device.
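
For the record, the quick sanity checks I can run over SSH look something like this, assuming a recent Raspberry Pi OS with the libcamera stack (the V3 module does not need the legacy camera option); the hostname is a placeholder.

libcamera-hello --list-cameras        # is the module detected and wired the right way?
libcamera-still -o test.jpg           # grab a single frame
scp pi@raspberrypi.local:test.jpg .   # pull the frame to a machine with a screen to check it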

I am so used to Windows, MacOS and dedicated hardware being so reliable that I forget about the trial and error part of computing that was once so familiar to those of us geeky enough to spend hours of our free time playing with computers. When computers just work it’s easy for everyone to be a geek, because turning it off and on again is easy. So is plugging in a USB device.

My aim is not to build a CinePi. My aim is to set up a webcam that I can see via Home Assistant. I can then add motion detection and more features once I achieve the initial goal of building a “Raspberry Pi webcam server in minutes”. The instructions are for the V2 module, or a Logitech device, and I’m using the V3 module, so the instructions need to be updated. That’s why I’m struggling, and that’s why it’s interesting to do these projects.

I came across this challenge when following programming courses that were over a year old. Sometimes I had to look for the new way of doing things to get the code to behave as it was expected to. Sometimes ChatGPT, Bard and Bing are helpful to find the up to date way of doing things. It is also a case of Reading the Fabulous Manual (RTFM).

There are at least ten Home Assistant camera integrations to experiment with, so if the method I have been experimenting with doesn’t work I still have nine other solutions to try. The FFmpeg option looks interesting.

The Subtle Art of Trial and Error Summarised

I call it the subtle art of trial and error because the art lies in learning a methodology for coming up against an issue and developing a system for resolving it in an increasingly short amount of time. The point isn’t knowing how to do things; it’s knowing where to look for help. It used to be called Google Fu.

I could easily buy a webcam for 12-30 CHF now, but by experimenting with various “integrations” I invest my time in learning new skills, and that has value. If I get FFmpeg to work, then I can potentially build my own camera systems. Instead of reverting to film like some, I could go the other way and experiment with concepts similar to the CinePi.

Running Two Pi Holes in Tandem

When walking and listening to 2.5 Admins I heard about the concept of going from treating servers as pets to treating them as cattle. They discussed the habit of giving servers functional names, rather than emotional ones. The examples were along the lines of DR-1 for Disaster Recovery one, prod-1 for production one, and related names.
The Pi-hole UI

Servers as Cattle, Not Pets

They spoke about the legacy habit of building a server up over a period of years versus the modern habit of spinning up instances and containers that can easily be replicated within minutes, independent of hardware. They spoke about the need to take notes, set up an environment once, then destroy it and set it up again, following the notes exactly. The concept is that you don’t set something up just once; you set it up over, and over, and over again, until you have the workflow perfected, and written down, for repeatability three years down the line.

Setting up the First Pi Hole

This week I set up a Pi-hole on a Raspberry Pi 3 and got it running, connected as the DNS server, to block traffic that I did not want. I’m doing this as an experiment, rather than out of a burning desire not to see ads. I do think that blocking Christmas ads would be a good thing, but that’s another topic.

Setting Up the Second Pi

I don’t remember how I set up the first Pi-hole, but I got it working quite easily. I then repeated the process with a Raspberry Pi Zero 2 W. I downloaded the latest Ubuntu Server or Lite LTS image, ran sudo apt update and sudo apt full-upgrade, rebooted the machine, and then ran the Pi-hole install script; within a certain amount of time the second Pi-hole was up and running.
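
The rough sequence, as far as I remember it, was something like this; the last line is the official Pi-hole installer, so inspect the script first if piping to bash worries you.

sudo apt update && sudo apt full-upgrade -y
sudo reboot
curl -sSL https://install.pi-hole.net | bash   # official installer; it walks you through upstream DNS and blocklist choices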

The router that I am using allows me to set up two DNS servers, so I have the Pi 3 as one DNS server and the Zero 2 W as the second. Traffic can now go through either one, with redundancy: if one device goes down the second one jumps in, and vice versa.

I think rebooting the router is what made both Pi-holes visible to the computers on the network. Now I can see the list of queries update as devices make requests and get answers.

My inspiration for this setup came from two guides:

The Raspberry Pi Tutorial

The Pi Hole Instructions

Regex and Tables

Pi-holes come with a default blocklist that you can update every so often. You can also block specific domains, either exactly or with regular expressions. The advantage of using regular expressions is that instead of having a line for every single URL you can have patterns that match many, so you go from needing three million lines to a few hundred, if you’re really efficient. The list that comes as standard had about 141,000 items to block by default.
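
As an illustration of the idea, and assuming Pi-hole v5’s command line tools, one pattern can replace many individual entries; the domain here is purely an example.

pihole --regex '(\.|^)doubleclick\.net$'   # block the domain and all of its subdomains
pihole -q doubleclick.net                  # query which lists and filters match a given domain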

Blocking Other Content

The stereotypical use for Pi-hole is to block ads, but you could block porn, gaming networks, video sharing sites such as TikTok, social media sites and more. You could block right wing media sites and more. The niche use that I would like is a way to block the hideous iOS ads when I’m playing certain games. If I could block iOS game ads I might go back to playing them.

Modular

There are two choices. You can use your Pi-hole for the whole network, by telling your ISP router to use the Pi-hole as its DNS server, or you can configure the DNS settings on your mobile phone or laptop individually. If you’re using Tailscale or another solution you can point your DNS settings at the Tailscale IP address and use your DNS server remotely via the VPN. This could be a way of saving bandwidth when you’re surfing the web on your data plan, especially when roaming.

And Finally

By using Pi-hole we block unwanted traffic at the DNS level, rather than the browser level. This means that we don’t need to run third party extensions in the browser, and we don’t get the “if you want to use this site, deactivate your ad blocker” messages. A few years ago I blocked Twitter and Facebook using the Mac’s onboard IP tables, but that’s fiddly and you need to remember the name of the file that needs to be changed. With Pi-hole, once it’s set up, you use an intuitive web interface. It can be used for ad blocking, but it can also be used to block games at work, or streaming services, and more.

I would consider adding a schedule feature. Imagine that from 1900-2300 you can access Netflix, YouTube and other streaming services, but that the connection is blocked outside of that window. Pi-hole could have parental controls to stop access to Steam, Nintendo and other gaming networks. It could also be used to block iOS apps, to stop children from spending money on pay to win games.

I don’t know for how long I will have my two Pi Holes running in tandem. Probably until I find a new project that requires a lower spec Pi device.

Sticking with the Old or Trying New Things

Yesterday I went for a half hour drive to do a favour, but when I arrived I found that people were deeply focused and did not want to be interrupted, so I went for a walk. I didn’t swap into the hiking shoes that were waiting patiently in the car; I wore my “recycled” shoes instead. I eventually regretted this because the frosty ground also had deep puddles of water that I had to walk through, and two or three times my feet got wet. While getting my feet wet I was also listening to a Linux podcast, episode 56 of Linux After Dark, where they were discussing whether people like to adopt a system and stick with it, or whether they like to experiment and try new things constantly.


I feel that way about watches at the moment. For plenty of people watches are like televisions: “I haven’t owned either for decades; my laptop and iPad are enough.” For years I was without a watch, and without a TV. As a student I never felt the need. It’s only because people had spare televisions that I ended up with one. I never bought one for myself.


Since I bought myself one or more Raspberry Pis I have been experimenting with various instances, to see how to set them up quickly, experiment with implementations, and more. In the process I am relearning skills that I had not used in years. One of these is flashing a USB key with a version of Linux and rebooting a PC from the USB key to run Linux. It worked so well that now I am fighting the desire to install Linux over Windows and have the Windows machine become a Linux machine.


Watches


Suunto, Casio, Apple and Garmin make watches, and each one tries to quantify the wearer, so it feels as though the wearer must wear all three or four brands to get complete data for all four platforms, but to do this makes us eccentric. The simplest workaround is to track with one device, and manually update all the others.


Whether you wear a Casio, Garmin, Apple Watch or Suunto is also about something else: user interface. The Garmin Instinct and Casio G-Shock watches look tough and solid, while the Apple Watch and Suunto 5 Peak look more fragile, more elegant. The other difference is that the Garmin watch is solar powered and can last for weeks in summer, whereas the Apple Watch and Suunto 5 Peak can last for a day, or several. The Casio watches can last for years, by default, because they use mobile phones to do the hard work; they just count steps and time.


Personal Technical Debt


I like the idea of personal technical debt. The concept exists in IT and programming: writing code is one thing, but updating it later on is a challenge. To give a simple example, if you write a static website by hand, then every page that navigates to other pages needs to be updated every time a new page is added. If you use Hugo or another static website generator this is handled with every build. My blog exists both on WordPress and as a static site. As a WordPress blog it’s slow and clunky to update because of all the bloat WordPress has added over the two decades that it has been around.


In contrast, with Hugo you write your page in markdown, add the categories and tags, run “hugo”, and fifteen seconds later the site is ready to publish via git-ftp. I spent months updating my static site to PHP before being sidetracked by Hugo and blogging.
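
The loop, roughly, assuming git-ftp is already configured for the host; the file name is an example.

hugo new posts/example-post.md    # write the post in markdown; the front matter holds categories and tags
hugo                              # build the static site into public/ in seconds
git add . && git commit -m "New post"
git ftp push                      # upload the changed files to the server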


The New Machine Routine


A while ago, if you started to use a new machine you would need to log into all your sites, across several browsers. When I did this once or twice a year it felt slow and uncomfortable. Now that I slide between web browsers fluidly, the time it takes to be up and running in Chrome, Firefox or another browser is a few minutes. This is because my personal technical debt is low, and because it has become routine to slide between browsers, whether different versions of Chrome, Firefox or something else.


With the Raspberry Pi Imager app you can instantiate a new server on an SD card within minutes, and it will be ready for you to log in via SSH whilst connecting to wifi with no user intervention. This is great because you can setup a headless system in a location with no monitor or keyboard.


Devops


When I started following courses on JavaScript, Ruby, Ruby On Rails and more I would get instructions on how to setup an environment and I wasn’t familiar with the process so I had to follow the instructions attentively. By trial and error, as well as repetition the process became relaxed.


As I become more comfortable with doing things from the command line, I find Docker walkthroughs more frustrating than helpful. This is because I want instructions on how to set up the environment fully, without the overhead of Docker running in the background. On a 2016 MacBook Pro, Docker slows down the computer.


And Finally


When we have only done something two or three times we need to follow the instructions when we get stuck. If we set the time on a Casio watch several times then it becomes habit. If we implement Linux instances on SD cards and experiment until we break things, then we know how to do things without breaking them in a production environment. If we change web browser once every few months or years it can take a while; if we do it several times a month it becomes second nature. That’s what experimenting is about.

Learning by Trial And Error

Every day or two I see people post about how the Fediverse should be simplified to welcome new people. It’s a shame. Signing up for a Fediverse server is easy; it’s the same process as for every other site. The biggest difference is that you’re signing up for a privately owned, crowd sourced community instance. The instances vary slightly from Mastodon to Firefish to ClassicPress to WordPress, but at their core they are the same. It’s just the community that changes, and even that can stay the same if you migrate from one instance to another.

Experimenting

A core aspect of the World Wide Web is that it is a platform where people can experiment with ideas, community tools and more. If you need a manual then you are stuck on websites like Twitter and Facebook, rather than exploring what the World Wide Web has to offer. You’re stuck in a silo.

It worries me that people want to simplify the Fediverse, to write guide books and tutorials. If you spend an hour or two experimenting you’ll understand how things work with ease. I don’t think we need manuals for everything in life, especially not social networks on the fediverse.

The Case for Manuals and Tutorials

There are moments when manuals and tutorials are important: when you’re trying to use something that is not as intuitive as you had hoped. WordPress is plug and play and requires no manual; it works like every other CMS, so it’s easy to pick up. Hugo is easy to pick up, but you might need to RTFM for one or two functionalities.

The moment you really need a manual or instructions is when you’re experimenting with Angular, Laravel, 11ty, Jekyll and other platforms. It’s when you don’t know what the options are that you need a manual. With the Fediverse all the answers are easy to find. You just need to spend a few minutes looking for them. I suspect that chatGPT, Bard and Bing AI can help you when you have questions.

With Hugo I was able to create pages and get a table of contents with relative ease. With 11ty I got a little stuck, so I felt the need for a tutorial. I know what I want to do, and I know how I expect it to work, but it’s not like other solutions. That’s why I decided that for 11ty I would use a learning resource, or more than one, to experiment with and learn how to use the tool. I am not against manuals and instructions. I like to read the fantastic manuals when I get stuck. I even like to ask AI for help in some cases, because I can ask a tailored question and get a tailored answer.

The Case Against Manuals

The more time I spend on Mastodon and the Fediverse, the more it feels like a waste of time. I love the concept, and the freedom we have on the web. What I don’t like is that people want to write manuals and instructions that repeat, on Mastodon, the same mistakes people made elsewhere, and if that is the case, then I have no reason to stick around. Every day people discuss how to use hashtags, bully people into writing alt text for images, and encourage thousands or tens of thousands of people to follow individual accounts. If that is what people want then there is little reason for using Mastodon rather than Twitter. Twitter has the community of friends amassed from 2006 to 2023. The Fediverse is a network of strangers.

I Want the Community to Teach Itself

I want communities to learn by participating, rather than reading. I don’t want people to read a manual about how to use the Fediverse; I want them to learn by trial and error, and by what feels good, rather than by doing what is expected. The more time I spend on the Fediverse, seeing people speak about better onboarding, the more it feels kitsch, and the more I want to close the tabs.

Experimenting with Eleventy

Since I was getting stuck with Eleventy I decided to follow Learn Eleventy From Scratch. As I said, it’s not that I am opposed to learning from guides, manuals, or documentation. I like to learn from these sources when trial and error doesn’t yield the results that I am looking for, or when trial and error takes more time than is reasonable.

What I want to do is simple. I want to have one directory for posts, and so far that works quite easily, but I also want another folder that acts as an archive. That is where I am getting stuck, and that is where a manual can streamline my learning process. It’s when I see that I have gaps in understanding that I use manuals to fill them.

And Finally

In my experience the people who write manuals on how to use social media websites are utilitarians, rather than humanists. They encourage people to develop a utilitarian approach to social media, and that’s what I fear, and object to.

When you look for books about blogging, none of them are about the philosophical or intellectual process. They are all about monetisation, which is fantastic if you want to be a spammer, but awful if you want to be a humanist. I blog to explore ideas and concepts, and to speak of cycling, walking and climbing experiences. If I followed the guide books my blog would be aimed at making a fortune, rather than developing and elaborating ideas.

I will always try something new without RTFMing (Reading the Fantastic Manual) until I get stuck, and then I follow a tutorial. What has changed is that now I am learning through manuals, instructions and more, rather than online tutorials. I am learning independently. I am a step further along now.