Walking in Rain – Not Singing

When some people see that it’s grey or rainy they don’t want to go for a walk. They don’t want to get rained on, and they don’t want the discomfort of being in a wet environment. I don’t mind the rain. I don’t mind wearing a raincoat, rain trousers and waterproof shoes, or making sure that I don’t need to fiddle with the phone when my hands are wet with rain.

I think that one reason I’m fine with walking in full rain gear on a rainy day is that I used to dive in a dry suit, and at the end of the day walking in the rain is not much different from dry suit diving. In both situations you’re wearing clothes to keep warm, with a layer of protective gear over the dry clothes.

One of the problems with walking on a sunny day after a day of rain is that shoes and trousers get extremely muddy, and that mud is just viscous enough to stick to my shoes. It’s the day after heavy rain that is awful: my shoes get caked, and when they do I sit on the stones and scrape the mud off, which takes several minutes.

My shoes get muddy because, whether I am on agricultural roads or main roads, drivers pass so fast and so close that I am forced to walk in the mud. During a drought this doesn’t matter, and it doesn’t matter when it’s raining either, because the rainwater soaks the shoes and the mud doesn’t stick. My shoes look as good as new, and that’s a great advantage of walking in heavy rain.

Rain also changes the landscape. What is a road in dry weather becomes a stream. What is usually an orchard with grass growing between the trees becomes a pond with trees, a mangrove of sorts. I might be pushing the imagination a little with this image.

There is one massive disadvantage to rainy days. My coat drains onto the tiled floor and I need to keep mopping up the water to stop stains from forming. I often have to mop the floor where the coat was hanging on the back of a chair. It hangs from a chair because if I let it drain on the coat rack it will soak the ISP device.

As I write this blog post I am struck by something. It’s the 30th of December and I’m talking about rain, with barely a thought for snow. Facebook reminded me that twelve years ago we had snow on this day. According to the Apple Weather app the normal temperature range for where I am is from -6°C to 4°C. It’s 9°C today. At least precipitation is 7.3 cm above average, so that’s one plus. If it were cold we would have a nice snowy landscape.

And Finally

When it’s almost always sunny people like me get fatigued with the sun, and rain becomes a rare treat. In the same way that we used to think that it’s a shame to stay indoors when it’s sunny, I now find it a shame not to go out when it’s raining. At least when it’s raining the landscape changes, the walking paths are quiet, and my shoes are spotless when I get home. Usually the morning frost makes the ground muddy and I need to clean my shoes before entering the building.

Maybe I’m ready for the Camino Primitivo. If it rained for the entire time I might still be comfortable, especially if I sleep in a building every night.

Installing Immich Alongside Photoprism

Last night I installed Immich on an HP laptop with ease. The issue I came up against is that laptops sleep and hibernate after a few minutes unless you are actively using them. This means you need to keep using them while files are being transferred if you do not want tasks to be interrupted. That’s why, this morning, I decided to try installing Immich on two different Raspberry Pi devices. The first is the one running Nextcloud.

Trial and Error Installation

I struggled with this install. First I had to download the right Docker packages, then unpack them, then check that Docker was up and running, and then try to get Immich to launch, at which point I encountered an error: “no matching manifest for linux/arm/v8 in the manifest list entries”. A quick search of the web told me that this combination of Linux version and ARM processor is not supported by that image, so I checked whether Jammy Jellyfish is. It is, and that’s when I tried the install on my other Raspberry Pi 4. This time it was a success, so I now have Photoprism and Immich running on one Pi and Nextcloud running on the second.
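
For anyone wanting to repeat this on a supported machine, the install boils down to a handful of commands. This is a sketch based on the Immich documentation at the time of writing, so check the current instructions before copying it; the file names and URLs may have moved since.

# Fetch the official compose file and environment template, then start the stack
mkdir immich-app && cd immich-app
wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
docker compose up -d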

Best for 360 Photos

The biggest difference I noticed with Immich is that it supports 360 photos. If you’re the type of person to take spherical photographs you will be happy with Immich. Another strength is that the app is free. It gives you the option of uploading in the foreground or the background, and on or off Wi-Fi.

With this experiment I am uploading the photos from a secondary iPhone that I retired from outdoor use because the battery is getting old. I am now uploading 19,000 images and videos from that device to Immich to see how it copes. On the laptop it struggled against the device’s desire to sleep. On the Pi it should run until it’s done.

Jobs Status

If you look at the administration page you will see job status for a few jobs. These are: Generate thumbnail, extract metadata, library, sidecar metadata, tag objects, smart search, recognise faces, transcode video, storage template migration and migration.

With this instance you see that each job has an “active” count and a “waiting” count, along with controls to clear or pause it.

Server Stats

The server stats aren’t as complete as Nextcloud’s. They tell you about total usage in terms of photos, videos and storage, as well as per-user info about photos, videos and the size of each photo gallery.

Image Tagging

Image tagging is off by default in settings. It uses microsoft/resnet-50 as the image classification model.

Video Transcoding

Immich gives quite a bit of control over video encoding. There are options for the constant rate factor, the preset that decides how quick or slow an encode is, the audio codec, and the video codec, whether H.264, HEVC or VP9. You can also select a target resolution from 480p to 4K.

It also gives you control over maximum bit rate, threads used, transcode policy, tone mapping, two-pass encoding, hardware acceleration and more.
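
Those settings map more or less onto what a plain FFmpeg command would look like. As an illustration only, with made-up file names, and bearing in mind that Immich assembles the real command internally, an HEVC encode with a constant rate factor, a preset and a 1080p cap would be something like this:

# Software HEVC encode: CRF 28, "slow" preset, scaled to 1080p, AAC audio
ffmpeg -i input.mov -c:v libx265 -crf 28 -preset slow -vf "scale=-2:1080" -c:a aac output.mp4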

The Phone App

The phone app has four tabs: photos, search, sharing, and library. Photos shows all the photos the app is allowed to see, with a cloud icon next to images that have been synced and a crossed-out cloud for images that are yet to be synced. The library tab shows the albums that are on the device.

The backup icon at the top lets you select which albums to include or exclude, and shows the total number of images, the number backed up and the remainder.

The uploading file info gives you the file name, the creation date and the ID info for Immich.

It’s in the backup settings that you can choose automatic foreground backup, for when you open the app and want to sync, automatic background backup if you want background options as the nuance to only upload when charging.

With both Photoprism and Immich I noticed that you have the option not to back up iCloud images. Both apps indicate when they are downloading images from iCloud to the phone, and when they are uploading from the phone to themselves.

Geeking Out

When files are uploaded from the phone or another device they are placed in the upload folder, and from there Immich reads their metadata and sorts them into folders by year and then by individual day, in year-month-day format. The images are then stored with their default names.

Export from Google Photos – Takeout

Yesterday I came across Immich-Go, a tool that can, among other things, import from zipped archives without extracting them first. This is great, considering the eight hundred or more zip files from Google Takeout containing all my photos. With this tool I can save a lot of time and effort, but it does come with two disclaimers:

- This is an early version, not yet extensively tested
- Keep a backup copy of your files for safety

Immich comes with the same warning: “The project is under very active development. Expect bugs and changes. Do not use it as the only way to store your photos and videos!”

Since I have Nextcloud, Photoprism and now Immich running I think I’m spreading the risk of all three failing at once.

Why use PhotoPrism, NextCloud and Immich?

iCloud, Picasa and other tools were great for storing photos on our laptops until we lost the ability to swap the default drive for a bigger one. Now that we have the same amount of storage on our mobile phones as on our laptops, we need external devices to back up images. If we use hard drives we need to plug them in before each sync, which takes time and limits mobility. NAS solutions are interesting, but if the NAS itself fails we are left with hard drives we can no longer access. The beauty of using Raspberry Pis, thin clients and barebone PCs is that we have redundancy.

If a Pi dies we just remove the SD card, and within seconds our photo gallery is restored by inserting it into a second device. We could start with a 256 GB SDXC card, move up to a 500 GB SD card, and then to a two terabyte card.

For more resiliency I would use a USB drive connected to the Pi to really increase storage capacity from half a terabyte up to 120 terabytes, in theory, with certain multidisk storage solutions.

With this setup if the Pi fails you just swap out the Pi.

And Finally

I reconfigured my Pis. Now I have one Pi with Photoprism and Immich, another with Home Assistant, a third with Nextcloud and a fourth running Pi-hole. By swapping cards between Pis I got to see whether installations transfer easily between devices. Home Assistant did, but Pi-hole wasn’t happy. Until this experiment I had just copied and pasted instructions. With Immich I did this too, but because I saw that I had to download specific packages to get Docker to work, I practised using curl and then replacing version numbers in an unpack command.
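
The curl-and-unpack step looked roughly like this. It’s a sketch of the static binary route for a 64-bit Pi, not an exact record of what I typed; the version number is the part you swap out for the current release.

# Download the static Docker binaries for ARM64 and unpack them
curl -fsSL https://download.docker.com/linux/static/stable/aarch64/docker-24.0.7.tgz -o docker.tgz
tar xzvf docker.tgz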

I encountered an error that meant one setup wouldn’t work, but instead of destroying a Pi configuration I already had, I added to another one and it worked. It’s easier to start from scratch and get things to work, especially if there is a ready-made Pi image, as there is for Photoprism. Photoprism and Immich run on Docker, so they both need the same base setup, which is why they work in parallel.

To conclude, with Docker you could install Nextcloud, Immich and Photoprism on the same Pi and have all three running on the same system. Each one uses a different port, so they do not clash. You could even add a splash page so that when people browse to the device they are given the choice of all three storage solutions.

The Long Walk and More Playing with Nextcloud

Two days ago I went for a longer walk than usual. I walked along roads rather than along the narrow agricultural roads I normally use. I wanted to avoid crowds and dog walkers. The thing about solitude is that it’s enjoyable when you are not reminded that you are alone.

Today I will also have to try to avoid people. Some might be really happy about the good weather, but not me. Good weather means the reminder that others are not lonely is brought home. I go on walks to listen to podcasts and get some exercise. That little walk I went on was so good for my health that I earned 18 PAI from that single walk.

On a walk like the one two days ago I combine two, three or even four walks together. These are the walks I started doing years ago, after my scooter was hit by a careless driver. She hit the back of my scooter so hard that we slid for several meters. I stayed upright but the scooter needed to be fixed. It was. It took time.

Several times I walked to the scooter place to ask “Is it ready yet?” and several times I got a “nope” answer. In the end that walk that I did to check on the scooter became my ordinary walk. It became one of the circuit walks that I would walk daily for several years. I still like the walks. If people walked with smaller dogs, and kept them on leashes, I’d be happier. I would also be happier if people didn’t drive on farm roads as if they were normal roads, because on foot this is dreadful, especially when people drive too fast, too close, several times a day.

More Experimenting with Nextcloud

This morning I experimented with Nextcloud, uploading photos from Google Takeout zips using both an Ubuntu machine and a Mac. The experiment was a partial success. Uploading individual pictures from individual folders is clumsy on Linux, and it was just as clumsy on macOS. Nextcloud can be used for photo management, but that is not what it is really designed for.

There are a few features missing. One is the ability to select more than one image at a time. I’d like to select a range of images with ease, rather than having to select sixty video files one by one before moving them.

I also experimented with moving images from one folder to another via the command line, and that’s chaotic as well. The issue is that Nextcloud detects and indexes the images, but if you then move them away it keeps the old entries in its database. I’d like to be able to refresh the database after making such a move.
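
That refresh does exist on the command line. Nextcloud’s occ tool can rescan the data directory and bring the database back in line with what is actually on disk. A minimal sketch, assuming a standard install where occ runs as the web server user; the path and user will differ on a Docker setup:

# Rescan every user's files and update the Nextcloud database
sudo -u www-data php /var/www/nextcloud/occ files:scan --all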

Mount a Prepared Drive

Imagine that you have a photo archive that is already well organised: everything sorted by year, month, date and subject. With this tutorial you can learn how to mount your external drive. Nextcloud then sees the images and their folder structure and populates either Memories or Photos, depending on which interface you prefer.
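
If you prefer the command line, the same thing can be done with the External Storage app and occ. A hedged sketch, assuming the drive is already mounted on the host at /mnt/photodrive (a made-up path) and that occ runs as www-data:

# Enable external storage support, then expose the mounted drive as a local folder
sudo -u www-data php occ app:enable files_external
sudo -u www-data php occ files_external:create /Archive local null::null -c datadir=/mnt/photodrive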

And Finally

After some trial and error I got Nextcloud to work as I expected, so I can use it to back up photos from my phone automatically. In this regard it’s a great iPhoto and Google Photos replacement. I think I would have Photoprism and Nextcloud running in tandem: Nextcloud taking care of backing up images from the phone, and Photoprism working as a DAM/MAM solution.

I will experiment and comment, when I have an opinion.

TimeTagger and Christmas

It’s the twenty fourth today, and people who have been good will soon get things, and those who haven’t will get a lump of coal. Given enough pressure that coal could become a diamond. At such a time it’s interesting to take stock of how productive, or unproductive the year has been. One tool with which to do this is timetagger. Timetagger is either a free app, if you set it up on a local machine, or a paying app if you use a cloud services version.

What makes this app interesting is that it allows you to keep track of tasks by title, and tags. Imagine that you’re writing a blog post. You can use writing, blogging and relevant terms to make finding all time spent working on a specific topic quick and easy.

Resume

An interesting feature of this app is that if you work on a task and then get interrupted, to make coffee or for a fire drill, you can pause the activity, then return and set a second start time. You can count the same task and keywords more than once, and if you have a task that you do on a regular basis this saves time.

Simple to Use

To start tracking you press play. It doesn’t matter whether you’re looking at yesterday or another day. It will automatically start “today”. You then press stop and you’re done.

At the end of the day, week, month or quarter you can see a report for specific tags in isolation, or relevant tags. It’s quick and easy to use.

So far I have tracked twenty hours but by this time next year I will have tracked several hundred. I find that by using this server based app I don’t kill the battery on my phone as fast as before. I start the timer and know precisely what I have been working on. I know by title and tags.

If you’re tracking for yourself then you can add as many tags as you want, but if you’re tracking for a client then avoid using more than one tag, unless requested to use more. I experimented with reports and you can select to see all “linux” time, but you also see secondary tags. This might look less professional if you use this app to track time spent professionally.

If you’re working as a freelancer you can use the name of a client as the tag and log the time you arrive, and the time you leave. Even if you don’t need to give a time sheet you can double check it to make sure that the hours are correct.

A To Do Variant

For a while I was playing with To Do apps. You give yourself tasks to complete on a daily, weekly or other basis and you simply tick that you did them. Time tracking takes the To Do list a step further: you’re actually tracking the time you spend on a task, daily, for weeks or months at a time. It’s important to account for the time you spent, not just what you did.

It’s a good habit to have, whether you’re being paid to track or not. If you get into this habit, in your free time it will be able to do this automatically when you can justify your hours. This app only allows you to track one activity at a time. You can start a second activity but it will pause the first. You can’t track “Time spent in the office”, and then “time spent on a sub task”, at the office.

Self Hosting vs Paid Solution

Self-hosting is free, but you need to configure the app and make it accessible when you’re away from the server. With the paid solution, for three francs per month, you have access from anywhere.

And Finally

With this app you can track the time you spend filming an event, and then you can track the time you spend ingesting footage, logging and more. You can track the time you spend editing and then the time you spend exporting the video files. You can then track time spent on modifications. The application is highly modular and you can start and stop timers with ease, and tag tasks.

The beauty of self-hosting on your local network is that the data is private. No one can use it, other than you, and those you hand reports to. Other solutions may use AI and other tools to quantify the data you give them.

With TimeTagger you could track “learning Linux” or “learning German”, and within that you can track “learning ‘irgendwie’, ‘irgendwo’, ‘irgendwas’” and more. It’s as modular as you want it to be.

Playing with Nextcloud Continued

Setting up a drive to be available via Samba is relatively simple. The drawback is that the files are only as organised as the person managing them. It can be quite chaotic unless you have someone trained as a media asset manager, archivist or similar to help order photos, videos and more. To some degree Nextcloud is just as disorganised, initially.

I have spent more than five minutes experimenting with Nextcloud, through several iterations, and I have finally set things up as I want them. I have Nextcloud running on an 8 GB Raspberry Pi. I chose this device because it’s the highest-spec Pi available at the moment without months of waiting. I could have used an HP EliteBook from several years ago, but I want a machine that can be on permanently.

The first sync is slow. Twenty hours ago I started with over 19,000 photographs and videos, and I still have 6,400 remaining. I activated Recognize, an AI solution that recognises music genre, objects, human movement in video, people, bodies of water and more. I also have a tool running that maps photographs to show where they were taken.

The beauty of Nextcloud for photo storage is that it lets you sync from your phone via the app, upload photos via the web interface, or, if you’re so inclined, transfer files on the back end. I have yet to test the latter. The idea is simple: if you have terabyte drives filled with photos that you have already organised by year, month, day, location and topic, then that file structure should be recognised by Nextcloud. The work you have done to organise media assets is already done. It’s just a matter of letting Nextcloud see them, and it will take care of mapping, and of recognising objects, monuments, images with people and more.

Facial Recognition

With time it recognises faces, and each face is given a number. You can then provide a name. I added my name to the collection of photos of me. It needs 120 faces before it starts to recognise individuals. As the model is self-hosted, this data stays local to your system.

Object and Landscape Recognition

I noticed that it recognises water, alpine landscapes, signs, boat, bridge, flower, furniture, historic, information and more. The Pi is still working hard to ingest the remaining 5600 photos but when that is done it will have plenty of time to recognise what is in pictures.

When you work as a media asset manager it takes time to tag images, and to add location data. If AI can provide some of this information automatically then it saves a lot of human time. Time that humans can spend adding images to the right folders.

Folder Structure

As a best practice you should always use folders named year-date-country-event-name-photographer-initials. If Nextcloud is up and running you can rely on Nextcloud, but if for some reason it crashes, or you can’t use a web browser or the app, you want to be able to find things by year, month, date, photographer and topic. Nextcloud should be an embellishment; good media asset management practices should be prioritised.
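
As a made-up example of that convention, a folder for a December walk might be created like this, with every part of the name purely illustrative:

# year, date, country, event name and photographer initials in one folder name
mkdir -p "Archive/2023/2023-12-28_CH_Lac-de-Joux_winter-walk_XY"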

To some degree the iOS app can help with this, as long as you set things up properly ahead of ingesting all the photographs. I haven’t seen how to set it up yet, but for now I’m still testing the proof of concept, for mobile phone image backup.

Using an Intel Machine

What I am doing with the Pi is experimenting with a Google Photos and iCloud replacement. What I plan to do with the Linux laptop is use the full power of a normal computer as a media asset manager, on a machine that can be turned off and on when not in use. The aim of Nextcloud on the laptop will be to provide me with a one terabyte NAS where I can experiment with what Nextcloud really has to offer.

Tensorflow WASM mode
WASM mode was activated automatically, because your machine does not support native TensorFlow operation:
Your server does not support AVX instructions
Your server does not have an x86 64-bit CPU

The Pi does not have the required x86 64-bit CPU; for that I need to use the Intel machine, which also has GPU acceleration that I cannot use on the Pi. The Pi is good because it can be on 24 hours a day as a quick backup tool for your phone, but an Intel NUC machine can be a Nextcloud server with the hardware to do things much faster.
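
You can check whether a machine clears those two hurdles before installing anything. A quick sketch using standard Linux tools:

# x86_64 means a 64-bit Intel/AMD CPU; aarch64 means a Pi or another ARM board
uname -m
# Lists the AVX flags if the CPU has them; prints nothing on a Pi
grep -o 'avx[0-9]*' /proc/cpuinfo | sort -u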

And Finally

iCloud and Google Photos are great for backing up when you’re out and about. They’re less great when you want to recover your photos, because if you remove photos from iCloud they are removed from everywhere, so it’s dangerous to clear photos to make space.

With Google Photos the issue is that their cloud backup solution is 34 CHF more than Infomaniak’s cloud storage solution. It is for this reason that I wanted a local backup of my Google Photos and iCloud photos. When I set up the Intel machine and ensure that all my photos are backed up from Google Photos, I will be able to purge Google Photos and downgrade my Google One plan.

My aim is not to eliminate Google Photos, but to reduce the plan I’m using. I have access to two terabytes but I never use them, and Infomaniak is cheaper, so I prefer to have a single plan. The Intel machine will be the main backup, and kDrive will be the offsite backup.

In the time it took to write this blog post I went from 6400 images to backup, down to 3800.

Watching a Roomba

In theory the entire reason for getting a Roomba is to let it do the vacuuming and forget about it. In reality I always feel the desire to watch Roombas as they do their work. It doesn’t make any sense, because the entire goal of a Roomba is to automate what many people find to be a chore. Vacuuming can be quite boring and quite frustrating, especially during the muddy seasons.

During that season I often need to vacuum every day, sometimes even several times a day. Mud likes to get on shoes, and brushing doesn’t get it off. Walking does. The g-forces we generate when our feet hit the ground loosen the mud, especially the next day, as we’re running down the stairs to exit the building.

Wet mud is well behaved: it sticks in place and it’s hard to remove. Dry mud tends to fall off in clumps. With the Roomba that dried mud is easy to clean up. The robot takes care of picking up every last bit of it.

The paradox of watching a Roomba is that you see bits of dirt and think “go and get that bit of mud, stop ignoring it, stop going back over the same spot six times”. Of course, by going over the same spot six times a Roomba picks up all the dander and fluff from our clothing, as well as human dust, hair and more. It really is methodical.

What looks clean to us, as humans who vacuum, is still dirty, and a Roomba will pass over and over until everything is gone. What may seem clean to the eye is just dirt so sparse that we don’t detect it. It’s when we check the Roomba’s filter, and see a big quantity of fluff in the bin, that we see how dirty the floor really was.

Yesterday, as I watched a brand new Roomba clean the floor, I expected to see dried mud, but instead I saw as much dander as if I had run a clothes drier through three or four loads without cleaning the filters. Don’t forget to clean the filters after every run; that dander is a fire risk.

I think Roombas are expensive, but when you compare their price to normal vacuum cleaners they are reasonably priced. You could pay the same for a human-guided vacuum cleaner, but you would not be as methodical about vacuuming. As humans we are guided by what we see: if it looks dusty we clean, and if it looks clean we don’t devote much time to it.

Do you have a room where the light shines in at a low angle in the morning or in the evening? I have noticed that if I hoover at sunrise I can see the dirt because of its shadow. I have thought at least two or three times that I should grab the opportunity to vacuum while the dust is contrasted with its surroundings. The Roomba is so fastidious that the time of day doesn’t matter. You can clearly see that a Roomba has been at work because the floor looks clean.

Roombas also have another advantage: they’re low to the ground, so they can go under furniture that is high enough. They can go under couches and settees, but they can also go under other furniture. They can vacuum and mop places that may never otherwise be mopped or vacuumed because those places are inaccessible.

Sometimes a Roomba cleans the floor really well, and in vacuuming well it also highlights where you need to pass with a mop and bucket to get the floor perfect. Some Roombas are equipped with a small reservoir into which you put water and, optionally, soap. The Roomba will then hoover with the front and mop with the back. It isn’t over-exuberant like a human: it leaves just enough water to clean, but not so much that your socks are soaked if you walk around as it mops.

And Finally

If we watch Roombas work it can be frustrating to see them throw dirt away from their path, or kick it to a place they cannot reach. It can also be amusing to get a Roomba to vacuum a room where tile cutting generated an enormous amount of dust. In such situations Roombas become artists. Roombas are expensive and slow, but they’re good at making a place look clean. Newer Roombas are quieter, so you can run them while you’re around without the noise being as disruptive.

A Walk by the Vallée De Joux

Every so often I get in a car to walk somewhere different. For two or three days we have been in the fog. Yesterday the fog was so thick that when I was driving I decided to slow down. I wanted to be able to stop in half the visible distance.

When the wind is still, and fog forms, there is another advantage, if you get above it. The water on lakes is flat. It’s so flat that the lake becomes a mirror. This is great for photography.

Looking at the rock face near the Lac de Joux

Le Pont when the Lac de Joux is calm

Frost that has built upwards

The Subtle Art of Trial and Error

For 40 CHF you can buy a Tapo or Xiaomi webcam and it is almost ready to be used. You take it out of the box, plug it in, add an SD card, download the app, pair it with the phone and let the phone connect it to Wi-Fi, and then it detects motion, takes video and photos and more, with ease. In such an environment it’s easy to forget about what we called “plug and pray” back in the day.

Back in the geeky old days of computing there was a lot of trial and error to get things to work. You would try one thing, see if it worked, then another, then a third, then a fourth, and eventually you would either find a solution or give up. One of the reasons I switched to Apple, rather than Linux, in 2003 was that I wanted to connect to the university’s Wi-Fi with ease. I expected that with a Linux machine I would struggle with Wi-Fi.

Apple is the leader in making everything work so flawlessly, as long as you do things the way they want, that trial and error feels like part of history. Apple controls everything to ensure that it works “flawlessly”. I put “flawlessly” in quotation marks because my phone crashes or hangs on almost every one of my walks. I rebooted it today and yesterday, while walking. If I take photos during a walk the phone acts up, freezes, and stops the podcast I’m listening to.

I’m being distracted. The point is that Apple, until recently, was known for producing reliable devices. Windows is also known for dumbing down their devices more and more. They try to make it so that users just click install, and the computer does the rest. Usually webcams, printers and more are plug and play.

With Linux you’re using a tinkerer’s OS so things can be simple, if you buy a generic webcam and plug it in. I tried to set an android phone up as a webcam and it worked within minutes. Integration with Home Assistant was smooth and efficient.

With a Raspberry Pi 3B and a Raspberry Pi Zero 2 W I have struggled for three or four hours trying to get the camera to work. You have to do A, then B, then C. You also need to wire the camera into the board the right way.

As you’re doing this from a CLI you’re not seeing whether the webcam is giving a picture or not. I tried to take pictures and it appeared to take them but when I tried to get motion to work with the camera to stream to a device with a web browser I just see nothing. I get an error message about the camera not being available.

I know that the right camera is detected because I see it in the output. I just haven’t taken the time to check whether the images generated correspond to what I expect. The subtle art of trial and error is about having a goal and tweaking and experimenting until you get the result you want.

The first error was that I wired the camera the wrong way. The second is that I don’t need to use the legacy camera option with this camera. The third is that I’m trying to get a Pi and camera module to work as a webcam before getting it to work within its own device.

I am so used to Windows, MacOS and dedicated hardware being so reliable that I forget about the trial and error part of computing that was once so familiar to those of us geeky enough to spend hours of our free time playing with computers. When computers just work it’s easy for everyone to be a geek, because turning it off and on again is easy. So is plugging in a USB device.

My aim is not to build a CinePi. My aim is to set up a webcam that I can see via Home Assistant. I can then add motion detection and more features once I achieve the initial goal of building a “Raspberry Pi webcam server in minutes”. The instructions are for the V2 module or a Logitech device, and I’m using the V3 module, so the instructions need to be updated. That’s why I’m struggling, and that’s why it’s interesting to do these projects.
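
For my own notes, the V3 module talks to the newer libcamera stack rather than the legacy one, so the basic checks look roughly like this. It’s a sketch of the Bullseye-era commands (newer Raspberry Pi OS releases rename them to rpicam-still and rpicam-vid), not a finished recipe:

# Confirm the camera module is detected
libcamera-hello --list-cameras
# Grab a single still to confirm the sensor is wired correctly
libcamera-still -o test.jpg
# Stream H.264 over TCP so another machine, or Home Assistant, can pick it up
libcamera-vid -t 0 --inline --listen -o tcp://0.0.0.0:8888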

I came across this challenge when following programming courses that were over a year old. Sometimes I had to look for the new way of doing things to get the code to behave as expected. Sometimes ChatGPT, Bard and Bing are helpful for finding the up-to-date way of doing things. It is also a case of Reading the Fabulous Manual (RTFM).

There are at least ten Home Assistant camera integrations to experiment with, so if the method I have been experimenting with doesn’t work I still have nine other solutions to try. The FFmpeg option looks interesting.

The Subtle Art of Trial and Error Summarised

I call it the subtle art of trial and error because the art lies in learning a methodology for coming up against an issue and developing a system for resolving it in an increasingly short amount of time. The point isn’t in knowing how to do things; it’s in knowing where to look for help. It used to be called Google-fu.

I could easily buy a webcam for 12-30 CHF now, but by experimenting with various integrations I invest my time in learning new skills, and that has value. If I get FFmpeg to work, then I can potentially build my own camera systems. Instead of reverting to film like some, I could go the other way and experiment with concepts similar to the CinePi.

From Timelogger to Timetagger

For at least two or three years I have been using Timelogger and I really liked the app; that’s why I kept using it for so long. There is one fatal flaw: it wants you to pay 2 CHF per month, 9 CHF per year, or 25 CHF for a lifetime of use. It makes sense to pay for more features. It doesn’t make sense that you need to pay to back up to your own iCloud account or to export your data.

Three Years of Data

I have been using the app since 2020. I tracked 60 hours in 2020, 601 hours in 2021, 780 hours in 2022 and 681 hours so far this year. The issue is that this data is now trapped on the phone I am currently using. I can release it for 24 CHF a year if I pay monthly, 9 CHF if I pay for a year, or 25 CHF if I pay for a lifetime.

Payment Snowball

In the early days of the App Store you could download an app and use it for free, but over time every app you download has started asking for 25 to 50 CHF per year. When you use four apps that’s 100 to 200 CHF per year; with eight apps it’s 200-400 CHF per year. The apps you use end up costing more than an iPhone SE, every year. It becomes absurd. I have used the Timelogger Plus app for over 2,122 hours, so I should pay, but I’m not going to pay just to back up my own data to my own cloud.

Paying to Backup Your Own Data on Your Own Things

That’s especially true when you have to pay to backup your own data to your own iCloud account or to your iphone’s storage. That’s where an app like Timetagger becomes interesting, especially for people ready to setup a Pi on their home network to use as a time tracker.

An iOS subscription tip

One technique I use when dealing with iOS subscriptions is to take the minimum option, go to Settings, Subscriptions, and cancel the subscription within minutes of paying for a service. This has two advantages. The first is that you don’t get conned into paying for more than you want to use; if you do decide to extend, you are asked, and have to take action to spend more money.
The second is that, in my experience, I would pay for a year, use the app for a week or less, and not be able to be reimbursed. It’s cheaper to pay for a year after you decide that you have a use for the app.

As a case in point, Timelogger Plus allows you to “back up” your data, but as a proprietary file rather than a useful CSV or other format. The result is that you’re paying for a backup that condemns you to keep using the app. I find it dishonest to provide apps that give no way of sliding from one to another.

TimeTagger

You have the option to pay 3 euros per month for someone else to take care of the hosting, or you can follow the Pi My Life Up instructions to set up your own instance. The one thing to note with this install is that you need to run it with a docker command each time you reboot the Pi. You could set the Pi to do this automatically, but I haven’t read the fabulous manual to see how to set that up yet.
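
The automatic part is mostly a matter of Docker’s restart policy. A hedged sketch, assuming the container already exists and is named timetagger (adjust the name to whatever your install guide uses):

# Tell Docker to bring this container back up after every reboot
docker update --restart unless-stopped timetagger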

With an app like TimeTagger you give a name to what you’re doing, but you also tag it with what you’re specifically focusing on. For what I am doing now I tagged it with blogging, writing and one or two more tags. I can then look at what I have spent time on over the last day, week, month, quarter or year, and select which tag I want to export as a spreadsheet, CSV or PDF document.

Flexible

One of the key features of interest with TimeTagger is the tags. With tags you can start tracking time spent for a specific client within seconds, without needing to create a folder for the type of activity and a specific sub-activity within it. You press play when you start, add the right tags, and when you stop the activity you press stop; everything is logged with simplicity.

Cheap Cheap Or Hosted by them

The cheapest option would be to set up a Raspberry Pi Zero 2 W; if it works you’d have paid 30 CHF and be happy. The hosted options are 4 euros per month, 36 euros per year, or 144 euros for life. They’re more expensive than the Timelogger app, but exporting data is easy and you’re not locked in. You can also set up your own server and potentially add billing and other functionality.

Portability

If you only want to log your activities when you’re at the office or at home then you can simply use the local network, but if you want to access time tracking remotely you can add your instance to Tailscale and use your VPN to connect when you’re away from the network hosting the instance.
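
Getting the Pi onto Tailscale is a short job. This is the documented quick-install route, sketched from memory, so double-check it against the Tailscale docs:

# Install the Tailscale client and join the Pi to your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up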

And Finally

Although I use the example of the Raspberry Pi Zero 2 W, you could just as easily set it up on your Windows, macOS or other machine using Docker. Your work machine can serve as the host. If you want to work on adding features you can visit the GitHub page and scroll to the bottom.

I have only tracked two hours so far but I like what I see.

I initially really liked Timelogger and Timelogger Plus, but as the project advanced it became more and more of a trap. I would have left sooner if I had found an alternative before now.

Walking in Heavy Rain

I knew that it would rain heavily yesterday (at the time you read this), so I considered running so that I would spend less time in the weather. The issue at this time of year is that if you run you need to do so before the sun sets, but you also want to wear lighter clothes for running to be easier.

Ready for Rain

For these reasons I went for a walk instead. I rolled up the trousers to avoid contact between the socks and trousers, and I wore waterproof trousers and a good rain coat. I walked for an hour and a half in the rain and crossed paths with almost no one. In this weather even the dog walkers stay home. That is what I want. I like when the paths are empty of people, when I can enjoy my solitary walks in solitude, without being reminded of my isolation.

I wore barefoot shoes for this walk. They get wet almost immediately as they are not waterproof. Within 200 meters my feet were drenched. That’s what I expected. That’s what I planned for. That’s why my trousers were rolled up. I didn’t want the humidity to creep up my socks, and then my trousers, and into my t-shirt and fleece.

It worked. I stayed dry.

The Inconvenience of Touch Screen Phones When Wet

There is one challenge in such rain. When you get to the end of one podcast you need to find an underpass, or a lending library, or some other shelter. You need to dry the phone screen and your hands enough to use the phone to select the next podcast. After that you can keep walking.

For many it would seem strange to walk in the rain, but that’s because they don’t walk the same path every single day, for weeks or months, or even years in a row. Changes in weather are like changes in crops, changes in seasons and more. When it rains I see a different landscape. I see where the land is low and where it is higher. I see where the water flows heavily, and where your feet stay dry.

Golden Hour

The greatest paradox is that despite the heavy rain and the uncomfortable conditions you can still notice golden hour. As I walked today I saw the light become more yellow, despite being under the rain. Despite the bad weather there was a discernible golden hour.

As I walked through one village I saw people burning wood in a barbecue. I don’t know whether it was to actually have a barbecue, or just to burn wood. If they were going to cook with it then it shows that the English are not the only people to barbecue in the rain.

As if that wasn’t surreal enough I also saw two children walking with someone dressed in a Santa costume. They all carried umbrellas to protect themselves from the rain. It’s not every day you see Santa walking in the rain with an umbrella.

In the end I wasn’t the only strange person out this afternoon, walking in the rain, as the heavy rain fell. If I was that type of person I would say that this walk was magical. Today was surreal, like Godard’s 1967 film, Weekend, where we see strange things as a car drives through a traffic jam.

And Finally

For many, rain is an excuse to stay in. I don’t see it that way. The familiar landscape becomes unfamiliar. The rivers that were barely a trickle are now full. The water that was transparent when the rain had just started has become brown. We can see rivers of muddy water flowing from the Gravière into the river. We can see where the road is low, where water flooded onto the road and left mud and other detritus. In another location I saw apples strewn about: the rain had made the apples float and carried them into nearby fields where other crops were growing.

Walking during the rain is unique, and worth doing, when equipped for the weather.