A collection of tips and tricks...

Confluence – Export PDF Template

I spent the better part of a Sunday afternoon playing around with the Export to PDF function in Confluence, and this is the finished stylesheet, inspired by various online sources…

body {
font-family: Arial, sans-serif;
}

*|h1 {
font-family: Arial, sans-serif;
font-size: 36pt !important;
}

*|h2 {
font-family: Arial, sans-serif;
font-size: 24pt !important;
}

*|h3 {
font-family: Arial, sans-serif;
font-size: 16pt !important;
}

*|h4 {
font-family: Arial, sans-serif;
}

.pagetitle h1 { 
font-family: Arial, sans-serif;
font-size: 60pt !important;
margin-left: 0px !important;
padding-top: 200px !important;
page-break-after: always;
}

@page {
size: 210mm 297mm;
margin: 15mm;
margin-top: 20mm;
margin-bottom: 15mm;
padding-top: 15mm;
font-family: Arial, sans-serif;

/* Border between the footer and the page content */
border-bottom: 1px solid black;

@top-right {
    background-image: url("wiki/download/attachments/123456/Logotype.png");
    background-repeat: no-repeat;
    background-size: 75%;
    background-position: bottom right;
}

/* Copyright */
@bottom-left {
    content: "Company INC.";
    font-family: Arial, sans-serif;
    font-size: 8pt;
    vertical-align: middle;
    text-align: left;
}

/* Page numbering */
@bottom-center {
    content: "Page " counter(page) " of " counter(pages);
    font-family: Arial, sans-serif;
    font-size: 8pt;
    vertical-align: middle;
    text-align: center;
}

/* Information class */
@bottom-right {
    content: "Information Class: Public";
    font-family: Arial, sans-serif;
    font-size: 8pt;
    vertical-align: middle;
    text-align: right;
}
}

/* Element rules must live outside @page; every page title starts on a fresh page. */
.pagetitle {
page-break-before: always;
}

Is Microsoft running out of capacity?

When we talk about the cloud and cloud services, most people imagine "unlimited resources at your disposal". And sure, in most situations the capacity of the cloud providers is sufficient to make you feel like the resources are unlimited (if you ignore the mechanisms already in place to stop you from consuming everything, like account limits…). But that might have to change…

According to TechSpot and The Information, Microsoft's cloud service Azure is currently operating at reduced capacity. This might mean that certain services are no longer available, or that you are limited in the number of services you are allowed to deploy.

The reason behind this, according to the news sites, is once again the global chip shortage.

https://www.theinformation.com/articles/microsoft-cloud-computing-system-suffering-from-global-shortage

https://www.techspot.com/news/95164-microsoft-data-centers-around-world-experiencing-capacity-resource.html

Turn your AKS (Azure Kubernetes Service) cluster on and off!

Turning off your AKS cluster will reduce cost, since all its nodes will be shut down.

Turning it off:

Stop-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup

And then turning it back on:

Start-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup

If you're an Azure n00b like me and you get "Resource Group not found", switch to the correct subscription using either its name or ID:

Select-AzSubscription -SubscriptionName 'Subscription Name'

or

Select-AzSubscription -SubscriptionId 'XXXX-XXXXX-XXXXXXX-XXXX'

That's it for today!

Migrating from Authy to 1Password

I've previously used LastPass and Authy, but have decided to start using 1Password instead, as their app is nicer and they have many features that are not available natively in the LastPass desktop app or browser extension, like 2FA.

But how to migrate without having to re-setup 2FA on every site?

After trying out some JavaScript browser hacks without much luck, I found a program written in Go that uses the device feature of Authy to get access to the TOTP secrets. Works like a charm, as they say!

Here’s a link to the program:

https://github.com/alexzorin/authy
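
For the curious: once the TOTP secrets are out of Authy, the 6-digit codes themselves are nothing magic. TOTP (RFC 6238) is just HMAC-SHA1 over a 30-second time counter. Here's a minimal sketch using only the Python standard library (the secret below is the RFC test key, not a real one):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if timestamp is None else timestamp) // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890", T = 59s -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, timestamp=59, digits=8))
```

Handy for verifying that a migrated secret still produces the same codes as the Authy app before you delete anything.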

Recursive unrar into each folder…

I'm not sure why this was so hard to find, but now it's working… I initially tried using "find" with -exec unrar, but it didn't seem to work too well (I couldn't find a good one-liner to separate the directory from the full filename inside the find -exec command; if you have one, let me know).

#!/bin/bash
# Find every .rar file recursively and extract it into its own directory.
find "$PWD" -type f -name '*.rar' -print0 |
while IFS= read -r -d '' file; do
    dir=$(dirname "$file")
    unrar e -o+ "$file" "$dir"/
done

(some parts of the script were inspired by other online sources)

That oneliner…

Once again, this is just so that I don’t forget 🙂

apt -y update && apt -y upgrade && apt -y dist-upgrade && apt -y autoremove && apt -y clean && apt -y purge && reboot

Because you just want to keep your system updated… I'm sure some of the commands are redundant, but hey, it works!

Manipulating “date added” in the Plex Media Server Database

So I noticed quite an annoying thing on my Plex server: some items were always marked as "recently added". As usual, I went online and searched for a solution.

Some posts suggested wiping the library and starting over, while others suggested that doing so doesn't fix the problem, as the "bug" seems to be that during an initial scan, Plex will sometimes store a date in the future as the "added date".

So I got curious and started looking at the XML of the items in question, and just as described in one of the posts, the year of the added date was 2037, a fair bit into the future.

I started looking for solutions to the problem, trying to find someone who had been able to fix it, when I stumbled upon a Reddit post with a SQL script.

Reddit Post by bauerknight

So I started by downloading the Plex SQLite3 database files from my Plex server (making sure to back up the originals…). I then downloaded a SQLite3 database tool to my computer and started exploring the database structure.

After looking around, it seemed like the four-year-old script from Reddit would do what I wanted, so I modified it to work on my library and ran it.

After that I uploaded the DB files to my Plex Media Server and started it back up. Problem solved!

SQL Oneliner for future use:

UPDATE metadata_items SET added_at = originally_available_at WHERE library_section_id = '5' AND added_at >= '2020-08-29 00:00:00';
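
As a sanity check before touching the real database, here's a minimal Python sketch of what that UPDATE does, run against a throwaway in-memory SQLite table with the same column names (the real metadata_items table has many more columns, and '5' plus the cutoff date are just the values for my library):

```python
import sqlite3

# Throwaway in-memory database mimicking the three relevant columns
# of Plex's metadata_items table.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE metadata_items (
    library_section_id TEXT,
    added_at TEXT,
    originally_available_at TEXT)""")

# One item with the bogus far-future added_at, one that is fine.
con.execute("INSERT INTO metadata_items VALUES ('5', '2037-01-01 00:00:00', '2019-03-10 00:00:00')")
con.execute("INSERT INTO metadata_items VALUES ('5', '2020-01-15 00:00:00', '2019-05-01 00:00:00')")

# The same fix as the one-liner above.
con.execute("""UPDATE metadata_items SET added_at = originally_available_at
               WHERE library_section_id = '5'
               AND added_at >= '2020-08-29 00:00:00'""")

# The 2037 date is gone; the sane one is untouched.
for (added,) in con.execute("SELECT added_at FROM metadata_items ORDER BY added_at"):
    print(added)
```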

KVM VM Image Crash

So, something bad happened! My Virtual Machine running on my NAS crashed.

And it looked like this…

2020-05-08T19:51:09.582896Z qemu-system-x86_64: -drive file=/share/Storage/VM/Windows 10 Enterprise/Windows 10 Enterprise.img,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback: qcow2: Image is corrupt; cannot be opened read/write

So I did what I normally do: I started googling, and found a bunch of old articles saying I had to load some "nbd" module into the kernel and then run "ddrescue". Most of the articles I found pointed to one very sad solution: scrap your VM and reinstall, because you're never getting your data back.

Anyway, all of that seemed pretty old (and sad), and I thought there must be a better way. And guess what, there was!

First I ran a command called "qemu-img" with the "check" command, which looked like this:

./qemu-img check /share/Storage/VM/Windows\ 10\ Enterprise/Windows\ 10\ Enterprise.img

When it finished, it printed a wall of text (a list of all the errors in the image), followed by a summary:

2047 errors were found on the image.
Data may be corrupted, or further writes to the image may corrupt it.
17593 leaked clusters were found on the image.
This means waste of disk space, but no harm to data.
802489/4096000 = 19.59% allocated, 4.83% fragmented, 0.00% compressed clusters
Image end offset: 53754200064

I liked the part about "no harm to data", so I ran the second command I had found:

./qemu-img check -r all /share/Storage/VM/Windows\ 10\ Enterprise/Windows\ 10\ Enterprise.img

And after that command finished:

The following inconsistencies were found and repaired:
17593 leaked clusters
1024 corruptions
Double checking the fixed image now…
No errors were found on the image.
802489/4096000 = 19.59% allocated, 4.83% fragmented, 0.00% compressed clusters
Image end offset: 53754331136

And that solved it!

Lesson learned: scheduled backups of your VM = a good thing.

Remove files with Find

Try to remember that the syntax to remove files recursively using find is:

find . -name "Thumbs.db" -exec rm '{}' \;
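
If find isn't handy (or you want something cross-platform), the same recursive delete can be sketched in Python with pathlib. remove_named is a made-up helper name, and Thumbs.db is just the example target:

```python
from pathlib import Path

def remove_named(root, name):
    """Recursively delete every file called `name` under `root`,
    like the find one-liner above. Returns how many were removed."""
    removed = 0
    for p in Path(root).rglob(name):
        if p.is_file():     # skip directories that happen to match
            p.unlink()
            removed += 1
    return removed

# e.g. remove_named(".", "Thumbs.db")
```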

HDMI + MacBook = No Network

I've always been one of those guys saying "why do you need a fancy HDMI cable? It's a digital signal; either it works or it doesn't, and the signal won't be affected by interference the same way an analog signal is".

While this is true, a digital connection like HDMI comes with other challenges, like versioning. It turns out (I was aware of this, but hadn't paid much attention to it in the past) that HDMI has several different versions that each introduce new features, and even though the cable and connector look the same on all products, the actual result of the connection might vary and/or bring different problems with it if there is a mismatch.

I recently experienced this with my new monitor. When I connected it via my HDMI dock adapter to my MacBook Pro, all networking went bananas. I tried turning the monitor off, thinking I had received a unit that was interfering with the Wi-Fi on my computer (bad ESD shielding or something was my first thought). But after some research and tinkering with different cables and input devices, I figured out that it was actually the HDMI cable causing the problem.

With a newer HDMI cable (and not one of the many old ones I've gotten for free with various purchases over the years), everything worked fine!

So if your network connection or other things start acting up after you connect something new with an HDMI cable, you might be experiencing the same thing!

What I really think could be improved here, as a consumer, is some kind of error message via a fallback to a lower version: "Ethernet over HDMI malfunctioning because the cable does not support HDMI 1.4".