🎃 Valiukas


Creating my own digital garden

Sep 13, 2022

EDIT: [2022-09-21 Wed]

This first blog post is a test to check whether the website works, and DOES NOT PROVIDE MEDICAL ADVICE.
Always seek the advice of your physician or other qualified health care provider with any questions you may have regarding a medical condition or treatment and before undertaking a new health care regimen, and never disregard professional medical advice or delay in seeking it because of something you have read on this website.

I am not a doctor (yet)!

I have borrowed the pipeline thecashewtrader uses to build their website, pointed it at my org-roam Zettelkasten, and wrote several helper functions in Bash and Python to add some functionality. For instance, add-backlinks.py is a rather inefficient way of adding backlinks to my notes: for each note, it finds all other notes that link to it.

from pathlib import Path
import sys

import emoji
from orgparse import load


def find_backlinks(org_file, directory):
    """Collect an org-mode backlink section for org_file."""
    root = load(org_file)
    note_id = root.get_property("ID")

    backlinks = []
    for f in Path(directory).glob('*.org'):
        if Path(f) == Path(org_file):
            continue  # a note should not list itself as a backlink
        f_org = load(f)
        # An ID link looks like [[id:<uuid>][description]], so searching
        # for the inner "[id:<uuid>]" finds it in the raw body text.
        if f"[id:{note_id}]" in f_org.get_body(format='raw'):
            id_ = f_org.get_property("ID")
            # The alternative accessor makes a distinction when the
            # keyword is not capitalized, so read the special comment
            title_ = f_org._special_comments['TITLE'][0]
            backlinks.append((id_, title_))

    if not backlinks:
        return []

    backlink_section = [emoji.emojize("* :link: Backlinks")]
    for bl_id, bl_title in backlinks:
        backlink_section.append(f"- [[id:{bl_id}][{bl_title}]]")
    return backlink_section


if __name__ == '__main__':
    file = sys.argv[1]
    directory = Path(file).parent
    print("\n".join(find_backlinks(file, directory)))

However, since thecashewtrader's original website-generation code does not work with ID links (links that point to a note's ID rather than to its file path), I also needed to translate these IDs into file paths and then into the names (slugs) used by weblorg.
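The translation step can be sketched with a small stdlib-only function. This is an illustrative version rather than my actual helper: it assumes a precomputed mapping from note IDs to file paths, and that a note's slug is simply its filename stem.

```python
import re
from pathlib import Path


def idlinks_to_slugs(text, id_to_path):
    """Rewrite org ID links as file links using slug filenames.

    id_to_path maps note IDs to .org file paths; the slug is assumed
    to be the file's stem (a hypothetical convention for this sketch).
    """
    def repl(match):
        note_id, desc = match.group(1), match.group(2)
        path = id_to_path.get(note_id)
        if path is None:
            return match.group(0)  # leave unknown IDs untouched
        slug = Path(path).stem
        return f"[[file:{slug}.org][{desc}]]"

    # Matches [[id:<uuid>][description]] links in the note body.
    return re.sub(r"\[\[id:([^\]]+)\]\[([^\]]+)\]\]", repl, text)
```

Unknown IDs are left alone rather than dropped, so a broken link stays visible in the generated page instead of silently disappearing.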

Everything is then structured nicely in a Makefile, which also finds the relevant Python virtual environment to ensure the right packages are installed:

update:
        rm _braindump/*
        rm _snippets/*
        bash update_braindump.sh

preprocess:
        rm braindump/*
        rm snippets/*
        cp _braindump/* braindump
        cp _snippets/* snippets
        bash add_backlinks.sh
        bash fix_links.sh

build:
        rm -rf public/*
        emacs --script publish.el

publish:
        rsync -a ./public USER@IP:caddy
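The virtual-environment step is not shown in the excerpt above; one way it could look (the variable names and the requirements.txt file are my assumptions, not the actual setup) is:

```makefile
VENV := .venv
PYTHON := $(VENV)/bin/python

$(PYTHON): requirements.txt
        python3 -m venv $(VENV)
        $(VENV)/bin/pip install -r requirements.txt
```

Targets such as preprocess could then depend on $(PYTHON) and invoke it instead of the system interpreter, so the right packages are always available.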

If I find time, I'll tidy up all the code and the website-generation process. For now, I have sown the first seed for the digital garden.