.plan

2024-11-04 (permalink)

* [Setting to prevent matchparen from overriding the cursor highlighting](https://github.com/vim/vim/pull/15984)

2024-10-20 (permalink)

TIL: Google Chrome sync does not natively support setting different default fonts for different OSes!

What if you want to use different default fonts on Mac OS and Windows with the same account?

You have two options:

1. Inject custom JavaScript / CSS into each and every page, and override typography rules to match your taste
2. Switch from _full_ to _custom_ sync, and opt out of "Settings" sync (that's what I ended up doing)

I initially toyed with (1) for a bit, but I soon realized things would get unwieldy pretty quickly; on top of that, I would have needed a way to sync that script across my browsers, so I figured I might as well get the job done by opting for (2) instead.

For posterity, this is the [Violentmonkey](https://violentmonkey.github.io/) user-script that Claude Sonnet helped me put together; it kinda works, except:

- It fails to override CSS rules like `font-family: serif`; the page will render using the browser-defined serif font, not my override of choice
- It fails to override CSS rules like `font-family: initial`; again, the page will render using the browser fonts, not the custom overrides

Anyways, there you go...
// ==UserScript==
// @name         Enhanced OS-Specific Font Fallbacks
// @namespace    http://tampermonkey.net/
// @version      1.9
// @description  Apply OS-specific font fallbacks with improved OS detection and HTML element consideration
// @match        *://*/*
// @grant        GM_addStyle
// ==/UserScript==

(function() {
    'use strict';

    // Function to determine the OS
    function getOS() {
        if (navigator.userAgentData) {
            // Modern approach using userAgentData
            const platform = navigator.userAgentData.platform.toLowerCase();
            if (platform.includes('mac')) return 'MacOS';
            if (platform.includes('win')) return 'Windows';
        } else {
            // Fallback to platform for older browsers
            const platform = navigator.platform.toLowerCase();
            if (platform.includes('mac')) return 'MacOS';
            if (platform.includes('win')) return 'Windows';
        }
        return 'Unknown';
    }

    // Get the current OS
    const currentOS = getOS();

    // Define OS-specific font stacks
    const osFonts = {
        MacOS: {
            serif: 'Georgia, "Times New Roman", Times, serif',
            sansSerif: '"SF Pro Text", "Helvetica Neue", -apple-system, BlinkMacSystemFont, Roboto, Arial, sans-serif',
            monospace: '"SF Mono", Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace'
        },
        Windows: {
            serif: '"Cambria", "Times New Roman", Times, serif',
            sansSerif: '"Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif',
            monospace: 'Consolas, "Liberation Mono", "Courier New", monospace'
        },
        Unknown: {
            serif: 'Georgia, "Times New Roman", Times, serif',
            sansSerif: '-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif',
            monospace: 'Menlo, Consolas, Monaco, "Liberation Mono", "Courier New", monospace'
        }
    };

    const baseCSS = `
    :root {
      --os-serif: ${osFonts[currentOS].serif};
      --os-sans-serif: ${osFonts[currentOS].sansSerif};
      --os-monospace: ${osFonts[currentOS].monospace};
    }`;

    GM_addStyle(baseCSS);

    // Function to check if an element has a custom font-family set
    function hasCustomFontFamily(element) {
        // Check inline style
        if (element.style.fontFamily) {
            return true;
        }

        // Check stylesheets
        const styles = document.styleSheets;
        for (let i = 0; i < styles.length; i++) {
            let rules;
            try {
                rules = styles[i].cssRules || styles[i].rules;
            } catch (e) {
                continue; // Skip cross-origin stylesheets
            }
            for (let j = 0; j < rules.length; j++) {
                let rule = rules[j];
                if (rule.style && rule.style.fontFamily) {
                    if (element.matches(rule.selectorText)) {
                        return true;
                    }
                }
            }
        }

        return false;
    }

    // Function to test default fonts and apply fallbacks if needed
    function testAndApplyFallbacks() {
        const testDiv = document.createElement('div');
        testDiv.style.visibility = 'hidden';
        testDiv.style.position = 'absolute';
        testDiv.style.top = '-9999px';
        document.body.appendChild(testDiv);

        let cssRules = [];

        // Check html and body font
        if (!hasCustomFontFamily(document.documentElement) && !hasCustomFontFamily(document.body)) {
            cssRules.push(`html, body { font-family: var(--os-serif); }`);
        }

        // Check monospace elements
        const monoElements = ['pre', 'code', 'kbd', 'samp'];
        monoElements.forEach(tag => {
            const el = document.createElement(tag);
            testDiv.appendChild(el);
            if (!hasCustomFontFamily(el)) {
                cssRules.push(`${tag} { font-family: var(--os-monospace); }`);
            }
            testDiv.removeChild(el);
        });

        document.body.removeChild(testDiv);

        if (cssRules.length > 0) {
            GM_addStyle(cssRules.join('\n'));
        }
    }

    // Wait for the page to load before testing fonts and applying fallbacks
    if (document.readyState === 'loading') {
        document.addEventListener('DOMContentLoaded', testAndApplyFallbacks);
    } else {
        testAndApplyFallbacks();
    }
})();

2024-10-19 (permalink)

TIL: [Kamal](https://kamal-deploy.org/) "Assets Bridging", or "Old assets -> New container / New assets -> Old container"

- When deploying a new version of the app, there will be a small window of time in which both the old and the new version of the app will be running at the same time, and the app proxy might forward requests to either one, randomly
- What if the old version receives a request for a new asset or vice-versa?  Clients will see a bunch of 404s!
- The solution: copy the old assets and the new assets into a local directory, and then volume-mount that directory into the app container
docker run --name app-web-assets ... \
    --entrypoint sleep <app-image> ...

docker cp -L app-web-assets:/rails/public/assets/. <folder>

docker stop -t 1 app-web-assets

docker run ... \
    --volume <combined asset folder>:/rails/public/assets ...
References:

- [Donal McBreen - Kamal 2.0: Deploy web apps anywhere - Rails World 2024](https://www.youtube.com/watch?v=rne8OoabiC8)
- [Kamal > Configuration > Asset path](https://kamal-deploy.org/docs/configuration/overview/#asset-path)

2024-07-24 (permalink)

* [Unyelding](https://blog.glyph.im/2014/02/unyielding.html)

  > ## The Problem with Threads
  >
  > ### Key Points:
  > - **Threads** (specifically shared-state multithreading) complicate local reasoning, which is essential for correct software development.
  > - **Local reasoning** allows understanding a routine's behavior by examining it in isolation, but threads require considering every possible execution order, increasing complexity exponentially.
  >
  > ### The Issue with Local Reasoning:
  > - Nonconcurrent systems allow imagining state changes straightforwardly.
  > - Concurrent systems, especially with threads, require understanding interactions with all possible callers and shared states.
  >
  > ### Arguments Against Threads:
  > - **Combinatorial Complexity**: Threads create an exponential increase in complexity due to potential interleaving of instructions.
  > - **Error-prone**: It’s challenging to track where shared state may lead to bugs.
  > - **Cognitive Load**: Human minds can handle limited concurrent interactions, making thread management even harder.
  >
  > ### Common Counterarguments:
  > - **Mutexes**: While mutexes can help, they introduce their own complexities and don't eliminate the fundamental issues.
  > - **Practicality**: Despite the theoretical problems, threads can work in practice due to other simplifications in real-world scenarios.
  >
  > ## Alternatives to Threads
  >
  > ### Suggested Approaches:
  > 1. **Straight Callbacks**: Twisted’s IProtocol, JavaScript’s on<foo> idiom.
  > 2. **Managed Callbacks/Futures**: Twisted’s Deferred, JavaScript’s Promises/A[+].
  > 3. **Explicit Coroutines**: Twisted’s @inlineCallbacks, C#’s async/await.
  > 4. **Implicit Coroutines (Not Recommended)**: Green threads, eventlet, gevent.
  >
  > ### Advantages of Explicit Coroutines:
  > - **Visibility**: Explicit yield points in the code make concurrency visible and manageable.
  > - **Local Reasoning**: Easier to reason about concurrency as potential yield points are clear.
  > - **Easier Debugging**: Knowing where a routine might yield helps in understanding and debugging the code.
  >
  > ## Real-World Examples
  >
  > ### Twisted Reality:
  > - A bug involving a mechanical cockroach in a multithreaded game highlighted the difficulties of tracking concurrency issues.
  > - Rewriting the system using an event-driven approach (Twisted) eliminated the bug, demonstrating the practical benefits.
  >
  > ### Theoretical Example:
  > - **Transfer Function**: Demonstrates the potential concurrency issues in a simple payment transfer routine.
  > - **Explicit Concurrency Management**: Using `yield from` in coroutines helps in managing concurrency and makes potential issues visible.
  >
  > ## Conclusion
  >
  > ### Main Takeaways:
  > - **Threads vs. Microthreads**: Both suffer from similar problems regarding concurrency, making them challenging to manage.
  > - **Explicit Concurrency**: Emphasizes the importance of visible and manageable concurrency through explicit coroutines.
  > - **No Shortcuts**: Achieving correct concurrent behavior is inherently difficult and requires careful management, not just better threading models.
  >
  > ### Final Thought:
  > - **Supple Code**: Like the reed bending in the storm, programs should yield gracefully under concurrency pressures to avoid breaking like the rigid oak.

* [Ruby methods are colorless](https://jpcamara.com/2024/07/15/ruby-methods-are.html)

  > ## Overview
  > This blog post, part of a series on concurrency, parallelism, and asynchronous programming in Ruby, discusses the concept of "colorless" methods in Ruby. This means that in Ruby, there is no syntactic difference between synchronous and asynchronous methods, simplifying concurrent programming.
  >
  > ## Function Colors
  > - **Function colors**: Refers to distinguishing between synchronous (blue) and asynchronous (red) functions, a concept highlighted by Bob Nystrom in the context of JavaScript.
  > - **JavaScript's evolution**: Initially relied on callbacks (callback hell), then moved to Promises, and finally to async/await, each step making asynchronous code somewhat easier but still maintaining a distinction between sync and async functions.
  >
  > ## Ruby's Colorlessness
  > - **Unified syntax**: In Ruby, methods for blocking operations (like file reads, HTTP requests, etc.) look identical to their non-blocking counterparts.
  > - **Threads and Fibers**: Ruby achieves colorlessness by using Threads and Fibers, which allow independent call stacks that can be switched between, enabling concurrent execution without needing to distinguish between sync and async code.
  >
  > ## Concurrency Layers in Ruby
  > - **Nesting doll model**: Ruby’s concurrency model is layered like nesting dolls, with each layer providing different levels of concurrency:
  >   - **Processes**: Independent instances of Ruby, running in isolated memory spaces.
  >   - **Ractors**: Ruby’s actors that can run in parallel, designed for message-passing concurrency.
  >   - **Threads**: OS-level threads sharing the same memory space, capable of parallelizing blocking operations.
  >   - **Fibers**: Lightweight, cooperative concurrency within threads, enhanced by the FiberScheduler in Ruby 3.
  >
  > ## Concurrent, Colorless Programming
  > - **Examples**: Demonstrates concurrent HTTP requests using Threads and Fibers, showing how Ruby can handle blocking operations in parallel without changing the method calls.
  > - **Concurrency vs. Parallelism**: Concurrency is about managing multiple tasks at once, while parallelism is about executing tasks simultaneously. Ruby threads and fibers achieve concurrency, allowing tasks to be interleaved efficiently.
  >
  > ## Practical Implications
  > - **Web and job servers**: Ruby’s colorless concurrency model powers threaded and fibered web servers (like Puma and Falcon) and job servers (like Sidekiq and GoodJob), enabling efficient handling of blocking operations.
  > - **Historical context**: Ruby had its callback phase (via EventMachine) similar to early JavaScript, but moved towards a more seamless concurrency model with threads and fibers.
  >
  > ## Conclusion
  > The post concludes by noting that the understanding of Ruby’s concurrency model will deepen in subsequent parts of the series, starting with an in-depth look at Threads.

2024-06-04 (permalink)

* quickutil: Add a bunch of macros to _undefine_ stuff https://github.com/iamFIREcracker/quickutil/commit/0fb331356c5df6fae414d38b446aeab115699390

2024-06-03 (permalink)

* quickutil: LOOPING: generate only required labels https://github.com/iamFIREcracker/quickutil/commit/0f8c7abc18d1f1f85619dbe15ac99648981c0f27 (at one point I should give in and use ITERATE instead)

2024-04-28 (permalink)

* quickutil: Poor man's Clojure-style threading macro https://github.com/iamFIREcracker/quickutil/commit/8205f8f04da6e218429cd08bc220b3280fe85aa6

2022-07-25 (permalink)

Prevent layout shifts when body becomes scrollable

(The above notes were ripped from this nicely written post: [React: Preventing Layout Shifts When Body Becomes Scrollable](https://maxschmitt.me/posts/react-prevent-layout-shift-body-scrollable/))

Use [`scrollbar-gutter`](https://developer.mozilla.org/en-US/docs/Web/CSS/scrollbar-gutter).  However, if Safari / iOS support is paramount for your business, you can try with the following.
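Where supported, the whole fix is a single declaration (a sketch):

```css
/* Always reserve space for the scrollbar, so the layout
   does not shift when the body becomes scrollable */
html {
  scrollbar-gutter: stable;
}
```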

First, calculate the width of the scrollbar:
function getScrollbarWidth() {

  // Creating invisible container
  const outer = document.createElement('div');
  outer.style.visibility = 'hidden';
  outer.style.overflow = 'scroll'; // forcing scrollbar to appear
  outer.style.msOverflowStyle = 'scrollbar'; // needed for WinJS apps
  document.body.appendChild(outer);

  // Creating inner element and placing it in the container
  const inner = document.createElement('div');
  outer.appendChild(inner);

  // Calculating difference between container's full width and the child width
  const scrollbarWidth = (outer.offsetWidth - inner.offsetWidth);

  // Removing temporary elements from the DOM
  outer.parentNode.removeChild(outer);

  return scrollbarWidth;
}
Then create a custom hook around `ResizeObserver` to detect when the document body is scrollable and when it's not:
function useBodyScrollable() {
    const [bodyScrollable, setBodyScrollable] = useState(
        document.body.scrollHeight > window.innerHeight
    )
    useEffect(() => {
        const resizeObserver = new ResizeObserver(() => {
            setBodyScrollable(document.body.scrollHeight > window.innerHeight)
        })
        resizeObserver.observe(document.body)
        return () => {
            resizeObserver.unobserve(document.body)
        }
    }, [])
    return bodyScrollable
}
Lastly, add _some_ padding to the document when its body is **not** scrollable, and remove it (the padding) when the body becomes scrollable:
  ...
  const bodyScrollable = useBodyScrollable()
  useLayoutEffect(() => {
      if (bodyScrollable) {
          document.body.style.paddingRight = '0px'
      } else {
          document.body.style.paddingRight = `${scrollbarWidth}px`
      }
  }, [bodyScrollable])
  ...

2022-07-13 (permalink)

Sending keep-alive messages with Nest.js

SSE keep-alive messages are usually implemented via _comments_, i.e. lines with a colon as first character ([MDN](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#event_stream_format)); unfortunately Nest does not seem to support this type of message, which forces us to get hold of the current response object and manually write keep-alive messages into it.
@Sse('subscribe')
sse(@UserWithOrganization() user: AuthenticatedUser, @Res() res: Response): Observable<MessageEvent> {
  const notifications$ = this.notificationsEventService.subscribe(user.profileId, user.organizationId).pipe(map((data) => ({ data })));
  const keepAlive$ = interval(this.configService.get('app.sseKeepAliveInterval')).pipe(
    tap(() => res.write(': keepalive\n\n')),
    filter(() => false)
  ) as Observable<MessageEvent>;
  return merge(keepAlive$, notifications$);
}
Nest expects SSE messages to comply with the following interface:
export interface MessageEvent {
    data: string | object;
    id?: string;
    type?: string;
    retry?: number;
}
I tried emitting objects shaped like `{ data: '' }` as keep-alive messages, but Nest actually wound up emitting:
id: 1

id: 2

id: 3

...
I did not like that it kept incrementing the last message id for each keep-alive message, so I opted for the other solution described above, i.e. getting the response object injected and manually writing into it.

2022-07-12 (permalink)

Which of the `useEffect` dependencies was the one that triggered the effect?

I was recently trying to understand why one of my effects was getting fired even though I _thought_ that none of its dependencies had changed; after a little bit of debugging I realized that I was listing as a dependency an object which was getting re-created every time the hook was run...

The following homemade hook is what really helped me pinpoint the faulty dependency:

- Same signature as `useEffect`: effect function and dependencies array
- Internally calls `useEffect` to register the effect to fire when _any_ of the dependencies changes
- Also registers individual effects, one for each dependency, to log to console the dependency that did change
- Also registers an extra effect to log when the current component is mounted / unmounted
import { EffectCallback, DependencyList, useEffect } from 'react';

const useEffectsVerbose = (effect: EffectCallback, deps: DependencyList) => {
  useEffect(() => {
    console.log('MOUNT');
    return () => console.log('UNMOUNT');
  }, []);
  for (let each of deps) {
    // eslint-disable-next-line react-hooks/rules-of-hooks
    useEffect(() => {
      console.log('CHANGED', each);
    }, [each]);
  }
  useEffect(effect, deps); // eslint-disable-line react-hooks/exhaustive-deps
};

export default useEffectsVerbose;

2022-06-09 (permalink)

More handy react-hooks

`useEventSource`, a tiny wrapper around `EventSource` (for SSE):

- The hook's signature exactly matches `EventSource`'s
- It returns the most recently received event / error
import { useEffect, useState } from 'react';

type UseEventSourceReturn<T> = {
  data: T | undefined;
  error: Event | undefined;
};

const useEventSource = <T>(url: string, init?: EventSourceInit): UseEventSourceReturn<T> => {
  const [data, setData] = useState<T>();
  const [error, setError] = useState<Event>();

  useEffect(() => {
    const es = new EventSource(url, init);

    es.onmessage = (e) => {
      setData(JSON.parse(e.data));
    };

    es.onerror = (err) => {
      setError(err);
      es.close();
    };

    return () => {
      es.close();
    };
  }, [url, init]); // NOTE: memoize `init` at the call site, or this effect will re-run on every render

  return { data, error };
};

export default useEventSource;

2022-06-08 (permalink)

Couple of useful react-hooks we have created at work, recently

`useIntersectionObserver`, a tiny `IntersectionObserver` wrapper:

- `node` is the `HTMLElement` to observe on
- `opts` is the init params to pass to `IntersectionObserver`
- `observedEntry` is the observed entry, as seen by `IntersectionObserver`
import { useEffect, useState } from 'react';

const INTERSECTION_OBSERVER_INIT = {
  root: null,
  rootMargin: '0px',
  threshold: 0.75,
};

const useIntersectionObserver = (
  node: HTMLElement | null,
  opts: IntersectionObserverInit | undefined = INTERSECTION_OBSERVER_INIT,
) => {
  const [observedEntry, setObservedEntry] = useState<IntersectionObserverEntry>();

  useEffect(() => {
    const observer = new IntersectionObserver(([entry]) => setObservedEntry(entry), opts);

    if (node) {
      observer.observe(node);
    }

    return () => observer.disconnect();
  }, [node, opts]);

  return { observedEntry };
};

export default useIntersectionObserver;
`useVirtualKeyboardVisibility`, a hook to detect if the virtual keyboard is visible or not, while on mobile:

- handy if you wanted to hide certain elements as the user is typing into a form or something
- works by diffing the screen height with the document height
import { useEffect, useState } from 'react';
import { isMobile } from 'react-device-detect';

const useVirtualKeyboardVisibility = (minBrowserToolbarHeight = 200) => {
  const [visible, setVisible] = useState(isVisible(minBrowserToolbarHeight));

  useEffect(() => {
    if (!isMobile) {
      return;
    }
    function handler() {
      if (visible !== isVisible(minBrowserToolbarHeight)) {
        setVisible(!visible);
      }
    }
    window.visualViewport.addEventListener('resize', handler);
    return () => {
      window.visualViewport.removeEventListener('resize', handler);
    };
  }, [visible, minBrowserToolbarHeight]);

  return visible;
};

function isVisible(minDelta: number) {
  return window.screen.height - document.body.clientHeight > minDelta;
}

export default useVirtualKeyboardVisibility;

2022-04-29 (permalink)

? vim indentexpr that uses neoformat behind the scenes to figure out where the new line should go

  - similar to what my vim-lispindent plugin does

? vim netrw to automatically create the file **and** the parent directory, if missing

2022-04-26 (permalink)

[11 promises from a manager](https://twitter.com/MrEchs/status/1516024322409832450):
1. We’ll have a weekly 1:1. I’ll never cancel this meeting, but you can cancel it whenever you like. It’s your time.
2. Our 1:1 agenda will be in the meeting invite so we remember important topics. But you’re always free to use the time for whatever’s on your mind.
3. When I schedule a meeting with you, I’ll always say *when I schedule it* what it’s meant to be about. I will not schedule meetings without an agenda.
4. When I drop into your DM’s, I’ll always say “hi and why.” No suspense, no small talk while you are wondering what I want.
5. News or announcements that significantly impact you, your work, or your team will come from me directly in a 1:1, not revealed in a big meeting.
6. You’ll get feedback from me when it’s fresh. There will be no feedback in your performance review that you’re hearing for the first time.
7. I trust you to manage your own time. You don’t need to clear with me in advance your time AFK or OOO.
8. Your work gets done your way. My focus is on outcomes, not output. Once we’re clear on where we need to go, how to get there is up to you. If I ever find it necessary to suggest a specific approach, I will supply an example.
9. A team is strongest when it’s working together, looking after one another, and taking care of each other. Please look to your left and to your right for opportunities to help your colleagues. Please ask for help when you need it. Nobody works alone.
10. I trust you to skip level and talk to my manager or other senior management about anything you feel is relevant. You don’t need to clear it with me, and I’m not going to get weird about it when you do.
11. I will attribute credit appropriately to you and your team. I will never exaggerate my own role or minimize your contribution. I’ll be especially certain to nail down attribution when senior management are hearing of our accomplishments.

2022-04-15 (permalink)

[How does React Suspense Work?](https://www.youtube.com/watch?v=onrMjAGY4qA)
  - You **throw a Promise**
  - React will catch that, wait for it to be fulfilled / rejected, and then call the render function again
  - The next time the render function is invoked, the promise's result is returned **synchronously** (i.e. no need to `await` it)
let resource = {
  user: wrapPromise(fetch(...)),
};

function wrapPromise(promise) {
  let status = "pending";
  let result;
  let suspender = promise.then(
    r => {
      status = "success";
      result = r;
    },
    e => {
      status = "error";
      result = e;
    }
  )
  return {
    read() {
      if (status === "pending") {
        throw suspender;
      } else if (status === "error") {
        throw result;
      } else if (status === "success") {
        return result;
      }
    }
  }
}

function ProfileDetails() {
  const user = resource.user.read();
  return <h1>{user.name}</h1>
}

2022-04-13 (permalink)

TIL: you can pass `--not --remotes` to `git log` to get the list of local commits not present in any of the remotes
  - This was mentioned in [tpope/vim-fugitive] Large Unpulled section (Issue #1963), as a way to improve the information shown when firing `:G`

2022-04-10 (permalink)

TIL: [vim-rhubarb](https://github.com/tpope/vim-rhubarb) supports GitHub issues, issue URLs, and collaborator omni-completion
TIL: [Curl](http://curl.haxx.se/) supports reading secrets from .netrc
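For reference, a `~/.netrc` entry looks like this (host and credentials are made up; keep the file chmod 600). Curl reads it when invoked with `-n` / `--netrc`, e.g. `curl -n https://api.example.com/user`:

```
machine api.example.com
login maurizio
password s3cret
```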

Git's dotted Range Notations

The .. (two-dot) Range Notation

> The ^r1 r2 set operation appears so often that there is a shorthand for it. When you have two commits r1 and r2 (named according to the syntax explained in SPECIFYING REVISIONS above), you can ask for commits that are reachable from r2 excluding those that are reachable from r1 by ^r1 r2 and it can be written as r1..r2.

The ... (three-dot) Symmetric Difference Notation

> A similar notation r1...r2 is called symmetric difference of r1 and r2 and is defined as r1 r2 --not $(git merge-base --all r1 r2). It is the set of commits that are reachable from either one of r1 (left side) or r2 (right side) but not from both.

Source: https://mirrors.edge.kernel.org/pub/software/scm/git/docs/gitrevisions.html
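A self-contained demo of the two notations (throwaway repo; assumes git >= 2.28 for `init -b`):

```shell
# History:  A---B---C (main)
#                \
#                 D---E (topic)
cd "$(mktemp -d)" && git init -q -b main .
c() { git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "$1"; }
c A; c B
git checkout -q -b topic; c D; c E
git checkout -q main; c C

git log --oneline main..topic    # D and E: reachable from topic, not from main
git log --oneline main...topic   # C, D and E: reachable from exactly one side
```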

2022-04-09 (permalink)

TIL: ECMAScript [bind operator proposal](https://github.com/tc39/proposal-bind-operator)
  - It introduces a new operator `::` which performs `this` binding and method extraction
  - In its binary form, the `::` operator creates a bound function such that the left hand side of the operator is bound as the `this` variable to the target function on the right hand side
// before
let hasOwnProp = Object.prototype.hasOwnProperty;
let obj = { x: 100 };
hasOwnProp.call(obj, "x");
// after
let hasOwnProp = Object.prototype::hasOwnProperty;
obj::hasOwnProp("x");
  - In its unary prefix form, the `::` operator creates a bound function such that the base of the supplied reference is bound as the `this` variable to the target function
// before
Promise.resolve(123).then(console.log.bind(console));
// after
Promise.resolve(123).then(::console.log);
  - `document::foo(bar)` is `foo.bind(document)(bar)` (this only makes sense when `foo` is defined like `function foo` and not `foo = () => ...` (`.bind` has no effect on arrow functions))
  - `::document.foo` is `document.foo.bind(document)`
  - The proposal has not been updated since 2018, but two newer Stage-1 proposals, [extensions](https://github.com/tc39/proposal-extensions) and [call-this](https://github.com/tc39/proposal-call-this), are slowly trying to take its place

* @read [Declarative Shadow DOM](https://web.dev/declarative-shadow-dom/)

  - [Shadow DOM](https://web.dev/declarative-shadow-dom/), but for SSR
  - Add `shadowroot` property to a `<template>` element, and that's it
data:text/html,<host-element><template shadowroot="open"><slot></slot></template><h2>Light content</h2></host-element>

* ["Unison: a new distributed programming language" by Paul Chiusano](https://www.youtube.com/watch?v=gCWtkvDQ2ZI)

  - Content-addressed code: all the below definitions, **are the same**
factorial n = product (range 1 (n+1))
blah      n = product (range 1 (n+1))
blah2     z = product (range 1 (z+1))
  - For each hash, it will store its serialized AST, plus additional metadata, like _names_
.unison/
  j1ejquc/ # hash
    ast.ub
    names/
      factorial
      blah
      blah2
  - Inside the AST file, you will find the original definition with all the dependencies replaced by their hashes, and with all local variables normalized
factorial =
  $arg1 -> #8h7jbs5 (#h9bwx9q 1 (#e2i3biw $arg1 1))
  - No builds, easy renames, test caching...
  - No dependency conflicts
  - Easy typed, durable storage
  - Programs that deploy themselves + describe whole elastic distributed systems
  - Further reading: [Unison Programming Language](https://news.ycombinator.com/item?id=27652677), on Hacker News

2022-04-07 (permalink)

WTF: Since when am I using the ancient version of bash bundled with MacOS, and not brew's?
  - default bash shell
$ $BASH --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)
Copyright (C) 2007 Free Software Foundation, Inc.
  - brew's one
$ $(brew --prefix)/bin/bash --version
GNU bash, version 5.1.16(1)-release (x86_64-apple-darwin20.6.0)
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ echo $(brew --prefix)/bin/bash
/usr/local/bin/bash
  - Make sure `/usr/local/bin/bash` is listed inside /etc/shells
  - `chsh -s /usr/local/bin/bash`
  - didn't I do this already? @2017-12-12
  - I am not sure what's going on, but I am going to pretend this never happened...computers are weird
  - https://itnext.io/upgrading-bash-on-macos-7138bd1066ba

2022-04-06 (permalink)

TIL: how to get a customizable entrypoint for a docker image
  - specify in the ENTRYPOINT, the base command only, e.g. `ENTRYPOINT ["node"]`
  - specify in the CMD, all the additional arguments to run the app, e.g. `CMD ["dist/main.js"]`
  - this way, a standard `docker run $image` would run: `node dist/main.js`
  - if you wanted to launch a Node.js REPL instead: `docker run $image -`
  - if you wanted to fire up a shell, you will have to use the `--entrypoint` flag: `docker run --entrypoint sh $image`
  - [Docker CMD vs. Entrypoint Commands: What's the Difference?](https://phoenixnap.com/kb/docker-cmd-vs-entrypoint)
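The above, as a minimal Dockerfile sketch (image and paths are made up):

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
# Base command only -- fixed across invocations
ENTRYPOINT ["node"]
# Default arguments -- replaced by whatever follows `docker run $image`
CMD ["dist/main.js"]
```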
TIL: you can point your browser to `data:text/html,...`, and have it process some HTML
  - Three nested div elements, with a slightly opaque black background each (`%23` is just `#`, url-encoded)
data:text/html,<style>* { background: %230001 }</style><div>foo<div>bar<div>baz
TIL: `<a><b><a>` means `<a><b></b></a><b><a>`, i.e. the `<b>` gets duplicated
data:text/html,<style>a { background: rgba(255,255,0,0.5) } * { border: 1px solid black; padding: 1em; margin: 1em } b { outline: 5px dotted red }</style><div id=div><a>foo<b>bar<a>baz</div><pre id=pre></pre><script>pre.innerText = div.innerHTML</script>
  - What if the `<b>` has an ID?  Well, that gets duplicated too!
data:text/html,<style>a { background: rgba(255,255,0,0.5) } * { border: 1px solid black; padding: 1em; margin: 1em } b { outline: 5px dotted red }</style><div id=div><a>foo<b id=b>bar<a>baz</div><pre id=pre></pre><script>pre.innerText = div.innerHTML</script>
TIL: the `<plaintext>` tag can be used to stop the parser from processing the rest of the document
data:text/html,foo<plaintext>bar</plaintext>baz
  - You can use `<plaintext>` to make an html preamble to an arbitrary document, and have the whole thing be an html document
<html>[...html document...]<plaintext>[...arbitrary document...]
  - The `<plaintext>` tag just swallows everything coming after it, but you can still interact with the rest of the document just fine
data:text/html,foo<script>onload = () => document.body.append(123)</script><plaintext>bar</plaintext>baz
TIL: module `<script>`s are automatically deferred
  - btw, `defer` here basically means the script is only executed after the document has been parsed (similar to wrapping it in an `onload` handler)

2022-04-05 (permalink)

- is it possible to npm install a project with package-lock.json but without package.json? because that might speed up my docker builds a bit
  - say I add a new script to my package.json, but don't change anything else
  - ideally I would not want my CI/CD to re-download dependencies
  - re-building the image is fine, I guess, but re-downloading dependencies really seems like a waste of time, especially if the package-lock.json did not change
  - ideally I would: 1) copy package-lock, 2) run the magic npm command, 3) copy package.json
  - and if the package-lock does not change, docker should be able to use the cached layers for step 1 and 2, and only run 3
  - but does such magic command exist?
  ? https://stackoverflow.com/questions/51110793/bumping-package-json-version-without-invalidating-docker-cache

2022-04-04 (permalink)

TIL: Replit.com just launched a @SearchEngine -- https://blog.replit.com/search
  - **We believe that you should be able to find anything on Replit in less than 30 seconds.**
  - First prototype created during what they internally call the _hack week_
  - It uses Elasticsearch
  - Plus a bunch of data-pipelines whose sole job is to take data from the database (or other sources), convert it to a more convenient format, and finally push the result _into_ Elasticsearch
  - It searches across REPLs, posts, comments, snippets of code, doc articles
  - Most importantly: no _commingled_ search: all the different data-stores are searched in parallel, and results are ranked
    - It's a difficult problem to rank objects coming from different sources
    - It does not surprise me that they avoided this in the first iteration
    - I meant to ask about this during the Q&A at ReplCon, but ran out of time...
  - Some personal notes:
    - 30 seconds is a lot of time, so that should give you plenty of room to figure out a way to search and retrieve all the data you have
    - they most likely have their search engine scattered across different servers, with lots of RAM up for grabs
    - they don't have commingled search, so they can search different indexes in parallel, and populate the search results page on a first-come-first-served basis, giving the slowest paths time to catch up

2022-04-03 (permalink)

? @LiveProgramming with [LSX](https://github.com/fukamachi/lsx), and [parenscript](https://parenscript.common-lisp.dev/), and Websockets

  - Clients open a Websocket connection to the liveprogramming server, and everything they receive, they will `eval()` it
  - On the server we will have custom DEFUN, DEFVAR macros that would a) emit actual _JavaScript_ (this way you can save the output to a file, and use that as bootstrap point for new clients), and b) send a message to all the connected clients to evaluate the just updated piece of code
  - I am not sure what a DEFVAR expression gets translated into, but we need to make sure the existing value is **not** overridden
  - Aha: https://github.com/eshamster/proto-cl-hot-loads
  - One could even think of integrating [incremental-dom](https://github.com/google/incremental-dom), and basically roll out your own poor man's version of React

? @book The Programmer's Brain

? @book Pragmatic Thinking and Learning: Refactor Your Wetware

2022-04-02 (permalink)

TIL: You can stop `brew` from updating itself while `brew install`-ing something, by setting the following env variable: `HOMEBREW_NO_AUTO_UPDATE=1`
TIL: In JavaScript, `>>> 0`, is an abuse of the operator to convert any numeric expression to an "integer" or non-numeric expression to zero. -- [SO](https://stackoverflow.com/a/11385688)
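For illustration, a few cases (my examples, easily verified in any console):

```javascript
// `>>> 0` applies ToUint32: numeric values are truncated modulo 2^32,
// and anything non-numeric (NaN after coercion) becomes 0.
console.log(-1 >>> 0);    // 4294967295, i.e. 2^32 - 1
console.log(3.7 >>> 0);   // 3
console.log("abc" >>> 0); // 0
```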

* Always persist the @whiteboard at: https://whiteboard.party/20220402.b4t4fmyrcf

  - Think of this as a _public_ whiteboard, almost like the old guestbook
  - I am sure I am going to regret this ;-)

? @book The Act of Living: Talks on the Alexander Technique

? @book A Pattern Language: Towns, Buildings, Construction (Center for Environmental Structure Series)

2022-04-01 (permalink)

TIL: Vim landed native Language Server Protocol support! [vim/vim@9247a22](https://github.com/vim/vim/commit/9247a221ce7800c0ae1b3487112d314b8ab79f53)

? Give this new vim/lsp plugin a try: https://github.com/yegappan/lsp

2022-03-30 (permalink)

Interesting presentation from @DavidNolen on @live-programming a react-native application: [ClojureScript in the Age of TypeScript](https://www.youtube.com/watch?v=3HxVMGaiZbc)

- Good analogy between a file-system and a program
  - You use a SHELL to interact with the file-system
    - You can change directory
    - You can see what's inside the current directory
    - You can create a new file in the current directory
    - You can delete a file from the current directory
    - You would **never** use a SHELL that did not offer those 4 basic functionalities
  - A REPL is how you interact with the program
    - You should be able to change module
    - You should be able to inspect the current module
    - You should be able to define a new function in the current module
    - You should be able to delete a function from the current module
    - Why then settle for a REPL that does not offer the same 4 basic functionalities?
- Too much focus on the _static_ part of the development cycle; no affordances for the _dynamic_ part, i.e. "what happens when I actually run the program?"
  - Lisp
  - Smalltalk
- Storybook (to deal with the _static_)
  - Helps you develop your React components in isolation
  - All significant UI component/view states are represented in a story
  - Can be tested via a simulator and on the device
- Krell (to deal with the _dynamic_)
  > In programming there is a wide-spread 1st order theory that one shouldn't build one's own tools, languages, and especially operating systems.  This is true--an incredible amount of time and energy has gone down these ratholes.  On the 2nd hand, if you can build your own tools, languages and operating systems, then you absolutely should because the leverage that can be obtained (and often the time not wasted in trying to fix other people's not quite right tools) can be incredible.  --Alan Kay, The Power of the Context
  - Connect to native devices / simulator
  - Hot code reload without loss of state
  - Possible thanks to: Clojure/ClojureScript, and React/React-Native
- It cannot be done with Swift-UI, or Flutter
- Tooling should help with both the static and dynamic elements of our programs
  - debuggers solve only part of this problem
- Given the current state of technology, 70s-era Lisp/Smalltalk-style live programming delivers incredible value over current practice
- You need powerful tools that do not get in the way to enable higher level thoughts

* Process scrollwheel events only when triggered from the canvas -- @whiteboard

* Started collecting in a single place all the keyboard-related customizations I use -- one day I will publish it somewhere

  - Apps layer
  - Motion layer
  - Function layer
  - Capslock
  - ...

2022-03-29 (permalink)

TIL: CSS supports the "parent selector", or as the [spec](https://drafts.csswg.org/selectors/#relational) calls it, "the relational pseudo-class", or simply `:has()`

This could be used for form styling, e.g. styling a form differently based on the validity of its inputs:
form:has(input:invalid) {
  border: 2px solid red;
}
But I am sure somebody else will come up with more interesting use cases than this.

A note on its usage though, from the [MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/:has):
The `:has()` pseudo-class takes a relative selector list as an argument. In earlier revisions of the CSS Selectors Level 4 specification, `:has` had a limitation that it couldn't be used within stylesheets. Instead, it could only be used with functions like `document.querySelector()`; this was due to performance concerns. _This limitation has been removed_ because no browser implemented it that way. Instead, browsers currently only support the use of :has() within stylesheets.
So it's fine to use it inside CSS; but it **cannot** be used with `document.querySelector()`...too bad!
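For what it's worth, support for a selector can be feature-detected from JavaScript, since `querySelector` throws a `SyntaxError` when handed a selector the engine does not recognize. A hypothetical helper (`supportsHas` is my name, and `doc` is injectable purely to make it testable):

```javascript
// Hypothetical helper: querySelector throws a SyntaxError for selectors
// the engine does not understand, which makes for an easy feature test.
function supportsHas(doc = document) {
  try {
    doc.querySelector(":has(a)");
    return true;
  } catch (e) {
    return false;
  }
}
```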

Getting livereload to work when deployed on Replit turned out a bit more complicated than expected -- @whiteboard

- I am trying to integrate livereload with an Express application running on [Replit](https://replit.com)
  - When deployed there, Replit assigns you a hostname
  - It proxies all incoming connections to your service, so long as it listens on `0.0.0.0`
- To make this all work I then need the Express application to listen on `0.0.0.0` (not a big deal, it's the default)
- I also need to tell livereload to listen on a different interface, e.g. `127.0.0.1`
  - But this, unfortunately, is currently not supported
  - Hence a fork, to [add support for an additional config, `host`](https://github.com/iamFIREcracker/node-livereload/commit/ac2b26e4d205f5be54c697eb487e21e98368e826) which I can then tweak accordingly (I have also [asked](https://github.com/napcs/node-livereload/issues/161) the maintainer of the project if there was appetite for such a change...let's see what they say)
- So, knowing all this and with our fork: first we spin up the livereload server
var livereload = require('livereload'); // our fork, with `host` support
var http = require('http');
var express = require('express');
var socket = require('socket.io');
var { createProxyMiddleware } = require('http-proxy-middleware');

var lrServer = livereload.createServer({ port: 35729, host: '127.0.0.1' });
lrServer.watch(__dirname + "/public");
- Then we create the application WebSocket server
var ioServer = http.createServer((req, res) => {
  res.writeHead(404);
  res.end();
});
ioServer.listen(23434, '127.0.0.1')
var io = socket(ioServer);
- Then the Express application, with middlewares to properly proxy requests to the liveserver and socket-io servers respectively
var app = express();
app.use(createProxyMiddleware('/lr', {
  target: 'http://127.0.0.1:35729',
  ws: true,
  pathRewrite: {
    '^/lr': '',
  },
}));
app.use(createProxyMiddleware('/socket.io', {
  target: 'http://127.0.0.1:23434',
  ws: true,
}));
var server = http.createServer(app);
server.listen(3000, '0.0.0.0');
- Finally, we inject the following code on the client, to have them connect to the livereload server (you can read more about this [here](https://github.com/livereload/livereload-js/issues/98#issuecomment-1081922742))
document.write(
  '<script src="' +
    location.protocol +
    '//' +
    location.host +
    '/lr/livereload.js?snipver=1' + (location.protocol === 'https:' ? '&port=443' : '') + '&path=lr"></' +
    "script>"
);

DNS for @whiteboard

- I am going to be hosting this on Replit
- Replit supports custom domains via CNAME records
  - They give you a unique name, e.g. `$id.id.repl.co`
  - They have it resolve to the IP address of the server your Repl runs on
  - You then set up a CNAME record so that `app.domain` gets resolved via `$id.id.repl.co`
- I bought [whiteboard.party](https://whiteboard.party) and I was hoping I could use that to access the app, but
- Google Domains would not let me set up a CNAME on a 2nd level domain
  - OK if you did it for: www.whiteboard.party, or app.whiteboard.party
  - Not OK for whiteboard.party
- So I have settled for [d.whiteboard.party](https://d.whiteboard.party) for now; I have also set up automatic forwarding, so people landing on whiteboard.party are automatically redirected to d.whiteboard.party
- I have not made up my mind yet; other alternatives follow
  - the.whiteboard.party
  - a.whiteboard.party
  - app.whiteboard.party

2022-03-28 (permalink)

[Programmable notes](https://maggieappleton.com/programmatic-notes)
Programmable notes are note-taking systems that allow you to write programmatic rules that facilitate particular ways of working with your notes.
>
Based on triggers (specific conditions are met, input from the user, time of day), they kick off a sequence of actions. Actions might be prompts for the user to answer, transformations on the notes themselves, or requests to external websites and data sources for information.
Few examples:
When I open my note-taking app first thing in the morning -> ask what I'm currently thinking about. Once I've marked this complete, show me what I said yesterday, 3 days ago, and last week.
Or:
When I create a note related to a specific book -> pull in the author, cover, and publication date from the Google Books API.
Or:
When a week has passed since I've written notes on a book -> ask me everything I remember about it off the top of my head. Put this into the summary section of my notes as a first draft.
Or:
When I mark an essay draft as complete -> ask me all the questions my favourite writing professor would ask.

- Do you actually have a point and have you made it?
- Does your introduction present a compelling problem?
- Are you assuming the audience is thicker than a brick?
- Is this littered with passive voice (to be, is, was, were...) and evasive language?
So the idea is basically to figure out a way to make the knowledge base... alive!

From the same article:
This is [end-user programming](https://www.inkandswitch.com/end-user-programming/) in action. The developers and designers of note-taking apps will never be able to create systems that suit the needs of everyone's particular knowledge management workflows. Enabling users to design and share their own is the only way to give people agency over their knowledge bases.

* Change the color of the toolbar when a new marker color is selected -- @whiteboard

  - I have also replaced the `<select>` with a _palette_ icon

* Smaller button sizes on mobile screens -- @whiteboard

  - This (and the change above too) enables us to display more control buttons on mobile

* Start livereload-js from _within_ the server application @whiteboard

  - This way I don't have to mess around with nix scripts to get the livereload server to start while running on Replit

2022-03-27 (permalink)

* @A Add "Welcome" dialog to @whiteboard

  - Quick overview of the existing functionalities, and on how to use it
  - "whiteboards are secret but not private" warning
  - "whiteboards older than 15 days will be nuked" danger

2022-03-25 (permalink)

* @C @whiteboard Show the user how big the eraser (and the marker) is going to be

  - As of now, we are simply rendering an _ellipse_ following the mouse movements

? @B Implement touch gestures @whiteboard

  - Pinch to zoom / double tap and hold a-la Google Maps
  - Two fingers to move around

* @A Fixed a bug where zooming out too much changes the aspect ratio of the @whiteboard

  - [`p5.image()`](https://p5js.org/reference/#/p5/image), when in "destination rectangle mode" (i.e. invoked with 8 parameters as shown below) seems to be stretching the image when source's width or height gets bigger than the buffer's width or height
p5.image(
  OFFSCREEN,
  // destination
  0, 0, p5.windowWidth, p5.windowHeight,
  // source
  view.x, view.y, view.w, view.h,
);
  - The solution was to implement the _scaling_ logic ourselves, and simply paint the buffer (i.e. the whiteboard) scaled up or down accordingly
p5.image(
  OFFSCREEN,
  -view.x / scale(),
  -view.y / scale(),
  width / scale(),
  height / scale(),
);

function scale() {
  return view.w / p5.windowWidth;
}

+ @B @2022-03-28 Implement an API so it's possible to programmatically create a @whiteboard (think about creating a bot for Microsoft Teams)

  - `/` redirects to `/new`
  - `/new` redirects to `/20220325.ga49oukh4i`
  - In the future, we can have our landing page mounted on `/`

+ @A @2022-03-28 @whiteboard forget about network import -- use local modules instead

  - Repeatable builds, please!

2022-03-24 (permalink)

* Integrated [livereload-js](https://github.com/livereload/livereload-js) with @whiteboard

  - `npm run serve` will use `nodemon` to start the server, excluding changes happening inside public/
  - `npm run livereload` will start `livereload` on the public/ folder
  - `npm run start-dev` will run `npm run serve` and `npm run livereload`, in parallel
  - Change something on the express app, `nodemon` will kick in and restart it
  - Change something on the client facing code, `livereload` will kick in and tell any connected client to _refresh_ (note: JavaScript-related changes require a full page refresh, while CSS or HTML ones do not)

* @A @whiteboard It was not possible to use the mouse on laptops with touch-screens

  - That's because we were either registering touch events **or** mouse ones
  - Instead, we should have always registered mouse events, and optionally touch ones
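A minimal sketch of that idea (all names are mine, not the actual @whiteboard code): feature-detect touch support, and register touch handlers on top of the mouse ones instead of choosing between the two:

```javascript
// Sketch: touch support is feature-detected, not assumed to be
// mutually exclusive with mouse support.
function supportsTouch(win) {
  return "ontouchstart" in win ||
    (win.navigator && win.navigator.maxTouchPoints > 0);
}

function registerPointerHandlers(canvas, win, handlers) {
  // Mouse events are always registered...
  canvas.addEventListener("mousedown", handlers.down);
  canvas.addEventListener("mousemove", handlers.move);
  canvas.addEventListener("mouseup", handlers.up);
  // ...and touch events only when available, on top of them.
  if (supportsTouch(win)) {
    canvas.addEventListener("touchstart", handlers.down, { passive: false });
    canvas.addEventListener("touchmove", handlers.move, { passive: false });
    canvas.addEventListener("touchend", handlers.up);
  }
}
```

This way a laptop with a touch-screen gets both sets of handlers, instead of touch-only.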

? [skypack is breaking font files](https://github.com/skypackjs/skypack-cdn/issues/137), so I was forced to use unpkg.com instead; should I go all in with this one, and forget about skypack.dev?

2022-03-23 (permalink)

* @A Fixed Out of memory exception while on mobile @whiteboard -- I guess saving a canvas **per shape** is a bit too much, isn't it?! Ended up reworking the drawing logic a bit

  - 1 rendering canvas, 1 offscreen graphic
  - all the _remote_ draw operations are done on the offscreen
  - inside p5.draw, first we copy the offscreen graphic over to the canvas, and then we directly draw any local shape not yet acknowledged by the server
  - And with this, it's now decently usable on mobile as well!
  - For the record, this is the out of memory exception I was talking about
Uncaught RangeError: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': Out of memory at ImageData creation
    at d.default.Renderer2D.loadPixels (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:10747:130)
    at s.default.Graphics.h.default.loadPixels (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:13082:73)
    at Object.h.default._copyHelper (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:13071:12)
    at s.default.Graphics.h.default.copy (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:13069:29)
    at draw (:3000/main.js:391:12)
    at add (:3000/main.js:377:20)
    ...

* @A @whiteboard Prevented touch events from scrolling the window -- only works on Chrome for now, but whatever

? @B @whiteboard Touch events cause the window to scroll while using DuckDuckGo Android browser

  - [javascript + html5 canvas: drawing instead of dragging/scrolling on mobile devices?](https://stackoverflow.com/questions/9975352/javascript-html5-canvas-drawing-instead-of-dragging-scrolling-on-mobile-devic)
  - [prevent browser scrolling while drawing on canvas on mobile](https://kirkdev.blogspot.com/2020/10/prevent-browser-scrolling-while-drawing.html)

* @B @whiteboard Added Zoom-in/Zoom-out explicit buttons/menu

2022-03-22 (permalink)

? @B Immediately scrolling without moving the mouse first causes the @whiteboard camera to zoom in on (or away from) the top left corner, instead of where the mouse currently is

2022-03-21 (permalink)

* Integrated fzf with z, as suggested [on the official wiki](https://github.com/junegunn/fzf/wiki/Examples#z)

* Integrated fzf with bash's history search (CTRL-R) (I actually borrowed all the magic from [the official key bindings](https://github.com/junegunn/fzf/blob/master/shell/key-bindings.bash))

* Complete UI overhaul for @whiteboard

  - We now use Bootstrap
  - Different modes of working: panning, drawing, erasing

* Fixed @whiteboard panning bug, where slight movements would cause the camera view to move **a lot**

  - We were moving the camera based on the point where the dragging had begun, instead of just the previous point
  - We are now also taking the zoom level into account, and adjusting the movements accordingly (e.g. the more the user is zoomed in, the smaller the movement)
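A sketch of the corrected panning math, under the assumption that `scale` is the view-width / window-width ratio used elsewhere in these notes (the function and its names are mine):

```javascript
// Move the view by the delta from the *previous* pointer position
// (not the drag origin), scaled by the current zoom level: the more
// zoomed in the user is, the smaller `scale`, the smaller the movement.
function pan(view, prev, curr, scale) {
  view.x -= (curr.x - prev.x) * scale;
  view.y -= (curr.y - prev.y) * scale;
  return view;
}
```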

* Adjust the @whiteboard marker size accordingly, based on the current zoom level -- what you see is what you get

2022-03-19 (permalink)

* Shapes are now relative to the top-left corner of the @whiteboard

- Previously they were relative to the client view, and because of that, draw commands had to include not just the points, but also the size of the client view
- This also addresses a couple of quirks of the current implementation
  - Lines looked different (i.e. thicker, or thinner) based on the zoom level of the client that drew them; now they all look the same, at least **after** the server has acknowledged the draw command (while drawing, for the client who is drawing, the line would still look _different_)
  - Shapes would get stretched based on the aspect ratio of the client screen -- definitely a bug

* System generated @whiteboard room IDs now include the creation date

- We were previously using UUIDs
- Now IDs look like: `r20220319.83e8fmmoo1`
  - `r`, to mark that it's a room ID
  - `20220319`, the date when the room was created
  - `.`, a separator
  - `83e8fmmoo1`, some random data [zbase32](https://philzimmermann.com/docs/human-oriented-base-32-encoding.txt) encoded
- One plus of having switched to a _structured_ ID like this one is that we can now parse the room creation date and, for example, decide **not** to persist any room which is older than, say, 30 days
  - We will still keep it in memory, just in case, but the moment the service is bounced, old rooms will be gone
const fs = require('fs');
const moment = require('moment');
const seconds = (n) => n * 1000; // helper: seconds to milliseconds

setInterval(function syncToDisk() {
  const today = moment.utc();
  const copy = {};
  for (const key of Object.keys(state)) {
    const cdate = moment.utc(key.substring(1, 9), "YYYYMMDD");
    if (today.diff(cdate, 'days') < 15) {
      copy[key] = state[key];
    }
  }
  fs.writeFile("state.json", JSON.stringify(copy), function writeFileCb(err) {
    if (err) throw err;
    console.log(`Persisted [state=${JSON.stringify(copy)}]`);
  });
}, seconds(30));
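For illustration only, an ID with that shape could be generated along these lines (my sketch, not the actual server code; the alphabet is the one from Zimmermann's zbase32 spec, and the function name is mine):

```javascript
// zbase32 alphabet, from Zimmermann's human-oriented base-32 spec
const ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769";

function makeRoomId(date = new Date()) {
  // `r` marker + UTC creation date, e.g. "20220319"
  const yyyymmdd = date.toISOString().slice(0, 10).replace(/-/g, "");
  // 10 random zbase32 characters (50 bits of randomness)
  let random = "";
  for (let i = 0; i < 10; i++) {
    random += ZBASE32[Math.floor(Math.random() * 32)];
  }
  return `r${yyyymmdd}.${random}`;
}
```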

2022-03-18 (permalink)

* @Whiteboard changed the active cursor based on what the user is doing (i.e. pen, eraser, move)

  - Interestingly enough, I was not able to use `p5.cursor` because it does not support .svg images; well, it actually lets you use them, it's just that if you also try to specify an x/y offset to identify the _active_ portion of the cursor, p5.js will ignore this additional information

+ @A @2022-03-19 Shapes are wrongly stretched based on the different aspect ratios -- @whiteboard

  - (try to) Draw a square
  - Open developer tools
  - Refresh the page (now that the developer tools are open, the aspect ratio of the canvas will be different)
  - The square should be rendered stretched vertically

+ @A @2022-03-19 Zooming out too much causes the @whiteboard to be rendered with the wrong aspect ratio

~ @C @2022-04-02 @Whiteboard is really janky while accessed from mobile... maybe the following might help: [Touch events ](https://developer.mozilla.org/en-US/docs/Web/API/Touch_events)

2022-03-16 (permalink)

? @B Replace `(permalink)`s with `¶`s while generating html for my @plan file -- https://www.w3.org/WAI/EO/Drafts/tutorials/page-structure/in-page-navigation/#anchors

? @B Implement permalinks for sections (or for special markers like `~$DATE.$RANDOM`) -- @plan


TIL about the difference between _direct_ and _indirect_ `eval` in JavaScript
var x = 'outer';
(function() {
  var x = 'inner';
  eval('console.log(x)'); // "inner"
})();

var x = 'outer';
(function() {
  var x = 'inner';
  [eval][0]('console.log(x)'); // "outer"
})();
(You can read more about this here: [Global eval. What are the options?](http://perfectionkills.com/global-eval-what-are-the-options/))

Anyways, I will be playing with this a bit more over the following days, and see how difficult it actually is to implement a system that would let me change a Web page from my IDE, without page refresh (related: [Skewer: live web development with Emacs](https://github.com/skeeto/skewer-mode)).

2022-03-14 (permalink)

I am dropping here a few links I stumbled upon recently, around ID schemas.

Let's start with [Snowflake IDs](https://en.wikipedia.org/wiki/Snowflake_ID):

- Total of 64 bits (only 63 are actually used)
  - First 41 bits represent milliseconds since a chosen epoch
  - Next 10 bits represent the machine ID (to prevent clashes)
  - Next 12 bits represent a per-machine sequence number (this allows creating multiple IDs in the same millisecond)
- Sortable by time (IDs are based on the time they were created)
- The creation time of the entity using the ID can be calculated from the snowflake
- Introduced by Twitter, in [June 2010](https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake)
  - Later adopted by other companies as well, like Instagram in [December 2012](https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c)
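The bit layout above can be sketched with BigInt along these lines (my sketch; the default epoch is the value commonly cited as Twitter's custom epoch, so treat it as an assumption):

```javascript
// 41 bits of timestamp | 10 bits of machine ID | 12 bits of sequence
function snowflake(timestampMs, machineId, sequence, epochMs = 1288834974657) {
  return (BigInt(timestampMs - epochMs) << 22n) | // timestamp in the high bits
         (BigInt(machineId) << 12n) |             // 10-bit machine ID
         BigInt(sequence);                        // 12-bit sequence number
}
```

Because the timestamp sits in the high bits, sorting the resulting integers sorts the IDs by creation time.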

[Collision-resistant IDs](https://github.com/ericelliott/cuid):

- Example: **c - h72gsb32 - 0000 - udoc - l363eofy**
  - Literal `c`, identifying it's a cuid (starting with a letter guarantees that the ID can be used pretty much everywhere, e.g. you cannot have an HTML element ID that starts with a number)
  - Timestamp
  - Counter, representing the number of IDs (modulus the block size) generated by the current process
  - Client fingerprint
  - Random
- Monotonically increasing (when less than 10000 cuids are generated within the same millisecond)
  - Suited for [high-performance database primary keys](http://code.openark.org/blog/mysql/monotonic-functions-sql-and-mysql), because they can be binary searched
- Fast, Secure, and Portable

Interesting [Twitter thread](https://twitter.com/meekaale/status/1503340213342556166) from @meekaale, on how tagging functions, variables, and slots with persistent unique IDs could aid with change tracking and hypermedia in general
another thing I started working on for http://Wisp.Town is a kind of Roam-like system where functions, variables, slots can be tagged with a persistent unique ID
>
those would then be used in hyperlink block reference fashion with help from the IDE or editor mode
---
I’ve been thinking about this since early last year when I started implementing something similar in Elixir. Persistent identifiers will also be great for understanding source code changes: you can tell computationally when a function is renamed, etc
---
If your docstrings can reference stable IDs instead of names, it’s easier to refactor the code. Your documentation URLs become stable. Etc
---
You’ll be able to mention these IDs in comments and other non-semantic little thingies. Use them in commit messages if you want to identify something clearly. Maybe they can be used as bug issue tracking identifiers too, who knows!
---
I devised a little format for these identifiers, because everyone loves a new ad hoc UUID standard:
>
~20220314.PZCBSXQ1HR
>
It’s today’s date and 48 bits of randomness in a case insensitive base-32, fully URL safe.
---
These IDs fit in 64 bits if you store the date as days since 20220101. 🤓
This [commit message](https://github.com/mbrock/wisp/commit/6b5cb6599225beab6d0ec5bc0323042336608436) also shows how the author thinks this system might be used:
Make Ctrl-Shift-K insert a fresh key in the editor
This isn't very useful yet, but it will be.

The idea is that you will write e.g.

    (defun foo (x) ~20220314.8ITJ8NXNXY
      (append '(1 2 x) '(a b c)))

to give FOO that key.  Then FOO will have a stable identifier even if
you rename it, which is great for URLs and other hypermedia stuff.
This could be useful, for example, if I wanted to create anchor points in my @plan file and have other notes reference them.  Nothing crazy, really, but it's the simplicity of the solution that really caught my attention.

? @book Rationality: From AI to Zombies -- by Eliezer Yudkowsky

2022-03-14 (permalink)

? @book Lila: An Inquiry Into Morals - Robert M. Pirsig

? @book Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values - Robert M. Pirsig


Feel like listening to some chiptune @music, lately? [c64radio.com](https://c64radio.com/) got you covered!

Another @music related note; this time, it's [ThePrimeagen](https://www.twitch.tv/ThePrimeagen)'s [BANGERS](https://www.youtube.com/playlist?list=PLm323Lc7iSW9oSIDihesMJXmMNfh8U59k) playlist!

2022-03-13 (permalink)

+ @B @2022-03-14 @whiteboard Add recenter button

2022-03-12 (permalink)

Getting zoom support for my toy @whiteboard turned out a tiny bit more complicated than I thought it would.

Zooming in or out, **without** taking the mouse position into account, is quite straightforward: all you have to do is widen or shorten the field of view as the user turns the scroll wheel.  However, users are used to moving the mouse over the area of interest first, and when they use the scroll wheel they expect the application to zoom in or out on that point (and not, say, around the center of the visible area, like the naive solution described above would probably end up doing).

To implement a smooth zoom in / out animation, the `<x, y>` point of the whiteboard that the user is pointing at with their mouse _needs to stay the same for the whole duration of the animation_, or otherwise the camera would start moving around.  And that's it, really: it's that simple!

The following is some documentation that I attached to my zoom routine; inside of it:

- `view`, represents which part of the whiteboard is currently visible to the current user
  - `view.x` and `view.y` represent the top-left corner of the visible area (relative to the top-left corner of the whiteboard)
  - `view.w` and `view.h` represent the width and height of the visible area
- `tview`, represents the target view, i.e. what do we want the user to be looking at, next
  // What does `p5.mouseX` currently point to?
  //
  //     x = view.x + (p5.mouseX / p5.windowWidth) * view.w
  //
  // While zooming, we don't want to change what `p5.mouseX` currently
  // points at, or otherwise the _camera_ would move.  This means
  // that the following should be true as well:
  //
  //     x = tview.x + (p5.mouseX / p5.windowWidth) * tview.w
  //
  // `x` is known from the first equation, `tview.w` we just
  // calculated it inside the previous block; all we have to do
  // is figure out `tview.x`:
  //
  //     x = view.x + (p5.mouseX / p5.windowWidth) * view.w
  //     x = tview.x + (p5.mouseX / p5.windowWidth) * tview.w
  //     ----------------------------------------------------
  //     0 = view.x - tview.x + (p5.mouseX / p5.windowWidth) * (view.w - tview.w)
  //     ----------------------------------------------------
  //     tview.x = view.x + (p5.mouseX / p5.windowWidth) * (view.w - tview.w)
Again, pretty simple _after_ you realize you cannot move the area the user is zooming in to / out from.
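The whole derivation boils down to a few lines; here is a sketch (the field names mirror the notes, the function itself is mine):

```javascript
// Resize the view while keeping the whiteboard point under the mouse
// fixed. factor < 1 zooms in, factor > 1 zooms out.
function zoomView(view, mouseX, mouseY, windowWidth, windowHeight, factor) {
  const tview = { w: view.w * factor, h: view.h * factor };
  // Solve for tview.x (and tview.y) so that the point under the mouse
  // does not move:
  //   view.x + (mouseX / windowWidth) * view.w
  //     == tview.x + (mouseX / windowWidth) * tview.w
  tview.x = view.x + (mouseX / windowWidth) * (view.w - tview.w);
  tview.y = view.y + (mouseY / windowHeight) * (view.h - tview.h);
  return tview;
}
```

E.g. with the mouse dead center, zooming in by 2x on a 100x100 view yields a 50x50 view offset by 25 on each axis, so the center point stays put.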

+ @C @2022-03-19 @whiteboard: should lines drawn while at different zoom level look "the same"? We are currently using `p5.scale()`, and with it, drawing when zoomed out will result in thick lines!

2022-03-10 (permalink)

* @B @whiteboard Add panning support

* @B @whiteboard Add Export button to export as png

~ @B @2022-04-02 @whiteboard Slow performance due to bigger canvas **and** increasing undo history -- maybe offscreen rendering can help?

  - [Main Thread Canvas vs Web Worker (Offscreen) Canvas](https://codepen.io/newinweb/pen/bxazNg)
  - [Generating the Mandelbrot Set](https://badgerbadgerbadgerbadger.dev/generating-the-mandelbrot-set/)

+ @B @2022-03-12 @whiteboard Scrollwheel to zoom in / zoom out

  - [companje/PanZoom.js](https://gist.github.com/companje/5478fff07a18a1f4806df4cf77ae1048)

2022-03-09 (permalink)

* @A @whiteboard @mobile Set up `<meta>` element accordingly, so it does not start completely zoomed out

* @A @whiteboard @mobile Added support for touch events ([Mobile Touch Draw](https://openprocessing.org/sketch/1398186))

* @A @whiteboard @mobile `crypto.randomUUID()` does not seem to be defined while on mobile (I am not sure why though), so I ended up snatching a [UUID generator](https://stackoverflow.com/a/2117523) from StackOverflow, and using that when `crypto.randomUUID` is missing
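For reference, the fallback looks roughly like this (`newId` / `uuidv4` are made-up names; the generator is the one from the linked StackOverflow answer). Worth noting: `crypto.randomUUID` is only exposed in secure contexts (HTTPS), which might explain why it goes missing on mobile:

```javascript
// Math.random()-based v4 UUID generator, from the linked SO answer
function uuidv4() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
    const r = Math.random() * 16 | 0;
    const v = c === 'x' ? r : (r & 0x3 | 0x8); // 'y' -> variant bits 10xx
    return v.toString(16);
  });
}

// Prefer the native API when it exists, fall back otherwise
const newId = (typeof crypto !== 'undefined' && crypto.randomUUID)
  ? () => crypto.randomUUID()
  : uuidv4;
```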

? @B @whiteboard Fix flicker effect when sending `draw` commands remotely (1. draw a few shapes, rapidly, 2. each shape will disappear only to reappear a few ms right after, when the linked `draw` message is received)

2022-03-08 (permalink)

[Programming, Motherfucker](http://programming-motherfucker.com/), will never get old!
We are a community of motherfucking programmers who have been humiliated by software development methodologies for years.
>
We are tired of XP, Scrum, Kanban, Waterfall, Software Craftsmanship (aka XP-Lite) and anything else getting in the way of...**Programming, Motherfucker**.
>
We are tired of being told we're socially awkward idiots who need to be manipulated to work in a Forced Pair Programming chain gang without any time to be creative because none of the 10 managers on the project can do... **Programming, Motherfucker**.
>
We must destroy these methodologies that get in the way of...**Programming, Motherfucker**.

In the same spirit, there is also: [@Outage party](https://outage.party)
Incident Response Runbook
>
1. Find out what changed.
2. Rollback everything.
>
FAQ
>
>> My change was in a different datacenter.
>
Roll that back.
>
>> I changed the internal network and this is an external problem.
>
Please be rolling that back.
>
>> I only changed staging.
>
Oh really? Roll it back.
>
>> My change couldn't possibly be related!
>
Don't care, roll it back.
>
>> I didn't change code, it's just a backfill
>
Don't care, roll it back. Turn off the backfill.
>
>> Nothing changed! I just added some additional app nodes!
>
Turn them off.
>
>> Seriously it would be impossible for this change to be related.
>
You keep using that word, I do not think it means what you think it means. Roll it back.

2022-03-07 (permalink)

* Added a `New` button to initialize a new @whiteboard

+ @2022-03-19 Evict unused @whiteboard's data, after a week

+ @2022-03-10 Make it possible to move around without drawing -- @whiteboard

? Fucking fix pasting text inside a Microsoft Teams private chat; few relevant pointers:

  - [Why does Teams always strip indentation when pasting text?](https://techcommunity.microsoft.com/t5/microsoft-teams/why-does-teams-always-strip-indentation-when-pasting-text/m-p/2349812) -- first opened in May 2021!
  - [From Markdown To Pastable Formatted Text In OS X Terminal](https://www.ostricher.com/2015/08/from-markdown-to-pastable-formatted-text-in-os-x-terminal/)
  - [Clipboard inspector](https://evercoder.github.io/clipboard-inspector/) -- Web page you can paste stuff into, to see what's actually being stored inside the OS clipboard (i.e., plain text, vs HTML, vs RTF)

? @read This other article on how to draw smooth lines in Javascript: http://scaledinnovation.com/analytics/splines/aboutSplines.html @whiteboard

2022-03-06 (permalink)

* Started to put together some notes on the syntax for my @plan files

* Started to put together some notes for a CLI to actually manage my @plan files

? @read patterns of enterprise application architecture

? @read debugging: the 9 indispensable rules for finding even the most elusive software and hardware problems

2022-03-02 (permalink)

Thoughts on implementing undo/redo for my @multiplayer @whiteboard

- Each user keeps a stack of shapes -- the ones they drew
- Each shape has a unique ID attached to it -- generated by the user that created the shape
- "draw" events are broadcast to all the users (the sender of the message included)
  - this way, the sender, seeing a draw command with a _known_ ID, can acknowledge that the message was received by the server, and promote the shape into the "undo history stack"
- Each user keeps an undo-history stack
  - Entries of this stack contain the last received "draw" message
  - They also contain a "buffer", i.e. a copy of the visible canvas, **after** the last "draw" message had been processed
- When a user wants to undo **their** last "draw" action, they would generate an "undo" message, containing the ID of the shape that needs to be undone, and send it to the server which will then broadcast it to all the other clients
- The sender would also pop the shape from their shapes stack, and push it into a different stack (e.g., undone shapes)
- Each client (the sender included) will then
  - Pop elements from the undo-history stack (and push them into a different temporary stack), until the element with the matching ID is found
  - Pop that element too -- but discard it, don't save it anywhere
  - Copy the buffer of the first element of the undo-history stack onto the canvas
  - Re-apply all the "draw" operations, in order
- When re-doing the last operation instead, the client will simply generate a "draw" message, with the exact same shape information as before -- the only difference is that now the shape will be rendered **on top** of everything else
- Drawing a new shape, should empty the local stack of undone shapes too -- correct, we do not support undo tree ;-)
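The undo replay described above can be sketched as follows, modeling the canvas and each entry's "buffer" as plain arrays of draw operations (a hypothetical model; the real thing works on p5 graphics buffers, and the names are made up):

```javascript
// history: [{ id, draw, buffer }, ...], oldest first; `buffer` is the
// snapshot of the canvas *after* that entry's draw op was processed.
function undoShape(history, shapeId, canvas) {
  const replay = [];
  // Pop entries until the one with the matching ID is found...
  while (history.length > 0) {
    const entry = history.pop();
    if (entry.id === shapeId) break; // ...and discard that one
    replay.unshift(entry);
  }
  // Restore the canvas from the snapshot preceding the undone shape
  canvas.length = 0;
  const base = history[history.length - 1];
  if (base) canvas.push(...base.buffer);
  // Re-apply the remaining "draw" operations, in order
  for (const entry of replay) {
    canvas.push(entry.draw);
    entry.buffer = [...canvas]; // refresh the snapshot
    history.push(entry);
  }
}
```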

2022-03-01 (permalink)

[Copy canvas to graphics object in p5.js](https://stackoverflow.com/questions/70231135/copy-canvas-to-graphics-object-in-p5-js) @multiplayer @whiteboard

2022-02-28 (permalink)

How to disable / enable specific Git branches from triggering a CI, with GitHub actions?

[This](https://github.com/git/git/commit/e76eec35540f9eca7baef3d84c164a1f8d4d3cb9) is how `git` did it, and I am going to summarize the steps here.

The first step is to create a branch, `ci-config`, with no files in it except for a `ci/config/allow-refs` script, whose sole responsibility is to return `true` or `false` depending on whether the input ref is enabled for CI or not:
#!/bin/sh
#
# Sample script for enabling/disabling GitHub Actions CI runs on
# particular refs. By default, CI is run for all branches pushed to
# GitHub. You can override this by dropping the ".sample" from the script,
# editing it, committing, and pushing the result to the "ci-config" branch of
# your repository:
#
#   git checkout -b ci-config
#   cp allow-refs.sample allow-refs
#   $EDITOR allow-refs
#   git commit -am "implement my ci preferences"
#   git push
#
# This script will then be run when any refs are pushed to that repository. It
# gets the fully qualified refname as the first argument, and should exit with
# success only for refs for which you want to run CI.

case "$1" in
# allow one-off tests by pushing to "for-ci" or "for-ci/mybranch"
refs/heads/for-ci*) true ;;
# always build your integration branch
refs/heads/my-integration-branch) true ;;
# don't build any other branches or tags
*) false ;;
esac
The next step is to create a new GitHub Actions job, `ci-config`, responsible for invoking the `allow-refs` script above and ultimately setting an _output_ variable indicating whether the current commit should trigger CI or not:
jobs:
  ci-config:
      runs-on: ubuntu-latest
      outputs:
        enabled: ${{ steps.check-ref.outputs.enabled }}
      steps:
        - name: try to clone ci-config branch
          continue-on-error: true
          run: |
            git -c protocol.version=2 clone \
              --no-tags \
              --single-branch \
              -b ci-config \
              --depth 1 \
              --no-checkout \
              --filter=blob:none \
              https://github.com/${{ github.repository }} \
              config-repo &&
              cd config-repo &&
              git checkout HEAD -- ci/config
        - id: check-ref
          name: check whether CI is enabled for ref
          run: |
            enabled=yes
            if test -x config-repo/ci/config/allow-refs &&
               ! config-repo/ci/config/allow-refs '${{ github.ref }}'
            then
              enabled=no
            fi
            echo "::set-output name=enabled::$enabled"
Last, you will have to update all the pre-existing jobs to a) wait for the `ci-config` job to run first, and b) skip executing the job if `ci-config` decided that CI should be disabled for the current commit:
  windows-build:
    needs: ci-config
    if: needs.ci-config.outputs.enabled == 'yes'
    runs-on: windows-latest
    steps:
    - uses: actions/checkout@v1
And that's it!

2022-02-25 (permalink)

I have been thinking about implementing a @multiplayer @whiteboard lately.

"Why?" you might ask, and well, the answer is two-fold:

- I got sick of the alternatives, e.g. Gimp + screen/window share, or Microsoft Whiteboard, mostly because a) they usually eat up a lot of CPU and my MacBook Air from 2014 can hardly cope with that, and b) their integration with Microsoft Teams, when available, kind of sucks (e.g., on Teams, you can activate Whiteboard only from meetings, but not from private chats)
- I wanted to play around with multiplayer concepts, and this seemed like a good application, with the right size, to get started with it

Here are some ideas of a possible MVP for this (`@A` items are more important than, say, `@C` ones):

+ @A @2022-03-07 Create new @whiteboard

+ @A @2022-03-04 Join existing @whiteboard by URL or something

~ @C @2022-04-02 See where the others are on the @whiteboard, and click to follow-through

+ @A @2022-02-28 Pre-defined colored markers @whiteboard

~ @C @2022-04-02 Color picker for markers @whiteboard

~ @C @2022-04-02 Sliders to increase/decrease marker size @whiteboard

? @B Keyboard shortcuts @whiteboard

+ @A @2022-02-28 Eraser (i.e. a bigger, white, marker) @whiteboard

~ @C @2022-04-02 Erase shapes instead of drawing over them @whiteboard

+ @A @2022-03-02 Undo / Redo @whiteboard

? @B Smooth drawing @whiteboard

? @C Electron / standalone app @whiteboard

Where do we start?! Well, of course the lines drawn will have to look _stunning_ (i.e. smooth), so why not kick this off by searching for ways to draw smooth lines?  Well, it _should_ be possible to do this (either with splines, or point filtering based on age / position), but I will surely have to dig deeper:

- [How to draw a smooth line through a set of points using Bezier Curves](https://stackoverflow.com/questions/9916119/how-to-draw-a-smooth-line-through-a-set-of-points-using-bezier-curves)
- [Canvas Spliner](https://github.com/jonathanlurie/canvasSpliner)
- [Cubic splines in JavaScript (via CoffeeScript)](http://blog.mackerron.com/2011/01/01/javascript-cubic-splines/)
- [understand implementation of cubic interpolation](https://stackoverflow.com/questions/28032958/understand-implementation-of-cubic-interpolation)
- [how to draw smooth curve through N points using javascript HTML5 canvas?](https://stackoverflow.com/questions/7054272/how-to-draw-smooth-curve-through-n-points-using-javascript-html5-canvas)
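One of the simplest techniques from the links above: instead of connecting the sampled points with straight segments, draw quadratic curves using each point as control point and the midpoints between consecutive points as curve endpoints. A sketch (`smoothSegments` is a made-up helper; it only computes the segments, leaving the actual `quadraticCurveTo()` calls, and the first/last straight bits, to the caller):

```javascript
// points: [[x, y], ...] as sampled from mouse/touch events.
// Returns one { control, end } pair per intermediate point; on a canvas
// each would become a quadraticCurveTo(cx, cy, ex, ey) call.
function smoothSegments(points) {
  const segments = [];
  for (let i = 1; i < points.length - 1; i++) {
    const [x0, y0] = points[i];
    const [x1, y1] = points[i + 1];
    segments.push({
      control: [x0, y0],                   // sampled point
      end: [(x0 + x1) / 2, (y0 + y1) / 2], // midpoint to the next point
    });
  }
  return segments;
}
```

Because each curve ends exactly at a midpoint, and the next curve starts from that same midpoint, consecutive segments join with matching tangents, which is what makes the stroke look smooth.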

PS. I put together a simple Repl (it's a fork of another existing one, plus some local edits) which I am going to use to experiment with this: you can play with this [here](https://replit.com/@iamFIREcracker/MulitplayerTest).

2022-02-24 (permalink)

+ @2022-02-27 @read [Zettelkasten](https://en.wikipedia.org/wiki/Zettelkasten)

? @read [Unit of Work + Repository Pattern using TypeScript (NestJs) and TypeOrm](https://jideowosakin.com/unit-of-work-pattern-in-typescript/)

? @read [Node.js + MySQL + Unit of Work pattern](https://medium.com/@vinjenks/node-js-mysql-unit-of-work-pattern-4b20cbc61963)

2022-02-23 (permalink)

Intent-based development environment

(taken from: https://amasad.me/right)
At Replit, we're really passionate about making a simple yet powerful
programming environment. The environment needs to be very intuitive for
newbies and should be enjoyable for experts.
>
The environment always starts with 3-columns, something we've tested and made
sure that newbies intuitively get: files, editor, console. That's great, but
now what if you want to do web development?
>
You simply write the code, and the environment will open a new pane to show
you the output.
>
[...]
>
Web development is not the only thing people want to do on Replit, many would
also like to build games, plot, or boot up old operating systems for fun. In
this case, if you wrote the code, or invoked the program that wants to do
graphics, we will detect that and stream VNC down to your browser. So how do
we do it? Initially, we relied on LD_PRELOAD[4] which allows you to override
arbitrary function calls and inject your own logic. Pretty neat and works for
other intent-based functionality. However, we decided to move away[5] from
that, here is the new approach:
>
In order to avoid using the LD_PRELOAD trick, we took a page out of
systemd's socket activation feature, so that we detect the intent of
a repl wanting to communicate with X by opening the socket to the
X server, and launch the server + Window Manager at that point in time.
>
[...]
>
Finally, much of software development today involves open-source packages.
However, package manager can be a bit of pain to use, and honestly they can
take out of the flow of coding. So while we do have native UI for package
management, we also built a system to detect your intent to install an OSS
package and simply do it for you.
>
Intent detection is hard but worth it.

2022-02-13 (permalink)

@vim
NrrwRgn - Region highlight is lost on write

I have been playing with the [NrrwRgn](https://github.com/chrisbra/NrrwRgn) Vim plugin lately, and I have to say, I am quite enjoying it so far.  One small annoyance that I have experienced is that the highlight of the narrowed region inside the original window seems to get lost every time I write changes back from the narrowed region to the original window (read: https://github.com/chrisbra/NrrwRgn/issues/87).

I did spend some time looking into this today, and here are my findings.

First off, here is the content of the .txt file I have used for testing:
Who does not love Lorem Ipsum?

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu
fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in
culpa qui officia deserunt mollit anim id est laborum.
To create the narrowed regions I tried the following approaches, and either one of them seems to trigger the bug:

1. Line-wise selection, `:NarrowRegion` command
2. Visual selection, `<leader>nr` mapping

Let's get now to the interesting part:

- On save, while on the narrowed buffer, `:WidenRegion` ends up getting called
- Inside of it, `winnr` points to the ID of the current window, i.e. the ID of the window where the narrowed buffer is currently loaded
- `orig_buf` instead to the ID of the buffer where the narrowed region was created from
- In my execution path, `<sid>JumpToBufinTab` gets invoked, which results in making the original window active (i.e. `winnr()` is now equal to `bufwinnr(orig_buf)`)
- Note: later on, the following is run: `exe ':noa'. orig_win. 'wincmd w'`, but I am not sure what that is for, since `JumpToBufinTab` already took care of focusing the "right" window
- Then the narrowed region is widened accordingly, and highlights optionally reset - this is the guard being used:
" also, renew the highlighted region
" only highlight, if we are in a different window
" then where we started, else we might accidentally
" set a match in the narrowed window (might happen if the
" user typed Ctrl-W o in the narrowed window)
if !(has_key(s:nrrw_rgn_lines[instn], 'single') || winnr != winnr())
But the thing is, `winnr` will **always** be different from `winnr()` (in my execution path at least) which means the "re-highlight" logic will **never** be executed.

Now, by the look of `:help matchadd()`, the new highlight pattern gets defined for the _current window_; doesn't this suggest that the original window (i.e. the one where the narrowed region was created from) should be the active one at the time `<sid>AddMatches` is invoked? (which should already be the case, thanks to the call to `JumpToBufinTab`)

For fun, I tweaked the guard logic as follows, and now the narrowed region seems to get re-highlighted correctly after each call to `:WidenRegion`:
@@ -1361,7 +1361,7 @@ fun! nrrwrgn#WidenRegion(force)  abort "{{{1
 		" then where we started, else we might accidentally
 		" set a match in the narrowed window (might happen if the
 		" user typed Ctrl-W o in the narrowed window)
-		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || winnr != winnr())
+		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || bufwinnr(orig_buf) != winnr())
 			call <sid>AddMatches(<sid>GeneratePattern(
 				\ s:nrrw_rgn_lines[instn].start[1:2],
 				\ s:nrrw_rgn_lines[instn].end[1:2],
@@ -1397,7 +1397,7 @@ fun! nrrwrgn#WidenRegion(force)  abort "{{{1
 		" then where we started, else we might accidentally
 		" set a match in the narrowed window (might happen if the
 		" user typed Ctrl-W o in the narrowed window)
-		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || winnr != winnr())
+		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || bufwinnr(orig_buf) != winnr())
 			call <sid>AddMatches(<sid>GeneratePattern(
 				\s:nrrw_rgn_lines[instn].start[1:2],
 				\s:nrrw_rgn_lines[instn].end[1:2],
Again, this seems to work, but I am not sure about all the other use cases that this change might have broken...

@vim @lisp
Today I finally got around to scratching an itch of my own: getting Vim to highlight the S-Expression surrounding the cursor position.

The implementation is super naive:

- When the cursor moves (both in normal, and insert mode)
- Get `vlime` to return us the start and end position of the surrounding S-expression
- Create a text property for that region (see: `:help textprop`), and highlight that accordingly
augroup vlime_hilight_curr_expr
    autocmd!

    if empty(prop_type_get('lisp_curr_expr'))
        call prop_type_add('lisp_curr_expr', {'highlight': 'LispCurrExpr'})
    endif
    hi link LispCurrExpr MatchParen

    function! s:HilightCurrExpr() abort
        call prop_remove({'type': 'lisp_curr_expr', 'all': v:true})

        let expr = vlime#ui#CurExpr(1)
        if expr[0] == '' || expr[1][0] == 0
            return
        endif

        let [line1, col1] = expr[1]
        let [line2, col2] = expr[2]
        let l:options = {
                    \   'type': 'lisp_curr_expr',
                    \   'end_lnum': line2,
                    \   'end_col': col2,
                    \ }
        call prop_add(line1, col1, l:options)
    endfunction

    autocmd CursorMoved,CursorMovedI <buffer> call s:HilightCurrExpr()
augroup end
I am sure it can be improved (e.g. caching some of the calls to `vlime#ui#CurExpr()`), but I am really enjoying the scope-highlight effect so far!

Thanks [SO](https://vi.stackexchange.com/a/20522) for mentioning Vim's support for text properties (`:help textprop`)!

2022-02-10 (permalink)

? "compose key" for the ultimate hacking keyboard? I know I can use digraphs when in vim...it's just that sometimes it could come in handy outside of vim too!


@vim @plan
`Neoformat` invokes `prettier` as follows:
prettier --stdin-filepath "/Users/matteolandi/plan/NrrwRgn_.faang_1" 2> /var/folders/s1/zl3vtwg54_9_1s8fsxxtxpl00000gn/T/neoformat/stderr.log
But the missing file extension causes `prettier` to error out:
[error] No parser could be inferred for file: /Users/matteolandi/plan/NrrwRgn_.faang_1
One possible solution to this is to rename the buffer and include the proper file extension to it:
" While setting up the NrrwRgn buffer
let filename_with_ext = expand('%') . '.js'
execute "file" filename_with_ext
filetype detect
Running `:Neoformat` will now finally get `prettier` to properly reformat my code...great!

There is one more problem though: `:Neoformat` is not kicking in automatically on `:write`, as instructed by the following autocommand:
augroup neoformat_neoformat
  autocmd!

  " Thanks: https://github.com/sbdchd/neoformat/issues/134#issuecomment-347180213
  au BufWritePre * try | undojoin | Neoformat | catch /E790/ | Neoformat | endtry
augroup END
Same is for `:Neomake`, or any other `:BufWrite` activated autocommand.

Apparently, `NrrwRgn` is using `:BufWriteCmd` to manually sync the narrowed buffer back into the not narrowed one, and that seems to cause Vim to skip triggering `BufWrite` and `BufWritePost` events.

Anyways, long story short I ended up creating a [PR](https://github.com/chrisbra/NrrwRgn/pull/88) to add a new hook for running `:Neomake` in it; let's see what the maintainers think about this.

PS. I abandoned the idea of automatically running `prettier` on save; instead, I will be manually running it whenever I need.  On the other hand, I still think it's a good idea to have `eslint` (i.e. `:Neomake`) run every time the buffer is saved, especially because that is **not** changing the content of the buffer, but simply adding annotations to it.

2022-02-07 (permalink)

* Another small PR to fix `scmindent` indentation for `#| ... |#` regions (https://github.com/ds26gte/scmindent/pull/12)

2022-02-02 (permalink)

Consider adding swank-loader.lisp's `dirname` to the location of the directory where fasl files are cached

(This was posted on [slime/slime/issues/704](https://github.com/slime/slime/issues/704))

While testing something out, yesterday, I got into this weird situation where it appeared as if an older version of Slime was getting loaded, and not the one that I was trying to test; after a little bit of debugging, I realized the problem had something to do with where Slime is caching fasl files, hence the enhancement request.

A little bit about my setup (your mileage might vary of course):

- `~/opt/slime/__main__/`, git worktree pointing to my personal fork of Slime
- `~/opt/slime/upstream-master/`, git worktree pointing to the upstream repo, i.e. this one
- Inside my CL init script I have `(pushnew #P"~/opt/slime/__main__/" asdf:*central-registry*)` (or `(pushnew #P"~/opt/slime/upstream-master/" asdf:*central-registry*)`) to easily switch between Slime implementations

With this setup, and due to the default location chosen by Slime to cache its fasl files, every time I changed my init script to switch the _active_ version of Slime, I also had to _touch_ every .lisp file contained in that directory, or chances were the fasls from the _inactive_ worktree would end up being loaded instead (especially if more recent).

Of course this only happens because I have 2 separate directories where Slime can be loaded from: in the case of a single git worktree, switching branches would indeed _touch_ any changed file and invalidate cached fasls accordingly.

At the moment, Slime's fasl files are getting cached inside a directory whose location takes into account:

- The current user's home
- Slime's version
- A _fingerprint_ of the CL implementation / OS where Slime is getting run

For example, on my laptop (MacOS), when I fire up Slime with SBCL I get:
* swank-loader:*fasl-directory*
#P"/Users/matteolandi/.slime/fasl/2.27/sbcl-2.1.9-darwin-x86-64/"
What if the chosen directory also took into account the location of the swank-loader file itself?
diff --git a/swank-loader.lisp b/swank-loader.lisp
index 5434b9d1..44eb5437 100644
--- a/swank-loader.lisp
+++ b/swank-loader.lisp
@@ -159,7 +159,8 @@ Return nil if nothing appropriate is available."
    (make-pathname
     :directory `(:relative ".slime" "fasl"
                  ,@(if (slime-version-string) (list (slime-version-string)))
-                 ,(unique-dir-name)))
+                 ,(unique-dir-name)
+                 ,@(cdr (pathname-directory *load-truename*))))
    (user-homedir-pathname)))
And with the above:
* swank-loader:*fasl-directory*
#P"/Users/matteolandi/.slime/fasl/2.27/sbcl-2.1.9-darwin-x86-64/Users/matteolandi/my-env/opt/slime/__main__/"
Alternatively, and as explained at the top of swank-loader.lisp, one could load the loader, override SWANK-LOADER:*FASL-DIRECTORY*, and finally call SWANK-LOADER:INIT.  Unfortunately though, even if the above would surely get the job done without the need for further enhancements from Slime, it would also _not_ prevent a different person with a setup similar to mine, from having to figure this all out on their own and implement the same workaround; so here I am, picking your brain about this possible enhancement.

Let me know what you think about this, and in case I will shove the diff above (or a similar one) into a pull request for approval.

M.

2022-02-01 (permalink)

* Use vlime#SYM instead of the undefined vlime#CL #3: https://github.com/phmarek/vlime/pull/3

* Fix input reading #4: https://github.com/phmarek/vlime/pull/4

2022-01-31 (permalink)

? shopping list: shared amongst people (i.e. household); work offline and only sync up when possible; single list but different items order (i.e. one for store); integrate with Alexa or Google Home

2022-01-28 (permalink)

? vlime: buffer with multiple packages: it seems to be picking the last one, and not the one just "above" the evaluated expression

2022-01-22 (permalink)

? scmindent: properly indent lambda lists -- all args are aligned to the second element

2022-01-13 (permalink)

On HASH-TABLEs performance

(posted on [r/Common_Lisp](https://www.reddit.com/r/Common_Lisp/comments/s0g2q3/on_hashtables_performance/))

I [built](https://topaz.github.io/paste/#XQAAAQDdBwAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHR/lSrXrRzt2ru8sshOcmQ5UeY7P5AZQhS4ncXon3a6r8j5K/gr4xuK1jjTG4PHnVajamzB6t54TV5dSPS63MDOK6jD6CsHfC+VkV4bms/+I8MmYXI4GU2JxpblEiXKntPh1orBPfGpyTCfhx4v+kHU29G6mF1Y4EM6ozuoAyzjwxpY0eKSSV1zFprANFIIlsaPqm/nWF9W4i224u0tUm3IHb+tnMDrFtscipbIplaGiZN8B2Q1apClGT/+51hyLVNGM+4prkx8VWxBWeWuN62LCmMrY0iqRvFeCwKqpwg4s4VRogYMgeG3AOyY1DlJ0UJFetXLjJ+7pMA+8jUmYyE6xRJd9R+jn/cPnkNyDceLd1iFB/ODUgVXLcyrYb7jFIIeucRNGqkgQ7lwe0xvmyiBLkHyIe3ML8lZQcmhyinz9Ynab880cOGsBYBuTRjCS7dCvLr5XKDjw8SpdnWXgjVrzLA49wE+NZXVkPz6tTVzFUR36unRBsIJZJKfYRi3ZUToT1eFyKKayvxZpI9V1QkNJvJehPwqf797u2T57qMuE1UrtnhSso1kQh/BEo33aCHrNXfO/H71RV95KbXakyaLfqTAeeKA2hUrQSbAFnewyRS+3eKmTvbe0+XL1ltwMKAccDiSJR+DZ8iMZhQx91qBpU6W4Gu5c+XA4oT8DFrhGWR1aHzdLLf2ICtG8VUVnLizIL/+Wv790=) an implementation of the A* algorithm which internally uses two HASH-TABLEs:

- `cost-so-far`, mapping all the searched states to the cost required to get there from the initial state -- this is used to calculate the final cost to move from the initial state to the goal one
- `come-from`, mapping all the searched states to their previous state -- this is used to calculate the complete path to move from the initial state to the goal one

As I was profiling this, I came to the conclusion that representing search states with 2D ARRAYs instead of HASH-TABLEs of the same size would have a huge positive impact on the overall runtime (i.e. it's better to use the former than the latter), but I cannot quite understand why, hence this post.

Now, it does not matter if you are not familiar with A*; in fact, we can completely ignore all its mechanics and simply pretend we have two HASH-TABLEs we need to update.  In addition, let's pretend our search state was a `2` rows by `10` columns grid, with numbers in it.

I came up with (and benchmarked) the following 3 state representations:

1. HASH-TABLE having `(row col)` as keys
2. HASH-TABLE having `(+ (* row 100) col)` as keys -- this to rule out any excessive consing from the above
3. 2D ARRAY

The benchmark is simple:

- We initialize `cost-so-far` and `come-from` hash tables
- We initialize our starting grid
- Then, 10000 times:
  - We copy the grid (I used ALEXANDRIA:COPY-HASH-TABLE and ALEXANDRIA:COPY-ARRAY for this)
  - Move two random elements
  - Save the new state inside `cost-so-far` and `come-from`

Here is the run time for the first state representation, HASH-TABLE with LISTs as keys:
(time
  (loop with cost-so-far = (make-hash-table :test 'equalp)
        with come-from = (make-hash-table :test 'equalp)
        with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
        initially (loop for row below rows do
                        (loop for col below cols do
                              (setf (gethash (list row col) burrow) (random 100))))
        repeat 10000 do
        (let ((next (copy-hash-table burrow))
              (row1 (random rows)) (col1 (random cols))
              (row2 (random rows)) (col2 (random cols)))
          (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
          (setf (gethash next cost-so-far) (random 1000)
                (gethash next come-from) burrow
                burrow next))
        finally (return cost-so-far)))
Evaluation took:
  8.908 seconds of real time
  8.398207 seconds of total run time (8.174489 user, 0.223718 system)
  94.27% CPU
  20,484,287,455 processor cycles
  13,913,664 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9414 {10032C19A3}>
Here is the run time for the second state representation, HASH-TABLE with NUMBERs as keys:
> (time
    (flet ((key (row col) (+ (* row 100) col)))
      (loop with cost-so-far = (make-hash-table :test 'equalp)
            with come-from = (make-hash-table :test 'equalp)
            with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'eql)
            initially (loop for row below rows do
                        (loop for col below cols do
                              (setf (gethash (key row col) burrow) (random 100))))
            repeat 10000 do
            (let ((next (copy-hash-table burrow))
                  (row1 (random rows)) (col1 (random cols))
                  (row2 (random rows)) (col2 (random cols)))
              (rotatef (gethash (key row1 col1) next) (gethash (key row2 col2) next))
              (setf (gethash next cost-so-far) (random 1000)
                    (gethash next come-from) burrow
                    burrow next))
            finally (return cost-so-far))))
Evaluation took:
  7.051 seconds of real time
  6.668094 seconds of total run time (6.502108 user, 0.165986 system)
  94.57% CPU
  16,217,080,790 processor cycles
  11,858,336 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9389 {10041E7953}>
While this is the runtime of the third and last state representation, i.e. 2D ARRAY:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 with burrow = (make-array (list rows cols))
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (aref burrow row col) (random 100))))
          repeat 10000 do
          (let ((next (copy-array burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (aref next row1 col1) (aref next row2 col2))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  0.027 seconds of real time
  0.026169 seconds of total run time (0.016391 user, 0.009778 system)
  96.30% CPU
  62,550,110 processor cycles
  4,964,608 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9285 {1004F65393}>
That's a 100x speed-up!

For fun I ran another experiment in which I still used a HASH-TABLE (with `(row col)` as keys) to represent the search state, but also serialized that into a LIST of its elements and used _that_ to update `cost-so-far` and `come-from`:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((state-key (uiop:while-collecting (add)
                               (loop for row below rows do
                                     (loop for col below cols do
                                           (add (gethash (list row col) next)))))))
              (setf (gethash state-key cost-so-far) (random 1000)
                    (gethash state-key come-from) burrow
                    burrow next)))
          finally (return cost-so-far)))
Evaluation took:
  0.093 seconds of real time
  0.090564 seconds of total run time (0.052988 user, 0.037576 system)
  97.85% CPU
  214,038,690 processor cycles
  26,702,832 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9216 {10070F1E13}>
I was expecting this run to cons up more than the previous ones, but the runtime?! I was definitely not expecting it to be comparable with the run in which we used a 2D ARRAY as our search state!

So, can anyone tell me what's going on here? Are ARRAYs really that much easier to EQUALP than HASH-TABLEs, or am I looking in the wrong place?  I thought that with such small states, using an array or a hash table would not make much of a difference, but apparently it does!

...
An array is basically the cost of a write to memory + a tiny bit of arithmetic.
>
A hash table requires walking a list (the key), producing a hash code, finding a suitable location in the underlying storage, reallocating and copying if needed, checking for collisions (also requires walking a list), and storing.
I get what you are saying: HASH-TABLEs require more "bookkeeping" than ARRAYs, and I 100% agree with you on that.  However, most of the things you mentioned are executed in **all** the different scenarios that we ran, and yet the runtime seems to increase only when we try to EQUALP hash tables together.  Let's review what each scenario does.

Scenario 1 (HASH-TABLE with `(row col)` as keys):

- Cost to generate a new state

  - Copy a HASH-TABLE 10000 times
  - Rotate two random elements

- Bookkeeping costs for `cost-so-far`

  - Look up the new state, i.e. a HASH-TABLE instance, in the hash table
  - Potentially resize the hash table
  - Potentially insert the new entry in the hash table

- Bookkeeping costs for `come-from` (same as above)

Scenario 3 (2D ARRAY):

- Cost to generate a new state

  - Copy an ARRAY 10000 times
  - Rotate two random elements

- Bookkeeping costs for `cost-so-far`

  - Look up the new state, i.e. an ARRAY instance, in the hash table
  - Potentially resize the hash table
  - Potentially insert the new entry in the hash table

- Bookkeeping costs for `come-from` (same as above)

In one case we get a 7s runtime, while in the other a 0.03s one; the difference can be attributed to:

- Different cost required to copy a HASH-TABLE / ARRAY
- Different cost required to rotate elements inside a HASH-TABLE / ARRAY
- Different cost required to see if a HASH-TABLE / ARRAY is contained inside another HASH-TABLE (this last one being `cost-so-far` or `come-from`)

The last scenario though, where we serialize the hash table into a list of its elements, should help us better understand where most of the time is actually spent:

- Cost to generate a new state

  - Copy a HASH-TABLE 10000 times
  - Rotate two random elements

- Bookkeeping costs for `cost-so-far`

  - Serialize the HASH-TABLE into a list of its elements
  - Look up the new serialized state, i.e. a LIST, in the hash table
  - Potentially resize the hash table
  - Potentially insert the new entry in the hash table

- Bookkeeping costs for `come-from` (same as above)

With this, the runtime is now 0.09s, and the only differences from the first scenario are that here:

- There is an extra serialization step
- EQUALP for the `cost-so-far` and `come-from` tables works with LISTs and not HASH-TABLEs

I mean, I get that scanning a HASH-TABLE requires more work, but _that_ much more work, really?  Here are two more tests where we try to measure the time to generate the new states, and the time to update `cost-so-far` and `come-from`, separately.

Here we use HASH-TABLEs as keys:
> (let ((cost-so-far (make-hash-table :test 'equalp))
        (come-from (make-hash-table :test 'equalp))
        states)
    ;; Generate all the states
    (time
      (loop with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
            initially (loop for row below rows do
                            (loop for col below cols do
                                  (setf (gethash (list row col) burrow) (random 100))))
            repeat 10000 do
            (let ((next (copy-hash-table burrow))
                  (row1 (random rows)) (col1 (random cols))
                  (row2 (random rows)) (col2 (random cols)))
              (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
              (let ((state-key next)
                    (cost-next (random 1000)))
                (push (list next state-key cost-next) states)
                (setf burrow next)))))
    ;; Update cost-so-far, come-from
    (time
      (loop for (state key cost) in states for prev = nil then state do
            (setf (gethash key cost-so-far) cost
                  (gethash key come-from) prev)))
    (values cost-so-far come-from))
Evaluation took:
  0.092 seconds of real time
  0.090417 seconds of total run time (0.070339 user, 0.020078 system)
  [ Run times consist of 0.067 seconds GC time, and 0.024 seconds non-GC time. ]
  97.83% CPU
  213,098,600 processor cycles
  12,494,080 bytes consed

Evaluation took:
  7.497 seconds of real time
  7.245294 seconds of total run time (7.097045 user, 0.148249 system)
  96.64% CPU
  17,254,061,811 processor cycles
  2,054,608 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9378 {10092E6233}>
#<HASH-TABLE :TEST EQUALP :COUNT 9378 {10092E62F3}>
And it takes around 7.5s for key comparisons and resizing.

Here instead we serialize HASH-TABLEs into a LIST first, and then use that as key:
> (let ((cost-so-far (make-hash-table :test 'equalp))
        (come-from (make-hash-table :test 'equalp))
        states)
    ;; Generate all the states
    (time
      (loop with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
            initially (loop for row below rows do
                            (loop for col below cols do
                                  (setf (gethash (list row col) burrow) (random 100))))
            repeat 10000 do
            (let ((next (copy-hash-table burrow))
                  (row1 (random rows)) (col1 (random cols))
                  (row2 (random rows)) (col2 (random cols)))
              (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
              (let ((state-key (uiop:while-collecting (add)
                                 (loop for row below rows do
                                       (loop for col below cols do
                                             (add (gethash (list row col) next))))))
                    (cost-next (random 1000)))
                (push (list next state-key cost-next) states)
                (setf burrow next)))))
    ;; Update cost-so-far, come-from
    (time
      (loop for (state key cost) in states for prev = nil then state do
            (setf (gethash key cost-so-far) cost
                  (gethash key come-from) prev)))
    (values cost-so-far come-from))
Evaluation took:
  0.034 seconds of real time
  0.034214 seconds of total run time (0.033711 user, 0.000503 system)
  100.00% CPU
  80,230,767 processor cycles
  25,301,488 bytes consed

Evaluation took:
  0.008 seconds of real time
  0.008258 seconds of total run time (0.008023 user, 0.000235 system)
  100.00% CPU
  19,813,576 processor cycles
  2,044,816 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9323 {1002275C53}>
#<HASH-TABLE :TEST EQUALP :COUNT 9323 {1002275D13}>
I am still puzzled by this, but it is what it is, so I guess I will have to make peace with it.

...
In your Common Lisp implementation, which looks like SBCL, ...
That's correct.
...the EQUALP hash of a hash table depends only on the number of elements in the hash table (see SB-IMPL::PSXHASH), which in this case is NEXT and which never changes size because it always has row*col entries (it also depends on the test, but that's the same too). So your gethash on COST-SO-FAR and COME-FROM are not O(1) operations. They are O(N).
What about the scenario in which I serialize `next` into a LIST first, and then use that to GETHASH into `cost-so-far` and `come-from`?  I would expect the hash function to behave similarly to the case where we are hashing hash tables; yet somehow there is almost a 100-fold difference in the overall runtime.
Plus on top of that, the cost of EQUALP on a set of hash tables is quite high. You have to verify that every key+value pair in one hash table is in the other and vice versa.
For _fun_, I even tried to serialize not just `next` values, but their keys too; it conses more of course, but its runtime is still way better than if we used EQUALP on hash tables:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((state-key (hash-table-alist next)))
              (setf (gethash state-key cost-so-far) (random 1000)
                    (gethash state-key come-from) burrow
                    burrow next)))
          finally (return cost-so-far)))
Evaluation took:
  0.034 seconds of real time
  0.033374 seconds of total run time (0.032833 user, 0.000541 system)
  97.06% CPU
  78,876,062 processor cycles
  20,299,376 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9368 {1006C41283}>
"For a _small_ dataset, a LIST performs better than a HASH-TABLE" -cit

I am not sure where I read this, but I guess the rabbit hole I find myself in right now kind of testifies to that ;-)
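A minimal illustration of that claim (not a rigorous benchmark, and using made-up toy data):

```lisp
;; With just a handful of entries, looking a key up in an association list
;; is a short linear scan, while a hash table still has to hash the key
;; first -- for tiny collections the alist is often just as fast, or faster.
(let ((alist '((:a . 1) (:b . 2) (:c . 3)))
      (table (make-hash-table)))
  (loop for (k . v) in alist
        do (setf (gethash k table) v))
  (values (cdr (assoc :b alist))   ; => 2
          (gethash :b table)))     ; => 2
```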

...

>> What about the scenario in which I serialize next into a LIST first, and then use that to GETHASH into cost-so-far and come-from?
>
Then this works fine and the hash table cost is now O(1) for each gethash call and everything is fast.
>
>> I would expect the hash function to behave similarly to the case where we are hashing hash tables; yet somehow there is almost a 100-fold difference in the overall runtime.
>
You are wrong with your expectation. The hash of the list version is nicely randomly distributed as it includes all the key value pairs. Just try it! call #'sb-impl::psxhash on NEXT hash tables and then call it on the list you construct with your loop over row and cols. You will find it is the SAME VALUE each time through your loop in the first case, and unique and random in the second case. This means when you use a hash-table as a key in your COST-SO-FAR and COME-FROM equalp hash tables, they behave like lists, wherein each gethash call has to search all the elements in the hash table to find the key that is equal to your hash table, which is now an O(N) operation where N is the number of entries in the COST-SO-FAR and COME-FROM hash tables. Please message me directly if this explanation isn't making sense and I can try again to explain it some different ways.
Oh, I finally see what's going on, and I think I misread your: "So your gethash on COST-SO-FAR and COME-FROM are not O(1) operations. They are O(N)." in the parent comment!

Let's see if I am getting this right -- please, do correct me if I am wrong:

- When writing to / reading from a HASH-TABLE, the provided key, `k`, gets _hashed_ (SBCL uses SB-IMPL::PSXHASH for this), and if an entry with the same key is already found in the HASH-TABLE, the implementation then goes on and tests for _equality_
- If equal, then we have found the entry to update / read; otherwise, the implementation needs to try a different entry; this usually translates to either scanning a list of all the keys clashing on the same hash value, until one is found which is _equal_ to `k` or until the list is exhausted; or, a different implementation could decide to keep trying `(hash + 1) % size`, `(hash + 2) % size`, ..., until an _empty_ entry, or one which is _equal_ to `k`, is found
- Now, when a HASH-TABLE is passed in to SB-IMPL::PSXHASH, the function will return a hash value which is only function of a) the test function the HASH-TABLE got set up with, and b) the number of entries of the HASH-TABLE (yes, the **number of entries**, and not their values; I wanted to stress this because I originally did not pay too much attention to this while reading the parent comment, as I assumed it was referring to the _complexity_ to calculate this hash, i.e. the more elements, the longer the hash function would take, but clearly I was wrong about it!)
- But the number of entries of our HASH-TABLE instances is always the same (i.e. `20`), and so is the `test` function we initialized them with (i.e. EQUAL)
- This means that SB-IMPL::PSXHASH, for any given state, would always be returning the same hash number (i.e. `4242822982970596467` for the case of a EQUAL HASH-TABLE with `20` entries)
- This means that GETHASH's performance will quickly degrade from O(1) to O(N), as the implementation is forced to scan the full set of entries until one which is _equal_ to `k` is found, or we wrapped around
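The claim in the bullets above is easy to check at the REPL; note that SB-IMPL::PSXHASH is an SBCL internal, so this sketch is implementation-specific and may break across SBCL versions:

```lisp
;; Two hash tables with the same test and the same number of entries, but
;; completely different contents: PSXHASH cannot tell them apart.
(let ((h1 (make-hash-table :test 'equal))
      (h2 (make-hash-table :test 'equal)))
  (setf (gethash '(0 0) h1) 1 (gethash '(0 1) h1) 2)
  (setf (gethash '(0 0) h2) 98 (gethash '(0 1) h2) 99)
  (= (sb-impl::psxhash h1) (sb-impl::psxhash h2)))  ; => T
```

Serializing the values into LISTs instead produces well-distributed hashes, which is exactly what keeps GETHASH O(1).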

This also means that under these circumstances, `cost-so-far` and `come-from` are almost behaving as if they were association lists; actually, let's try to confirm this with yet another scenario:
> (time
    (loop with cost-so-far = '()
          with come-from = '()
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((existing (assoc next cost-so-far :test 'equalp)))
              (if existing
                (setf (cdr existing) (random 1000))
                (setf cost-so-far (cons (cons next (random 1000)) cost-so-far))))
            (let ((existing (assoc next come-from :test 'equalp)))
              (if existing
                (setf (cdr existing) burrow)
                (setf come-from (cons (cons next burrow) come-from))))
            (setf burrow next))
          finally (return (length cost-so-far))))
Evaluation took:
  7.588 seconds of real time
  7.153604 seconds of total run time (7.015142 user, 0.138462 system)
  [ Run times consist of 0.043 seconds GC time, and 7.111 seconds non-GC time. ]
  94.28% CPU
  17,452,971,454 processor cycles
  12,449,472 bytes consed

9438
And if we exclude HASH-TABLE resizing costs from the first scenario, we get a very similar runtime!
> (time
    (loop with cost-so-far = (make-hash-table :size 10000 :test 'equalp)
          with come-from = (make-hash-table :size 10000 :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  7.296 seconds of real time
  7.080104 seconds of total run time (6.941640 user, 0.138464 system)
  97.04% CPU
  16,782,132,425 processor cycles
  12,445,328 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9301 {1003339C63}>
A note on your code above. Using hash-table-alist does not guarantee that the ORDER of the resulting ALIST is the same, so it CANNOT be used as a replacement: you can now have the case where (EQUALP hash-table-1 hash-table-2) is T, but (EQUALP (hash-table-alist hash-table-1) (hash-table-alist hash-table-2)) is NIL. So while it appears to work in this case (because the hash table is never resized, thus the internal order of KV pairs does not change), it is a bug in your code.
Right: I wanted to simplify the code a little bit, to make it easier for readers to follow, but clearly I did not think this through.  Thanks for noticing this silly mistake of mine!
The 'equalp' hash function (psxhash) when applied to a hash table in SBCL is a number generated from the number of elements in the hash table and the hash table type, and so is always the same number for all your NEXT hash tables and thus your hash table is no longer a hash table and just a slow list doing equalp comparisons. There are reasons psxhash does this and I recommend you think about them and figure them out, along with what equalp means on two hash tables.
Well, there is a comment in the source code covering just that:
       ...
       (structure-object
        (cond ((hash-table-p key)
               ;; This is a purposely not very strong hash so that it does not make any
               ;; distinctions that EQUALP does not make. Computing a hash of the k/v pair
               ;; vector would incorrectly take insertion order into account.
               (mix (mix 103924836 (hash-table-count key))
                    (sxhash (hash-table-test key))))
              ((pathnamep key) (pathname-sxhash key))
              (t
               (structure-object-psxhash key depthoid))))
       ...
Hashing the k/v pair vector would take insertion order into account; this could cause SB-IMPL::PSXHASH to generate two different hash numbers even though the two HASH-TABLE instances are _equal_ (as per the EQUALP definition), which in turn could lead to duplicate data getting stored in the HASH-TABLE.  Instead, we prefer SB-IMPL::PSXHASH to do the minimal amount of work that cannot cause duplicate keys to accidentally be stored in the HASH-TABLE, and let the `test` predicate (i.e. EQUALP) do the rest.
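And indeed, insertion order is invisible to EQUALP, as a quick (and fully portable) check shows:

```lisp
;; EQUALP compares hash tables by count, test, and key/value pairs,
;; regardless of the order in which the entries were inserted.
(let ((h1 (make-hash-table :test 'equal))
      (h2 (make-hash-table :test 'equal)))
  (setf (gethash :a h1) 1 (gethash :b h1) 2)
  (setf (gethash :b h2) 2 (gethash :a h2) 1)  ; reversed insertion order
  (equalp h1 h2))  ; => T
```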

Thanks /u/PhysicistWantsDonuts for helping me with this!

2022-01-11 (permalink)

Advent of Code: [2021/25](https://adventofcode.com/2021/day/25)
This is it: the bottom of the ocean trench, the last place the sleigh keys could be. Your submarine's experimental antenna **still isn't boosted enough** to detect the keys, but they **must** be here. All you need to do is **reach the seafloor** and find them.
>
At least, you'd touch down on the seafloor if you could; unfortunately, it's completely covered by two large herds of sea cucumbers, and there isn't an open space large enough for your submarine.
So we call a deep-sea marine biologist (whose number we found on a handwritten note taped to the wall of the submarine's cockpit), who explains to us what's going on:
"Sea cucumbers? Yeah, they're probably hunting for food. But don't worry, they're predictable critters: they move in perfectly straight lines, only moving forward when there's space to do so. They're actually quite polite!"
The puzzle text continues:
There are two herds of sea cucumbers sharing the same region; one always moves **east** (`>`), while the other always moves **south** (`v`). Each location can contain at most one sea cucumber; the remaining locations are **empty** (`.`). The submarine helpfully generates a map of the situation (your puzzle input).
For example:
v...>>.vv>
.vv>>.vv..
>>.>v>...v
>>v>>.>.v.
v>v.vv.v..
>.>>..v...
.vv..>.>v.
v.v..>>v.v
....v..v.>
Every **step**, the sea cucumbers in the east-facing herd attempt to move forward one location, then the sea cucumbers in the south-facing herd attempt to move forward one location. When a herd moves forward, every sea cucumber in the herd first simultaneously considers whether there is a sea cucumber in the adjacent location it's facing (even another sea cucumber facing the same direction), and then every sea cucumber facing an empty location simultaneously moves into that location.
OK, kinda makes sense...
Due to **strong water currents** in the area, sea cucumbers that move off the right edge of the map appear on the left edge, and sea cucumbers that move off the bottom edge of the map appear on the top edge. Sea cucumbers always check whether their destination location is empty before moving, even if that destination is on the opposite side of the map.
The task for the day:
Find somewhere safe to land your submarine. **What is the first step on which no sea cucumbers move?**
We are going to simulate this, i.e. move the two herds, the east facing one first, and the south facing one next, until none of them can move any more.

But first, the input; let's parse the input map into a 2D array:
(defun parse-map (lines &aux (rows (length lines)) (cols (length (car lines))))
  (make-array (list rows cols) :element-type 'character :initial-contents lines))
Now, let's see how we are going to move the two herds:

- All the sea cucumbers from the same herd move at the same time, so we will be creating a copy of the input map, `next`: we will be reading from `curr`, and writing to `next`
- Then, we will move the **east** facing herd first, and the **south** facing one next
- For each row, for each column
- If the _next_ spot is empty, we swap the content of the two elements
- And we also keep track that at least one sea cucumber did move
(defun tick (curr &aux
                  (rows (array-dimension curr 0))
                  (cols (array-dimension curr 1))
                  (next (copy-array curr))
                  moved)
  (flet ((is (ch row col) (eql ch (aref curr row col)))
         (swap (row1 col1 row2 col2)
           (rotatef (aref next row1 col1) (aref next row2 col2))
           (setf moved t)))
    (dotimes (row rows)
      (dotimes (col cols)
        (let ((col-next (mod (1+ col) cols)))
          (when (and (is #\> row col) (is #\. row col-next))
            (swap row col-next row col)))))
    (setf curr next next (copy-array curr))
    (dotimes (row rows)
      (dotimes (col cols)
        (let ((row-next (mod (1+ row) rows)))
          (when (and (is #\v row col) (is #\. row-next col))
            (swap row-next col row col))))))
  (values next moved))
Then, all that's left to do is call TICK repeatedly until the sea cucumbers stop moving, and return the number of elapsed steps:
(defun part1 (map &aux (moved t))
  (loop while moved count t do (setf (values map moved) (tick map))))
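As a sanity check, we can run this against the example map from the puzzle text (which, per the puzzle, stops after 58 steps) -- assuming COPY-ARRAY from Alexandria is available:

```lisp
;; Example map from the puzzle text; PART1 should return 58 here.
(part1 (parse-map '("v...>>.vv>"
                    ".vv>>.vv.."
                    ">>.>v>...v"
                    ">>v>>.>.v."
                    "v>v.vv.v.."
                    ">.>>..v..."
                    ".vv..>.>v."
                    "v.v..>>v.v"
                    "....v..v.>")))
```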
And that's it!

Final plumbing:
(define-solution (2021 25) (input parse-map) (part1 input))

(define-test (2021 25) (532))
Test run:
> (time (test-run))
TEST-2021/25.
Success: 1 test, 1 check.
Evaluation took:
  1.093 seconds of real time
  1.056084 seconds of total run time (1.034671 user, 0.021413 system)
  [ Run times consist of 0.006 seconds GC time, and 1.051 seconds non-GC time. ]
  96.61% CPU
  2,513,864,169 processor cycles
  81,222,576 bytes consed
This is not too far off from what I [used](https://topaz.github.io/paste/#XQAAAQBUBwAAAAAAAAAUGQimgx+p6PZhRh7uIO+WahagjQFtPUiCoxNgFycn2BrqOj/HOBL8Z7DCsyBqFgSXDidjaytfAiVhJKKToKeI2JxiZ1r3xMJuElxpV/LfXi+nfkg/MWFCDwfkS6x35gl2OmlvYLs1AppDHrAR5MIjo5hAvQiA9y9MxVh48Mqj81RrYO7q1GfdyW/pFPj3hl6kTirDyjE5L6TtFsCxNfeCLugBv8cfYt3DY/ia2Epg1VkMZ91iBAFrgeSef1aRh4arkULTSjUqIMa588z4sP9TTTv0PqBamaxPUZGDu1Tr1KRPunpf8tB2zfbmEabkt86WtB7WTltXj6WGRoeFCcg8tGjKV9/bV7Qj4xMu1NVIm94XISUtHE6i7n6xaAHOiMFQyEhLctU51vQxOHfIJwnL81D7j9i6oYdEMRjfNJfahIPYpKmBTlc4V8/AaCLCBumz24LNRMR305NuNFQNX1OmgK2eTToEfjF94iG26pTN5KbOoxpmaUGs56fY7XAU1shggy59mMrVqjquFxuBkorPucWyv4rA00DIw4fSC/UrRvGV5kduUHEkQBxEGGVxenkuzNkZ/cgwceHNxLAEHF3vYIPjbPdBPtFtCtE8tLpm9p4Y+7iqDKi2dpxvYceCYl3bVrxRR3fz4Sm1PMTVhC6KZWlh8Sq+TqhXvrfpqHPDFEqU6zd+0YHxwiK8Iv35pare3idS1FS8oeYm6+vSgDstpS04s/OwxCdMJSADz/VguTxIcg07RU9e0P/pqzT/) to get my star for the day; main differences:

- Used a HASH-TABLE instead of a 2D array for the map
- Kept on forgetting about ROTATEF -- instead, I went with the usual SETF / GETHASH combo

The rest is pretty much the same; and if you wonder how that compares to the solution presented here, well, as expected, the HASH-TABLE one conses up more and takes longer to run:
TEST-2021/25.
Success: 1 test, 1 check.
Evaluation took:
  5.764 seconds of real time
  5.592039 seconds of total run time (5.371084 user, 0.220955 system)
  [ Run times consist of 0.221 seconds GC time, and 5.372 seconds non-GC time. ]
  97.02% CPU
  13,256,996,615 processor cycles
  1,541,412,576 bytes consed
And this is really it, folks; this was the last day of Advent of Code 2021, and these are my personal stats for the year:
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
 25   00:35:38   1765      0       >24h   7849      0
 24       >24h   9639      0       >24h   9523      0
 23       >24h  10091      0       >24h   8567      0
 22   00:16:12   1233      0   09:26:47   4814      0
 21   00:36:54   3629      0   06:33:55   6294      0
 20   01:18:22   2795      0   01:18:53   2519      0
 19   13:53:21   5731      0   13:59:27   5473      0
 18   09:38:24   6701      0   09:48:56   6514      0
 17   01:21:38   4923      0   01:47:03   4742      0
 16   01:42:44   3507      0   02:33:20   3958      0
 15   00:10:48    464      0   00:59:10   1927      0
 14   00:24:18   4237      0   01:38:04   4805      0
 13   00:56:19   6101      0   01:02:34   5368      0
 12   00:29:21   2620      0   01:09:05   3880      0
 11   00:37:06   3370      0   00:38:14   2960      0
 10   00:20:08   4481      0   00:30:26   3688      0
  9   00:14:25   3057      0   01:57:27   8388      0
  8   00:22:15   6338      0   11:31:30  25666      0
  7   00:11:03   4999      0   00:13:57   3206      0
  6   00:10:01   3043      0   00:39:17   4775      0
  5   07:28:10  27193      0   07:41:09  24111      0
  4   00:30:52   2716      0   00:41:34   2704      0
  3   00:12:42   4249      0   00:48:51   5251      0
  2   00:05:42   3791      0   00:09:24   3731      0
  1   00:02:45   1214      0   00:07:16   1412      0
What to say:

- Another great year, with lots of interesting challenges
- First year where I tried to be awake at 0600 when the problems unlocked
- First year where I ranked sub-1000 for one of the problems
- A bit upset that I failed to solve day 23 and day 24 on time

Until the next time...

2022-01-10 (permalink)

TIL: You better cache your CL-SOURCE-FILE instances if you don't want ASDF to compile your files...TWICE!

Let's create ourselves a little playground, with a couple dummy files in it, `foo1.lisp` and `foo2.lisp`, each defining an empty package (`:foo1` and `:foo2` respectively):
cd /tmp
mkdir asdf-playground/ && cd asdf-playground/
mkdir foo/
echo '(defpackage :foo1)' > foo/foo1.lisp
echo '(defpackage :foo2)' > foo/foo2.lisp
Create `playground.asd`, and dump the following in it:
(defclass auto-module (module) ())

(defmethod component-children ((self auto-module))
  (mapcar (lambda (p)
            (make-instance 'cl-source-file :type "lisp"
                           :pathname p
                           :name (pathname-name p)
                           :parent (component-parent self)))
          (directory-files (component-pathname self)
                           (make-pathname :directory nil :name *wild* :type "lisp"))))

(asdf:defsystem #:playground
  :components ((:auto-module "foo")))
Before we move on, let's take a closer look at the snippet above:

- We define a new class, AUTO-MODULE
- We specialize COMPONENT-CHILDREN for AUTO-MODULE instances to look for all the `.lisp` files contained in the module's directory, and create instances of CL-SOURCE-FILE off of them
- We define our system, `:playground`, and add an AUTO-MODULE component to it (this way ASDF will load / compile all the files contained inside `foo` without us having to explicitly list them all)

Pop SBCL open (or any other CL implementation), make sure ASDF can load systems defined in the current working directory, then load `:playground`:
* (pushnew '*default-pathname-defaults* asdf:*central-registry*)
(*DEFAULT-PATHNAME-DEFAULTS*)
* (asdf:load-system 'playground)
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmpGHU3ALSV.fasl
; compilation finished in 0:00:00.003
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAAURSO1.fasl
; compilation finished in 0:00:00.001
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmp5GEXGEG5.fasl
; compilation finished in 0:00:00.000
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAR3FSGEY.fasl
; compilation finished in 0:00:00.001
T
We defined 2 packages, `:foo1` and `:foo2`, yet somehow we ended up with 4 different compilation messages, two per package; this wouldn't happen if I manually listed all the files in my .asd file, would it?  Let's confirm this.

Update `playground.asd`:
(defclass auto-module (module) ())

(defmethod component-children ((self auto-module))
  (mapcar (lambda (p)
            (make-instance 'cl-source-file :type "lisp"
                           :pathname p
                           :name (pathname-name p)
                           :parent (component-parent self)))
          (directory-files (component-pathname self)
                           (make-pathname :directory nil :name *wild* :type "lisp"))))

(asdf:defsystem #:playground
  :components ((:module "foo" :components ((:file "foo1") (:file "foo2")))))
Load the system again:
* (asdf:load-system 'playground)
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmpGHU3ALSV.fasl
; compilation finished in 0:00:00.004
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAAURSO1.fasl
; compilation finished in 0:00:00.001
T
As expected, no double compilation messages.

So what's going on exactly?  Well, COMPONENT-CHILDREN is getting called twice, once from within LOAD-OP and once from within COMPILE-OP, and there is nothing we can do about that (i.e. it's expected); the problem is, we are creating new CL-SOURCE-FILE instances at each COMPONENT-CHILDREN invocation, and that seems to trick ASDF into believing there were 4 different files instead of 2.

Let's solve this by creating a cache of reusable CL-SOURCE-FILE instances (we will be attaching it to the AUTO-MODULE instance so that the cache will be deleted as soon as the AUTO-MODULE instance object is disposed of):
(defclass auto-module (module)
  ((file-cache :initform (make-hash-table))))

(defmethod component-children ((self auto-module)
                               &aux (file-cache (slot-value self 'file-cache)))
  (mapcar (lambda (p &aux (existing (gethash p file-cache)))
            (if existing
                existing
                (setf (gethash p file-cache)
                      (make-instance 'cl-source-file :type "lisp"
                                     :pathname p
                                     :name (pathname-name p)
                                     :parent (component-parent self)))))
          (directory-files (component-pathname self)
                           (make-pathname :directory nil :name *wild* :type "lisp"))))

(asdf:defsystem #:playground
  :components ((:auto-module "foo")))
Reload the system:
* (asdf:load-system 'playground)
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmp5GEXGEG5.fasl
; compilation finished in 0:00:00.001
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAR3FSGEY.fasl
; compilation finished in 0:00:00.001
T
...and it looks like we fixed it: this time around, `foo1.lisp` and `foo2.lisp` were compiled only once, as if we had listed them explicitly inside the system definition.

So yeah, you'd better cache your CL-SOURCE-FILE instances if you don't want ASDF to compile your files twice!

:wq

2022-01-09 (permalink)

Advent of Code: [2021/24](https://adventofcode.com/2021/day/24)
Magic smoke starts leaking from the submarine's arithmetic logic unit (ALU). Without the ability to perform basic arithmetic and logic functions, the submarine can't produce cool patterns with its Christmas lights!
Continues:
The ALU is a four-dimensional processing unit: it has integer variables `w`, `x`, `y`, and `z`. These variables all start with the value `0`. The ALU also supports **six instructions**:
>
`inp a` - Read an input value and write it to variable `a`.
`add a b` - Add the value of `a` to the value of `b`, then store the result in variable `a`.
`mul a b` - Multiply the value of `a` by the value of `b`, then store the result in variable `a`.
`div a b` - Divide the value of `a` by the value of `b`, truncate the result to an integer, then store the result in variable `a`. (Here, "truncate" means to round the value toward zero.)
`mod a b` - Divide the value of `a` by the value of `b`, then store the remainder in variable `a`. (This is also called the modulo operation.)
`eql a b` - If the value of `a` and `b` are equal, then store the value `1` in variable `a`. Otherwise, store the value `0` in variable `a`.
>
In all of these instructions, `a` and `b` are placeholders; `a` will always be the variable where the result of the operation is stored (one of `w`, `x`, `y`, or `z`), while `b` can be either a variable or a number. Numbers can be positive or negative, but will always be integers.
OK...
Once you have built a replacement ALU, you can install it in the submarine, which will immediately resume what it was doing when the ALU failed: validating the submarine's **model number**. To do this, the ALU will run the MOdel Number Automatic Detector program (MONAD, your puzzle input).
>
Submarine model numbers are always **fourteen-digit numbers** consisting only of digits `1` through `9`. The digit `0` cannot appear in a model number.
OK......
When MONAD checks a hypothetical fourteen-digit model number, it uses fourteen separate inp instructions, each expecting a **single digit** of the model number in order of most to least significant. (So, to check the model number `13579246899999`, you would give `1` to the first inp instruction, `3` to the second inp instruction, `5` to the third inp instruction, and so on.) This means that when operating MONAD, each input instruction should only ever be given an integer value of at least `1` and at most `9`.
>
Then, after MONAD has finished running all of its instructions, it will indicate that the model number was **valid** by leaving a `0` in variable `z`. However, if the model number was **invalid**, it will leave some other non-zero value in `z`.
>
MONAD imposes additional, mysterious restrictions on model numbers, and legend says the last copy of the MONAD documentation was eaten by a tanuki. You'll need to **figure out what MONAD does** some other way.
OK.........

Finally, the task:
To enable as many submarine features as possible, find the largest valid fourteen-digit model number that contains no `0` digits. **What is the largest model number accepted by MONAD?**
OK............

First off, let's take care of the input:

- For each line
- We start reading _forms_ out of it
- Instruction and variable names will be read into symbols
- Integers will be read into...well, integers
(defun read-instructions (lines)
  (loop for string in lines collect
        (loop with start = 0 and value-read while (< start (length string))
              collect (setf (values value-read start)
                            (read-from-string string nil nil :start start)))))
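By the way, the six instructions are simple enough that a direct interpreter fits in a few lines; here is a quick sketch, just to make the ALU semantics concrete (the solution below takes a symbolic route instead):
(defun run-alu (instructions inputs
                &aux (vars (list 'w 0 'x 0 'y 0 'z 0)))
  (flet ((value (v) (if (numberp v) v (getf vars v))))
    (loop for (cmd a b) in instructions do
          (setf (getf vars a)
                (ecase cmd
                  (inp (pop inputs))
                  (add (+ (value a) (value b)))
                  (mul (* (value a) (value b)))
                  (div (truncate (value a) (value b)))
                  ;; REM, not MOD: the puzzle specifies truncating division
                  (mod (rem (value a) (value b)))
                  (eql (if (= (value a) (value b)) 1 0)))))
    (getf vars 'z)))

> (run-alu (read-instructions '("inp z" "add z 2")) '(5))
7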
Let's join the instructions together, to see if anything interesting comes up.

First we define a function, EXEC, with associated modify macro EXECF, to symbolically execute the given instructions, i.e. wrap the two operands together with the symbol of the operation the instruction represents:
(define-modify-macro execf (other op) exec)
(defun exec (rand1 rand2 rator)
  (list rator rand1 rand2))

> (exec 1 2 '+)
(+ 1 2)

> (exec 'x 'y '*)
(* x y)
Next we start executing instructions, and inspect the content of the variables (and reset them), every time an INP instruction is found:
(defun b= (n1 n2) (if (= n1 n2) 1 0))

> (let ((vars (make-array 4 :initial-element 0)))
    (labels ((offset (v) (ecase v (w 0) (x 1) (y 2) (z 3)))
             (value (v) (if (numberp v) v (aref vars (offset v)))))
      (loop for (cmd a b) in (read-instructions (uiop:read-file-lines "src/2021/day24.txt")) do
            (ecase cmd
              (inp (pr vars)
                   (setf (aref vars 0) 'w
                         (aref vars 1) 'x
                         (aref vars 2) 'y
                         (aref vars 3) 'z)
                   (setf (aref vars (offset a)) '?))
              (add (execf (aref vars (offset a)) (value b) '+))
              (mul (execf (aref vars (offset a)) (value b) '*))
              (div (execf (aref vars (offset a)) (value b) 'truncate))
              (mod (execf (aref vars (offset a)) (value b) 'mod))
              (eql (execf (aref vars (offset a)) (value b) 'b=))))))
#(0 0 0 0)
#(? #1=(B= (B= (+ (MOD (+ (* X 0) Z) 26) 10) ?) 0)
  #2=(* (+ (+ (* #3=(+ (* (+ (* Y 0) 25) #1#) 1) 0) ?) 12) #1#)
  (+ (* (TRUNCATE Z 1) #3#) #2#))
#(? #1=(B= (B= (+ (MOD (+ (* X 0) Z) 26) 10) ?) 0)
  #2=(* (+ (+ (* #3=(+ (* (+ (* Y 0) 25) #1#) 1) 0) ?) 10) #1#)
  (+ (* (TRUNCATE Z 1) #3#) #2#))
#(? #1=(B= (B= (+ (MOD (+ (* X 0) Z) 26) 12) ?) 0)
  #2=(* (+ (+ (* #3=(+ (* (+ (* Y 0) 25) #1#) 1) 0) ?) 8) #1#)
  (+ (* (TRUNCATE Z 1) #3#) #2#))
...
Let's see if we can simplify some of these expressions a bit (I am thinking about those `(* ... 0)` expressions):
(defun exec (rand1 rand2 rator)
  (if (and (numberp rand1) (numberp rand2))
    (funcall rator rand1 rand2)
    ;; try a simplification rule first; fall back to building the expression
    (or (simplify rator rand1 rand2)
        (list rator rand1 rand2))))

(defun simplify (rator rand1 rand2)
  (case rator
    (* (cond ((or (eql rand1 0) (eql rand2 0)) 0)))))
With this, the output tends to get a bit better, but there is more we can do:
#(0 0 0 0)
#(? #1=(B= (B= (+ (MOD (+ 0 Z) 26) 10) ?) 0) #2=(* (+ (+ 0 ?) 12) #1#)
  (+ (* (TRUNCATE Z 1) (+ (* (+ 0 25) #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD (+ 0 Z) 26) 10) ?) 0) #2=(* (+ (+ 0 ?) 10) #1#)
  (+ (* (TRUNCATE Z 1) (+ (* (+ 0 25) #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD (+ 0 Z) 26) 12) ?) 0) #2=(* (+ (+ 0 ?) 8) #1#)
  (+ (* (TRUNCATE Z 1) (+ (* (+ 0 25) #1#) 1)) #2#))
...
All those `(+ 0 ...)` expressions can be replaced with their non-zero operand; all those `(truncate z 1)` expressions can be replaced with `z`:
(defun simplify (rator rand1 rand2)
  (case rator
    (+ (cond ((eql rand1 0) rand2)))
    (* (cond ((eql rand2 0) 0)))
    (truncate (cond ((eql rand2 1) rand1)))))
With the above changes we get:
#(0 0 0 0)
#(? #1=(B= (B= (+ (MOD Z 26) 10) ?) 0) #2=(* (+ ? 12) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 10) ?) 0) #2=(* (+ ? 10) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 12) ?) 0) #2=(* (+ ? 8) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 11) ?) 0) #2=(* (+ ? 4) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 0) ?) 0) #2=(* (+ ? 3) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 15) ?) 0) #2=(* (+ ? 10) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 13) ?) 0) #2=(* (+ ? 6) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -12) ?) 0) #2=(* (+ ? 13) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -15) ?) 0) #2=(* (+ ? 8) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -15) ?) 0) #2=(* (+ ? 1) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -4) ?) 0) #2=(* (+ ? 7) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 10) ?) 0) #2=(* (+ ? 6) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -5) ?) 0) #2=(* (+ ? 9) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
If you look carefully at this you will notice that:

- `w`, i.e. the first element of the array of variables, will **always** contain `?`, i.e. the value read with the INP instruction
- `z`, i.e. the last element of the array of variables, will **always** contain an expression that is a function of `?` and `z` only

Knowing this, we can _compile_ these expressions into 14 different functions which later on we are going to use to find the largest model number that our MONAD can validate.  Note: these functions are going to accept two parameters, one for `?`, and one for the previous `z` value:
(defun read-monad (lines &aux
                         (instructions (read-instructions lines))
                         (vars (make-array 4 :initial-element 0)))
  (labels ((offset (v) (ecase v (w 0) (x 1) (y 2) (z 3)))
           (value (v) (if (numberp v) v (aref vars (offset v))))
           (compile-into-lambda (e) (compile nil (pr `(lambda (? z) ,e)))))
    (uiop:while-collecting (add)
      (loop for (cmd a b) in instructions do
        (ecase cmd
          (inp (unless (eql (aref vars (offset 'z)) 0)
                 (add (compile-into-lambda (aref vars (offset 'z)))))
               (setf (aref vars 0) 'w (aref vars 1) 'x
                     (aref vars 2) 'y (aref vars 3) 'z)
               (setf (aref vars (offset a)) '?))
          (add (execf (aref vars (offset a)) (value b) '+))
          (mul (execf (aref vars (offset a)) (value b) '*))
          (div (execf (aref vars (offset a)) (value b) 'truncate))
          (mod (execf (aref vars (offset a)) (value b) 'mod))
          (eql (execf (aref vars (offset a)) (value b) 'b=))))
      ;; Make sure to pick up the last of the 14 ALU stages
      (add (compile-into-lambda (aref vars (offset 'z)))))))
What next?  Well, we are going to brute-force this, hoping that the simplifications that we discussed above will reduce the complexity of each of the 14 ALU stages.

Here is what we are going to do:

- We pick a value between `1` and `9` (well, better to start from `9` and work down to `1` if we want to find the **largest** model number)
- We feed that digit and the previous value of `z` (it starts from `0`) to the first function that we compiled, and get the new `z` value back
- We move to the next ALU function and repeat
- Until there are no more functions to execute **and** `z` is 0; in which case, we found our solution and all we have to do is walk backward and construct the model number
- Or we run out of functions to execute, in which case we will have to backtrack, as we hit a dead end (i.e. we found an **invalid** 14-digit model number)
(defun first-valid-model (expressions &optional
                                      (input (iota 9 :start 9 :step -1))
                                      (z 0)
                                      &aux
                                      (remaining (length expressions)))
  (cond ((and (= remaining z 0)) 0)
        ((= remaining 0) nil)
        (t (loop for w in input for z-next = (funcall (car expressions) w z) do
                 (when-let (model (first-valid-model (rest expressions) input z-next))
                   (return (+ (* (expt 10 (1- remaining)) w) model)))))))
Easy right?!  Well, no: there are a lot of different states to try out, i.e. `9**14`, so the above will take quite some time to run, irrespective of all the simplifications we did before.

So what can we do?  We can give memoization a try, and hope that it helps us quickly prune input choices which would ultimately lead to an invalid model number:
(defun first-valid-model (expressions &optional
                                      (input (iota 9 :start 9 :step -1))
                                      (z 0)
                                      (dp (make-hash-table))
                                      &aux
                                      (remaining (length expressions))
                                      (key (+ (* z 100) remaining)))
  (multiple-value-bind (model foundp) (gethash key dp)
    (if foundp
      model
      (setf (gethash key dp)
            (cond ((and (= remaining z 0)) 0)
                  ((= remaining 0) nil)
                  (t (loop for w in input for z-next = (funcall (car expressions) w z) do
                           (when-let (model (first-valid-model (rest expressions) input z-next dp))
                             (return (+ (* (expt 10 (1- remaining)) w) model))))))))))

> (time (first-valid-model (read-monad (uiop:read-file-lines "src/2021/day24.txt"))))
Evaluation took:
  2.168 seconds of real time
  2.104432 seconds of total run time (1.844197 user, 0.260235 system)
  [ Run times consist of 0.271 seconds GC time, and 1.834 seconds non-GC time. ]
  97.05% CPU
  350 lambdas converted
  4,986,893,536 processor cycles
  175,630,864 bytes consed

93959993429899
Let's go!

What about part 2?
**What is the smallest model number accepted by MONAD?**
We lucked out!  All we have to do is tell FIRST-VALID-MODEL to try the different digits starting from `1`, up to `9`:
> (time (first-valid-model (read-monad (uiop:read-file-lines "src/2021/day24.txt"))
                           (iota 9 :start 1)))
Evaluation took:
  0.338 seconds of real time
  0.327866 seconds of total run time (0.311545 user, 0.016321 system)
  97.04% CPU
  350 lambdas converted
  778,391,119 processor cycles
  45,884,672 bytes consed

11815671117121
Final plumbing:
(define-solution (2021 24) (monad read-monad)
  (values (first-valid-model monad) (first-valid-model monad (iota 9 :start 1))))
Test run:
> (time (test-run))
TEST-2021/24..
Success: 1 test, 2 checks.
Evaluation took:
  2.152 seconds of real time
  2.096319 seconds of total run time (2.027116 user, 0.069203 system)
  [ Run times consist of 0.036 seconds GC time, and 2.061 seconds non-GC time. ]
  97.40% CPU
  350 lambdas converted
  4,950,528,890 processor cycles
  204,097,808 bytes consed
Now, how did I fare while on the spot, as the clock was ticking?  Not so well actually:

- I only managed to complete this a few days after Christmas
- I got stuck and was _forced_ (well, nobody really _forced_ me...) to peek into the day's [solutions megathread](https://www.reddit.com/r/adventofcode/comments/rnejv5/2021_day_24_solutions/) looking for just the minimal amount of help that would unblock me

What went well?

- Even back then I was able to simplify input expressions quite a bit; some might say I even went [a little overboard](https://topaz.github.io/paste/#XQAAAQCqEAAAAAAAAAAUHMineAofRq+qilruyMI8srv1Rg1uS/cG09uVz6iNOSymsWauBb4HdvwRzUgG3psYwA9ob03xmHHICYRXgzfu2i0nGZpixjzLcDptv6TCLlXXrz98p9BIthGkSVFhdvb6gLoZ1STrdqf969ZW0uN6e9tnSLpzmV/cfNPAa4yZuX8r/QUt5GPLLw9WBxZsvBwVd+iA3jgA0dSOkuWY/Wny+WZ9MDi9HufgXnKGeCV9q3rqEv9KMjQ7gvBB7qXZi3DbHfcXP/Rh5GRrbuKUI+p1fChdFmfPWTq303v6WlVEh9SXwXYEzgQo26eLpHG4d3v+POROD1yIk3I5aLlQQCbalbasGItRN9EpKT8aP5jk9wEX/EBTX64iRSAlXAHUTpTA6S19gXIvN1cX6wbx4iux2hdjw0fcADYjKwMXcNYHV/cBZxaas46sis7VUpI6UEZHIk5SS1s2cq48brLVFAp+OSMpMLN9sOvxwWwjdoql1MQlqAZeZj/lwvJC9+7zYdov52Qpa1Jl12AWN1m2hS8gNsxl+VRv41FtOBqLPxJEvCXUjTPR5SjIc4LRgvS3ZRlfvnrn46LGR86q5IJvYCMP2cYVLmbsWhDTljQQsfIpRaUbQlDnAOjdj441qqd0uq3wCwnKYCckEDY8rZzulQOAuK3CcIbgA1rx4xFqoD6A1E63buHyYhivk3EeBq2AFEEwlWVK3F5WtIxLTFTpUTjN2O2P94/BuIHPFfOBBRCmN9yhrZTpHEcyjHYV3QpstwDdoVaV9ZCW0GoMp6SpCOHDnYg1ApFYBjbxXZXFNOwIwLqs4jbvGZEgucK/Xl5H5FJmi23msTBaKI1KosOczB92Vls4M81OP/LixEsGk1Oxb6++JIcGLQQj1/wAkXBodz9ipb9R3mzuqTD6E7032eMaJH8ZT6VVkpWHdQTY+weIJFMP+2USy+MZcwwsw/E2fJY+khg+iHwavAJzu99224jTDLMR7jGwPjhxmHM62eLj+bngOyhZXLlICMckWQ5qbyjunI1c78GYZoG5Pkc/pRkecAqa1EOe4kLtAFdssxTzRoOescam6+9bAP4Y3+1Zsl98+ToPxb+OimqcU5ogi7jFf27Iq/FkOAH3qwcoc7rqOpeiD4krQRVwwd/gIagp6A7n0oFzBZcMzzfkF/tUAYvLLNk+KIcB+hDqfvqrqAchZTFjh2zj2pAwONRTSW3JQBf14HZ9+BiM26wMCxashcpd26q0Rf/7dksh) with it, especially as I started reducing them based on the minimum and maximum values of each expression
- I was able to notice that each expression wound up being a function of the previous value of `z` and of the current digit of the model number, i.e. `w`

What did not go well?

- I spent time trying to get an ALU runner to work with local bindings (instead of an ARRAY as presented above), only to realize SYMBOL-VALUE cannot be used to access the value of a lexical variable (see: http://clhs.lisp.se/Body/f_symb_5.htm).  I should have used DEFPARAMETER instead, as shown [here](https://topaz.github.io/paste/#XQAAAQBRBAAAAAAAAAAUGQimgulVkMqJePdJ12uYSyrx9HETSDeM7iPwMm5IzbnhqpMgXbIZ4j+wT3G3yM/Phd9tY3YWGUV5vVqRYE41FzNleAVB3CZ1VdavQwqHLspPcheinZ29SxrxBs+UugGbKyMw8Cjyc5tfvwceJi2lK3srvAqCYe8dTUDreC1lbu5JIlNdTDSvkoQ4U6tfizTr5wNdnl/PT+rX4qRsvS/HylH4dGdRVXNkttLCoE3/ZuaGbju5AcTub2qhXsAH2D2KWb7m368oylPdNdaeeElu3w+21SA/wiHJlBPSZgs/daoC4FQiNa7Zvj46biP2W4tShV9lpsclm0+zKm9ADHlF29MYMkpuv6p+JVndGry6/HywWWuO4mKd7jem6ik1BVOJLXPmecX3AKsQFX058TVpucNg+P0QL6xyB1/ISRh3Npa7pBPXMRhsLa4LWseDuqM4RhdnT7B0B5ECH5pler/vvA1RJepfEq/NGbbzTgI+96j4UsPgAarLdXIi5BIUY7n///JNZps=), but go figure why it took me so long to realize this...
- I went [a little overboard](https://topaz.github.io/paste/#XQAAAQCqEAAAAAAAAAAUHMineAofRq+qilruyMI8srv1Rg1uS/cG09uVz6iNOSymsWauBb4HdvwRzUgG3psYwA9ob03xmHHICYRXgzfu2i0nGZpixjzLcDptv6TCLlXXrz98p9BIthGkSVFhdvb6gLoZ1STrdqf969ZW0uN6e9tnSLpzmV/cfNPAa4yZuX8r/QUt5GPLLw9WBxZsvBwVd+iA3jgA0dSOkuWY/Wny+WZ9MDi9HufgXnKGeCV9q3rqEv9KMjQ7gvBB7qXZi3DbHfcXP/Rh5GRrbuKUI+p1fChdFmfPWTq303v6WlVEh9SXwXYEzgQo26eLpHG4d3v+POROD1yIk3I5aLlQQCbalbasGItRN9EpKT8aP5jk9wEX/EBTX64iRSAlXAHUTpTA6S19gXIvN1cX6wbx4iux2hdjw0fcADYjKwMXcNYHV/cBZxaas46sis7VUpI6UEZHIk5SS1s2cq48brLVFAp+OSMpMLN9sOvxwWwjdoql1MQlqAZeZj/lwvJC9+7zYdov52Qpa1Jl12AWN1m2hS8gNsxl+VRv41FtOBqLPxJEvCXUjTPR5SjIc4LRgvS3ZRlfvnrn46LGR86q5IJvYCMP2cYVLmbsWhDTljQQsfIpRaUbQlDnAOjdj441qqd0uq3wCwnKYCckEDY8rZzulQOAuK3CcIbgA1rx4xFqoD6A1E63buHyYhivk3EeBq2AFEEwlWVK3F5WtIxLTFTpUTjN2O2P94/BuIHPFfOBBRCmN9yhrZTpHEcyjHYV3QpstwDdoVaV9ZCW0GoMp6SpCOHDnYg1ApFYBjbxXZXFNOwIwLqs4jbvGZEgucK/Xl5H5FJmi23msTBaKI1KosOczB92Vls4M81OP/LixEsGk1Oxb6++JIcGLQQj1/wAkXBodz9ipb9R3mzuqTD6E7032eMaJH8ZT6VVkpWHdQTY+weIJFMP+2USy+MZcwwsw/E2fJY+khg+iHwavAJzu99224jTDLMR7jGwPjhxmHM62eLj+bngOyhZXLlICMckWQ5qbyjunI1c78GYZoG5Pkc/pRkecAqa1EOe4kLtAFdssxTzRoOescam6+9bAP4Y3+1Zsl98+ToPxb+OimqcU5ogi7jFf27Iq/FkOAH3qwcoc7rqOpeiD4krQRVwwd/gIagp6A7n0oFzBZcMzzfkF/tUAYvLLNk+KIcB+hDqfvqrqAchZTFjh2zj2pAwONRTSW3JQBf14HZ9+BiM26wMCxashcpd26q0Rf/7dksh) with trying to simplify the given expressions, to the point that I started reducing expressions based on the minimum and maximum values of each expression.  Anyways, why would this be a problem? Well, it's not a problem per se, it's just that even with these fully reduced expressions, I still did not know what to make of them -- which leads me to the next 2 points
- I could not "connect the dots": the monad is made of 14 expressions, 7 of which multiply by 26, and 7 of which either multiply **or** divide by 26; we do not have any control over the former (i.e. the ones that multiply by 26), but in order for the final number to be `0`, the remaining 7 expressions need to execute a division and not a multiplication.  Well, with this hint, I was then able to put together a [brute-force solution](https://topaz.github.io/paste/#XQAAAQBDBAAAAAAAAAAUGQimgx+p6Nd2KTQBipD+ebHgWB2KYDhg0p5cT2fB6T5DWjG1mGzImJ/kMxYYski2R0EnsbD9DF3grfTnWe90IMbzhaDW4WiGLYZjXrM+/SmzHR9JwBqvEb4zsQ2dj3FKSP6OxJxreUccJIEvgbi4LjKXtUhxNe40qDl8DSS9h7DUMOup9fUxuPTx0mL4cgxRxXwzP1SXolNrWIqaIeRySLL+xqHk5qdYz4YgF8M+Yji7vb1/s6fj6MkkleWB1ugxORngfKvDSR5y4EeQRytPcHZCqWY+zNzGh7KciXn3Q+23tQvK6tEmHvKq9K/QcUn1aYxKHJLQHQup8Kd60xBQo81hImkTexgQfd6UbIZeJmAbxizkRmGiuCbYHgLcnDzMT30Fimb3sM4s9nFYbr7W4rAxzNVhn+Bd/NDK8JtQDB2OFPRC47sERu/1Cm5gKJWCrv9XpNfIdUDYx8ei1hh5ESnFidaugWaEhu+N1rWGbu+j0osqaMSX4OSV/82ZuEs=) that _ultimately_ got me the two stars for the day
- Last but not least, I could not see that the monad would do the same operations over and over again and that memoization could have helped prune lots of the dead ends; instead, I let the thing run for hours (true story: I went outside, had a few drinks with friends, and on my return, the solution was there ready for me to submit).  Cherry on top: using ERROR to notify when a valid model number was found -- a clear sign of burn-out from my side lol

Well, what else to say? Another great problem, with quite a few things to learn from!

2022-01-05 (permalink)

Advent of Code: [2021/23](https://adventofcode.com/2021/day/23)
A group of amphipods notice your fancy submarine and flag you down. "With such an impressive shell," one amphipod says, "surely you can help us with a question that has stumped our best scientists."
Continues:
They go on to explain that a group of timid, stubborn amphipods live in a nearby burrow. Four types of amphipods live there: **Amber** (`A`), **Bronze** (`B`), **Copper** (`C`), and **Desert** (`D`). They live in a burrow that consists of a **hallway** and four **side rooms**. The side rooms are initially full of amphipods, and the hallway is initially empty.
We are given a diagram of the situation (your puzzle input), for example:
#############
#...........#
###B#C#B#D###
  #A#D#C#A#
  #########
The amphipods would like a method to organize every amphipod into side rooms so that each side room contains one type of amphipod and the types are sorted `A`-`D` going left to right, like this:
#############
#...........#
###A#B#C#D###
  #A#B#C#D#
  #########
Amphipods can move up, down, left, right, but there is a cost attached to each movement:
Amphipods can move up, down, left, or right so long as they are moving into an unoccupied open space. Each type of amphipod requires a different amount of **energy** to move one step: Amber amphipods require `1` energy per step, Bronze amphipods require `10` energy, Copper amphipods require `100`, and Desert ones require `1000`. The amphipods would like you to find a way to organize the amphipods that requires the **least total energy**.
In addition, amphipods are not free to move wherever they want:
However, because they are timid and stubborn, the amphipods have some extra rules:
>
- Amphipods will never **stop on the space immediately outside any room**. They can move into that space so long as they immediately continue moving. (Specifically, this refers to the four open spaces in the hallway that are directly above an amphipod starting position.)
- Amphipods will never **move from the hallway into a room** unless that room is their destination room **and** that room contains no amphipods which do not also have that room as their own destination. If an amphipod's starting room is not its destination room, it can stay in that room until it leaves the room. (For example, an Amber amphipod will not move from the hallway into the right three rooms, and will only move into the leftmost room if that room is empty or if it only contains other Amber amphipods.)
- Once an amphipod stops moving in the hallway, **it will stay in that spot until it can move into a room**. (That is, once any amphipod starts moving, any other amphipods currently in the hallway are locked in place and will not move again until they can move fully into a room.)
OK, so given all these rules, and given the energy that each amphipod uses to make a move, we need to help the amphipods organize themselves and at the same time minimize the energy required to do that:
**What is the least energy required to organize the amphipods?**
Input first:

- we will be parsing the burrow into a CHARACTER 2D array
- we will also keep track, separately, of the `(row col)` locations of all the siderooms -- bottom ones first!
(defun parse-input (lines &aux
                          (rows (length lines))
                          (cols (length (car lines)))
                          (burrow (make-array (list rows cols)
                                              :element-type 'character
                                              :initial-element #\Space))
                          rooms)
  (loop for row from 0 for string in lines do
        (loop for col from 0 for ch across string do
              (setf (aref burrow row col) ch)
              (when (and (< 1 row (1- rows)) (member col '(3 5 7 9)))
                (let* ((type (case col (3 #\A) (5 #\B) (7 #\C) (9 #\D)))
                       (existing (assoc type rooms)))
                  (if existing
                    (push (list row col) (second existing))
                    (push (list type (list (list row col))) rooms))))))
  (cons burrow rooms))
(defun siderooms (rooms type) (second (assoc type rooms)))
Next, a little constant for all the _valid_ locations in the hallway (i.e. the ones **not** facing any rooms):
(defparameter *hallway* '((1 1) (1 2) (1 4) (1 6) (1 8) (1 10) (1 11)))
We are going to treat this as a search problem:

- Starting from the input situation
- We try to move every amphipod that can actually be moved (as per the rules above) -- POSSIBLE-MOVES
- We keep on doing this until each amphipod is in the right room

Note: we don't have a heuristic to play with, hence we will go with Dijkstra's algorithm, and not A*.
(defun organize (input &aux (burrow (car input)) (rooms (cdr input)))
  (search-cost
    (dijkstra burrow :test 'equalp :goalp (partial-1 #'donep rooms)
              :neighbors (partial-1 #'possible-moves rooms))))
Let's now take a look at DONEP, the function used to break out of the search loop:

- For each sideroom -- a sideroom is characterized by a type (i.e. `#\A`, `#\B`, `#\C`, and `#\D`) and a location
- We check if it's occupied by an amphipod of matching type
- If not, it means we are not done yet
(defun donep (rooms burrow)
  (loop for (type siderooms) in rooms always
        (loop for (row col) in siderooms
              always (char= (aref burrow row col) type))))
We now have to figure out, given the current amphipod configuration, all the possible moves that do not violate any of the rules above; we will do this in two steps: first we will try to move amphipods from their current (and likely wrong) room into the hallway; next, from the hallway into the _correct_ room/sideroom.

For the first part, i.e. out of the current sideroom, and into the hallway:

- For each sideroom location
- If it's empty or occupied by an amphipod already in the right place, we move on
- Otherwise, for each valid position in the hallway
- If not blocked, we move into that location
    ...
    (loop for (_ positions) in rooms do
          (loop for pos in positions for type = (aref burrow (car pos) (cadr pos))
            unless (or (char= type #\.)
                       (in-place-p (siderooms rooms type) burrow pos)) do
            (loop for target in *hallway*
                  unless (blockedp burrow pos target) do (add-move type pos target))))
    ...
For the second step instead, i.e. out of the hallway, and into the _right_ room / sideroom:

- For each hallway location
- If occupied by an amphipod (of type `type`)
- For each of the siderooms for type `type` (from the bottom one, up to the top)
- If not blocked, we move into that location
- As we scan the siderooms of type `type`, bottom up, we stop as soon as we bump into a location occupied by an amphipod of the wrong type -- there is no point in moving an amphipod into a sideroom above one occupied by the wrong type, since later on we would have to make space to let that amphipod out and into its right room
    ...
    (loop for pos in *hallway* for type = (aref burrow (car pos) (cadr pos))
          unless (char= type #\.) do
          (loop for target in (siderooms rooms type)
            unless (blockedp burrow pos target) do (add-move type pos target)
            always (eql (aref burrow (car target) (cadr target)) type))))
    ...
We add a locally defined function, ADD-MOVE, responsible for:

- Creating a copy of the current situation, i.e. the burrow
- Swapping the content of `pos` with `target`, i.e. we move `pos` into `target`
- Calculating the cost to move there, i.e. manhattan distance times the amphipod unitary energy cost

We wrap this all into a DEFUN, and there is our POSSIBLE-MOVES complete definition, in all its glory:
(defun possible-moves (rooms burrow &aux moves)
  (labels ((add-move (type pos target &aux
                           (next (copy-array burrow))
                           (cost (* (manhattan-distance pos target)
                                    (ecase type (#\A 1) (#\B 10) (#\C 100) (#\D 1000)))))
             (rotatef (aref next (car pos) (cadr pos))
                      (aref next (car target) (cadr target)))
             (push (cons next cost) moves)))
    ;; From the wrong sideroom to the hallway
    (loop for (_ positions) in rooms do
          (loop for pos in positions for type = (aref burrow (car pos) (cadr pos))
            unless (or (char= type #\.)
                       (in-place-p (siderooms rooms type) burrow pos)) do
            (loop for target in *hallway*
                  unless (blockedp burrow pos target) do (add-move type pos target))))
    ;; From the hallway to the right sideroom
    (loop for pos in *hallway* for type = (aref burrow (car pos) (cadr pos))
          unless (char= type #\.) do
          (loop for target in (siderooms rooms type)
            unless (blockedp burrow pos target) do (add-move type pos target)
            always (eql (aref burrow (car target) (cadr target)) type)))
    moves))
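Quick note on helpers: COPY-ARRAY comes from Alexandria, while MANHATTAN-DISTANCE is not a standard function; for the `(row col)` pairs used here, something like the following would do:
(defun manhattan-distance (p1 p2)
  "Sum of the absolute coordinate differences between points P1 and P2."
  (reduce #'+ (mapcar (lambda (a b) (abs (- a b))) p1 p2)))
This is the right movement cost here because every legal move runs vertically (out of or into a room) plus horizontally (along the hallway), so the number of steps is exactly the row delta plus the column delta.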
Let's now take a look at IN-PLACE-P, the function responsible for figuring out if the amphipod at position `pos` needs to move out of the way or not:

- We get the amphipod type first
- Then for each sideroom of the same type (again, bottom up)
- If the location of the room matches the position of the amphipod, then the amphipod is already in place
- Otherwise we check if that sideroom is occupied by an amphipod of the right type
- If the types match, we move to the next sideroom (i.e. the one above)
- Otherwise it means the amphipod needs to move into the hallway
(defun in-place-p (rooms burrow pos
                         &aux (type (aref burrow (car pos) (cadr pos))))
  (loop for r in (siderooms rooms type)
        thereis (equal r pos)
        ;; if not in the room, at least let's make sure the one below is
        ;; occupied by an amphipod of the right type
        always (eql (aref burrow (car r) (cadr r)) type)))
The last missing piece of the puzzle: BLOCKEDP.  We know our starting position, `pos`, and we know where we are headed, `target`; all we have to do is check all the locations _along the way_ and break out if any of them is occupied by another amphipod.  But what are all the locations along the way?  If we are inside a room, we need to move vertically first to get to the hallway, and then horizontally; otherwise, horizontally first to get in front of the room, and then vertically into the sideroom:

- We destructure `pos` and `target` into their row / col components
- Then, using the spaceship operator `<=>`, we figure out which direction we need to move horizontally to get from `pos` to `target`
- Then, we check if `pos` is in the hallway
- If it is, we move up first, then horizontally
- Otherwise, horizontally first, and then down into the room
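The spaceship operator is not defined in this entry either; I am assuming the usual three-way comparison, i.e. the sign of the difference:

```lisp
;; Assumed definition of <=>: returns -1, 0, or 1 depending on whether
;; A is less than, equal to, or greater than B.
(defun <=> (a b)
  (cond ((< a b) -1)
        ((> a b) 1)
        (t 0)))

(<=> 9 3) ; => 1
```

With this, `(<=> col2 col1)` is exactly the per-step column increment needed to walk from `col1` toward `col2`.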
(defun blockedp (burrow pos target)
  (destructuring-bind (row1 col1) pos
    (destructuring-bind (row2 col2) target
      (let ((col-step (<=> col2 col1)))
        (flet ((check-all (row1 col1 row2 col2 row-step col-step)
                 (loop do (incf row1 row-step) (incf col1 col-step)
                       thereis (char/= (aref burrow row1 col1) #\.)
                       until (and (= row1 row2) (= col1 col2)))))
          (if (= row1 1)
            (or (check-all row1 col1 row1 col2 0 col-step)
                (check-all row1 col2 row2 col2 1 0))
            (or (check-all row1 col1 row2 col1 -1 0)
                (check-all row2 col1 row2 col2 0 col-step))))))))
And that's it:
> (organize (parse-input (uiop:read-file-lines "src/2021/day23.txt")))
18282
Let's take a look at part 2.  It begins with:
As you prepare to give the amphipods your solution, you notice that the diagram they handed you was actually folded up. As you unfold it, you discover an extra part of the diagram.
Uh oh...
Between the first and second lines of text that contain amphipod starting positions, we are asked to insert the following lines:
>
#D#C#B#A#
#D#B#A#C#
>
Using the initial configuration from the full diagram, **what is the least energy required to organize the amphipods?**
OK, let's _massage_ our input as instructed:
(defun massage (lines)
  (append
    (subseq lines 0 3)
    (list
      "  #D#C#B#A#"
      "  #D#B#A#C#")
    (subseq lines 3)))
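A quick REPL check against the example diagram (MASSAGE repeated here so the snippet is self-contained):

```lisp
(defun massage (lines)
  (append
    (subseq lines 0 3)
    (list
      "  #D#C#B#A#"
      "  #D#B#A#C#")
    (subseq lines 3)))

(massage '("#############"
           "#...........#"
           "###B#C#B#D###"
           "  #A#D#C#A#"
           "  #########"))
;; => ("#############" "#...........#" "###B#C#B#D###"
;;     "  #D#C#B#A#" "  #D#B#A#C#" "  #A#D#C#A#" "  #########")
```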
And that's it -- our part 1 solution should be well equipped to deal with the bigger burrow!

Final plumbing:
(define-solution (2021 23) (lines)
  (values (organize (parse-input lines)) (organize (parse-input (massage lines)))))

(define-test (2021 23) (18282 50132))
Test run:
> (time (test-run))
TEST-2021/23..
Success: 1 test, 2 checks.
Evaluation took:
  3.197 seconds of real time
  3.089562 seconds of total run time (2.924923 user, 0.164639 system)
  [ Run times consist of 0.134 seconds GC time, and 2.956 seconds non-GC time. ]
  96.65% CPU
  7,353,654,160 processor cycles
  249,187,888 bytes consed
Nice!

Unfortunately, I was not able to solve this problem on time; actually, I could not solve this until a couple of days after Christmas.  In addition, the [solution](https://topaz.github.io/paste/#XQAAAQDMIAAAAAAAAAAUGQimgulVPYrJhMVjyYOkoNMRrPzuMVk/CekO/LruVC9Ms1jGXD3R2Xlv5UJlKv+s2h7+RTSP3QyatBEOZf/6fJFeUYm3COdf7TjD+XiU8nnIhdlmU0sqbnEce3WEEZT8fVJzKzYGaenESgPH65+4oNltMKuRZg8ao2GtPWXf8DnPXqqfbnFIsTeDvaTSGfGvdU9GUnQcBljQ9K6Nw62ag26yS8cwUZM/bCRnajawkIc2weuEY/hmFsnGIZxPhCSIQv7WZjvc+kaKuCbmoC7YAP3d0c7vaOVDEMl8PwCANg/SZTB/sfaFOPtnGTv8Aw/ZllojmEAwPe6v5CCwiNzIMs2pWUE3AuVriSTxOZYEb2kkrzpIzLelBDL3h1xDxMmaEMTyF6FcxebFSEUbdi5gCjNtETqx/6YsOJ2izvZyVGVV20ZX8pvHfkrZF2dDiK2CsQUpshHYh70NioDVWqVT6C5+MIxmuSWYE3gMVGJ+34O+ZspJOAiEsGsKofsG6U/l1up9CudsT49cOlUVLNwxS417crs99EZqzLUdVNcV202tbGHGYKxjKr+C/1s6IBE2lOSYkovTgEjPHj1LqrzyXSXDCOAcdwaLHbm/3xICgVMBOjUZX4YnbmWpuU4K7Lz2h/1aMnZ0jd1fE/UMwug8thiL1zUnk/xM3/5JcYhbrJ8QpDQCOLP1CL2VVj8BdN20hldXz/YXxAgGC/vpaIp88uS9l6km595sHdXAQlV1uc+tY3Dw/HpL7nt1Oy2tUSzsmTbSuSt3gVq3zrspMLvorORLUL/1OP/8ptBJ/7I3ZKDbIkRp31hQBuhUGtchpJWrCelDqLHBZS9196+ilHJ1YnjAoHVc2gBDDtplDUcqd9nBF/AT9Gxi9oOmQGgBSPkQveQxrQiyWjQU1rXHWKDO9KHvjb8m4vXBCladiph1PygKi/cNvmHEJeo+wtwDX8RL/T69B3AAi6Xy9y+/aF2KF3USpeYkGl9bXX/UV8GzI4dNnqJXS3VEunEjaK50B/Z7ZjLcSisiNdH1mcgp35yzegmbULYgn6m3X6UMGMrwENtR2CAXXX0renSLm0GZYQ5t1sao7iazmv26Q4KCjg66PQPlw33Z8loRnLhzQ5CvDeW2RFWb6Bx92mDaNroqrhYhirq5jmtwaLFCIM7/UYtNvmSCxL5bIzqfjvycajG/0G3gkv4aeASr2kdbQuET1wOky/53hlsJmzG2GR7gF1gzSZwHWiu/PscBM3Qme2VA9kV2CiQx0HHEd9VM1LjmO7os/I7tUUspmwzuH3RFdX/YSOJ2U5Til55ns4/ki0qhjPcU8z4CFgvqrSIyv4gh9muPYP2OHN2duPrNh3T1OV9OfDnnSIYj6sPDBzFvfsCd84Fcj/LioSVKTkAsggTg/bw4/5bSlHr8LZpZS+CTOT7qRGOIfXKCLNAMQ9kXPYNW9wB3tsnlPy10RZIE/Q99PG6ALFJeknjKSDd1gBC6LCYQHTJyTFLXt3FPkQw8tq/HEyP3yuXtimfm2pWAmNXP9oCyuf73v2uHS8clt6EPMSTMOcVfv+6kI5ZrAHDKk80hlbGq8OvyG5Rmitr1an8gOoHbmUTAkjc36GpdGhS0ILoAR4wDIVwLeuGwggQeNNep1JLCKN5D5bN1pyfgwI0Q0Qpj8p8d/Is7vkgZrPW6JApPNAsRvhHB5aUVr6W4/gEcCTWn+KHUQ7Bj4AqnwHLwXmWmzKdRULMM+4OGs+xYMaHB0GR9Dbt7oHqpRng8Sv0e68LlP2rWZO2kq9zkxu
nEYf2KA6QOB2pfAz7B5DdDtVGFWLIra/WG7ODKfeMthR14vY3foNDwg2B7nQE1ePGzmIyLALxf/c1u8efki3ERj8PMwrok+eJ/HR0RmwP35YGYbhfGq94R9sry1amPvlquBI8ipNx1IG3ain0JX0172IgZpvsPe/jbIyzCu/EUZaUGIOSPp/eoVpkNZ62E8VHLtpi23n52xRdYMj0cA64EHSciXL169QdRwYTTw4iluj/D3tVkce92ZOMW2F/CQs4mJUSZiZWw31euDnYDVZWO7yH98PFM6XajEsV3J74muooEA89szjItMuBrx8Zax6pmznTZfHaIh/9nW73Lzmf82wsd1pKIu6ylYjWbijMXWlFyPKG7jFpOTUOQjN0LEESDkjOOhhneuoKbbKWJjiOYkt8gpgNNWjz1Zsiy//Mj1U55JIHMfEe79iA7fcmiMckT9T0I6c1KrgZklYNErsOuF9bVQfAxqWUZaJ7gZCshwckMI7STzOsupvQV6JEMk2dsMPOuuDmjKuND93MBuEb9AyV4MoyoR3PyYWbaL6JRbLj291lXd+XyDAtUMdpSAvPKtN13KAgHuywMslkYFHdjSP8uLWvYcupGKzHYG3ceUpnBgh1lqweJOxnZ4rmvOOApFgqnIXh4ww3yH9MgvTw8gEwoNFZU186OyIzOFrJFPkRJ+DcBWSMi5weqOBEKb1zdCqvEIrRnPCOR9J51976qmOV5bNXfyUKAEgARdjgoEeA85Wh5A6Rzf/+DdtYAz/BLd44yRpnAUTQ6IjWNoW4ySzZBtOUAIkOy/tO8ArYaY5nNYMKxGkICAEUEWh1K4ULTEupaF0m2QeMtPufvwwOmVHdINMF4YKobMNondrDtyWKwgRQ9xvS5jnLtRTIv4OWMKVUS1bRD0rynV4f/knwKtg==) I came up with back then was not nearly as efficient as this one: around 8 seconds for part 1, while for part 2...
Evaluation took:
  2742.203 seconds of real time
  2570.780518 seconds of total run time (2510.998329 user, 59.782189 system)
  [ Run times consist of 17.916 seconds GC time, and 2552.865 seconds non-GC time. ]
  93.75% CPU
  6,306,877,691,085 processor cycles
  130,990,228,720 bytes consed

50132
What did I get so wrong, back then, to achieve such a huge runtime? Well, a few things actually.

First, I did not realize I was _pre-calculating_ all the costs to move between any two locations, over and over again every time A* would invoke my heuristic function...WTF?!  Well, I did realize my solution was taking forever to run, so I came up with the following heuristic hoping that this would speed things up a little:

- For each amphipod type
- We find where these amphipods are
- We find the minimal cost to move them in place, as if no other amphipod could get into our way (e.g. with two amphipods, `A1` and `A2`, and the two siderooms, `R1` and `R2`, we have two possible scenarios: `A1` moves into `R1` and `A2` into `R2`, or `A1` moves into `R2` and `A2` into `R1`; we try both, and pick whichever costs less)
- We sum all these minimum costs together, and that's it

_But we cannot calculate all these costs, all the time, right?_  So we are going to pre-calculate these, _once_,  and then look the values up:
(defun heuristic (costs amphipods &aux by-type)
  (loop for (type) being the hash-values of amphipods using (hash-key pos)
        for existing = (assoc type by-type) do
        (if existing
          (push pos (second existing))
          (push (list type (list pos)) by-type)))
  (loop for (type (pos1 pos2 pos3 pos4)) in by-type for m = (cost type) sum
        (loop for (room1 room2 room3 room4) in (all-permutations (rooms type))
              minimize (* (+ (gethash (cons pos1 room1) costs)
                             (gethash (cons pos2 room2) costs)
                             #+#:part2 (gethash (cons pos3 room3) costs)
                             #+#:part2 (gethash (cons pos4 room4) costs))
                          m))))
Except...I made a silly mistake at the call site:
(defun part1 (input &aux (opens (car input)) (amphipods (cdr input)))
  (multiple-value-bind (best cost path all-costs)
      (a* amphipods :test 'equalp :goalp #'donep
          :neighbors #'next
          :heuristic (partial-1 #'heuristic (calculate-all-costs opens)))
    cost))
Spot the problem? It's the use of PARTIAL-1!  Let's MACROEXPAND-1 it:
(LAMBDA (#:MORE-ARG632)
  (FUNCALL #'HEURISTIC (CALCULATE-ALL-COSTS OPENS) #:MORE-ARG632))
I am not exactly sure what I was thinking, but I guess I thought it would create a new local binding, assign to it the result of `(calculate-all-costs opens)`, and then pass that along to HEURISTIC; instead, PARTIAL-1 keeps all its input forms as they are, meaning CALCULATE-ALL-COSTS would be invoked over and over again!  Oh boy...
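To convince myself (and you) of what is going on, here is a self-contained sketch of the problem; TOY-PARTIAL-1 and EXPENSIVE are stand-ins of mine for PARTIAL-1 and CALCULATE-ALL-COSTS, not the real utilities:

```lisp
(defvar *calls* 0)

(defun expensive ()          ; stand-in for CALCULATE-ALL-COSTS
  (incf *calls*)
  :costs)

;; Like PARTIAL-1, this splices its input forms in verbatim
(defmacro toy-partial-1 (fn arg)
  `(lambda (more-arg) (funcall ,fn ,arg more-arg)))

;; Form passed verbatim: EXPENSIVE runs on every invocation
(let ((f (toy-partial-1 #'list (expensive))))
  (funcall f 1)
  (funcall f 2))
*calls* ; => 2

;; The fix: evaluate once, and let the closure capture the binding
(setf *calls* 0)
(let* ((costs (expensive))
       (f (toy-partial-1 #'list costs)))
  (funcall f 1)
  (funcall f 2))
*calls* ; => 1
```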

Something else that I did, and that only later I realized was not beneficial at all (in terms of performance), was keeping track of the number of moves each amphipod had made (and of course backtracking when any of them had made two moves and still found itself _not in place_).  The thing is, we only allow two types of moves, from a room to the hallway and vice versa, and each move has a cost attached to it; so why care if an amphipod keeps on bouncing back and forth between the same two locations, especially if it does not make the burrow any more organized than it was before?  The cost to get to that state would simply increase over time, and after a bit A* would simply pick a different, cheaper branch.  My original solution (**without** the heuristic!) would take around 316s to organize the example burrow, and when I changed it to stop keeping track of the amphipod moves and simply let A* do its thing, the runtime went down to 264s.  Still a lot, but definitely better.

Last but not least: search state representation.  My [A*](https://topaz.github.io/paste/#XQAAAQDdBwAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHR/lSrXrRzt2ru8sshOcmQ5UeY7P5AZQhS4ncXon3a6r8j5K/gr4xuK1jjTG4PHnVajamzB6t54TV5dSPS63MDOK6jD6CsHfC+VkV4bms/+I8MmYXI4GU2JxpblEiXKntPh1orBPfGpyTCfhx4v+kHU29G6mF1Y4EM6ozuoAyzjwxpY0eKSSV1zFprANFIIlsaPqm/nWF9W4i224u0tUm3IHb+tnMDrFtscipbIplaGiZN8B2Q1apClGT/+51hyLVNGM+4prkx8VWxBWeWuN62LCmMrY0iqRvFeCwKqpwg4s4VRogYMgeG3AOyY1DlJ0UJFetXLjJ+7pMA+8jUmYyE6xRJd9R+jn/cPnkNyDceLd1iFB/ODUgVXLcyrYb7jFIIeucRNGqkgQ7lwe0xvmyiBLkHyIe3ML8lZQcmhyinz9Ynab880cOGsBYBuTRjCS7dCvLr5XKDjw8SpdnWXgjVrzLA49wE+NZXVkPz6tTVzFUR36unRBsIJZJKfYRi3ZUToT1eFyKKayvxZpI9V1QkNJvJehPwqf797u2T57qMuE1UrtnhSso1kQh/BEo33aCHrNXfO/H71RV95KbXakyaLfqTAeeKA2hUrQSbAFnewyRS+3eKmTvbe0+XL1ltwMKAccDiSJR+DZ8iMZhQx91qBpU6W4Gu5c+XA4oT8DFrhGWR1aHzdLLf2ICtG8VUVnLizIL/+Wv790=) implementation makes use of two HASH-TABLEs internally:

- `cost-so-far`, mapping from a given search state, to the cost required to get there -- required to build up the final cost
- `come-from`, mapping from a given search state, to the state before it -- required to build up the list of steps to get to the goal state

And as it turns out, putting HASH-TABLEs (my original representation of the burrow) inside another HASH-TABLE (e.g. `cost-so-far` or `come-from`) can slow things down...**a lot**; I am not entirely sure why, but I guess it has something to do with the EQUALP function performing way better on ARRAYs than HASH-TABLEs.  Let's put together a little experiment to showcase this:

- We initialize our `cost-so-far` and `come-from` hash tables
- We initialize our burrow, with 20 elements
- Then, 10000 times
- We copy the burrow
- Move two random elements
- Save the new state inside `cost-so-far` and `come-from`

First we do this, with the burrow backed by a HASH-TABLE:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  8.590 seconds of real time
  7.674754 seconds of total run time (7.435813 user, 0.238941 system)
  [ Run times consist of 0.117 seconds GC time, and 7.558 seconds non-GC time. ]
  89.35% CPU
  19,757,710,348 processor cycles
  4 page faults
  13,891,952 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9383 {1008D48DA3}>
Then, with the burrow backed by a 2D ARRAY:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 with burrow = (make-array (list rows cols))
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (aref burrow row col) (random 100))))
          repeat 10000 do
          (let ((next (copy-array burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (aref next row1 col1) (aref next row2 col2))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  0.016 seconds of real time
  0.016474 seconds of total run time (0.014154 user, 0.002320 system)
  100.00% CPU
  38,822,509 processor cycles
  4,955,568 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9472 {10053A5973}>
/me gulps...

Ok...let's tweak the first experiment a little; let's serialize the burrow into a list of values (i.e. one value per location of the burrow), and see whether that affects the runtime or not -- it surely will cons more than before:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((state-key (uiop:while-collecting (add)
                               (loop for row below rows do
                                     (loop for col below cols do
                                           (add (gethash (list row col) next)))))))
              (setf (gethash state-key cost-so-far) (random 1000)
                    (gethash state-key come-from) burrow
                    burrow next)))
          finally (return cost-so-far)))
Evaluation took:
  0.082 seconds of real time
  0.079201 seconds of total run time (0.065240 user, 0.013961 system)
  96.34% CPU
  190,095,780 processor cycles
  26,703,456 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9370 {10073CA463}>
It does cons more indeed, but the overall runtime is now comparable to the second experiment, the one in which we used the 2D array!

So I went on and [updated](https://topaz.github.io/paste/#XQAAAQBXCAAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHQi11t+qsyRC3uWF4xf7knr94XFFlmAeHILKPJwTwqqiga/1AEzyjEmZm8Yje6+ZHdKsFZGwHMCe/PZHtbXN/DaFKTz/nMcvv0X0lc8AlYgDi14Hn9ISQyH9XRgO+JbXFpgPy71pVPWxKSeE9inD39mrgND27XXv7js03AL688VbWW8oZQyLz1u5RAXqcKElyltwGhlBEB6XVIKJAuaP/FuFdbS3SHHBNncyV7y7P6mMraqzuyU4MEyUWrNiUkhkRpwZbu/s+fvqjI9sdiDoQKZbnjy7V+/gv1v4wkhkkv1NrFjXEjD2+jiiJVJc431kuzbcB/juIWu+jV0SBCzURL96i8kES2htSfzAyxPuOqzuPoWZ+Q72smhX9RJXjeTq2pVRlpu0D1LNOiakkvMJxGtK1+HdQOGRD3UBQhw9POB14yS+g4KS8NBTn0hBS9r2IRYc/LmAfNjy9aBys2QEtgSx/fl03ASL6xd6y4VSCgeus+A+GGfcdRsNfXIvKnVjMx7XWksMgNbFGkpzejqIunal82kOYfSuIRv+cBTydZ4w5WCjVkePqmn+COe7U6k5LneS/8KgPSmFl+W0GjKFiOtWzF7yILs7M9ErGHAUEeuCpCcp+fMb5NXkNsedoDxc7zQNP0FO+XOTsktOJ1EWnf7RMjcC+XZr+kYITTwOplIy23GfpjIiQN7pwWJAF5O5hf03PlvWKriOw/3zKT9MbMIhP33+f4viqU/auPXhPLIpnkOX3IUyD2WjKYQVh5y7vs0s7/8K2ADA=) A* to accept a new `:state-key` argument to be used to transform search states into easier to EQUALP structures; I then 
[changed](https://topaz.github.io/paste/#XQAAAQA0EAAAAAAAAAAUGQimgulVPYrJhMVjyYOkoNMRrPzuMVk/CekO/LruVC9Ms1jGXD3R2Xlv5UJlKv+s2h7+RTSP2wDdB5SpbuTIPB6pqSVlqYlUsNiH6bdTOE5QKqdFl92a7Ozx1/mb/dNm2aI7MI+ZxX5n9plD5QGYOpO2b4mm1/FiuMP2shOFemQ/Wfwh8V+p129h/QDVtymp0AdnAEQOhyl+bp4E9+3xL2NcfGe1BSlhs0YyeE3zE8wJrVNf9GvX3K72f0cvPEq7Y3F5Bv4AxMtxLwKd6gOTIQXo/WttDCftDO9XmR3inU3/uaIviMgdOQt+DqZBy95/qjG9yOXUUmBYvAsF1fSiVBxiKa4/fNkL//VVW7gJnOGymMv+JdnfRgnPuKXr6K5ldamg5rh1TUe24OBDzxAyGrjcc5J0nAG/ug8002ZOFrZEVa09UhGe1gYuGG3zmGD7zXGZIzmOefMWykGdUaZkRMWV1EyTvmPUu45wHk06qlY8VO4v5k9eZBWnlqvubobMmlUsYNfyP+ytl89kzRG9llQFOeHy5wj4qPrWbKDWOxIMyLMoufo6F9RQwcQWugoxqZ2+bN8agbTYfs5jbEMFuysB8OkPXs18Da6Lp0MJVk758pxSaCtHmJNHG2a0jc5hdLQ3a72g6+i/b/RaARny+CxcMX6ZszC3fWJzQq+33Xx2IPKvAnwWpDCQ9cVJ+AwPEWe/m06hVs4gpSJGVSfM9M5Gkigl9erK6cesztVVhlGAGUOX5C7f/Nug2IwTH/xAUEonOzKIUofdgkxExwuMwh3+7P1UQpbP4cccrjSP8BJGwgjWEn2KpF0j6Q3xIx82P3kUkiY3JIiWY+bV7WhJNGSdoqcMYNOLTZdAykAQ6hIHFqfV8JRz3ml17yG6GroygkZ3lO7d9X93p9TzrbKibQ3cfZ8l5M1Dm3uY+wPggdZj9mx//MrifxUqQulfVnTS4tUQWlqqnQ6rGpUdbjc34k2U5Wdb+PEwf+YA5rC7+jYTbUz+wZG8v/yPHuQchRr0w4wI6yvg14V+cRthX5AemM9pUDhBFjRFCerJ9XlNVdvU2UIrYhtqKDF63OintpoEnlyA9iqtheBIdRxiUsWyZnfhI9U49ahnaOjO2l21fMOZzwzJmT/5BxIvU91gDUlqJ/VnLI/KV2gIVlJulaigucc77sDzEkxB/5GW0lbXE9hXBbMoDehA3Sx6VsSzAQfRde1tklW+v+YyiAnhaD4i57j+W6KZOBd5lrS0On+gXcQSX9JJdjqkrB5MzOmQTb5+88+i5V1uU+8pwZmXxjMEuEGE0+eIFkmAOyzWI0ggGkCIuWNtnadXjWG3bLKcPFRWCl01YIuWBp8qVe7txAggenOaGnnAKF6PJsuP4QNaPY64+Omb5jkmK+i7oli8mTzBv93BVS9csxalztzUF/zlzveWfRWTm3fSR2dJbXI6o5c6m1TtBgdH0CsJRv4lebrKOlFIE98fYubgxL9XkKSOA0vuQtWeFjzq9tAb6Sl4UXnKWutcUnbZtsysRHEj36u7Hktd8QtA8+UpLkXKaDx6vZ5E95IbVCshSbw7irz3/bIVOAcxyExnprnrFmSneaPwOTpwFddbYcKhMJhSKuOqCn4JIhwz40VXkRd/W38RZo9DKS3PHAqQD+T5isN95R4WP7BlrzlDB/L+j9lB3GKTRcLDvEz7e9m+2dsMTb9aWaB5/HRMnZDvzu0x00SuBfQiQiP3NydjG1M/ECIhDvkVbDt563KmpWj/GDTXU/vFg9m3URYO9LD606zyyF9P00OGUDlvsnywI7BxrOr/O7/8AA==) the calling site of A* to leverage this new argument and serialize the burrow into a mere list of amphipods; and with 
this:

- Part 1 with the example configuration would now complete in less than 1 second (that's a 100x speed up!!!)
- Part 1 with my input would complete in less than 8 seconds, while part 2 in less than 11 seconds

That's huge.  It's still better to use a 2D array instead of a HASH-TABLE, but at least I would not have to wait minutes to see an answer!

From 40 minutes, to 3 seconds...what a ride!

2022-01-04 (permalink)

Advent of Code: [2021/22](https://adventofcode.com/2021/day/22)

We overloaded the submarine's reactor, and we need to reboot it:
The reactor core is made up of a large 3-dimensional grid made up entirely of cubes, one cube per integer 3-dimensional coordinate (`x,y,z`). Each cube can be either **on** or **off**; at the start of the reboot process, they are all **off**.
Continues:
To reboot the reactor, you just need to set all of the cubes to either **on** or **off** by following a list of **reboot steps** (your puzzle input). Each step specifies a cuboid (the set of all cubes that have coordinates which fall within ranges for `x`, `y`, and `z`) and whether to turn all of the cubes in that cuboid **on** or **off**.
For example, this is what our input is going to look like:
on x=10..12,y=10..12,z=10..12
on x=11..13,y=11..13,z=11..13
off x=9..11,y=9..11,z=9..11
on x=10..10,y=10..10,z=10..10
Let's begin by parsing this; for each line, we are going to be creating a tuple containing:

- :ON/:OFF as the first element
- Min and max `x` values, as the second and third elements
- Min and max `y` values, as the fourth and fifth elements
- Min and max `z` values, as the sixth and seventh elements
(defun parse-instructions (data)
  (mapcar #'parse-instruction data))
(defun parse-instruction (string)
  (cl-ppcre:register-groups-bind ((#'as-keyword state) (#'parse-integer x1 x2 y1 y2 z1 z2))
      ("(on|off) x=(-?\\d+)\\.\\.(-?\\d+),y=(-?\\d+)\\.\\.(-?\\d+),z=(-?\\d+)\\.\\.(-?\\d+)" string)
    (list state (min x1 x2) (max x1 x2) (min y1 y2) (max y1 y2) (min z1 z2) (max z1 z2))))
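Assuming AS-KEYWORD simply upcases and interns the matched string into the KEYWORD package, a quick REPL check would look something like this:

```lisp
> (parse-instruction "off x=9..11,y=9..11,z=9..11")
(:OFF 9 11 9 11 9 11)
```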
Let's keep on reading:
The initialization procedure only uses cubes that have `x`, `y`, and `z` positions of at least `-50` and at most `50`. For now, ignore cubes outside this region.
OK...
Execute the reboot steps. Afterward, considering only cubes in the region `x=-50..50,y=-50..50,z=-50..50`, **how many cubes are on?**
Easy peasy:

- For each instruction
- _Clip_ the cuboid to the given -/+50 units region
- For each 1x1x1 cuboid in the resulting region, turn it on or off accordingly
- At the end, count the 1x1x1 cuboids which are on
(defun part1 (instructions &aux (cuboids (make-hash-table :test 'equal)))
  (loop for (state x1 x2 y1 y2 z1 z2) in instructions do
        (setf x1 (max x1 -50) x2 (min x2 50)
              y1 (max y1 -50) y2 (min y2 50)
              z1 (max z1 -50) z2 (min z2 50))
        (loop for x from x1 to x2 do
              (loop for y from y1 to y2 do
                    (loop for z from z1 to z2 do
                          (setf (gethash (list x y z) cuboids) state)))))
  (loop for state being the hash-values of cuboids count (eql state :on)))
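Sanity check: running the same counting logic against the four example instructions above should light up 39 cubes (logic repeated here, with hand-written instruction tuples, so the snippet stands alone):

```lisp
;; Same brute-force approach as PART1: a hash table of 1x1x1 cells,
;; clipped to the -50..50 region, counting the ones left :ON.
(defun count-on (instructions &aux (cuboids (make-hash-table :test 'equal)))
  (loop for (state x1 x2 y1 y2 z1 z2) in instructions do
        (loop for x from (max x1 -50) to (min x2 50) do
              (loop for y from (max y1 -50) to (min y2 50) do
                    (loop for z from (max z1 -50) to (min z2 50) do
                          (setf (gethash (list x y z) cuboids) state)))))
  (loop for state being the hash-values of cuboids count (eql state :on)))

(count-on '((:on 10 12 10 12 10 12)
            (:on 11 13 11 13 11 13)
            (:off 9 11 9 11 9 11)
            (:on 10 10 10 10 10 10)))
;; => 39
```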
Part 2 starts with:
Now that the initialization procedure is complete, you can reboot the reactor.
>
Starting again with all cubes **off**, execute all reboot steps. Afterward, considering all cubes, **how many cubes are on?**
Note: the text contains an example, the solution of which seems to be: `2758514936282235`.  So I am afraid we won't be able to brute-force this -- not naively at least!

The solution that I ended up implementing -- which honestly took me way too long to get to, and which, I later found out, goes in the literature under the name of [**coordinate compression**](https://stackoverflow.com/a/29532057) -- builds on the idea that we could _merge_ together adjacent cuboids if a) they are in the same state, i.e. :ON or :OFF, and b) the resulting shape is still a cuboid.

So, if we figured out a way to _split_ all the input cuboids (possibly overlapping) into another set of **non-overlapping** cuboids, we could then solve the problem as follows:

- First we build a 3D array, where each element represents the state of a **non-overlapping** cuboid -- we will use a BIT array for this
- Then, for each input cuboid
- We figure out which **non-overlapping** cuboids it covers, and turn those on / off accordingly -- here we turn on / off whole NxMxO cuboids, not each of their 1x1x1 cells
- When done processing the input instructions, we scan the grid of **non-overlapping** cuboids, and sum up the _volume_ of the ones which are on

The above should translate into the following function -- We are going to be looking into COMPRESS-COORDINATES in a sec:
(defun part2 (instructions)
  (destructuring-bind (xx yy zz) (compress-coordinates instructions)
    (let ((grid (make-array (list (length xx) (length yy) (length zz))
                            :element-type 'bit
                            :initial-element 0)))
      (flet ((index-of (item vector &aux (start 0) (end (length vector)))
               (binary-search start end (partial-1 #'<=> (aref vector _) item))))
        (loop for (state x1 x2 y1 y2 z1 z2) in instructions
              for i-min = (index-of x1 xx) for i-max = (index-of (1+ x2) xx)
              for j-min = (index-of y1 yy) for j-max = (index-of (1+ y2) yy)
              for k-min = (index-of z1 zz) for k-max = (index-of (1+ z2) zz) do
              (loop for i from i-min below i-max do
                (loop for j from j-min below j-max do
                      (loop for k from k-min below k-max do
                            (setf (aref grid i j k) (ecase state (:on 1) (:off 0))))))))
      (loop with i-max = (1- (length xx)) and j-max = (1- (length yy)) and k-max = (1- (length zz))
            for i below i-max for x1 = (aref xx i) for x2 = (aref xx (1+ i)) sum
            (loop for j below j-max for y1 = (aref yy j) for y2 = (aref yy (1+ j)) sum
                  (loop for k below k-max for z1 = (aref zz k) for z2 = (aref zz (1+ k))
                        when (= (aref grid i j k) 1) sum (* (- x2 x1) (- y2 y1) (- z2 z1))))))))
So how can we split the complete 3d space into a set of **non-overlapping** cuboids?  Well, we could use all the distinct `x1`, `x2`, `y1`, `y2`, `z1` and `z2` values mentioned in the input instructions; unfortunately though, this choice would still result in a list of overlapping cuboids, and that's because all the `*2` values mentioned in the input instructions turn out to be inclusive.  So, what can we do instead? We could use all the `v1` and `(1+ v2)` values, and _that_ would for sure result in a set of **non-overlapping** cuboids.
(defun compress-coordinates (instructions)
  (flet ((unique&sorted (list &aux (list (remove-duplicates list)))
           (make-array (length list) :initial-contents (sort list #'<))))
    (loop for (_ x1 x2 y1 y2 z1 z2) in instructions
          collect x1 into xx collect (1+ x2) into xx
          collect y1 into yy collect (1+ y2) into yy
          collect z1 into zz collect (1+ z2) into zz
          finally (return (list (unique&sorted xx) (unique&sorted yy) (unique&sorted zz))))))
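For instance, here is the same boundary extraction run on the `x` ranges of the four example instructions above (a single-axis sketch of mine, equivalent to what UNIQUE&SORTED does per axis, except it returns a list instead of a vector):

```lisp
;; Collect each lo and (1+ hi) boundary, then dedupe and sort.
(defun compress-axis (ranges)
  (let ((values (loop for (lo hi) in ranges
                      collect lo collect (1+ hi))))
    (sort (remove-duplicates values) #'<)))

(compress-axis '((10 12) (11 13) (9 11) (10 10)))
;; => (9 10 11 12 13 14)
```

Six boundary values, i.e. five non-overlapping `x` intervals: `9..9`, `10..10`, `11..11`, `12..12`, and `13..13`.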
And that's it!

Final plumbing:
(define-solution (2021 22) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

(define-test (2021 22) (570915 1268313839428137))
Test run:
> (time (test-run))
TEST-2021/22..
Success: 1 test, 2 checks.
Evaluation took:
  14.835 seconds of real time
  13.695186 seconds of total run time (13.327857 user, 0.367329 system)
  [ Run times consist of 0.162 seconds GC time, and 13.534 seconds non-GC time. ]
  92.32% CPU
  34,120,998,148 processor cycles
  231,646,416 bytes consed
Unfortunately this takes quite some time to generate the correct answer for part 2, but I am not sure what we could do to speed things up a bit.  So I wrapped the two LOOP forms of PART2 inside TIME:
(defun part2 (instructions)
  (destructuring-bind (xx yy zz) (compress-coordinates instructions)
    (let ((grid (make-array (list (length xx) (length yy) (length zz))
                            :element-type 'bit
                            :initial-element 0)))
      (flet ((index-of (item vector &aux (start 0) (end (length vector)))
               (binary-search start end (partial-1 #'<=> (aref vector _) item))))
        (time (loop for (state x1 x2 y1 y2 z1 z2) in instructions
                for i-min = (index-of x1 xx) for i-max = (index-of (1+ x2) xx)
                for j-min = (index-of y1 yy) for j-max = (index-of (1+ y2) yy)
                for k-min = (index-of z1 zz) for k-max = (index-of (1+ z2) zz) do
                (loop for i from i-min below i-max do
                      (loop for j from j-min below j-max do
                            (loop for k from k-min below k-max do
                                  (setf (aref grid i j k) (ecase state (:on 1) (:off 0)))))))))
      (time (loop for i below (1- (length xx)) for x1 = (aref xx i) for x2 = (aref xx (1+ i)) sum
                  (loop for j below (1- (length yy)) for y1 = (aref yy j) for y2 = (aref yy (1+ j)) sum
                        (loop for k below (1- (length zz)) for z1 = (aref zz k) for z2 = (aref zz (1+ k))
                              when (= (aref grid i j k) 1) sum (* (- x2 x1) (- y2 y1) (- z2 z1)))))))))
And when I ran the test again, this is what I got:
> (time (test-run))
TEST-2021/22
Evaluation took:
  4.088 seconds of real time
  3.189448 seconds of total run time (3.073847 user, 0.115601 system)
  [ Run times consist of 0.037 seconds GC time, and 3.153 seconds non-GC time. ]
  78.01% CPU
  9,402,713,194 processor cycles
  65,504 bytes consed

Evaluation took:
  10.920 seconds of real time
  9.741171 seconds of total run time (9.536960 user, 0.204211 system)
  89.20% CPU
  25,115,619,739 processor cycles
  416 bytes consed

..
Success: 1 test, 2 checks.
Evaluation took:
  15.704 seconds of real time
  13.589492 seconds of total run time (13.193504 user, 0.395988 system)
  [ Run times consist of 0.084 seconds GC time, and 13.506 seconds non-GC time. ]
  86.53% CPU
  36,120,260,013 processor cycles
  231,686,928 bytes consed
Which means that two thirds of the run time is spent scanning the grid of non-overlapping cuboids (FYI, it's a `828x836x828` 3D array, with `573148224` elements in it).  I am going to leave this for now, but if anyone has any ideas on how to optimize this, do reach out.

For comparison, my [original solution](https://topaz.github.io/paste/#XQAAAQDzDQAAAAAAAAAUGQimgx+p6PZhRh7uIO5BVe7pcMkiG6eOV22j2MYKuFs91c+ybCSTgFinLxYMDBog9+D9wK6Mym3ryCQmnaAMvpJKQHyEdtVsPNjjYnuytyD7Qrf6bGu6LFg1BiAQ72xs23DoX1Ss84dXIY34R1KEstD5kaS3TZaBhOkCRt1rui/F8X46S4FNkH8lmNyXe65HGJxbLmM5AxC70fIrc01PnIeYXoBYDXOWDzVB/LWRj/aG+TH+ub2IlKnO2F+v6DYfX2gwfzmZLlgQw+QhaHr0GPGmPWgaMnfHgKe7blNOvRP1NRvQ6FTbxw5mMV1TIJxFCc/lapiFsx6hxF5SPmxBQe6Btb2qY+fbFBRVnoSuvtD2+klndh7SHKu9rsBkp+sXSacQCv1uey4f7dheHjB0H+G4gQK3CvxjXd1ePXR972yZ/R5KQ789h7lPj5gU+zhSC3ckrECpA9D0pqD5j3hr2RXNun3RxEY9/u9KM29QW8U24xifKkVT9ugCuaB1JcAOFPluLox+xz+5z0lFEyXVGnLPN6i+zx2xnZiJ1Hwxt/irXSdaV2Y/hqGyxagGeuOSFBUxhropP0WtGMcggJ7h+uTFi03SUVFSdAMdE89APmygn4tZ8Ao54unEWsiD9ePqp/LcGKnOcgdq3fJkE47Alx9QR2F8DQKau80zfX3qYOOOTzXqAxVk8aUbZOtMV6iz5GUr5798oXsSWXKn0/ghAnO7GjQrOCbeGz7uYBi4u1E7ehUSfpJG+MKlo4fpL7Jyf7Nmn1nAACLn/he+SIWsa/6VkYIQT0IJis/C3tuiGeOEicXwvpjNNuR4n5czJY0Ta0UyWnbyftmHNNakZXKJcGWEZq4p0XFuLuzZqfp22bwwxOJWEIPTmPCrYL3bgpoaDpFbJhAc8T07WauEsQbDJA4UzfSAA1yClQpeW3HG6LUOTdM5s+q5ef7uEkxOZ8lqdbwVVJqh1R5NI5ITL2u2tXpWlH7thLThrYBccNXPxqFWlQLlFWWkgwQbkxqrx5NrygPJ2qqvBUHE2mFQic9ovGOTTF1T9Dj9FMQ7OL1TZ46ZtHbZynWaYYwnPpTfyZQDXPjuvpgPtYA9TWuq0dXupWhex6Nfi+OqXuzxtnDtLUgUpHl+ps8Wq1jEGk4jWijdseIIRMaur9U8U7Sv2nMAr0JFrbS7b8AOc+jkFxDNtOwNvkkn4oIMzrRiTKAecYWCMpERpIDgJEYI6RbyzeWrv073RkOY00F9LYH2uIP86bmY5cBcapqcu4cDzpbgIv6HZGV50onnbTTFfJ5pKTHt4mGefrymPqzdSKJF9uXVM+EMtJw6iSFT4pt6XOf/PbBBAA==) took around twice as long to get the answer for part 2.  What did I do wrong, there?! Well, as I mopped things up for this entry, I realized that we could speed things up a little bit by a) using **binary search** instead of a linear scan to find the min / max indices of the **non-overlapping** cuboids, and, most importantly, b) doing this search **once** per input instruction (i.e. once for each `x1`, `(1+ x2)`, `y1`, `(1+ y2)`, `z1`, `(1+ z2)` value) instead of over and over again inside each one of the nested LOOP forms.

Live and learn!

2022-01-03 (permalink)

Advent of Code: [2021/21](https://adventofcode.com/2021/day/21)

We have been challenged by the computer to play a game with it:
This game consists of a single die, two pawns, and a game board with a circular track containing ten spaces marked `1` through `10` clockwise. Each player's **starting space** is chosen randomly (your puzzle input). Player 1 goes first.
Continues:
Players take turns moving. On each player's turn, the player rolls the die **three times** and adds up the results. Then, the player moves their pawn that many times **forward** around the track (that is, moving clockwise on spaces in order of increasing value, wrapping back around to `1` after `10`). So, if a player is on space `7` and they roll `2`, `2`, and `1`, they would move forward 5 times, to spaces `8`, `9`, `10`, `1`, and finally stopping on `2`.
After each player moves, they increase their **score** by the value of the space their pawn stopped on. Players' scores start at `0`. So, if the first player starts on space `7` and rolls a total of `5`, they would stop on space `2` and add `2` to their score (for a total score of `2`). The game immediately ends as a win for any player whose score reaches **at least `1000`**.
Since the first game is a practice game, the submarine opens a compartment labeled **deterministic dice** and a 100-sided die falls out. This die always rolls `1` first, then `2`, then `3`, and so on up to `100`, after which it starts over at `1` again. Play using this die.
The task:
Play a practice game using the deterministic 100-sided die. The moment either player wins, **what do you get if you multiply the score of the losing player by the number of times the die was rolled during the game?**
As usual, let's worry about the input first; this is what we are expecting:
Player 1 starting position: 4
Player 2 starting position: 8
And we are going to parse this into a CONS cell, with accessors functions for each player starting position:
(defun parse-positions (data)
  (flet ((player-position (string)
           (cl-ppcre:register-groups-bind ((#'parse-integer pos))
               ("Player \\d+ starting position: (\\d+)" string)
             pos)))
    (cons (player-position (first data)) (player-position (second data)))))
(defun player1 (positions) (car positions))
(defun player2 (positions) (cdr positions))
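For the curious, the same parsing logic in Python (a quick sketch, not the Lisp above):

```python
import re

def parse_positions(lines):
    """Pull the starting position out of each 'Player N starting position: P' line."""
    def player_position(line):
        return int(re.search(r"Player \d+ starting position: (\d+)", line).group(1))
    return (player_position(lines[0]), player_position(lines[1]))

print(parse_positions(["Player 1 starting position: 4",
                       "Player 2 starting position: 8"]))  # -> (4, 8)
```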
Now...to get the answer for part 1, we are just going to have to simulate the game:

- Each player's score starts at `0`
- We initialize `last-die` to `100`, and `roll-count` to `0`
- We roll the die thrice, and store the sum, taking care of applying the _right_ wrap around logic
- We increase `last-die` by 3 -- again, taking care of applying the _right_ wrap around logic
- We move player 1 forward
- We increase its score
- We increase the die roll count
- If player 1 won, we stop; otherwise, we recurse, swapping player 1's data with player 2's
(defun play (p1 p2 &optional (s1 0) (s2 0) (last-die 100) (roll-count 0)
                 &aux
                 (result (mod1 (+ (+ last-die 1) (+ last-die 2) (+ last-die 3)) 10))
                 (last-die (mod1 (+ last-die 3) 100))
                 (p1 (mod1 (+ p1 result) 10))
                 (s1 (+ s1 p1))
                 (roll-count (+ roll-count 3)))
  (if (>= s1 1000) (* s2 roll-count) (play p2 p1 s2 s1 last-die roll-count)))
MOD1's job is simply to apply the _right_ wrap around logic: valid values range from `1` to `max`, so while the value is greater than the maximum we subtract the maximum (i.e. `max` stays `max`, and `max + 1` wraps back around to `1`):
(defun mod1 (n max)
  (loop while (> n max) do (decf n max))
  n)
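Here is the whole deterministic game as a Python sketch, to double-check the logic; the `(n - 1) % max + 1` formula is an equivalent, loop-free MOD1, and the example starting positions `4` and `8` should give `739785` as per the puzzle text:

```python
def mod1(n, maximum):
    """1-based wrap-around: values run 1..maximum, so `maximum` stays
    `maximum` and `maximum + 1` wraps back to 1."""
    return (n - 1) % maximum + 1

def play(p1, p2):
    """Deterministic game; returns losing score * number of die rolls."""
    s1 = s2 = 0
    last_die = 100   # so the very first roll is mod1(100 + 1, 100) == 1
    rolls = 0
    while True:
        result = sum(mod1(last_die + k, 100) for k in (1, 2, 3))
        last_die = mod1(last_die + 3, 100)
        p1 = mod1(p1 + result, 10)
        s1 += p1
        rolls += 3
        if s1 >= 1000:
            return s2 * rolls
        p1, p2, s1, s2 = p2, p1, s2, s1  # swap: it's the other player's turn

print(play(4, 8))  # -> 739785, the example answer from the puzzle text
```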
Now that we're warmed up, it's time to play the real game -- part 2:
A second compartment opens, this time labeled **Dirac dice**. Out of it falls a single three-sided die.
>
As you experiment with the die, you feel a little strange. An informational brochure in the compartment explains that this is a **quantum die**: when you roll it, the universe **splits into multiple copies**, one copy for each possible outcome of the die. In this case, rolling the die always splits the universe into **three copies**: one where the outcome of the roll was `1`, one where it was `2`, and one where it was `3`.
>
The game is played the same as before, although to prevent things from getting too far out of hand, the game now ends when either player's score reaches at least `21`.
The task:
Using your given starting positions, determine every possible outcome. **Find the player that wins in more universes; in how many universes does that player win?**
Again...we are going to simulate this.

Let's suppose we had a COUNT-WINS function, capable of telling us, given each player's position, how many times each player would win; with it, getting the answer for part 2 would be as simple as calling MAX with each player's win count:
(defun part2 (positions)
  (destructuring-bind (wc1 . wc2) (count-wins (player1 positions) (player2 positions))
    (max wc1 wc2)))
Nice! Let's now see what our COUNT-WINS function is going to look like:

- The current player rolls the three-faced die thrice (one nested loop per roll), and collects their score
- If they won, we increase their win counter
- Otherwise, we let the other player play, and collect the win counts accordingly (when recursing, player 1 and player 2 data is swapped, so the win counts will be swapped as well)
(defun count-wins (p1 p2 &optional (s1 0) (s2 0))
  (let ((wc1 0) (wc2 0))
    (loop for d1 from 1 to 3 do
          (loop for d2 from 1 to 3 do
                (loop for d3 from 1 to 3
                      for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                      for s1-next = (+ s1 p1-next) do
                      (if (>= s1-next 21)
                        (incf wc1)
                        (destructuring-bind (w2 . w1) (count-wins p2 p1-next s2 s1-next)
                          (incf wc1 w1)
                          (incf wc2 w2))))))
    (cons wc1 wc2)))
The problem with this naive solution is that it will take forever to run; the text for part 2 should have hinted at this:
Using the same starting positions as in the example above, player 1 wins in **`444356092776315`** universes, while player 2 merely wins in `341960390180808` universes.
Let's think about this:

- each player's score is bounded: it is at most `20` going into a turn (otherwise the game would already be over), and the winning move adds at most `10`, so scores never exceed `30`
- the same is valid for player positions: they always range between `1` and `10`

So maybe there aren't that many _distinct_ states after all; maybe we can _memoize_ all these calls?!

Well, it turns out we can, and it will work pretty fast too!
(defun count-wins (p1 p2
                      &optional (s1 0) (s2 0) (dp (make-hash-table :test 'equal))
                      &aux (key (list p1 s1 p2 s2)))
  (uiop:if-let (wins (gethash key dp))
    wins
    (setf (gethash key dp)
          (let ((wc1 0) (wc2 0))
            (loop for d1 from 1 to 3 do
                  (loop for d2 from 1 to 3 do
                        (loop for d3 from 1 to 3
                              for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                              for s1-next = (+ s1 p1-next) do
                              (if (>= s1-next 21)
                                (incf wc1)
                                (destructuring-bind (w2 . w1) (count-wins p2 p1-next s2 s1-next dp)
                                  (incf wc1 w1)
                                  (incf wc2 w2))))))
            (cons wc1 wc2)))))
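As a cross-check, the same memoized recursion fits in a few lines of Python, with `functools.lru_cache` playing the role of the memoization table; the win counts for the example starting positions are exactly the ones quoted above:

```python
from functools import lru_cache

def mod1(n, maximum):
    return (n - 1) % maximum + 1

@lru_cache(maxsize=None)
def count_wins(p1, p2, s1=0, s2=0):
    """(wins for the player about to move, wins for the other player)."""
    wc1 = wc2 = 0
    for d1 in (1, 2, 3):
        for d2 in (1, 2, 3):
            for d3 in (1, 2, 3):
                p1_next = mod1(p1 + d1 + d2 + d3, 10)
                s1_next = s1 + p1_next
                if s1_next >= 21:
                    wc1 += 1
                else:
                    w2, w1 = count_wins(p2, p1_next, s2, s1_next)
                    wc1 += w1
                    wc2 += w2
    return wc1, wc2

print(max(count_wins(4, 8)))  # -> 444356092776315, as per the puzzle text
```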
Final plumbing:
(define-solution (2021 21) (positions parse-positions)
  (values (part1 positions) (part2 positions)))

(define-test (2021 21) (998088 306621346123766))
Test run:
> (time (test-run))
TEST-2021/21..
Success: 1 test, 2 checks.
Evaluation took:
  0.066 seconds of real time
  0.064617 seconds of total run time (0.062712 user, 0.001905 system)
  98.48% CPU
  153,951,731 processor cycles
  20,318,160 bytes consed
And that's it!

As I was trying to clean this up and make use of my custom DEFUN/MEMO macro, I realized it was not properly supporting functions with `&optional` and `&aux` arguments:
; in: DEFUN COUNT-WINS
;     (LIST AOC/2021/21::P1 AOC/2021/21::P2 &OPTIONAL (AOC/2021/21::S1 0)
;           (AOC/2021/21::S2 0))
;
; caught WARNING:
;   undefined variable: COMMON-LISP:&OPTIONAL

;     (AOC/2021/21::S1 0)
;
; caught STYLE-WARNING:
;   undefined function: AOC/2021/21::S1

;     (AOC/2021/21::S2 0)
;
; caught STYLE-WARNING:
;   undefined function: AOC/2021/21::S2
;
; compilation unit finished
;   Undefined functions:
;     S1 S2
;   Undefined variable:
;     &OPTIONAL
;   caught 1 WARNING condition
;   caught 2 STYLE-WARNING conditions
WARNING: redefining AOC/2021/21::COUNT-WINS in DEFUN
WARNING: redefining AOC/2021/21::COUNT-WINS/CLEAR-MEMO in DEFUN
COUNT-WINS
COUNT-WINS/CLEAR-MEMO
So I went on, and fixed it -- pay attention to EXPAND-DEFUN/MEMO-KEYABLE-ARGS, in which we:

- Skip `&optional`
- Skip `&key`
- Skip `&rest` and the following parameter
- Exit on `&aux`
- _Unwrap_ optional / key parameter
(defmacro defun/memo (name args &body body)
  (expand-defun/memo name args body))


(defun expand-defun/memo (name args body)
  (let* ((memo (gensym  (mkstr name '-memo))))
    `(let ((,memo (make-hash-table :test 'equalp)))
      (values
        ,(expand-defun/memo-memoized-fun memo name args body)
        ,(expand-defun/memo-clear-memo-fun memo name)))))


(defun expand-defun/memo-memoized-fun (memo name args body)
  (with-gensyms (key result result-exists-p)
    `(defun ,name ,args
      (let ((,key (list ,@(expand-defun/memo-keyable-args args))))
        (multiple-value-bind (,result ,result-exists-p) (gethash ,key ,memo)
          (if ,result-exists-p
            ,result
            (setf (gethash ,key ,memo)
                  (progn ,@body))))))))

(defun expand-defun/memo-keyable-args (args)
  (uiop:while-collecting (collect)
    (loop while args for a = (pop args) do
          (cond ((eq a '&optional) nil)
                ((eq a '&key) nil)
                ((eq a '&rest) (pop args))
                ((eq a '&aux) (return))
                ((consp a) (collect (car a)))
                (t (collect a))))))


(defun expand-defun/memo-clear-memo-fun (memo name)
  (let ((clear-memo-name (intern (mkstr name '/clear-memo))))
    `(defun ,clear-memo-name ()
      (clrhash ,memo))))
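In Python terms, DEFUN/MEMO is basically a decorator; here is a hedged sketch of the same idea (note: it keys on the arguments exactly as passed -- Python applies defaults inside the callee, so no `&optional` unwrapping is needed):

```python
from functools import wraps

def defun_memo(fn):
    """Memoize `fn` on its full argument tuple and expose a clear_memo(),
    mirroring the DEFUN/MEMO + COUNT-WINS/CLEAR-MEMO pair."""
    memo = {}

    @wraps(fn)
    def wrapper(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in memo:
            memo[key] = fn(*args, **kwargs)
        return memo[key]

    wrapper.clear_memo = memo.clear
    return wrapper

@defun_memo
def slow_square(n):
    return n * n

print(slow_square(12))  # -> 144
slow_square.clear_memo()
```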
With this, COUNT-WINS can be updated as follows:
(defun/memo count-wins (p1 p2 &optional (s1 0) (s2 0))
  (let ((wc1 0) (wc2 0))
    (loop for d1 from 1 to 3 do
          (loop for d2 from 1 to 3 do
                (loop for d3 from 1 to 3
                      for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                      for s1-next = (+ s1 p1-next) do
                      (if (>= s1-next 21)
                        (incf wc1)
                        (destructuring-bind (w2 . w1) (count-wins p2 p1-next s2 s1-next)
                          (incf wc1 w1)
                          (incf wc2 w2))))))
    (cons wc1 wc2)))
(Yes, all we did was replace DEFUN with DEFUN/MEMO)

2021-12-20 (permalink)

Advent of Code: [2021/20](https://adventofcode.com/2021/day/20)
With the scanners fully deployed, you turn their attention to mapping the floor of the ocean trench. When you get back the image from the scanners, it seems to just be random noise. Perhaps you can combine an image enhancement algorithm and the input image (your puzzle input) to clean it up a little.
For example:
..#.#..#####.#.#.#.###.##.....###.##.#..###.####..#####..#....#..#..##..###..######.###...####..#..#####..##..#.#####...##.#.#..#.##..#.#......#.###.######.###.####...#.##.##..#..#..#####.....#.#....###..#.##......#.....#..#..#..##..#...##.######.####.####.#.#...#.......#..#.#.#...####.##.#......#..#...##.#.##..#...##.#.##..###.#......#.#.......#.#.#.####.###.##...#.....####.#..#..#.##.#....##..#.####....##...##..#...#......#.#.......#.......##..####..#...#.#.#...##..#.#..###..#####........#..####......#..#

#..#.
#....
##..#
..#..
..###
The first section is the **image enhancement algorithm**.  The second section is the **input image**, a two-dimensional grid of light pixels (`#`) and dark pixels (`.`).
The image enhancement algorithm describes how to enhance an image by **simultaneously** converting all pixels in the input image into an output image. Each pixel of the output image is determined by looking at a 3x3 square of pixels centered on the corresponding input image pixel. So, to determine the value of the pixel at (5,10) in the output image, nine pixels from the input image need to be considered: (4,9), (4,10), (4,11), (5,9), (5,10), (5,11), (6,9), (6,10), and (6,11). These nine input pixels are combined into a single binary number that is used as an index in the image **enhancement algorithm string**.
The task for the day:
Start with the original input image and apply the image enhancement algorithm twice, being careful to account for the infinite size of the images. **How many pixels are lit in the resulting image?**
OK, first off, the input:

- A tuple
- First element is the algorithm, with all the `#` and `.` characters replaced with Ts and NILs
- The second element, a HASH-TABLE mapping from `(row col)` pairs, to whether the pixel was on (T), or not (NIL)
(defun parse-input (data)
  (list (parse-enhancement-algorithm (first data)) (parse-image (cddr data))))

(defun parse-enhancement-algorithm (string)
  (map 'vector (partial-1 #'eql #\#) string))

(defun parse-image (data &aux (image (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string
              when (eql ch #\#) do (setf (gethash (list r c) image) t)))
  image)

(defun enhancement-algorithm (input) (first input))
(defun image (input) (second input))
Next, the core of the enhance algorithm:

- All the pixels are enhanced at the same time, so we build the next image into a fresh table instead of mutating the current one
- Then, for each pixel -- plus some extra ones, to properly account for the enhancements happening on the border of the image
- We fetch the 3x3 block centered on _that_ pixel -- NEIGHBORS9
- We convert that into an index
- We use that to access the algorithm and figure out whether the pixel will be on or off
(defun enhance-step (algo curr &aux
                          (next (make-hash-table :test 'equal))
                          (row-min (loop for (r _) being the hash-keys of curr minimize r))
                          (row-max (loop for (r _) being the hash-keys of curr maximize r))
                          (col-min (loop for (_ c) being the hash-keys of curr minimize c))
                          (col-max (loop for (_ c) being the hash-keys of curr maximize c)))
  (flet ((block-3x3 (pos)
           (mapcar (partial-1 #'gethash _ curr) (neighbors9 pos))))
    (loop for row from (- row-min 3) to (+ row-max 3) do
          (loop for col from (- col-min 3) to (+ col-max 3)
            for pos = (list row col)
            for i = (as-binary (block-3x3 pos)) do
            (setf (gethash pos next) (aref algo i)))))
  next)
NEIGHBORS9 simply checks all the adjacent positions (including the current one), and does this in the _right_ order -- fail to do so, and we would be accessing elements of the enhancement algorithm wrong!
(defparameter *nhood* '((-1 -1) (-1  0) (-1 1)
                        ( 0 -1) ( 0  0) ( 0 1)
                        ( 1 -1) ( 1  0) ( 1 1)))

(defun neighbors9 (pos)
  (loop for d in *nhood* collect (mapcar #'+ pos d)))
Inside AS-BINARY, we scan the input list of surrounding pixels, and thanks to DPB and BYTE we construct the index to use while accessing the enhancement algorithm:
(defun as-binary (list &aux (number 0))
  (loop for ch in list for i from 8 downto 0 when ch do
        (setf number (dpb 1 (byte 1 i) number)))
  number)
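The neighborhood order plus the bit folding, as a small Python sketch (the `34` below is just `0b000100010`):

```python
NHOOD = [(-1, -1), (-1, 0), (-1, 1),
         ( 0, -1), ( 0, 0), ( 0, 1),
         ( 1, -1), ( 1, 0), ( 1, 1)]

def neighbors9(pos):
    """All 9 positions of the 3x3 block centred on pos, top-left first."""
    r, c = pos
    return [(r + dr, c + dc) for dr, dc in NHOOD]

def as_binary(bits):
    """Fold the 9 booleans, most significant first, into a 0..511 index."""
    number = 0
    for bit in bits:
        number = (number << 1) | (1 if bit else 0)
    return number

print(as_binary([False, False, False,
                 True,  False, False,
                 False, True,  False]))  # -> 34, i.e. 0b000100010
```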
The last bit: the guy that does the enhancement, a given number of times (`2` in our case), and that finally returns the number of pixels which are on:
(defun enhance (iterations input &aux (algo (enhancement-algorithm input)) (curr (image input)))
  (dotimes (_ iterations)
    (setf curr (enhance-step algo curr)))
  (loop for state being the hash-values of curr count state))
Feed this our input, and we should be getting our result back...right?!  Well...no!

As it turns out, the first entry of the enhancement algorithm is a `#`; that entry is accessed with index `0`, which is only produced by a 3x3 block with all its pixels off; now, since our image is _infinite_, and since all the pixels not covered by our input start off dark, that means that after the first enhancement step...**the whole infinite background will be on...WTF?!**

Well, lucky for us, the last element of the enhancement algorithm is a `.`, which means that a 3x3 block with all the pixels turned on will cause the middle one to be turned off; so maybe, if after the first iteration all the _background_ pixels will be turned on, after the second one they would all be turned off again.

OK, let's start by changing ENHANCE to check for the current enhancement step, and use that information to decide what the background state is going to be (i.e. it should be on, T, for all the odd steps):
(defun enhance (iterations input &aux (algo (enhancement-algorithm input)) (curr (image input)))
  (dotimes (i iterations)
    (setf curr (enhance-step algo curr (oddp i))))
  (loop for state being the hash-values of curr count state))
Next we update ENHANCE-STEP to use `background-lit-p` accordingly (i.e. use that when accessing a pixel which is outside of the current image):
(defun enhance-step (algo curr background-lit-p &aux
                          (next (make-hash-table :test 'equal))
                          (row-min (loop for (r _) being the hash-keys of curr minimize r))
                          (row-max (loop for (r _) being the hash-keys of curr maximize r))
                          (col-min (loop for (_ c) being the hash-keys of curr minimize c))
                          (col-max (loop for (_ c) being the hash-keys of curr maximize c)))
  (flet ((block-3x3 (pos)
           (mapcar (partial-1 #'gethash _ curr background-lit-p)
                   (neighbors9 pos))))
    (loop for row from (- row-min 3) to (+ row-max 3) do
          (loop for col from (- col-min 3) to (+ col-max 3)
            for pos = (list row col)
            for i = (as-binary (block-3x3 pos)) do
            (setf (gethash pos next) (aref algo i)))))
  next)
Calling ENHANCE on the input image should now work as expected.
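To convince myself the background flipping logic is right, here is an end-to-end Python sketch of ENHANCE / ENHANCE-STEP; instead of the puzzle input, it is tested with a hand-crafted algorithm that simply copies the centre pixel (bit `4` of the index), which makes any image a fixed point:

```python
def enhance_step(algo, curr, background_lit):
    """One enhancement pass.  `curr` maps (row, col) -> lit?; pixels outside
    the recorded area take the infinite-background value."""
    rows = [r for r, _ in curr]
    cols = [c for _, c in curr]
    nxt = {}
    for row in range(min(rows) - 3, max(rows) + 4):
        for col in range(min(cols) - 3, max(cols) + 4):
            index = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    lit = curr.get((row + dr, col + dc), background_lit)
                    index = (index << 1) | (1 if lit else 0)
            nxt[(row, col)] = algo[index]
    return nxt

def enhance(iterations, algo, image):
    curr = dict(image)
    for i in range(iterations):
        # When algo[0] is lit, the infinite background is on during odd steps.
        curr = enhance_step(algo, curr, algo[0] and i % 2 == 1)
    return sum(curr.values())

# Hand-crafted algorithm: the output pixel equals the centre pixel (bit 4
# of the index), so any image is a fixed point and the lit count is stable.
algo = [bool((i >> 4) & 1) for i in range(512)]
image = {(0, 0): True, (2, 1): True}
print(enhance(2, algo, image))  # -> 2
```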

What about part 2?  It looks like `2` steps were not enough:
Start again with the original input image and apply the image enhancement algorithm 50 times. **How many pixels are lit in the resulting image?**
Easy peasy -- the code we have for part 1 will work for part 2 as well!

Final plumbing:
(define-solution (2021 20) (input parse-input)
  (values (enhance 2 input) (enhance 50 input)))

(define-test (2021 20) (5503 19156))
Test run:
> (time (test-run))
TEST-2021/20..
Success: 1 test, 2 checks.
Evaluation took:
  8.300 seconds of real time
  7.358258 seconds of total run time (6.756652 user, 0.601606 system)
  [ Run times consist of 0.756 seconds GC time, and 6.603 seconds non-GC time. ]
  88.65% CPU
  19,090,743,941 processor cycles
  2,622,069,728 bytes consed
Well, that conses a lot, but again...it gets the job done, so I am going to keep it as is for now!  Maybe I could use a 2D BIT array, rather than a HASH-TABLE of Ts and NILs?

PS. As usual, my [REPL buffer](https://topaz.github.io/paste/#XQAAAQBRCgAAAAAAAAAUGQimgx+p6PZhRh7uIO5BVNPIdM7CxikuU7ERkd0dQuCDrZBhpxZTCueZNLCEOlFkCLmeqCJZFrxgbEMOXRzMoYJB/FMrFs8T4gJnMYA3TPqI3D5vjVHPr1ZfdXnTvjYJBkm6A/55PkeXgxNmxzw3LW7bJ0kUiMhOp6QbHb2yWa8RAgVsSF0ty2N7peZQ76w6a8ZVyDteIUd5jJxoqqPOE8oBWOjlb8NquqBmrq4rss2WDa1qyYznBxpJBUT/pZULLTPsKTfDwp5rT5gp+dGNnfoqSwkGzc32cfjmmEYQQiT7cqMgAgDNOAX5vWFDGdudGXY9Qu7SaTQdPgBCVOc8r/y6qgUKDvREdOixMDwECLpDGRtnBhJL9EJ76Fxmf1R7alP2C58s9yw8gzkzvM3EJzGmiABCfwiS7T/g6gx7gFz/32oGCvVFtgQ65rq3JfWMtNvooXIuGCBZ+fdsXzUv8UC+0GIejPoaYe3EIyXdBr4I//Pb3c0qM+TeCcKGEkSV32QnI6xshkfMIC98CwvKFXZaZLNjXfw52gPUFiKoZAomS3hYxzlPf4x7jP5e9LrWxGIEf2SGl69YuU2r2R1QJ9rAQIJ9/HBbX1+sh/TIBQGI+8t71PWDMHI7VzpUGU8RRQ9kjsd/MIDb2xNnLlzKatDYnhq+NUIhXfvIs4Q8nU+ptkfQd0HEtYHOE6mkea/iymvu72Wmqt0NMgI5vlFtuD+jj+QQRAksMjtVtA2dnqSaxbAet5W37ws30UvbcpI1zxZ47/tC6pc1OQmtdq2tdnAzToPU1ZDBryRsDy5Cc+d2s2pLa7LjCbrjFnGY+6GGqOB/Nidfzbt+nlTkxXydENby7ITRAiyv/JgmR3U9ve44B7lSntPwWR39Lbpk4exY/0AABHYeou36vUEfEJF/HfT0L+DAFQicFBvYteYCgrehEwePyMQLTu/jgUux+SRXDBG0s7n7yRZjbiYWIHuw84v3bYwxPI/cVvEAbizScOWYoBCDZj7whw2pZZ99hNv8f9RIBCeXuAbno+SRNCNpH+O1DrfA515YQCl4ep23IsSZxKx3o2FESuCqlcH+ZpTJaAO8MYjJb//VGo7k) -- it's not too bad...I have seen (and authored) worse!

2021-12-19 (permalink)

Advent of Code: [2021/19](https://adventofcode.com/2021/day/19)

Solving today's problem took me way too long, so I am not going to put together a full write-up for it, not yet, especially because my solution takes around 2 minutes to complete, and conses up like hell!
Evaluation took:
  124.405 seconds of real time
  92.907883 seconds of total run time (90.247127 user, 2.660756 system)
  [ Run times consist of 0.972 seconds GC time, and 91.936 seconds non-GC time. ]
  74.68% CPU
  286,124,018,756 processor cycles
  13,399,299,136 bytes consed
Very high level:

- We pick a scanner and check it against every other one, and see if they overlap
- To see if the two scanners overlap...
- ...we keep one as is, while rotating the other in all the 24 possible directions
- ...for each beacon of the first scanner, we calculate the relative distances with all the other beacons seen by the scanner
- ...for each beacon of the second scanner (the rotated one), we calculate the relative distances with all the other beacons seen by the scanner
- ...if these two sets of relative distances have at least 12 elements in common, then the two scanners overlap!
- ...do some math to return the scanner point, and the rotation matrix that we used to _orient_ the second scanner
- We now have a new scanner (with a new origin, and a new rotation matrix), so we can:
- ...convert the positions of the local beacons to the system of coordinates of the scanner used as reference
- ...use the new scanner to check against all of the others

Input parsing:
(defun parse-scanner (paragraph &aux (paragraph (cl-ppcre:split "\\n" paragraph)))
  (flet ((numbers (string)
           (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "-?\\d+" string))))
    (cons
      (first (numbers (first paragraph)))
      (mapcar #'numbers (rest paragraph)))))
(defun id (scn) (car scn))
(defun beacons (scn) (cdr scn))
Some utilities to deal with matrices first:
(defun row (i m) (nth i m))
(defun col (j m) (loop for row in m collect (nth j row)))
(defun dot-product (v1 v2) (apply #'+ (mapcar #'* v1 v2)))
(defun invert (m) (transpose m))
(defun transpose (m) (loop for i below (length m) collect (col i m)))

; Courtesy of: https://www.euclideanspace.com/maths/algebra/matrix/orthogonal/rotation/index.htm
(defparameter *rotate-90-x*
  '((1 0  0)
    (0 0 -1)
    (0 1  0)))
(defparameter *rotate-90-y*
  '(( 0 0 1)
    ( 0 1 0)
    (-1 0 0)))
(defparameter *rotate-90-z*
  '((0 -1 0)
    (1  0 0)
    (0  0 1)))

(defun m* (m1 m2)
  (loop for i below (length m1) collect
        (loop for j below (length (first m1)) collect
              (dot-product (row i m1) (col j m2)))))

(defun compute-all-rotation-matrices (&aux
                                       (midentity '((1 0 0) (0 1 0) (0 0 1)))
                                       rez)
  (loop repeat 4
        for rz = midentity then (m* rz *rotate-90-z*) do
        (loop repeat 4
              for ry = (m* rz midentity) then (m* ry *rotate-90-y*) do
              (loop repeat 4
                    for rx = (m* ry midentity) then (m* rx *rotate-90-x*) do
                    (push rx rez))))
  (reverse (remove-duplicates rez :test 'equal)))

(defparameter *all-rotation-natrices* (compute-all-rotation-matrices))
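A quick Python sanity check that the triple loop over quarter turns really does produce the `24` distinct orientations (64 products, deduplicated):

```python
RX = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]
RY = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]
RZ = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def m_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def all_rotation_matrices():
    """Compose quarter turns around z, y, x; 64 products, 24 distinct."""
    seen = []
    rz = I3
    for _ in range(4):
        ry = rz
        for _ in range(4):
            rx = ry
            for _ in range(4):
                if rx not in seen:
                    seen.append(rx)
                rx = m_mul(rx, RX)
            ry = m_mul(ry, RY)
        rz = m_mul(rz, RZ)
    return seen

print(len(all_rotation_matrices()))  # -> 24
```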
The beef for part 1:
(defun find-beacons (scanners &aux
                       (queue (list (list (first scanners) '(0 0 0) '((1 0 0) (0 1 0) (0 0 1)))))
                       (rez (beacons (first scanners)))
                       (scanners (rest scanners)))
  (loop while queue
        for (scn1 d10 r10) = (pop queue) do
        (loop for scn2 in scanners do
              (multiple-value-bind (p1 r21) (locate-scanner scn1 scn2)
                (when p1
                  (flet ((transform21 (p2) (v+ (rotate p2 r21) p1))
                         (transform10 (p1) (v+ (rotate p1 r10) d10)))
                    (let ((d20 (transform10 p1))
                          (r20 (m* r10 r21)))
                      (prl (id scn1) d10 r10 (id scn2) p1 r21 d20 r20)
                      (setf rez (append rez (mapcar (lambda (p2)
                                                      (transform10 (transform21 p2)))
                                                    (beacons scn2)))
                        scanners (remove scn2 scanners :test 'equal)
                        queue (cons (list scn2 d20 r20) queue))))))))
  (remove-duplicates rez :test 'equal))


(defun locate-scanner (scn1 scn2 &aux (bb1 (beacons scn1)))
  (dolist (b1 bb1)
    (let ((dd1 (relative-to b1 bb1)))
      (dolist (m *all-rotation-natrices*)
        (let ((bb2 (rotate-all (beacons scn2) m)))
          (dolist (b2 bb2)
            (let ((dd2 (relative-to b2 bb2)))
              (let ((common (intersection dd1 dd2 :test 'equal)))
                (when (>= (length common) 12)
                  (let ((b2-orig (rotate b2 (invert m))))
                    (return-from locate-scanner
                                 (values (locate b1 b2-orig m) m))))))))))))

(defun relative-to (b bb) (mapcar (lambda (o) (mapcar #'- o b)) bb))
(defun rotate-all (points matrix) (mapcar (partial-1 #'rotate _ matrix) points))
(defun rotate (point matrix)
  (loop for row in matrix collect (dot-product row point)))
(defun locate (b1 b2 m) (v- b1 (rotate b2 m)))
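The key invariant behind LOCATE-SCANNER, in a tiny Python sketch (toy points, not real puzzle data): relative offsets are translation invariant, so a translated copy of a beacon set produces exactly the same offset set when anchored at the corresponding beacon.

```python
def relative_to(b, beacons):
    """Offsets of every beacon from b; translating the whole set leaves
    these untouched, which is exactly what the overlap test relies on."""
    return {tuple(o[i] - b[i] for i in range(3)) for o in beacons}

bb1 = [(0, 2, 0), (4, 1, 0), (3, 3, 0)]
shift = (5, -2, 7)
bb2 = [tuple(p[i] + shift[i] for i in range(3)) for p in bb1]

print(relative_to(bb1[0], bb1) == relative_to(bb2[0], bb2))  # -> True
```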

#; DO IT!
(find-beacons (mapcar #'parse-scanner (cl-ppcre:split "\\n\\n" (uiop:read-file-string "src/2021/day19.txt"))))
(setq beacons *)
(length *)
With part 1 done (and working!), part 2 should be pretty easy:

- With the same algorithm we used for part 1, we figure out the list of scanners -- FIND-SCANNERS is 99% copy-pasta of FIND-BEACONS
- Then we check each scanner against every other one, and maximize the Manhattan distance
(defun find-scanners (scanners &aux
                       (queue (list (list (first scanners) '(0 0 0) '((1 0 0) (0 1 0) (0 0 1)))))
                       rez
                       (scanners (rest scanners)))
  (loop while queue
        for (scn1 d10 r10) = (pop queue) do
        (loop for scn2 in scanners do
              (multiple-value-bind (p1 r21) (locate-scanner scn1 scn2)
                (when p1
                  (flet ((transform21 (p2) (v+ (rotate p2 r21) p1))
                         (transform10 (p1) (v+ (rotate p1 r10) d10)))
                    (let ((d20 (transform10 p1))
                          (r20 (m* r10 r21)))
                      (prl (id scn1) d10 r10 (id scn2) p1 r21 d20 r20)
                      (setf rez (cons d20 rez)
                        scanners (remove scn2 scanners :test 'equal)
                        queue (cons (list scn2 d20 r20) queue))))))))
  rez)

#; DO IT!
(find-scanners (mapcar #'parse-scanner (cl-ppcre:split "\\n\\n" (uiop:read-file-string "src/2021/day19.txt"))))
(setq scanners *)
(loop for (s1 . remaining) on scanners maximize
      (loop for s2 in remaining maximize (manhattan-distance s1 s2)))
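The final maximization, sketched in Python on made-up scanner positions:

```python
from itertools import combinations

def manhattan_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Made-up scanner positions, just to show the pairwise maximization.
scanners = [(0, 0, 0), (5, 2, -1), (-3, 4, 7)]
print(max(manhattan_distance(s1, s2)
          for s1, s2 in combinations(scanners, 2)))  # -> 18
```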
Things have gotten...interesting!

2021-12-18 (permalink)

Advent of Code: [2021/18](https://adventofcode.com/2021/day/18)

First off, it's the last _serious_ weekend before December 25th, so expect this **not** to be easy!

We descend into the ocean trench and encounter some snailfish; they say they saw the sleigh keys; they will tell us which direction the keys went only if we help them with a little math problem...sigh!
Snailfish numbers aren't like regular numbers. Instead, every snailfish number is a **pair** - an ordered list of two elements. Each element of the pair can be either a regular number or another pair. Pairs are written as `[x,y]`, where `x` and `y` are the elements within the pair.
Continues:
This snailfish homework is about **addition**. To add two snailfish numbers, form a pair from the left and right parameters of the addition operator. For example, `[1,2]` + `[[3,4],5]` becomes `[[1,2],[[3,4],5]]`.
OK.
There's only one problem: **snailfish numbers must always be reduced**, and the process of adding two snailfish numbers can result in snailfish numbers that need to be reduced.
>
To reduce a snailfish number, you must repeatedly do the first action in this list that applies to the snailfish number:
>
- If any pair is **nested inside four pairs**, the leftmost such pair **explodes**.
- If any regular number is **10 or greater**, the leftmost such regular number **splits**.
Once no action in the above list applies, the snailfish number is reduced.
OK..
During reduction, at most one action applies, after which the process returns to the top of the list of actions. For example, if **split** produces a pair that meets the **explode** criteria, that pair **explodes** before other **splits** occur.
OK...
To **explode** a pair, the pair's left value is added to the first regular number to the left of the exploding pair (if any), and the pair's right value is added to the first regular number to the right of the exploding pair (if any). Exploding pairs will always consist of two regular numbers. Then, the entire exploding pair is replaced with the regular number `0`.
OK....
To **split** a regular number, replace it with a pair; the left element of the pair should be the regular number divided by two and rounded **down**, while the right element of the pair should be the regular number divided by two and rounded **up**. For example, `10` becomes `[5,5]`, `11` becomes `[5,6]`, `12` becomes `[6,6]`, and so on.
OK.....
The homework assignment involves adding up a **list of snailfish numbers** (your puzzle input). The snailfish numbers are each listed on a separate line. Add the first snailfish number and the second, then add that result and the third, then add that result and the fourth, and so on until all numbers in the list have been used once.
OK......
To check whether it's the right answer, the snailfish teacher only checks the **magnitude** of the final sum. The magnitude of a pair is 3 times the magnitude of its left element plus 2 times the magnitude of its right element. The magnitude of a regular number is just that number.
OK.......
Add up all of the snailfish numbers from the homework assignment in the order they appear. **What is the magnitude of the final sum?**
Well, well, well...

OK, first off we need a way to represent snailfish numbers; nothing fancy, we are just going to parse these as lists of objects (i.e. `[`, `]`, or a digit):
(defun parse-snumbers (data) (mapcar #'sread data))

(defun sread (string)
  (flet ((ch->number (ch) (- (char-code ch) (char-code #\0))))
    (loop for ch across string
          unless (eql ch #\,) collect (if (find ch "[]") ch (ch->number ch)))))
Let's confirm it's working as expected:
> (sread "[[1,2],[3,4]]")
(#\[ #\[ 1 2 #\] #\[ 3 4 #\] #\])
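The same tokenizer in Python, for the curious (a sketch, not the Lisp above):

```python
def sread(string):
    """Tokenize a snailfish number into '[', ']' and single-digit ints,
    dropping the commas."""
    return [ch if ch in '[]' else int(ch)
            for ch in string if ch != ',']

print(sread("[[1,2],[3,4]]"))  # -> ['[', '[', 1, 2, ']', '[', 3, 4, ']', ']']
```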
Now, adding two snailfish numbers together translates to:

- Append the second number to the first one
- Wrap it all up inside `[` and `]` characters
- Then _reduce_ the number

Let's do this:
(defun s+ (number1 number2)
  (sreduce (append
             (list #\[)
             number1
             number2
             (list #\]))))
To reduce a snailfish number:

- Try to _explode_ it
- On success, go back to the first step
- Otherwise, try to _split_ it
- On success, go back to the first step
- Otherwise the number is fully reduced, and there is nothing else we can do with it
(defun sreduce (number)
  (loop
    (uiop:if-let (exploded (sexplode number))
      (setf number exploded)
      (uiop:if-let (split (ssplit number))
        (setf number split)
        (return number)))))
_Exploding_ a snailfish number...now that's a tricky one!

To recap what we have to do:

- Find the first pair nested inside 4 pairs, say `[left, right]`
- Replace the current pair with `0`
- Add `left` to the first regular number _to the left_ of the exploded pair
- Add `right` to the first regular number _to the right_ of the exploded pair

We are going to be iterating over the tokens of the number; as we do it, we are going to keep track of:

- the depth of the current element -- increase it by `1` on each `[` element, decrease it by `1` on each `]` one
- the list of processed elements, i.e. _to the left_, in the form of a stack
- the list of yet to process elements, i.e. _to the right_

This way, as soon as the depth of the current element equals `4` we will know we found the number to explode.

When this happens:

- The first element inside _to the right_ will be `left` in the recap above, i.e. the element to add to the first regular number _to the left_
- The second element instead will represent `right`
- The third one will be a `]` -- which we can discard, as the pair just exploded

The last step is to re-assemble the number:

- We add `left` _to the left_, and reverse it (remember, it's a stack)
- Then we add the `0` in place of the exploded number
- We add `right` _to the right_, and append the result

It sounds a bit more complicated than it actually is, so hopefully the following will help clarify the general idea:
(defun sexplode (number &aux (depth 0) to-the-left)
  (loop for (v . to-the-right) on number do
        (cond ((numberp v) (push v to-the-left))
              ((eql v #\]) (push v to-the-left) (decf depth))
              ((and (eql v #\[) (/= depth 4)) (push v to-the-left) (incf depth))
              (t (let ((left (pop to-the-right))
                       (right (pop to-the-right)))
                   (assert (and (eql v #\[) (= depth 4)))
                   (assert (eql (pop to-the-right) #\]))
                   (flet ((add-first (addend list)
                            (loop for (v . to-the-right) on list
                                  collect (if (numberp v) (+ v addend) v) into vv
                                  when (numberp v) append to-the-right into vv and return vv
                                  finally (return vv))))
                     (return
                       (append
                         (reverse (add-first left to-the-left))
                         (list 0)
                         (add-first right to-the-right)))))))))
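Let's try this on one of the examples from the problem statement: in `[[[[[9,8],1],2],3],4]` the leftmost nested pair `[9,8]` should explode, yielding `[[[[0,9],2],3],4]` (the `9` has no regular number to its left, while the `8` gets added to the `1`):
> (sexplode (sread "[[[[[9,8],1],2],3],4]"))
(#\[ #\[ #\[ #\[ 0 9 #\] 2 #\] 3 #\] 4 #\])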
_Splitting_ a snailfish number, on the other hand, should hopefully be easier:

- For each element of the number
- If it's not a regular number, or if it's smaller than `10`, we keep it as is
- Otherwise we found our number to split -- so we split it, and assemble things back together
(defun ssplit (number)
  (loop for (v . to-the-right) on number
        if (or (not (numberp v)) (< v 10)) collect v into to-the-left
        else return (append
                      to-the-left
                      (list #\[)
                      (list (floor v 2) (ceiling v 2))
                      (list #\])
                      to-the-right)))
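A quick check: in `[11,1]` the `11` should split into the pair `[5,6]`.  Note that we have to build the token list by hand here, since SREAD parses one character at a time and two-digit regular numbers only ever appear mid-reduction:
> (ssplit (list #\[ 11 1 #\]))
(#\[ #\[ 5 6 #\] 1 #\])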
OK, the last missing piece of the puzzle is calculating the _magnitude_ of a snailfish number:

- We are going to push the number elements into a stack
- When we find a `]` it means we just closed a pair
- So we pop one item from the stack -- the `right` part of the pair
- We pop one more -- the `left` part of the pair
- We pop and discard one more -- the `[` char
- We do the _math_, i.e. `left * 3 + right * 2`, and push the result into the stack
- Eventually, the only element inside the stack will be the magnitude of the number:
(defun smagnitude (number &aux stack)
  (loop for (ch . remaining) on number do
        (case ch
          (#\[ (push ch stack))
          (#\] (let ((right (pop stack))
                     (left (pop stack)))
                 (assert (eql (pop stack) #\[))
                 (push (+ (* left 3) (* right 2)) stack)))
          (otherwise (push ch stack))))
  (pop stack))
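The problem statement says the magnitude of `[9,1]` is `29`, and that of `[[1,2],[[3,4],5]]` is `143`; let's confirm:
> (smagnitude (sread "[9,1]"))
29
> (smagnitude (sread "[[1,2],[[3,4],5]]"))
143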
Alright, time to implement the solution for part 1:
(defun part1 (numbers) (smagnitude (reduce #'s+ numbers)))
LOL!

Off to part 2:
What is the largest magnitude of any sum of two different snailfish numbers from the homework assignment?
We are just going to brute-force this, taking into account that addition of snailfish numbers is not commutative:
(defun part2 (numbers)
  (loop for (n1 . remaining) on numbers maximize
        (loop for n2 in remaining
              maximize (smagnitude (s+ n1 n2))
              maximize (smagnitude (s+ n2 n1)))))
And that's it!

Final plumbing:
(define-solution (2021 18) (numbers parse-snumbers)
  (values (part1 numbers) (part2 numbers)))

(define-test (2021 18) (3987 4500))
Test run:
> (time (test-run))
TEST-2021/18..
Success: 1 test, 2 checks.
Evaluation took:
  0.197 seconds of real time
  0.187851 seconds of total run time (0.179690 user, 0.008161 system)
  [ Run times consist of 0.020 seconds GC time, and 0.168 seconds non-GC time. ]
  95.43% CPU
  453,704,694 processor cycles
  403,829,360 bytes consed
It conses up a bit more than I thought it would, but there are a lot of APPEND operations going on in there, so it's all right.

PS. I spent way too much time trying to implement a _recursive_ solution for this, but I simply could not wrap my head around it so eventually I gave up and implemented the solution described above.  I did come back to it later though, and finally made it work, though I am not sure if the [recursive solution](https://topaz.github.io/paste/#XQAAAQCbDQAAAAAAAAAUGQimgx+p6PxOfQ0HAVSqOhDGRhLYS4AIXHNjFDrj6MgyqfWSlccS4l0btc+HyT+rEaJLGW07vOLFN859pjh40yell1zYamhcJsv23wn7PYA+QotQJxAOejK0GThz/3JyaALDPO1FOA2Oh9rE+UCIBjPgLv1KgU8B2d4Isir5AoL77R1bJFucR5fG0jnx96p8ADEKuWZ0eer/W2cSVC139qTnga4EZ4U7kIB2ZPkwHwcWifaKp8TShG6LsU1H1LWhjnTRVZ97yPEYWIUM5l91rQVYL2cYFl2IL6hQfK/lafy5migGPSHgDKGRfk0jH0OUfo11MRRUc9kABZzHtKemqfpmgoT2U90kOC4nxD/cTPqX7rvVZoQ0/ECO3Tl/TvbKf9vpSYdGa/6EWYLATSBYI+cHf+kDDhD02OabvQOUVgTmxPiadyqWbi5m4BF6cvHQiANd8pYaxslsvqWp6vVmBMST0Ruvyolglg6x26k3rEqar5xl9RccmxrU1T6xCOSFTGdkbZqAYwk7Yhtw8Tl2eKoC0URbBgWZgVeueZ8aVKnXKdgNRf1BJnuh0Lw1yKOseKlNNkROWE29DU6hDMKwB4WDCs6DRB9qaxN3he3YUQvXPyi3Zh9YQSWQPsjAuLkoKC0iEtjlqitQqdvrv83NcCFHwpnp8OQfOXrY4nKT6RZUnHR+vWaowZVr/51xUkrqatpUJpAGwlvcV4tz9xf5RE742xu0gbuImB1KcpaOM0InJmY+WuibPdpeT8WIAvXAxgsIB3g1aFwsf1MdWJFJhmlQ4B6TOHcBYEaBonWnQVlu+TwEnYeNTxY9qtUKG6qo1qzuhj/7/ziyiNM3ODOCsXMJmXRifRLAuk9Xl28hYHzB6nJtzvARYkM6tUhNBagB+gmsBqQPh4sW2Tv9e9usJv+vxDZQg69XxtW6Vc93rp4snBoDm+n/UzMZBywC1aq5T4//hie7AA==) reads any better than the _stack-based_ one!
PPS. Looking for today's REPL buffer?  It's a [mess](https://topaz.github.io/paste/#XQAAAQD4JwAAAAAAAAAUGQimgulVPYrJhMVjyYOkoNMRrPzuCgpk2vvB8zuHocU/UmwqVZ1sTCnaK0NdCN8xvWnzuO3o0oJ7tsGitzMD6VSohAXfG4WdBUVsQWHtwENe+DuLgLzypkLjE9jdVpaLDtkB1SAZOluLi6VUwc+cK6L7r5mNZ45uwXhWAEB+he1ylnjo1og34aLT4Olg97TfLchGgGxx69H0lWGppixzj8/Ye3xH7xNWjMK5AKgdRw4re0xAJ12W1NjC64skEPhp33qlfUuZhNG6AIF6HoaVw7ihxxO9DRx4hL6+/Q5/ymImMiOBk1M4n5RAa6xXaDGf8sN/YAETfEx688/kVaEJBiYf+ZgnAsMzGn1GLJp/KZipYjReUBGBkbnOzmIvQTaV2s96zAqBDtQz4A+vdGeIvc9QW+3OniYGyCYx5w4FTrh0CmhwVGq0Xk1v/jX/OYGOLx4ESnecHbJS3oH0Pxfivzi2dXqN2GByKB0HAZ3FXbN5icYs+DD3Ba0LGo3G7ajf+ivbM3aozm/vMnTVBCCTTwV9Nawdp5LKKANekBeTqyCKORZ3wDcbYMm5rIvIev2DzXKAj0t2VMFRCaSumE4EtKmgOk0/3+WYz/rFwh8WWBbmM1NijdBixwE2q6qmOdi+wU0r1Fagpga1vRO8F/x2Xn2S7R2QainlYiBMUO6KBbT2Tlow/gB4yrrllQA7NJEiEAxW9y+lgnRnanp94p0FJmT24eFQqahr1/JWUlZN3MWV0iXenX0DVa8u8RM9Sgptk8Ps8WrhBXJVvukOuSUy25jn8prYL3xSNeE9PVK5Y69m1X/AE0bwCX8pW0DLT7a80tuWRtsKamx8kXly2LUFTWCMYkSoZclYlQ3nChB0rrSqKpdpjpQU60/21r7hpA1MUbFI9n7BvDjfH6jhEdKxjt0AJxUEURu7zR2V/l+HH4A3ELNN99xN75pFQ9W1927WtkJdDn8RSSFvqGhIikV2nfrMc/fQZaJUsOfQ8XNxJM3/saiNKUj+zomYIKuVKsBPzgEc15uveinzmXFvOoATeXLAq4ASljHgaBJa9irXVtU1F58/rGkuipGoCSDDR0TGEL9MAgFPDFNAjPmwnSKcJoA9YthyXAhYj4oxhLqNXVVbQXuKD28S9RHQHZet1Zc06nVDixvsX1x7OYXb8eBX1avSMhBRqtczjUrxqMcOiWyQBkpqn/M8xVY/8FaoD1ELMH7Dqvxh7y6tLt+22UGMBIByrC89D7G0JVag4lseMvSbwpLkdScJqAz4TBoOpfG+PRmAqxcvWooCD3Bf0lRwXtzHEXN4brr2lAs1DMrU1nrDFhUy222kG1oUZNe9MckjVRAvlh6YSdmkI4pdNMEP+qrSd1mSQ+lNelL0ihRfpl7sxc+O7sQApbtMrEVOcWxTHaRcEgGmmVkLuIYmZ8HG4rnosSaOZTNLzV0nIRbB8L7Se03CQ6/qx6WFZip/LPW1TpsrtBlbJAPW/XTbMLyTXZT9QWxt9vKvRkdxBwauTXzPWnWkTWLuEmpOA0Jb8nHUpKs8F7qByf1iRlQlM7gea5zHT/tXeWitAZ7AewZK3FKMSOkf9xp9THmed/POhsAmt1jOqTSE2wmY/qgdv5HiVrm80LaESczOeI8l9E1VskJ/RVJ/RueYlI0fZaBoT9Li0DkJnkob6VeIfk1Lxwhp00besZ/7zMgYdp6QoTYIBpiUYlgyzaz757ooHonsb5QurgJRDaT91Bax2xP1gNODPr+y5UEv8cyo4JJgNgIWJVJkugn5Fi3dbln6aemOjc44/9RJ2zBLZ132TECJJ9gTPnpDruCrNDqroA962YmLDLQwPXH0YBJwu9GnUqvP3z24wyk93ytkIrPws7q42H7/6PnpI3uPN3XHwdHkeMw2KN/gCVcCoIJDZo8EFZek6xGROnZYPN8iaLYfDf5b8WH83awmetYvsO60RHx9mURTXwX8OpR+OVYALCbKj8J1wx2WyC/xvdb1TKoleEw5ReKdz+2s5plyO9SDudzNItxWtPDMAvuzSoJ5imnAsOoiuahS3W7LuKoxR60U85Ar34I62BWxxuGLmsZgNEwW0blsRRZpAm288VPHr/Lek9IcV3dXkl5USQJdKFBVrbw/knrnGgRFC6G4SF/joMXVq6fDUKPd/TmhYyUZcmmLmz2r4x8T2g5i18KkDEgkvamZ+WitpTMQ5wARz47L6z5PCSd74ble3VDQ+aFHAVK++mmkDfx2KdCMJCK+82s22cyO1G3j5Vn5R44EsUmyk8k3smaRoY71KYt4wWIbMsg/8mgdmr++pZd1bxNLWhzIhxIGqBVwIV0BQzherLiDMSZY9nc85mCzvBcrZz3ar+5b70cRXbnNjlwYRphta5ilxiY8Zjjj0okzz6OONYfCVCoPx5G3sNsq/7BBSvVnBJVDVZ56HSPdmnG0qQvQoScEmhaGa2MggqwdcbbbInyB3HZ3IITNI2qWBgkyl6gwTYlfcK9z4f+mGCONhMDtT/5oA/POEXuF0TciMfGpbqCG0+etGupHsaL79B2OABKIMuyFdaCiW7c6oJaCPnUpjvK2NPnRPNTq6CkvBm3yiNsjPdcMIg2hbeO+2lbbqzZDxW9gzre5wcOS2+saBulbuqijxa60wF/oyM7y1yJQfKhZqAYJKDOrpJKhFvtksepJ4IRcQ/WyzIaPPSSc2cTlpWS5OijgRUavVerCAT1ebCepnnQoG+D5BX+LOhURz4CrwoCSmkDfsqATXL4KciCKrHG60Yxa4K9cQLYO30xTSFKSKlA+l7oyxJ2DoBuHS6fece+pJtjmb2au8luaTqcTuQhMpboORdZlsKhDeLk2QlLibtwUOBROxRiBIGifPaaxC6FeYm2ykUtGDTHsDYnsktG8kkRfs5g5Ir97/Fsu8//8jFu7)...
PPPS. People on [r/adventofcode](https://www.reddit.com/r/adventofcode/comments/rizw2c/2021_day_18_solutions/hp14ia0/) have been mentioning [Zippers](https://en.wikipedia.org/wiki/Zipper_(data_structure)) as a way to solve today's problem; it's the first time I hear of them, so I might as well look into this again and implement an alternate solution!

2021-12-17 (permalink)

Advent of Code: [2021/17](https://adventofcode.com/2021/day/17)

We treat ourselves to a little physics problem today:
Ahead of you is what appears to be a large ocean trench. Could the keys have fallen into it? You'd better send a probe to investigate. The probe launcher on your submarine can fire the probe with any integer velocity in the `x` (forward) and `y` (upward, or downward if negative) directions. For example, an initial `x`,`y` velocity like `0,10` would fire the probe straight up, while an initial velocity like `10,-1` would fire the probe forward at a slight downward angle.
Continues:
The probe's `x`,`y` position starts at `0,0`. Then, it will follow some trajectory by moving in **steps**. On each step, these changes occur in the following order:
>
- The probe's `x` position increases by its `x` velocity.
- The probe's `y` position increases by its `y` velocity.
- Due to drag, the probe's x velocity changes by `1` toward the value `0`; that is, it decreases by `1` if it is greater than `0`, increases by `1` if it is less than `0`, or does not change if it is already `0`.
- Due to gravity, the probe's `y` velocity decreases by `1`.
OK, let's create a PROBE structure with `x`, `y`, `vx` and `vy` slots, and let's add a MOVE function as well, implementing the _step_ logic above:
(defstruct (probe (:conc-name) (:constructor %make-probe)) x y vx vy)

(defun make-probe (vx vy) (%make-probe :x 0 :y 0 :vx vx :vy vy))

(defun move (probe)
  (with-slots (x y vx vy) probe
    (let ((dx (<=> vx 0)))
      (incf x vx)
      (incf y vy)
      (decf vx dx)
      (decf vy)))
  probe)
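A quick test: assuming `<=>` is the usual three-way comparison helper from this codebase's utility belt (returning `-1`, `0`, or `1` depending on how its first argument compares to the second), firing a probe with velocity `7,2` and moving it one step should land it at `7,2`, with its velocity decayed to `6,1`:
> (move (make-probe 7 2))
#S(PROBE :X 7 :Y 2 :VX 6 :VY 1)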
For the probe to successfully make it into the trench, the probe must be on some trajectory that causes it to be within a **target area** after any step. The submarine computer has already calculated this target area (your puzzle input).
Here is an example input:
target area: x=20..30, y=-10..-5
For this, we are going to create a new structure, AREA, having slots for the min / max `x` and `y` values it represents; we are also going to implement PARSE-INPUT to extract `x1`, `x2`, `y1`, and `y2` values from our input, and create a new AREA instance off of it:
(defstruct (area (:conc-name) (:constructor %make-area)) x1 x2 y1 y2)

(defun make-area (x1 x2 y1 y2)
  (%make-area :x1 (min x1 x2) :x2 (max x1 x2)
              :y1 (min y1 y2) :y2 (max y1 y2)))

(defun parse-input (data)
  (flet ((parse (string)
           (cl-ppcre:register-groups-bind ((#'parse-integer x1 x2 y1 y2))
               ("target area: x=(-?\\d+)..(-?\\d+), y=(-?\\d+)..(-?\\d+)" string)
             (make-area x1 x2 y1 y2))))
    (parse (first data))))
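Running this against the example input should produce an AREA with the expected bounds:
> (parse-input '("target area: x=20..30, y=-10..-5"))
#S(AREA :X1 20 :X2 30 :Y1 -10 :Y2 -5)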
Find the initial velocity that causes the probe to reach the highest y position and still eventually be within the target area after any step. **What is the highest y position it reaches on this trajectory?**
To recap:

- We know the position where the probe is shot from, i.e. `0,0`
- We know where the target area is, i.e. our input
- We know how the probe moves
- We need to figure out the initial velocity which a) maximises the `y` value, and b) causes the probe to eventually hit the target area (without overshooting!)

We are going to brute-force it:

- For all the horizontal velocities in a _sensible_ range
- For all the vertical velocities in a _sensible_ range
- Shoot the probe
- Check that the probe does not miss the target area
- Maximise `y` values
(defun solve (target)
  (loop for vx from (vx-min target) to (vx-max target) maximize
        (loop for vy from (vy-min target) to (vy-max target)
              when (shoot target vx vy) maximize it)))
The tricky part here is of course figuring out some _sensible_ ranges to use for the horizontal and vertical velocities, but let us worry about the simulation part first:

- We shoot
- We keep on moving the probe until it gets past the target
- We keep track of the maximum reached `y` value, and if the probe ever makes it to the target area, we return it
(defun shoot (target vx vy &aux (probe (make-probe vx vy)))
  (loop until (past-target-p target probe)
        when (on-target-p target probe) return y-max
        do (move probe)
        maximize (y probe) into y-max))
A probe gets past the target area if its `y` value is smaller than the area's _smallest_ `y` value:
(defun past-target-p (target probe) (< (y probe) (y1 target)))
On the other hand, a probe hits the target if its `x` and `y` coordinates are contained within the box defined by the target area min / max `x` and `y` values:
(defun on-target-p (target probe)
  (and (<= (x1 target) (x probe) (x2 target))
       (<= (y1 target) (y probe) (y2 target))))
Now, let's do some guesswork to figure out some _sensible_ ranges to use for our horizontal and vertical velocities.

The maximum horizontal velocity we can shoot the probe at is the maximum `x` value of the target area; one more, and we are going to miss the target:
(defun vx-max (target) (x2 target))
What about its minimum value? Well, the probe loses horizontal speed at each step, and if we don't want to fall short of our target, the probe has to reach the target area before its horizontal speed drops to `0`; so, if we start from the minimum `x` value of the target area, and keep on _accelerating_ towards `x=0`, we should find our minimal horizontal velocity value:
(defun vx-min (target &aux (x (x1 target)))
  (loop while (> x 0) for vx from 0 do (decf x vx) finally (return vx)))
On the vertical axis too we lose speed at every step; but what's interesting is that if we shoot up with a vertical speed equal to `10`, by the time the probe reaches `y=0` again (i.e. after it has decelerated, reached its highest `y` value with a vertical speed of `0`, and then come down again), its vertical speed will be `-10`; the probe keeps on moving faster and faster after it has come back to `y=0`, and if we don't want to miss our target, we cannot set the vertical speed any higher than _minus the target's min `y` value_:
(defun vy-max (target) (- (y1 target)))
The minimum value?  Well, `0` of course, given that we are trying to shoot the probe as high as possible:
(defun vy-min (target) (declare (ignore target)) 0)
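With all the pieces in place, we can sanity-check against the example target area from the puzzle, for which the problem statement says the highest reachable `y` position is `45`:
> (solve (make-area 20 30 -10 -5))
45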
Cool!
> (solve (parse-input (uiop:read-file-lines "src/2021/day17.txt")))
5565
We change our mind for part 2, and rather than focusing on shooting as high as possible, we want to count all the distinct velocities that cause the probe to hit target:
How many distinct initial velocity values cause the probe to be within the target area after any step?
First we change SOLVE to cater for both part 1 and part 2 requirements:

- For each horizontal velocity
- For each vertical velocity
- If we eventually hit the target
- Maximize the highest `y` value reached by the trajectory
- And add `1` to a counter
(defun solve (target &aux (part1 0) (part2 0))
  (loop for vx from (vx-min target) to (vx-max target) do
        (loop for vy from (vy-min target) to (vy-max target) do
              (uiop:if-let (highest (shoot target vx vy))
                (setf part1 (max part1 highest)
                      part2 (1+ part2)))))
  (values part1 part2))
We are not done yet!  We were trying to maximize the `y` value of our trajectories previously, so we had our VY-MIN always return `0`; well, for part 2 we need to look into negative vertical velocities too, and the smallest sensible value is the one that takes the probe straight to the smallest `y` value of the target area in a single step -- any lower, and we would overshoot it!
(defun vy-min (target) (y1 target))
And that's it!

Final plumbing:
(define-solution (2021 17) (target parse-input) (solve target))

(define-test (2021 17) (5565 2118))
Test run:
> (time (test-run))
TEST-2021/17..
Success: 1 test, 2 checks.
Evaluation took:
  0.338 seconds of real time
  0.307870 seconds of total run time (0.297925 user, 0.009945 system)
  91.12% CPU
  779,177,010 processor cycles
  1,669,536 bytes consed
PS. Here is my [REPL buffer](https://topaz.github.io/paste/#XQAAAQDNCgAAAAAAAAAUGQimgx+p6PZhRh7uIPIyuyAIK6ZLP67xEPqz09kRR9BiodGX5LsCfgvy7BcuypYSfKKqx+AtXqQWFxe0nzhH61j8l3QKC7iUulObpvghgkmWSzGbWXfAo2DfFk/uD/VFsTKmOVQ8gM51BSHXEnuVpXLu3En+2I7QlQuk+v/qdHpkX1k2pvnzU3FY17neHkMiN07lznDaUuQ9FveRKi8KRjH0Tkbz6+e4b1GESJw0SPeXeqbQJIVjsbogeqPejiqBNkvWzwHsvNNDCUMoVFWzeK+lYgUPLHrDPQDnia5mR2YZnmkldkqJN8i1H6o70I47UqxX+Sr37anfTCLucm4vhiVOo9DnaqRoazpfALZhTmwAFxV1NNbaVoXrJtfuEDApdHKT1UPG9PBVcHC/huBEqL7oWGklWU7yaCvsjVgokBwUxOc8XFzV+HSM4BhNrlDcpVLIZGWnG2Ga1SM3IXYUV3/Dcmsux4713urVdHE84mZoY0ZsUmkxYlH5mGLtAL/kop2VawVsveSpBF4ZD7v3a+5fhP88J90bg2dxPi3Dj+wKaQqwq/CACIWwXVWmt+Ea1RstX9fcvOOqp2LY5FhBAaR0TyrEPa+iVawT/eboY2CiYDuQq33e/ENI3a2LYll5GBvMycU2RlaTL7xVXHfjt+lxmu74+DYOq0UosOCaou9aQaMwYlH2X8ecR9j0/t1Fsp+Yfuc8VZUE2jt9DQ1Ix1s3b63dHAG8V4YdtSjHutjGjrDsIZ2a5irbaD83PZsecsxhBgAXouytNJOBp5nwEAc1QFe3Z78PB6PuVU8JWvYQZn9sgVzGouy6LaVcfJfjvKbbTFnCFquXDpYrUf+mbDZak+pxubXUYZs+N6L9iua9OEBYhysjz7cCffsi9zHJI6aMkDC0H3on3i8sQwuhLC2vt/0FPvFLy6ubHhHS/BaKfoMu+whNzHLXFxr1Xf6a3B9yzxDFPpVB92TqaCebEaIG1rb0aTXIjl664T+qAiRBvwTbteQk3EIoKStPd7Cj1MOfuedOXhM/k9xXmxZglPi1LS9dPJlfqCHf3TgQFieew3CzCe0xkGTmzEQTQ2BkxH1Ijuv7/v1CKw==) for the day, and I kind of feel ashamed by the things you can see there:

- `x1` and `x2` are min / max, while `y1` and `y2` are max / min -- why?!
- PAST-TARGET-P was not working, so inside SIMULATE I ended up breaking out of the loop if the probe's `y` value was smaller than `-200` -- lol
- With pen and paper I (thought I had) figured out the _optimal_ vertical speed for part 1; but it was not working with the input, hence the 1+
- For part 2 instead, I made the search space big enough to make it work with the example, and then fed the actual input to it -- and `304.470` seconds later, it spit out the right answer!

It's clear my brain is not _functioning_ too well under time pressure, but _whatever it takes_ right?! ;-)

2021-12-16 (permalink)

Advent of Code: [2021/16](https://adventofcode.com/2021/day/16)

We reached open waters; we received a transmission from the Elves; the transmission uses the Buoyancy Interchange Transmission System (BITS), a method for packing numeric expressions into a binary sequence; we need to figure out what the Elves want from us.

(It's one of those wordy problems, so sit tight!)
The first step of decoding the message is to convert the hexadecimal representation into binary. Each character of hexadecimal corresponds to four bits of binary data:
Easy:
(defun parse-input (data)
  (hexadecimal->binary (first data)))
The BITS transmission contains a single packet at its outermost layer which itself contains many other packets. The hexadecimal representation of this packet might encode a few extra 0 bits at the end; these are not part of the transmission and should be ignored.
Let's update PARSE-INPUT to account for that:
(defun parse-input (data)
  (parse-packet (hexadecimal->binary (first data))))
Now, we know we are going to be fiddling with bits a lot (well, `0` and `1` characters more than bits), so let's create ourselves BIN, a utility function to:

- extract a range of characters from a given string
- parse that into a binary number

PARSE-INTEGER already does all this, so BIN will just be a tiny wrapper around it:
(defun bin (s start &optional (end (length s)))
  (parse-integer s :start start :end end :radix 2))
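For example, the first `3` bits of the `D2FE28` example packet from the problem statement (binary `110100101111111000101000`) encode the packet version, `6`:
> (bin "110100101111111000101000" 0 3)
6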
Now we have everything we need to implement the parser.  Let's begin:
Every packet begins with a standard header: the first three bits encode the packet **version**, and the next three bits encode the packet **type ID**. These two values are numbers; all numbers encoded in any packet are represented as binary with the most significant bit first. For example, a version encoded as the binary sequence `100` represents the number `4`.
Continues:
Packets with type ID `4` represent a **literal value**. Literal value packets encode a single binary number. To do this, the binary number is padded with leading zeroes until its length is a multiple of four bits, and then it is broken into groups of four bits. Each group is prefixed by a `1` bit except the last group, which is prefixed by a `0` bit. These groups of five bits immediately follow the packet header.
OK, let's give PARSE-LITERAL a shot:

- We pop `1` bit from the stream, and interpret it as the _are there more groups?_ flag -- MOREP
- We pop `4` more bits from the stream, and _add_ them to `literal` -- we are reading `4` bits at a time, starting from the most significant ones, so for every group we need to multiply the current result by `16` and then add the current group to it
- We do this until there are no more groups to read
- When done, we return the literal value and the number of consumed characters
(defun parse-literal (s &optional (start 0) &aux (literal 0))
  (loop
    (let ((morep (= (bin s start (incf start)) 1)))
      (setf literal (+ (* literal 16) (bin s start (incf start 4))))
      (unless morep (return (values literal start))))))
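Let's test this with the `D2FE28` example from the problem statement: its literal groups start right after the `6`-bit header, and should decode to `2021` (with `21` characters of the stream consumed overall):
> (parse-literal "110100101111111000101000" 6)
2021
21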
Every other type of packet (any packet with a type ID other than `4`) represent an **operator** that performs some calculation on one or more sub-packets contained within. Right now, the specific operations aren't important; focus on parsing the hierarchy of sub-packets.
Continues:
An operator packet contains one or more packets. To indicate which subsequent binary data represents its sub-packets, an operator packet can use one of two modes indicated by the bit immediately after the packet header; this is called the **length type ID**:
>
- If the length type ID is `0`, then the next **15** bits are a number that represents the **total length in bits** of the sub-packets contained by this packet.
- If the length type ID is `1`, then the next **11** bits are a number that represents the **number of sub-packets immediately contained** by this packet.
>
Finally, after the length type ID bit and the 15-bit or 11-bit field, the sub-packets appear.
OK, that's a lot to unpack, but let's see what we can do:

- We pop `1` bit and do a switch/case on it
- If `0`, we pop `15` bits (the number of bits used by the current packet sub-packets), and recurse into PARSE-PACKET until we cannot parse packets anymore (note: the `3` there is used to discard any padding value introduced by the hex to bin conversion)
- If `1` instead, we pop `11` bits and recurse into PARSE-PACKET a number of times equal to the number we just read
- Lastly, we return all the sub-packets we read, plus the number of consumed characters
(defun parse-operator (s &optional (start 0) &aux pkts)
  (ecase (bin s start (incf start))
    (0 (let* ((remaining (bin s start (incf start 15))))
         (loop while (> remaining 3) do
               (multiple-value-bind (pkt consumed) (parse-packet s start)
                 (setf remaining (- remaining (- consumed start))
                       pkts (cons pkt pkts)
                       start consumed)))))
    (1 (dotimes (_ (bin s start (incf start 11)))
         (multiple-value-bind (pkt consumed) (parse-packet s start)
           (setf pkts (cons pkt pkts) start consumed)))))
  (values (reverse pkts) start))
All that's left now is to implement PARSE-PACKET:

- The first `3` bits represent the packet version
- The next `3` bits represent the packet type
- Next is the call to either PARSE-LITERAL or PARSE-OPERATOR
- The result is all wrapped up inside a PLIST with: `:version`, `:type`, `:literal`, and `:sub-packets`
(defun parse-packet (s &optional (start 0))
  (let ((version (bin s start (incf start 3)))
        (type (bin s start (incf start 3))))
    (multiple-value-bind (parsed start)
        (if (= type 4) (parse-literal s start) (parse-operator s start))
      (values (list :version version
                    :type type
                    :literal (if (= type 4) parsed)
                    :sub-packets (if (/= type 4) parsed))
              start))))
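Parsing the whole `D2FE28` example packet should then give us a version-`6`, type-`4` literal packet:
> (parse-packet "110100101111111000101000")
(:VERSION 6 :TYPE 4 :LITERAL 2021 :SUB-PACKETS NIL)
21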
OK, but what's the task for part 1?
Decode the structure of your hexadecimal-encoded BITS transmission; **what do you get if you add up the version numbers in all packets?**
- We take the `:version` of the current packet
- We recursively calculate the version-sum of all the sub-packets
- Finally we add it all up together
(defun part1 (pkt)
  (labels ((recur (pkt)
             (+ (getf pkt :version)
                (loop for next in (getf pkt :sub-packets)
                      sum (recur next)))))
    (recur pkt)))
Cool!

For part 2 instead, we are asked to calculate the value of the expression the packet represents:
Literal values (type ID `4`) represent a single number as described above. The remaining type IDs are more interesting:
>
- Packets with type ID `0` are **sum** packets - their value is the sum of the values of their sub-packets. If they only have a single sub-packet, their value is the value of the sub-packet.
- Packets with type ID `1` are **product** packets - their value is the result of multiplying together the values of their sub-packets. If they only have a single sub-packet, their value is the value of the sub-packet.
- Packets with type ID `2` are **minimum** packets - their value is the minimum of the values of their sub-packets.
- Packets with type ID `3` are **maximum** packets - their value is the maximum of the values of their sub-packets.
- Packets with type ID `5` are **greater than** packets - their value is **1** if the value of the first sub-packet is greater than the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets.
- Packets with type ID `6` are **less than** packets - their value is **1** if the value of the first sub-packet is less than the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets.
- Packets with type ID `7` are **equal to** packets - their value is **1** if the value of the first sub-packet is equal to the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets.
Of all the operators listed above, only three behave differently from builtins: >, <, and =; builtins all return T or NIL, while we want to return `1` or `0` here; let's fix that:
(defun b> (a b) (if (> a b) 1 0))
(defun b< (a b) (if (< a b) 1 0))
(defun b= (a b) (if (= a b) 1 0))
**What do you get if you evaluate the expression represented by your hexadecimal-encoded BITS transmission?**
Again, another recursive solution:

- If the current packet is a _literal_ one, we return the value
- Otherwise we pick the _right_ operator function based on the type
- We recursively _evaluate_ all the sub-packets
- Finally, we apply the operator to the values returned by the sub-packets
(defun part2 (pkt)
  (labels ((recur (pkt &aux (literal (getf pkt :literal)))
             (cond (literal literal)
                   (t (apply
                        (aref #(+ * min max identity b> b< b=) (getf pkt :type))
                        (loop for next in (getf pkt :sub-packets)
                              collect (recur next)))))))
    (recur pkt)))
And that's it!

Final plumbing:
(define-solution (2021 16) (pkt parse-input)
  (part2 pkt))

(define-test (2021 16) (943 167737115857))
Test run:
> (time (test-run))
TEST-2021/16..
Success: 1 test, 2 checks.
Evaluation took:
  0.002 seconds of real time
  0.002154 seconds of total run time (0.001430 user, 0.000724 system)
  100.00% CPU
  5,031,393 processor cycles
  1,062,096 bytes consed
Note: replacing APPLY with CONS inside PART2 will cause the function to actually output the AST of the expression stored inside the packet:
> (defun part2 (pkt)
    (labels ((recur (pkt &aux (literal (getf pkt :literal)))
               (cond (literal literal)
                     (t (cons
                          (aref #(+ * min max identity b> b< b=) (getf pkt :type))
                          (loop for next in (getf pkt :sub-packets)
                                collect (recur next)))))))
      (recur pkt)))
PART2

> (part2 (parse-input (uiop:read-file-lines "src/2021/day16.txt")))
(+ (MIN 2479731930) 7 (* (B< 10381 100) 8) (* 188 242 135) (* 236 42)
   (MAX 57613) (* 189 (B> 2027 2269116)) (* 131 104 201 105 155)
   (* 2920854314 (B> (+ 6 3 7) (+ 13 3 15))) 6
   (* 2165 (B= (+ 14 8 5) (+ 2 15 7))) 38575271553
   (* (B< (+ 12 5 3) (+ 11 6 12)) 41379) (* (B> 727598 727598) 752530)
   (* 2939 (B< 1007934 660540)) (MAX 498405352 113 1358) (* 3146 (B> 5 5))
   17083744931 (* 11) (MIN 214 46980) 4 (MIN 1 25270 47 240)
   (MAX 247 45266 27 147874 49322) (* 45822 (B< 2491 2491)) 12
   (* (B= 129 1004426) 4030837) 2368 172 (MAX 186 479 31 225) (+ 62424214260)
   (* 3722699 (B= 232 232)) (* 220 (B< 3667 50625)) (+ 960 9)
   (+ 852362 59034 3 10186466 44654) (* (B> 33431 39079) 712787)
   (* 146 140 4 55) (+ 3682 248 76) (MAX 14 184) (MIN 2 1206 15 1476 5005)
   (MAX
    (*
     (+
      (MIN
       (+
        (+
         (+
          (MAX
           (MIN
            (MAX
             (MIN (MIN (+ (MIN (+ (MIN (+ (+ (MAX (* 2673))))))))))))))))))))
   (* 1343 (B> 38853 33177)) (* 119 (B< (+ 3 14 2) (+ 13 8 3)))
   (+ (* 4 5 7) (* 6 7 10) (* 11 3 3)) (+ 202 2004893107 2899 107)
   (* 901241660305 (B> (+ 4 4 8) (+ 13 6 3)))
   (* (+ 9 3 12) (+ 9 3 15) (+ 11 4 12)) (* (B= 217 40) 2513437316) 13
   (* 11 (B< 2706 2706)) (MIN 63 216 14909) (* (B< 3296 812743) 52657) 72078562
   (* (B> 144588154 652519) 5077920))
You feed that to EVAL, and there you will have the answer for part 2:
> (eval *)
167737115857
For comparison, here is the [code](https://topaz.github.io/paste/#XQAAAQDICwAAAAAAAAAUGQimgx+p6GAb9hHoZ3fvJbGJPjDLntuErw51NVAnlOviYJRwBSse/MwTGbUr6VRb/SGDBS7kDEyk5SGdBerzuXLz99I6xz4UE07I7T9xro7z8E3/5FvKuhBM1EyjH89HLViGE1wOfzuYF4dSuAU0kZjDxsAacWmaJ2GqNu7x7WpDV0pNg5DozIPKhOWVzh70FgJjwFGNxhijdV7azmNxn8SZ008PetsCLXUXQ1FT/jqVGAmTyqN/2vDApllBSrD13UxM0cWt63WWmkbx/xQzqCBsb4DoZb/w2+FqOQd83gKDS/YWR23w3BaG2T67KyKI1Zu0dbqWc3rr5M7rqtcyJM84KNZqz2hW3W27glGGzniv8ThG+pxCKZfptUY1/G2ZDQdAVOyVHeZN06972GEJJNtm6PD0PtRWy/Vac7vRmCYPh5K8FmWkQuEBYJAGqOD7jwn/8L9wX+E6jc35rN7bcf4phh/4OjrifPuV89u2hmK0juizfB24T9DBvuStiUgbAWf4pGLPMkQEDw2ADeL5tU121zh/Agza6vISnSjJ5efeWPNvWo68dqURjJRtO8l1mAWTKCsbsOe7VX3cvnu57rEbt6tfLejUcaUIdOuO+91RP4DtJ9n15vAiwuH3xMrz2q+8XT2A3EFbeh/ASEfcssk4hPcnopj7c6sVWLbY7sQpKjcCIYBRqxVR1aHZvV+XPIoix1x5SxSWv2Uj0hGfQesMP8fbKFNlflpC6tMebZ6u09GY+wnvWGBAiQV8HQL3SNF5kYOuqvaIy+2dX7cRtMKM0/sgg3SFKh75KtxYCY2RNSZE1jgTZtiLlsN+1Kh9WJNfMQhEUU0VdD1h24ZUPmQy7jDvXKGs1E/ddB1zxHVmNRz1L1poE18CJIcEOiFIqfgj4gubRTzFJsp8J1ZMJxi4h2N4ZNOersMTz7iRvzx1UkPK1uQS9WftxQ7CWueVHa6GGM3lABVXI0vW/GS2MZ1MSvc4/ZPSqrvx50dQIT4AR2fYE+s3R6bCAM612MP+q1lHcegVLv6Z2E5+Z+rVH1eEOk/3wFkc31NjP98pIJPoil/i6m9Yfuuh5tscRUqoDRUF58qlwxU2zXIlnjv9T3KcBlsZagbElfimHt9YpMUiEM13iJo8rH6K+TUCJMARFj/wJDdwMdo2ECMGwye3sDPpfFbV1D4ZPequ7prPK+VCCltmSKCugTNi/UPntF73fQI2AyMNzOxqXPm/LiUnL7d9fX8EUqLRe5chvLeMt0bvrJw8Tt1MSjBCNRTotImsijgWTz/91cNA) that I used to get the stars; in there:

- The _parser_ and the _evaluator_ are the same thing, so to sum the versions and evaluate the expression (part 1 and part 2 respectively) I had to resort to global ^Wspecial variables to keep track of the sum so far
- I was creating new strings all over the place, instead of simply playing with indices

Note for the future self: when implementing a parser, think about PARSE-INTEGER:

- It uses a string as its input stream
- It optionally can be told to skip a certain amount of characters from the beginning of the string
- It returns not only the parsed object, but also the index of the next unprocessed character -- and that makes for easier composition of the different parsing functions you most likely are going to implement
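The same interface translates readily outside of Lisp; here is a minimal Python sketch (function names are mine, purely illustrative) of parsers that, PARSE-INTEGER-style, take a string plus a start index and return both the parsed value and the index of the next unprocessed character, which makes them trivially composable:

```python
def parse_int(s, start=0):
    """Parse a run of digits; return (value, next_index), like PARSE-INTEGER."""
    i = start
    while i < len(s) and s[i].isdigit():
        i += 1
    return int(s[start:i]), i

def parse_sum(s, start=0):
    """Parse 'A+B' by composing parse_int calls via the returned indices."""
    a, i = parse_int(s, start)
    assert s[i] == "+"
    b, i = parse_int(s, i + 1)
    return a + b, i
```

Each parser picks up exactly where the previous one left off, so threading the index is all the plumbing you need.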

<3

2021-12-15 (permalink)

Advent of Code: [2021/15](https://adventofcode.com/2021/day/15)
You've almost reached the exit of the cave, but the walls are getting closer together. Your submarine can barely still fit, though; the main problem is that the walls of the cave are covered in chitons, and it would be best not to bump any of them.
Continues:
The cavern is large, but has a very low ceiling, restricting your motion to two dimensions. The shape of the cavern resembles a square; a quick scan of chiton density produces a map of risk level throughout the cave (your puzzle input).
For example:
1163751742
1381373672
2136511328
3694931569
7463417111
1319128137
1359912421
3125421639
1293138521
2311944581
You start in the top left position, your destination is the bottom right position, and you cannot move diagonally. The number at each position is its **risk level**; to determine the total risk of an entire path, add up the risk levels of each position you **enter** (that is, don't count the risk level of your starting position unless you enter it; leaving it adds no risk to your total).
OK; as usual, let's start with the input -- let's parse ours into a HASH-TABLE:
(defun parse-risk-levels (data &aux (levels (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string do
              (setf (gethash (list r c) levels)
                    (- (char-code ch) (char-code #\0)))))
  levels)
Next we need to find the path, starting from the top left corner and ending at the bottom right one, that minimizes the total risk, i.e. the sum of all the risk levels along the path.

Well, it looks like all we are going to need here is A*, with the usual MANHATTAN-DISTANCE to the end state as the heuristic:
(defun lowest-risk (levels &aux
                           (row-max (loop for (r _) being the hash-keys of levels maximize r))
                           (col-max (loop for (_ c) being the hash-keys of levels maximize c))
                           (end (list row-max col-max)))
  (search-cost (a* '(0 0)
                   :goal-state end
                   :test 'equal
                   :neighbors (partial-1 #'neighbors levels)
                   :heuristic (partial-1 #'manhattan-distance end))))
Pretty standard NEIGHBORS function (note: we are not allowed to move diagonally):
(defparameter *nhood* '((-1 0) (0 1) (1 0) (0 -1)))

(defun neighbors (levels p)
  (loop for d in *nhood* for n = (mapcar #'+ p d)
        for level = (gethash n levels)
        when level collect (cons n level)))
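For reference, the same search can be sketched in Python (this uses plain Dijkstra via heapq rather than the A* helper from my utilities file, and assumes a full rectangular grid stored as a dict):

```python
import heapq

def lowest_risk(levels):
    """levels: dict mapping (row, col) -> risk, over a full rectangular grid.
    Return the minimum total risk of a path from (0, 0) to the bottom-right
    corner (the starting position's own risk is not counted)."""
    end = max(levels)  # lexicographic max == (row-max, col-max) on a full grid
    best = {(0, 0): 0}
    queue = [(0, (0, 0))]
    while queue:
        cost, (r, c) = heapq.heappop(queue)
        if (r, c) == end:
            return cost
        if cost > best[(r, c)]:
            continue  # stale queue entry, already improved upon
        for dr, dc in ((-1, 0), (0, 1), (1, 0), (0, -1)):
            n = (r + dr, c + dc)
            if n in levels and cost + levels[n] < best.get(n, float("inf")):
                best[n] = cost + levels[n]
                heapq.heappush(queue, (best[n], n))
```

On the example map above this returns `40`, the expected part 1 answer.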
Et voila`!  Let's take a look at part 2 now:
The entire cave is actually **five times larger in both dimensions** than you thought; the area you originally scanned is just one tile in a 5x5 tile area that forms the full map. Your original map tile repeats to the right and downward; each time the tile repeats to the right or downward, all of its risk levels **are 1 higher** than the tile immediately up or left of it. However, risk levels above `9` wrap back around to `1`.
And:
Using the full map, what is the lowest total risk of any path from the top left to the bottom right?
So to repeat:

- We need to expand the input map, both horizontally and vertically
- Every time we do this (again, 5 times horizontally, and 5 times vertically), we use the risk levels of the input map, incremented by 1, 2, 3...(0 for the original tile, +1 for the second one, +2 for the third one...)
- When risk levels get higher than `9`, they wrap around starting from `1` (and not `0`)

Let's have the MASSAGE function take care of all that:
(defun massage (levels &aux
                       (rows (1+ (loop for (r _) being the hash-keys of levels maximize r)))
                       (cols (1+ (loop for (_ c) being the hash-keys of levels maximize c)))
                       (rez (make-hash-table :test 'equal)))
  ;; Extend the map -- horizontally first
  (loop for (r c) being the hash-keys of levels using (hash-value risk) do
        (loop for i below 5
              for c1 = (+ c (* cols i))
              for risk1 = (+ risk (* 1 i)) do
              (setf (gethash (list r c1) rez) (if (> risk1 9) (- risk1 9) risk1))))
  ;; Then vertically
  (setf levels (copy-hash-table rez)) ;; copy because we are changing it as we scan it
  (loop for (r c) being the hash-keys of levels using (hash-value risk) do
        (loop for i below 5
              for r1 = (+ r (* rows i))
              for risk1 = (+ risk (* 1 i)) do
              (setf (gethash (list r1 c) rez) (if (> risk1 9) (- risk1 9) risk1))))
  rez)
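The wrap-around can also be expressed with modular arithmetic, i.e. `((risk - 1 + i) % 9) + 1`; here is the whole expansion sketched in Python (a sketch, assuming the same dict-keyed-by-`(row, col)` representation), doing both dimensions in a single pass:

```python
def massage(levels):
    """Tile the (row, col) -> risk dict 5x in each dimension, bumping each
    risk by the tile distance and wrapping values above 9 back to 1."""
    rows = max(r for r, _ in levels) + 1
    cols = max(c for _, c in levels) + 1
    return {
        (r + rows * i, c + cols * j): (risk - 1 + i + j) % 9 + 1
        for (r, c), risk in levels.items()
        for i in range(5)
        for j in range(5)
    }
```

Subtracting `1` before the modulo and adding it back is what makes `10` wrap to `1` instead of `0`.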
We call MASSAGE first, then pass the result to LOWEST-RISK, and that's it!

Final plumbing:
(define-solution (2021 15) (risk-levels parse-risk-levels)
  (values (lowest-risk risk-levels) (lowest-risk (massage risk-levels))))

(define-test (2021 15) (745 3002))
Test run:
> (time (test-run))
TEST-2021/15..
Success: 1 test, 2 checks.
Evaluation took:
  1.256 seconds of real time
  1.193335 seconds of total run time (1.131199 user, 0.062136 system)
  [ Run times consist of 0.097 seconds GC time, and 1.097 seconds non-GC time. ]
  94.98% CPU
  2,890,660,005 processor cycles
  206,780,240 bytes consed
A few closing notes:

- I already had an [A*](https://topaz.github.io/paste/#XQAAAQDdBwAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHR/lSrXrRzt2ru8sshOcmQ5UeY7P5AZQhS4ncXon3a6r8j5K/gr4xuK1jjTG4PHnVajamzB6t54TV5dSPS63MDOK6jD6CsHfC+VkV4bms/+I8MmYXI4GU2JxpblEiXKntPh1orBPfGpyTCfhx4v+kHU29G6mF1Y4EM6ozuoAyzjwxpY0eKSSV1zFprANFIIlsaPqm/nWF9W4i224u0tUm3IHb+tnMDrFtscipbIplaGiZN8B2Q1apClGT/+51hyLVNGM+4prkx8VWxBWeWuN62LCmMrY0iqRvFeCwKqpwg4s4VRogYMgeG3AOyY1DlJ0UJFetXLjJ+7pMA+8jUmYyE6xRJd9R+jn/cPnkNyDceLd1iFB/ODUgVXLcyrYb7jFIIeucRNGqkgQ7lwe0xvmyiBLkHyIe3ML8lZQcmhyinz9Ynab880cOGsBYBuTRjCS7dCvLr5XKDjw8SpdnWXgjVrzLA49wE+NZXVkPz6tTVzFUR36unRBsIJZJKfYRi3ZUToT1eFyKKayvxZpI9V1QkNJvJehPwqf797u2T57qMuE1UrtnhSso1kQh/BEo33aCHrNXfO/H71RV95KbXakyaLfqTAeeKA2hUrQSbAFnewyRS+3eKmTvbe0+XL1ltwMKAccDiSJR+DZ8iMZhQx91qBpU6W4Gu5c+XA4oT8DFrhGWR1aHzdLLf2ICtG8VUVnLizIL/+Wv790=) implementation in my utilities file, so part 1 was just plumbing things together -- first time I ranked below 1000!!!
- For part 2 instead, I had to use my brain a little more, but apparently neither I nor my brain were actually ready for it: first I got the wrap-around logic wrong, then I realized I was only expanding the map diagonally, and lastly, I bumped into some weird errors because I was iterating over and modifying the same HASH-TABLE at the same time

Anyways...
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
 15   00:10:48    464      0   00:59:10   1927      0
_"50 fucking minutes to _expand_ a map?!"_ Yes, I know, that's ridiculous!

2021-12-14 (permalink)

Advent of Code: [2021/14](https://adventofcode.com/2021/day/14)
The incredible pressures at this depth are starting to put a strain on your submarine. The submarine has polymerization equipment that would produce suitable materials to reinforce the submarine, and the nearby volcanically-active caves should even have the necessary input elements in sufficient quantities.
Continues:
The submarine manual contains instructions for finding the optimal polymer formula; specifically, it offers a **polymer template** and a list of pair **insertion** rules (your puzzle input). You just need to work out what polymer would result after repeating the pair insertion process a few times.
Here is what the puzzle input looks like:
NNCB

CH -> B
HH -> N
CB -> H
NH -> C
HB -> C
HC -> B
HN -> C
NN -> C
BH -> H
NC -> B
NB -> B
BN -> B
BB -> N
BC -> B
CC -> N
CN -> C
A note about how the polymer is being created:
The following section defines the pair insertion rules. A rule like `AB -> C` means that when elements `A` and `B` are immediately adjacent, element `C` should be inserted between them. These insertions all happen simultaneously.
And finally, the task for part 1:
Apply 10 steps of pair insertion to the polymer template and find the most and least common elements in the result. What do you get if you take the quantity of the most common element and subtract the quantity of the least common element?
As we need to make room for new elements, I figured I would use a list of characters instead of a STRING; so we are going to COERCE the first line into a LIST; the remaining lines instead, we are going to store inside a HASH-TABLE:
(defun parse-input (data)
  (list (coerce (first data) 'list)
        (parse-insertion-rules (cddr data))))

(defun parse-insertion-rules (data &aux (rules (make-hash-table :test 'equal)))
  (flet ((rule (string)
           (cl-ppcre:register-groups-bind (from to)
               ("(\\w+) -> (\\w)" string)
             (setf (gethash (coerce from 'list) rules) (coerce to 'list)))))
    (dolist (string data rules)
      (rule string))))
I think we are going to _simulate_ this:

- For 10 times in a row -- DOTIMES
- We are going to evolve the current polymer -- TICK
- And at the end, we are going to return the expected result -- RESULT
(defun evolve (times input &aux (polymer (first input)) (rules (second input)))
  (dotimes (_ times) (setf polymer (tick rules polymer)))
  (result polymer))
What does TICK look like?

- We traverse the polymer by pairs
- See if the set of insertion rules contain one for the current pair
- If it does, we _collect_ the first element of the pair, plus the output of the insertion rule (i.e. this is how we insert the new element between the two existing ones)
- Otherwise, we _collect_ the first element of the pair only
(defun tick (rules curr)
  (loop for (a b) on curr
        for (z) = (gethash (list a b) rules)
        if z collect a and collect z
        else collect a))
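The same pairwise traversal, sketched in Python over a list of characters (a sketch of the approach; `rules` maps a pair of elements to the element to insert between them):

```python
def tick(rules, polymer):
    """One insertion step: walk the polymer by (overlapping) pairs and
    splice the rule's output element in between, when a rule exists.
    `polymer` is a list of single-character strings."""
    out = []
    for a, b in zip(polymer, polymer[1:] + [None]):
        out.append(a)
        z = rules.get((a, b))  # None for the trailing (last, None) pair too
        if z:
            out.append(z)
    return out
```

With the example's rules, one step takes `NNCB` to `NCNBCHB`, matching the puzzle statement.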
Last, we are asked for the difference of the number of occurrences of the most and least common element:

- We calculate the frequency of each element, and sort them
- Pick the last one, the first one, and do the subtraction
(defun result (polymer &aux (freqs (sort (frequencies polymer) #'< :key #'cdr)))
  (- (cdar (last freqs)) (cdar freqs)))
Cool, let's take a look at part 2 now:
Apply **40** steps of pair insertion to the polymer template and find the most and least common elements in the result. What do you get if you take the quantity of the most common element and subtract the quantity of the least common element?
Uh oh...I am pretty sure the above is not going to work:
(defun evolve (times input &aux (polymer (first input)) (rules (second input)))
  (dotimes (i times)
    (setf polymer (tick rules polymer))
    (pr i (length polymer)))
  (result polymer))
Let's take this new version of EVOLVE out for a spin:
> (evolve 20 (parse-input (uiop:read-file-lines "src/2021/day14.txt")))
0 39
1 77
2 153
3 305
4 609
5 1217
6 2433
7 4865
8 9729
9 19457
10 38913
11 77825
12 155649
13 311297
14 622593
15 1245185
16 2490369
17 4980737
18 9961473
19 19922945
8844255
I am afraid we are going to need something cleverer than that!  But what exactly?

Well, if we think about it, insertion rules are defined for pairs of adjacent elements only; also, if the same pair of elements appears `10` times in the polymer and an insertion rule exists for that pair, then that rule will produce `10` new output elements; so what if we simply kept track of the number of occurrences of each pair of elements in the polymer?

Let's begin by _massaging_ the original polymer representation (i.e. the LIST of characters), into a HASH-TABLE counting the frequencies of each pair of adjacent elements:
(defun massage (input &aux (freqs (make-hash-table :test 'equal)))
  (destructuring-bind (template rules) input
    (loop for (a b) on template do (incf (gethash (list a b) freqs 0)))
    (list freqs rules)))
With this, we are then going to update TICK as follows:

- For each pair of adjacent elements, `a` and `b` (and their number of occurrences, `n`)
- We check if an insertion rule for the pair exists (insertion rule producing the new element `z`)
- If it does, then we know the new polymer will contain both `(a z)` and `(z b)`; how many of these will it contain? Well, we started from `n` occurrences of `(a b)`, so we will have `n` occurrences of `(a z)` and `(z b)`
- Otherwise, if no insertion rule is defined for `(a b)`, we simply carry that pair (and all its occurrences) along
(defun tick (rules freqs &aux (next (make-hash-table :test 'equal)))
  (loop for (a b) being the hash-keys of freqs using (hash-value n)
        for (z) = (gethash (list a b) rules)
        if z do (incf (gethash (list a z) next 0) n) and do (incf (gethash (list z b) next 0) n)
        else do (incf (gethash (list a b) next 0) n))
  next)
Last but not least, we have to update RESULT to account for the fact that the polymer now has a different representation.  How do we do this?

- We know the frequency of each pair, `(a b)`
- We know pairs are overlapping, i.e. for the polymer `(a b c)` we are going to have one entry for `(a b)`, one for `(b c)`, and one for `(c nil)`
- If we don't want to double count frequencies, we then have to consider only the first element of each entry
(defun result (freqs &aux (ch-freqs (make-hash-table)))
  (loop for (a) being the hash-keys of freqs using (hash-value n) do
        (incf (gethash a ch-freqs 0) n))
  (- (loop for v being the hash-values of ch-freqs maximize v)
     (loop for v being the hash-values of ch-freqs minimize v)))
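For reference, the whole pair-counting pipeline can be sketched in Python with collections.Counter (a sketch of the same approach; note that here, since the trailing `(last, None)` pair is not tracked, the template's last element gets counted in explicitly instead):

```python
from collections import Counter

def evolve(times, template, rules):
    """Track pair frequencies instead of building the polymer; return the
    most-common minus least-common element count after `times` steps."""
    pairs = Counter(zip(template, template[1:]))
    for _ in range(times):
        nxt = Counter()
        for (a, b), n in pairs.items():
            z = rules.get((a, b))
            if z:
                nxt[(a, z)] += n  # n occurrences of (a b) become n of (a z)...
                nxt[(z, b)] += n  # ...and n of (z b)
            else:
                nxt[(a, b)] += n
        pairs = nxt
    # Count only the first element of each pair to avoid double counting;
    # the template's last element never changes, so add it back by hand.
    counts = Counter()
    for (a, _), n in pairs.items():
        counts[a] += n
    counts[template[-1]] += 1
    return max(counts.values()) - min(counts.values())
```

On the example input this yields `1588` after 10 steps and `2188189693529` after 40, matching the puzzle.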
And that's it!

Final plumbing:
(define-solution (2021 14) (input parse-input)
  (let ((input (massage input)))
    (values (evolve 10 input) (evolve 40 input))))

(define-test (2021 14) (5656 12271437788530))
Test run:
> (time (test-run))
TEST-2021/14..
Success: 1 test, 2 checks.
Evaluation took:
  0.012 seconds of real time
  0.004045 seconds of total run time (0.003179 user, 0.000866 system)
  33.33% CPU
  28,206,847 processor cycles
  976,656 bytes consed

2021-12-13 (permalink)

Advent of Code: [2021/13](https://adventofcode.com/2021/day/13)
You reach another volcanically active part of the cave. It would be nice if you could do some kind of thermal imaging so you could tell ahead of time which caves are too hot to safely enter.
The submarine has a thermal camera that has never been activated; to activate it we need to enter the code found on page 1 of the manual, a page which falls out as we open the manual to page 1; luckily for us, the paper is marked with dots and includes instructions on how to fold it up.  For example:
6,10
0,14
9,10
0,3
10,4
4,11
6,0
6,12
4,1
0,13
10,12
3,4
3,0
8,4
1,10
2,14
8,10
9,0

fold along y=7
fold along x=5
The first set of data represents dots, i.e. `x`, `y` pairs, with `0, 0` being the top-left corner of the sheet; the second set of data contains the folding instructions.

What are we asked to do with this?
How many dots are visible after completing just the first fold instruction on your transparent paper?
OK, first things first, the input.  We are going to store all the dots into a HASH-TABLE (with `(row col)` as keys, and Ts as values); for the folding instructions instead, we are simply going to use a list:
(defun parse-input (data &aux
                         (dots (make-hash-table :test 'equal))
                         folds)
  (dolist (s data (list dots (reverse folds)))
    (when-let (point (parse-dot s))
      (setf (gethash point dots) t))
    (when-let (fold (parse-fold s))
      (push fold folds))))
(defun dots (data) (car data))
(defun folds (data) (cadr data))

(defun parse-dot (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer col row))
      ("(\\d+),(\\d+)" string)
    (list row col)))

(defun parse-fold (string)
  (cl-ppcre:register-groups-bind ((#'as-keyword axis) (#'parse-integer n))
      ("fold along (\\w)=(\\d+)" string)
    (list axis n)))
Now, applying the folding instructions should not take that long:

- For each fold instruction (only one in our case)
- For each dot
- If it's on the left of a vertical folding line, or above a horizontal folding line, leave it where it is
- Otherwise, project it left / up, using the folding line as rotation center (it sounds more complicated than it actually is)
- Lastly, count the number of distinct dots
(defun fold (input &aux (curr (dots input)))
  (dolist (f (folds input))
    (let ((next (make-hash-table :test 'equal)))
      (destructuring-bind (axis n) f
        (loop for (row col) being the hash-keys of curr do
              (ecase axis
                (:x (when (> col n) (decf col (* (- col n) 2))))
                (:y (when (> row n) (decf row (* (- row n) 2)))))
              (setf (gethash (list row col) next) t)))
      (setf curr next)
      (return-from fold (hash-table-count curr)))))
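The projection is just a reflection across the fold line: a dot at a coordinate greater than `n` ends up at `n - (coord - n)`, i.e. `coord - 2 * (coord - n)`. A Python sketch of a single fold over a set of `(row, col)` dots (illustrative names, same convention as above):

```python
def fold(dots, axis, n):
    """Reflect dots across the fold line; axis is 'x' (vertical line,
    folds columns) or 'y' (horizontal line, folds rows)."""
    out = set()
    for row, col in dots:
        if axis == "x" and col > n:
            col -= 2 * (col - n)
        elif axis == "y" and row > n:
            row -= 2 * (row - n)
        out.add((row, col))
    return out
```

Using a set makes the "count distinct dots" part free: overlapping dots collapse on insertion, and on the example input the first fold leaves 17 of them.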
Cool!  What about part 2?  Well, we are asked to run all the folding instructions, and to _decode_ the message to use to activate the thermal camera:
Finish folding the transparent paper according to the instructions. The manual says the code is always eight capital letters. What code do you use to activate the infrared thermal imaging camera system?
OK, let's update FOLD to run all the instructions (and still store the answer for part 1), and then see how we could decode the message:
(defun fold (input &aux (curr (dots input)) part1)
  (dolist (f (folds input))
    (let ((next (make-hash-table :test 'equal)))
      (destructuring-bind (axis n) f
        (loop for (row col) being the hash-keys of curr do
              (ecase axis
                (:x (when (> col n) (decf col (* (- col n) 2))))
                (:y (when (> row n) (decf row (* (- row n) 2)))))
              (setf (gethash (list row col) next) t)))
      (setf curr next)
      (unless part1
        (setf part1 (hash-table-count curr)))))
  (values part1 (print-paper curr)))
PRINT-PAPER?  Nothing crazy:

- We figure out the max row and col values
- Then for each point from `(0 0)` to `(row-max col-max)`
- If a dot exists, we output a `#`, or a ` ` otherwise
(defun print-paper (dots &aux
                         (rows (loop for d being the hash-keys of dots maximize (car d)))
                         (cols (loop for d being the hash-keys of dots maximize (cadr d))))
  (with-output-to-string (s)
    (terpri s) ; print an additional newline, so I can better format the expected string
    (dotimes (row (1+ rows))
      (dotimes (col (1+ cols))
        (princ (if (gethash (list row col) dots) #\# #\Space) s))
      (terpri s))))
And that's it!

Final plumbing:
(define-solution (2021 13) (input parse-input) (fold input))

(define-test (2021 13) (724 "
 ##  ###    ## ###  #### ###  #  # #
#  # #  #    # #  # #    #  # #  # #
#    #  #    # ###  ###  #  # #  # #
#    ###     # #  # #    ###  #  # #
#  # #    #  # #  # #    # #  #  # #
 ##  #     ##  ###  #### #  #  ##  ####
"))
Test run:
> (time (test-run))
TEST-2021/13..
Success: 1 test, 2 checks.
Evaluation took:
  0.022 seconds of real time
  0.011407 seconds of total run time (0.005706 user, 0.005701 system)
  50.00% CPU
  51,063,965 processor cycles
  1,038,608 bytes consed
PS. [This morning](https://topaz.github.io/paste/#XQAAAQDUBQAAAAAAAAAUGQimgx+p6PZhRh7uIPCirs4j6zlpXvdP2y9WdJ6sK9eG/VBt3Z4fnkHxTu3NuCMJ2nmF+B13yA+9sGhCoAHyJ2ZXjZpF6phEUDagCs+h8Ufu+I4B+Xi9tITOLsXEvZfLoH40qd+kCDFX2l8GZ6e6N7gbel/1Cdrm2GooklZBGu591PFeRTEdeTCp9z/uectoPBE3rIyaSwHho1c4uTLfVtTHQrRsa2lxYHWf1y1QGSZ/HceOadgyazkIf6sjipC2net/qdX0s8EM+giC1iAp0f0pJDMBVHJncpyD7r+jix3nC0Fx3vmEBtRvkGTbxtpSPL2FFPaH766gk2rfa5U5djoRM+qJFHzhSDrTOGhUq8QB8+KleQsKs+cswBfYJI3+IrJJrRCN8ilo8rJhMp76rJOzDQ/A3KtynWTFA4mc5+HUaTAhbYrNJdjV9exUecAeMv8UMH96PU4xRZPbKAD7fYPqGs2Q9AQlu6vUT/Aa1t25huhQyhVgWIh7Xxit7AMMMX5/pJLvpuh18R+1vqXSHZh0BrfpGdmJCt03ejSNjizp1lNKQNawGyAJNJerOw09ENby0Nxap8j5WzGHlVzj6kyIu87klBKwrs62U6qb+g4Q3Lc5hg0397CqQxaBTh9mL7irkrlMwVLkDuyYCNZCNT9RrpFBWxXWlpkpIoX96shVBJnpiW+Ma/idPjPGhHXplUMNzgpA7cg1YjcYckkvH0+yT8i0W3Qxvj8aUedIlTCyEHEVcTW2DG/tTHgL7thj9k/dX/YDjyb9lZr/q7Jc0miXU9VitIDMCbbMS8l6WUpt/qD1khDJf661iTz7C8bIK1uucMrS9Dc89Gr+2jDd), while solving this, I forgot to swap `x` and `y` while reading dots; all my `(row col)` were flipped, and because of that:

- Had to flip the bindings inside the for loop
- Ended up swapping rows for columns inside PRINT-PAPER (see `(dotimes (c rows) ...)` and `(dotimes (r cols) ...)`)

It's clear and simple, I am just not meant to be productive at 0600!

2021-12-12 (permalink)

Advent of Code: [2021/12](https://adventofcode.com/2021/day/12)
With your submarine's subterranean subsystems subsisting suboptimally, the only way you're getting out of this cave anytime soon is by finding a path yourself. Not just a path - the only way to know if you've found the **best** path is to find **all** of them.
OK...

So, we are given the map of the cave:
    start
    /   \
c--A-----b--d
    \   /
     end
Except that it's encoded, like this:
start-A
start-b
A-c
A-b
b-d
A-end
b-end
And we are asked to find all the paths connecting `start` to `end` that don't visit _small_ caves more than once:
Your goal is to find the number of distinct **paths** that start at `start`, end at `end`, and don't visit small caves more than once. There are two types of caves: big caves (written in uppercase, like A) and small caves (written in lowercase, like b). It would be a waste of time to visit any small cave more than once, but big caves are large enough that it might be worth visiting them multiple times. So, all paths you find should visit small caves at most once, and can visit big caves any number of times.
First off, let's load this map into memory.  We are going to implement an adjacency list storing which cave is connected to which other one, and we are going to do this using a HASH-TABLE instead of classic association lists:
(defun parse-map (data &aux (map (make-hash-table)))
  (dolist (s data map)
    (destructuring-bind (from . to) (parse-line s)
      (push to (gethash from map))
      (push from (gethash to map)))))

(defun parse-line (string)
  (cl-ppcre:register-groups-bind ((#'make-keyword from to))
      ("(\\w+)-(\\w+)" string)
    (cons from to)))
Next, let's walk through the cave.  We are going to recursively build up all the possible paths starting from `start`, and count the ones that lead up to `end`:

- If we reached `end` then we found one more path to count -- add `1`
- Otherwise, check all the caves reachable from the current position
- And if _visitable_, add the new cave to the current path, and _recur_
(defun walk (map)
  (while-summing (path#)
    (labels ((recur (curr)
               (cond ((eq (car curr) :|end|) (path#))
                     (t (loop for next in (gethash (car curr) map)
                              when (or (big-cave-p next) (not (member next curr))) do
                              (recur (cons next curr)))))))
      (recur '(:|start|)))))
And when is a cave visitable?  If it's a big one, always; if it's a small one, only if we haven't set foot in it already:
(defun visitablep (path next) (or (big-cave-p next) (not (member next path))))
(defun big-cave-p (cave) (not (small-cave-p cave)))
(defun small-cave-p (cave) (loop for ch across (string cave) thereis (lower-case-p ch)))
Cool!  Let's now take a look at part 2:
After reviewing the available paths, you realize you might have time to visit a single small cave **twice**. Specifically, big caves can be visited any number of times, a single small cave can be visited at most twice, and the remaining small caves can be visited at most once. However, the caves named `start` and `end` can only be visited **exactly once each**: once you leave the `start` cave, you may not return to it, and once you reach the `end` cave, the path must end immediately.
So, it looks like we will have to tweak VISITABLEP a little; for part 2, we can visit a cave if:

- It's a big one (as before)
- Or, when it's a small one, if it's not `start`
- And if the number of small caves visited more than once (counting this candidate visit) is at most `1`
(defun part2-visitable-p (path next)
  (cond ((big-cave-p next) t)
        ((eq next :|start|) nil)
        (t (<= (loop for p on (cons next path) for (c) = p
                     when (small-cave-p c) count (> (count c p) 1))
               1))))
Let's update WALK to accept `visitablep` from the caller:
(defun walk (map &optional (visitablep #'visitablep))
  (while-summing (path#)
    (labels ((recur (path)
               (cond ((eq (car path) :|end|) (path#))
                     (t (loop for next in (gethash (car path) map)
                              when (visitablep path next) do
                              (recur (cons next path)))))))
      (recur '(:|start|)))))
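The same walk, sketched in Python (a sketch of the approach: an explicit stack instead of recursion, and `visitable` as a predicate taking the path so far plus the candidate cave; big caves are recognized by their uppercase names):

```python
def walk(graph, visitable):
    """Count distinct start -> end paths; graph maps a cave to its neighbors."""
    count = 0
    stack = [["start"]]
    while stack:
        path = stack.pop()
        if path[-1] == "end":
            count += 1
            continue
        for nxt in graph[path[-1]]:
            if visitable(path, nxt):
                stack.append(path + [nxt])
    return count

def part1_visitable(path, nxt):
    # Big caves any number of times; small caves at most once
    return nxt.isupper() or nxt not in path

def part2_visitable(path, nxt):
    # At most one small cave may be visited twice; start only once
    if nxt.isupper():
        return True
    if nxt == "start":
        return False
    small = [c for c in path + [nxt] if c.islower()]
    return len(small) - len(set(small)) <= 1
```

On the small example map above this counts 10 paths for part 1 and 36 for part 2.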
And that's it!

Final plumbing:
(define-solution (2021 12) (map parse-map)
  (values (walk map) (walk map #'part2-visitable-p)))

(define-test (2021 12) (3000 74222))
Test run:
> (time (test-run))
TEST-2021/12..
Success: 1 test, 2 checks.
Evaluation took:
  1.981 seconds of real time
  1.867072 seconds of total run time (1.818760 user, 0.048312 system)
  [ Run times consist of 0.007 seconds GC time, and 1.861 seconds non-GC time. ]
  94.25% CPU
  4,557,591,987 processor cycles
  29,753,344 bytes consed
It's not the fastest solution, I will give you that, but I love how self contained the changes for part 2 were, especially if you compare that to my [original solution](https://topaz.github.io/paste/#XQAAAQAvCAAAAAAAAAAUGQimgx+p6PZhRh7uIO9C+TaXA2WvmEoBbmvcTSA2hNnhLNH9+NrXNpDAj8zGpwdcigC0xrwLmwiSZZAqQM7Oije2QNF71VjFvbAaFekLQoen1k9WQoYxw2xUV7tO5VVf85sY5Oa1paBH1vKEy0XuNJQkr4d4ZBPPgQpnUP2/M0gvBS5jcjfPws1bA77znBuGdmednXnc7webfWPkjzoFmI2/JMLMWyd9ChVUdsl89VQNxsrW7nLnAV/fzJVBrXBveYAs8qmrgh28XeCtT3uF6Vnk8jodHq5fjM/wz+fQScMyhH/q/m+SdBAJ+Qn9YYOqoAJqjY/0LdPJOraJAy5Jrox9keHXNZOGNx9qO4m56n2/b0y9V7AkAkeFftoVYskrcdqFROPXVP98FdUvcXv/uNdduoJbcIKeNa4XwMh0v7pSr0yV+U2ZJfUH57G+8812kphcBYJYnwoF8YJMbskQlannpznyiPa2QsPaK9sOYK8uahPK8kuWymzKkl3+R1aMs4bGo+dpfO5D5eUqW4gZ6ZBHfD4FJncXQYPCDmlCUp5dLYIDL8l911ZnaQYBnPGcLCsR7D9mQ+lBOG0ccOety6yKlGTujVWaz9MUKEYXkHBhgmiK8FcuklQlsG68lJTMp/engA6bpjJqilPIXGVU7aZEVTG5bILS7BQNSwuRbcD+iaylfEpg3amoDfSD7KGN+wIU/EEjpBihZPT7mWNKQG3P7H1eeIh+rFNTsg6VgyqvsIQ4orijcjh4V7l+OdeL0ShFU4IsMy4cHeCUzxPIEp7asE97eQH8q9g1JNuNXJ/1opC/61n1SV0F1AwHwGeN1Sfh3pI/HMtGgxe46JzXHl8d9znhNDMeGmpUZ6jcQk87N9mhuziYsKnh3g//wFhpgw==), in which I ended up copying PART1 into PART2 because I needed to pass around a new boolean flag to keep track of whether walking into a small cave was allowed or not.

PS. I was using UIOP:WHILE-COLLECTING to collect all the paths first, and then pass the result to LENGTH to get the answers to today's problems; well, I did not like that, and since I found myself with a little bit of time I decided to create the WHILE-SUMMING macro:
(defmacro while-summing (summers &body body)
  (expand-while-summing summers body))

(defun expand-while-summing (summers body)
  (let* ((gen-syms (mapcar #'expand-while-summing-gensym summers))
         (let-bindings (mapcar #'expand-while-summing-let-binding gen-syms))
         (flet-bindings (mapcar #'expand-while-summing-flet-binding summers gen-syms)))
    `(let* (,@let-bindings)
      (flet (,@flet-bindings)
        ,@body
        (values ,@gen-syms)))))

(defun expand-while-summing-gensym (name) (gensym (string name)))

(defun expand-while-summing-let-binding (gensym) (list gensym 0))

(defun expand-while-summing-flet-binding (name binding)
  `(,name (&optional (delta 1))
    (incf ,binding delta)))
I am sure there are already 100+ different versions of this, better ones for sure, but still, one more would not hurt, so here is mine!

+ untrace function does not work with Vlime

? while-counting

2021-12-11 (permalink)

Advent of Code: [2021/11](https://adventofcode.com/2021/day/11)
You enter a large cavern full of rare bioluminescent dumbo octopuses! They seem to not like the Christmas lights on your submarine, so you turn them off for now.
>
There are 100 octopuses arranged neatly in a 10 by 10 grid. Each octopus slowly gains energy over time and flashes brightly for a moment when its energy is full. Although your lights are off, maybe you could navigate through the cave without disturbing the octopuses if you could predict when the flashes of light will happen.
Each octopus has an energy level whose value we receive as our puzzle input:
5483143223
2745854711
5264556173
6141336146
6357385478
4167524645
2176841721
6882881134
4846848554
5283751526
You can model the energy levels and flashes of light in steps. During a single step, the following occurs:
>
- First, the energy level of each octopus increases by `1`.
- Then, any octopus with an energy level greater than `9` flashes. This increases the energy level of all adjacent octopuses by `1`, including octopuses that are diagonally adjacent. If this causes an octopus to have an energy level greater than `9`, it also flashes. This process continues as long as new octopuses keep having their energy level increased beyond `9`. (An octopus can only flash at most once per step.)
- Finally, any octopus that flashed during this step has its energy level set to `0`, as it used all of its energy to flash.
Note, adjacent flashes can cause an octopus to flash on a step even if it begins that step with very little energy.
Before any steps:
11111
19991
19191
19991
11111

After step 1:
34543
40004
50005
40004
34543

After step 2:
45654
51115
61116
51115
45654
OK, our task for today:
Given the starting energy levels of the dumbo octopuses in your cavern, simulate 100 steps. How many total flashes are there after 100 steps?
As usual, let's parse our input first.  Like we did for [2021/09](https://matteolandi.net/plan.html#day-2021-12-09), we are going to parse this into a HASH-TABLE, mapping from `(row col)` pairs, to the octopus energy level:
(defun parse-octopuses (data &aux (octopuses (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string do
              (setf (gethash (list r c) octopuses)
                    (- (char-code ch) (char-code #\0)))))
  octopuses)
With the puzzle input taken care of, we can now try to sketch our solution:

- We safely copy our input -- we are going to mutate it, and one too many times I have been bit by me accidentally using a mutated list / hash-table, so when in doubt, I copy things
- Then we let the octopuses dance for 100 times, and keep track of the number of flashes at the end of each dance
(defun flash-dance (octopuses &aux (octopuses (copy-hash-table octopuses)))
  (loop repeat 100 sum (dance octopuses)))
Great, now we have to figure out how flashing octopuses influence adjacent ones during the dance; here is what we are going to do:

- First off, we increment the energy level of each octopus, and if the octopus energy level is greater than `9`, we FLASH it
- FLASH-ing an octopus means adding the octopus itself to the list of octopuses that already flashed (remember, octopuses can flash only once, per step...err dance), plus scheduling all its neighbors for an additional energy increase
- Next, until there are octopuses scheduled to be processed -- see `remaining` -- we increase their energy level, and if that winds up being greater than `9`, we call FLASH on them -- causing the octopus to be marked as having flashed, plus its neighbors to be added to `remaining`
- Note: this is very similar to the classic DFS / BFS, with the exception that nodes (i.e. octopuses) are marked as _visited_ not the first time we process them, but the first time their energy becomes greater than `9`
- At the end, we check all the octopuses, count the ones that flashed, and reset their energy level to `0`
(defun dance (curr &aux remaining (flashed (make-hset nil :test 'equal)))
  (flet ((flashesp (energy) (> energy 9))
         (flash (p)
           (hset-add p flashed)
           (setf remaining (append (neighbors curr p) remaining))))
    (loop for p being the hash-keys of curr
          when (flashesp (incf (gethash p curr))) do (flash p))
    (loop while remaining
          for n = (pop remaining)
          unless (hset-contains-p n flashed)
          when (flashesp (incf (gethash n curr))) do (flash n))
    (loop for p being the hash-keys of curr using (hash-value e)
          count (when (flashesp e) (setf (gethash p curr) 0)))))
NEIGHBORS is the exact same function we used for day 9; the only difference is that *NHOOD* includes deltas for diagonal locations too:
(defparameter *nhood* '((-1 0) (-1 1) (0 1) (1 1) (1 0) (1 -1) (0 -1) (-1 -1)))

(defun neighbors (energies p)
  (loop for d in *nhood* for n = (mapcar #'+ p d)
        when (gethash n energies) collect n))
Cool, now let's take a look at part 2 instead:
If you can calculate the exact moments when the octopuses will all flash simultaneously, you should be able to navigate through the cavern. What is the first step during which all octopuses flash?
This should be easy enough: we let the octopuses dance until the number of octopuses that flashed in a single step is `100` (i.e. the size of the grid).

Let's update FLASH-DANCE accordingly:

- We use `:for step from 1` instead of `:repeat 100`
- We accumulate flashes into a loop variable, `flash-count`
- When the current step is `100`, then we save `flash-count` onto a new `part1` variable
- When the number of flashes in the current step is `100`, we break out of the function returning `(values part1 step)`
(defun flash-dance (octopuses &aux
                              (octopuses (copy-hash-table octopuses))
                              (part1 0))
  (loop for step from 1 for flashes = (dance octopuses)
        sum flashes into flash-count
        when (= step 100) do (setf part1 flash-count)
        when (= flashes 100) return (values part1 step)))
And that's it!

Final plumbing:
(define-solution (2021 11) (octopuses parse-octopuses) (flash-dance octopuses))

(define-test (2021 11) (1665 235))
Test run:
> (time (test-run))
TEST-2021/11..
Success: 1 test, 2 checks.
Evaluation took:
  0.026 seconds of real time
  0.026022 seconds of total run time (0.022644 user, 0.003378 system)
  [ Run times consist of 0.011 seconds GC time, and 0.016 seconds non-GC time. ]
  100.00% CPU
  59,922,835 processor cycles
  2,554,240 bytes consed
PS. [Here](https://topaz.github.io/paste/#XQAAAQC/BgAAAAAAAAAUGQimgx+p6PZhRh7uIOzWurUlN7fbFBw1HHgfNfGNNS7rNArrqFq/zq9awLKNdO5kmayCD7rk2HvFkonJ+07rAMPt3bdfbDoWgkRF4z+uYA8UeujpEPQ4K6D1P0e5Jc9C40O+I6B0/H+FvCYHKQ2vdlF2PoWqAiAZLsmHiW8H0ADVNJ3BSrpxO2VGYR+ONxGjvs9/lwjYPMB33adgSL9lbXZldKw10ruhWD45sMcoe5VPxiUdPM85iMzaG68++Xt0oUPct+MTdMyqSiq2cKJyUhJ3B4mQWzLZIuISnXaZOodJSkkRypBcS8jnZ7eOuxAGjWrUss2UTlwW//L913A6XkSnGc0y3XzvNySeS/uMMPMdwOdEOgzDtHvKA+jESWWoedLrMtaL/57EpEu9fdxCUrsDMTRdqzzw0aa7EkSUyqyMhjOHsR1eZfiL9DyQ2nOOyh05dQwGefb6r3/5PNMcLxk1QouVVOxfPTKrrEKiEpW3HXKfJqvyBaIvoH4684AQtXFJ9hE7RyR18SM1GWev+HGdr166bvCGmbKRA+YLXHJ4QXAh3F+rwJRa3mfJOJhWD4/8rsCXp2ciYlNiLPB1mBldaSnNOKLNGB06Pc3uDrcq6QJFxPGcQbcCwOlPvpzJbdy46dA6ZhFNOCADnkjEA9pMglyaxBPEnu5jNBV81/8akcYU3hfOJcGNiGFpBGJ/u8RBoH2Cg6mjSms9JWkwCHQ73RWuGvqrc3+IrJ2xAQjyu83lMTliIMGH9iD5/ICfBitI8NVyKcZfYE214UWNZM9gAQ6ExCvl87yEgOs2cVb1G10bI0lQnIyZqv/kKwU4O7so7K0nHHuREfaMONIM0FIMiUmX1waZlPKukXnK/PEJ0Q==) you can find my pre-refactoring REPL buffer, and as you can see, as I cleaned up my solution I mainly renamed things, and combined the _cowboy_ loops into FLASH-DANCE!

2021-12-10 (permalink)

Advent of Code: [2021/10](https://adventofcode.com/2021/day/10)
You ask the submarine to determine the best route out of the deep-sea cave, but it only replies:
>
Syntax error in navigation subsystem on line: all of them
>
All of them?! The damage is worse than you thought. You bring up a copy of the navigation subsystem (your puzzle input).
Continues:
The navigation subsystem syntax is made of several lines containing chunks. There are one or more chunks on each line, and chunks contain zero or more other chunks. Adjacent chunks are not separated by any delimiter; if one chunk stops, the next chunk (if any) can immediately start. Every chunk must open and close with one of four legal pairs of matching characters:
>
- If a chunk opens with `(`, it must close with `)`.
- If a chunk opens with `[`, it must close with `]`.
- If a chunk opens with `{`, it must close with `}`.
- If a chunk opens with `<`, it must close with `>`.
It goes on explaining the difference between _corrupted_ lines (our focus for part 1) and _incomplete_ lines:
Some lines are incomplete, but others are corrupted. Find and discard the corrupted lines first.
>
A corrupted line is one where a chunk closes with the wrong character - that is, where the characters it opens and closes with do not form one of the four legal pairs listed above.
What we are asked to do is process each _corrupted_ line, stop at the first invalid character, and generate a _score_ as described below:
To calculate the syntax error score for a line, take the first illegal character on the line and look it up in the following table:
>
- `)`: `3` points.
- `]`: `57` points.
- `}`: `1197` points.
- `>`: `25137` points.
To recap:
Find the first illegal character in each corrupted line of the navigation subsystem. What is the total syntax error score for those errors?
To solve this, we are going to use a stack; given a line, for each of its characters:

- If it's an open parenthesis, we push the corresponding closing one onto the stack (e.g. if `(` is read, we push `)`)
- If it's a close parenthesis: if it matches the value at the top of the stack, we pop that from the stack; otherwise it's an error, so we calculate the syntax error score

Let's start by defining a table to map between open and close parentheses:
(defparameter *closing*
  '((#\( . #\))
    (#\[ . #\])
    (#\{ . #\})
    (#\< . #\>)))
Next, let's define our main function, SYNTAX-SCORING, which we will use to:

- Process all the lines in our file
- Implement the stack based algorithm mentioned above
- Sum all the error scores together...and return it
(defun syntax-scoring (file &aux (error-score 0))
  (dolist (line file)
    (let (stack)
      (loop for ch across line do
            (cond ((find ch "([{<") (push (cdr (assoc ch *closing*)) stack))
                  ((eq ch (car stack)) (pop stack))
                  (t (return (incf error-score (syntax-error-score ch))))))))
  error-score)
Last missing bit: SYNTAX-ERROR-SCORE, and all it does is look up the given character in another table, mapping _invalid_ characters to their scores:
(defparameter *syntax-error-table*
  '((#\) . 3)
    (#\] . 57)
    (#\} . 1197)
    (#\> . 25137)))

(defun syntax-error-score (ch) (cdr (assoc ch *syntax-error-table*)))
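To double-check our scorer, here it is at work on one of the corrupted lines from the puzzle example -- the first illegal character is `}`, worth `1197` points:
> (syntax-scoring '("{([(<{}[<>[]}>{[]{[(<()>"))
1197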
For part 2 instead, we are asked to focus on _incomplete_ entries:
Incomplete lines don't have any incorrect characters - instead, they're missing some closing characters at the end of the line. To repair the navigation subsystem, you just need to figure out the sequence of closing characters that complete all open chunks in the line.
Once we figure out these completion strings, we need to calculate their score:
Then, for each character, multiply the total score by 5 and then increase the total score by the point value given for the character in the following table:
>
- `)`: `1` point.
- `]`: `2` points.
- `}`: `3` points.
- `>`: `4` points.
Then we need to collect all the scores, sort them, and pick the one in the middle:
Autocomplete tools are an odd bunch: the winner is found by sorting all of the scores and then taking the middle score. (There will always be an odd number of scores to consider.)
>
[...]
>
Find the completion string for each incomplete line, score the completion strings, and sort the scores. What is the middle score?
OK, first off, let's update SYNTAX-SCORING to start processing _incomplete_ lines as well.

In our case, an entry is _incomplete_ if it's not _corrupted_ (i.e. it did not cause an _early_ return from the LOOP form) _and_ if, when done processing all the line characters, `stack` turns out not to be empty; in which case:

- we calculate the completion score -- :FINALLY clause added to the LOOP form
- collect all the scores together -- see COMPLETION-SCORES added to the function parameter list
- sort them and pick the middle value -- final LET* form
(defun syntax-scoring (file &aux (error-score 0) completion-scores)
  (dolist (line file)
    (let (stack)
      (loop for ch across line do
            (cond ((find ch "([{<") (push (cdr (assoc ch *closing*)) stack))
                  ((eq ch (car stack)) (pop stack))
                  (t (return (incf error-score (syntax-error-score ch)))))
            finally (when stack
                      (push (completion-score stack) completion-scores)))))
  (values
    error-score
    (let* ((completion-scores (sort completion-scores #'<))
           (n (floor (length completion-scores) 2)))
      (nth n completion-scores))))
Note that in our case `stack` actually represents the _completion string_ mentioned above, so we can simply go ahead and calculate its score as instructed:
(defparameter *point-value-table*
  '((#\) . 1)
    (#\] . 2)
    (#\} . 3)
    (#\> . 4)))

(defun completion-score (stack &aux (score 0))
  (dolist (ch stack score)
    (setf score (+ (* score 5) (cdr (assoc ch *point-value-table*))))))
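For example, the completion string `}}]])})]` from the puzzle example -- remember, `stack` already lists the missing closing characters top-first -- scores `288957`:
> (completion-score (coerce "}}]])})]" 'list))
288957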
And that's it!

Final plumbing:
(define-solution (2021 10) (data) (syntax-scoring data))

(define-test (2021 10) (323613 3103006161))
Test run:
> (time (test-run))
TEST-2021/10..
Success: 1 test, 2 checks.
Evaluation took:
  0.000 seconds of real time
  0.000799 seconds of total run time (0.000505 user, 0.000294 system)
  100.00% CPU
  1,872,677 processor cycles
  97,968 bytes consed
And we are good to go!

PS. Today's problem was a breath of fresh air, really, especially if we compare it to the problems of the last two days, and even my [pre-refactoring REPL buffer](https://topaz.github.io/paste/#XQAAAQA4BQAAAAAAAAAUGQimgulVkMqJePdJ12sc+KZM7LOyBuIC1dzJtzX33KNaMXj9jmVZTPyfQOmX9PpTg5GvWdrvb7nTb9ehlqWqzZpjVnVnMGhfrX1xdaAgSNaXDCXu2M2WHr9Ka+4Bd6xYmaJlsoLibJAMVSflYybG5/E4vkNUo9nyCiaOJft/b2FbN3paNVzj8HO6fbrKik9iPjcV2n7jQfOxB05bL4ApX44oMtesKX1WwNluA6y2i7r90FAeHyyaMNak929zF5LGhGiARa+nmXNv+6suXMQZiCOAD8O2O7R7OTmdtXH7GCK+Selm1SwZ1vC0BGB4xzp8njvpRtUewleaWkBHZu5aCElvzg9/Hn7gHgXkrzWzL9ONANarpyCBZGmIwIQAmm47PQvTUh1YLAUhSXnIOm6Yyy0lbHzdfEwLmq5PnbuthOsEWJ/aiTtxFGv7r9LvfcfwgxUT1dIyvqHoy6ktffQZGdaMKGy7xzLg/LPwnTzhPHjDbCGmPcTJdDLro1/iXCNwXfDowju2lnfal26uMNhzeBb/Y6GvH3lNlzKH4Yq4tBzQbJOwXUFMGrw/XY3GqH8BUK5TRxRhGBqi2vpQa2vHjRfwkn+oR3Lis3R6kK4BjEBeJK9J5OqyXern7uZkiOhAoKOvNHwIcTY0xQkVqOZJYnXEuI3JCDHiN/wpn1E=), seems to agree with that!

2021-12-09 (permalink)

Advent of Code: [2021/09](https://adventofcode.com/2021/day/9)
These caves seem to be lava tubes. Parts are even still volcanically active; small hydrothermal vents release smoke into the caves that slowly settles like rain.
Rain...underwater? We are still on the submarine, aren't we?! Anyways...
If you can model how the smoke flows through the caves, you might be able to avoid it and be that much safer. The submarine generates a heightmap of the floor of the nearby caves for you (your puzzle input).
Continues:
Smoke flows to the lowest point of the area it's in. For example, consider the following heightmap:
>
2199943210
3987894921
9856789892
8767896789
9899965678
>
Each number corresponds to the height of a particular location, where 9 is the highest and 0 is the lowest a location can be.
The first step is to parse the input heightmap; we are going to use a HASH-TABLE, mapping from `(row col)` to the corresponding height:
(defun parse-heights (data &aux (heights (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string do
              (setf (gethash (list r c) heights)
                    (- (char-code ch) (char-code #\0)))))
  heights)
Now, here is what we are asked to do for part 1:
Find all of the low points on your heightmap. What is the sum of the risk levels of all low points on your heightmap?
So, for each low point, we add one to its height, and sum everything together:
(defun part1 (heights)
  (loop for p in (low-points heights) sum (1+ (gethash p heights))))
But what are the low points of our heightmap? All the points surrounded by locations with a higher height:

- For each point in the heightmap
- For each of its neighbor locations
- Check if the height of the center is smaller
- If it is smaller than all of its neighbors' heights, then it's a low point, so we collect it
(defun low-points (heights)
  (loop for p being the hash-keys of heights using (hash-value h)
        when (every (lambda (n) (< h (gethash n heights))) (neighbors heights p))
        collect p))
But given a point, what are all its neighbor locations?  The ones north, east, south, west, which do not fall off the grid:
(defparameter *nhood* '((-1 0) (0 1) (1 0) (0 -1)))

(defun neighbors (heights p)
  (loop for d in *nhood* for n = (mapcar #'+ p d)
        when (gethash n heights) collect n))
Now, for part 2 instead, we are asked to calculate the size of the largest _basin_:
A basin is all locations that eventually flow downward to a single low point. Therefore, every low point has a basin, although some basins are very small. Locations of height 9 do not count as being in any basin, and all other locations will always be part of exactly one basin.
And in particular:
What do you get if you multiply together the sizes of the three largest basins?
OK, so:

- We find the basins
- Sort them by size (biggest first)
- Take the first 3 values
- Multiply them together
(defun part2 (heights)
  (reduce #'* (subseq (sort (basins heights) #'>) 0 3)))
Let's dig a bit deeper, and see how we are going to calculate the size of the basins:

- For each low point (we know each basin expands around one of the low points from part 1)
- We apply BFS, scanning all the surrounding area, until we either fall off the grid (NEIGHBORS already implements this) or hit a location with a height of `9` (from the text: _locations of height `9` do not count as being in any basin_)
- As we navigate adjacent cells (i.e. our :NEIGHBORS function), we increase a counter representing the size of the current basin

I already have BFS available in my utility file, so implementing the above was quite straightforward:
(defun basins (heights)
  (uiop:while-collecting (acc!)
    (dolist (lp (low-points heights))
      (let ((size 0))
        (bfs lp :test 'equal :neighbors (lambda (p)
                                          (incf size)
                                          (remove-if-not (lambda (n)
                                                           (< (gethash n heights) 9))
                                                         (neighbors heights p))))
        (acc! size)))))
Final plumbing:
(define-solution (2021 09) (heights parse-heights)
  (values (part1 heights) (part2 heights)))

(define-test (2021 09) (502 1330560))
And that's it:
> (time (test-run))
TEST-2021/09..
Success: 1 test, 2 checks.
Evaluation took:
  0.050 seconds of real time
  0.046314 seconds of total run time (0.035313 user, 0.011001 system)
  92.00% CPU
  116,896,357 processor cycles
  9,133,328 bytes consed
I wish I had been able to code this up on the spot, as the clock was ticking; instead, my [REPL buffer](https://topaz.github.io/paste/#XQAAAQDHCwAAAAAAAAAUGQimgx+p6PZhRh7uIO+WahagjQFtPUroo2DHmQaY6XQY1BYJ+FMCLkj+Xpf+GXwi1mkfiahNXzGziXL6bKmTzHNvb+uG5oA435oUBU5eqNnZv0QZ7grjrReQCKmII7Qfq3wAMvM+VOxZt/n90rZ8BZYGbFNoRn3y2DYmMyM/VImdJxNjfq0yhV53UoWNkQL61HpdCStQHM2EJ1Mh9VhTk6oNhbfFah6BArXTOVwK1xvMxv4DAhqQ4F1DcTE/vpl9OinZXp1CKRSxMIHv9k4fkV8F6i1damTN7gRXQS/ahc1TkeJm3e28uw0DJM/hxRuUvrRAzQkX9WQzNqVqvYxNjp2T5c+rPq4uIoBap1pQsiGqBxJ+462us1oa8mDQijvvmNUm+B7F9wYizQfra3lEedhNyea50YTCG7eVi9cXU0+w1AszW7+Qc4N+EF/YHYOQ6pNO9oMWkXKLNwtFxFFZhv0lISG3hjowK5FMycdOZussyrF9M20IO/mq6feOWp309AO/T+Q83Z+iah4GzohefjT05OR0sv8+EWlDOoPadylXaPdXW4kV2Kz3SFXCriiTUkYpr8Nmu3ZSbtBNaEHn0k19uQ9XR8vqw6FKbpfslM+2WzNY7gpxi1TqZ4UtetR/30sSoYE8uly4hHPmd0U0IboW52c4PVg2wUluyUZuk2Dt5hNHUPh2CTR2out6u4EhJ6MOT2eEdtmhDrfHoHhWiuBcTJUVMbtnJstp9hBY/ce15UMEVKqgKQEXu2n8tOLALvezzfOq03TXkC6v2vpL2I3dvBkSkxY1DjGjSxNxpN3wAnsdtYpS5vuHZZnEnQ6YnP4k2QH5hOH8OZVg3kKY7AyAnajGyFtK0UbiF5TlP09/aYAbkBWGibTLWApQYfLyJmtYP7r5H6dZBzUBJiWQSzPcDnHED2m7Rzot9n8X/zXGfZvi3Ie8oRgu2vr1kovWfbzkkHFkt+OhjPMsgybIVfMGbk0lzRgJGHuVTPSfqiICgE/oPBxHkPQbjz8zUgj5IbmBoVKOo2ABq4EFzeYY1b5RpUtkL9/yZEIzyG9z0XLEbnAuo6RJPmKHdHii5xAUWUG1FGR6YEwFNFXoqpB0EeMWBBj8rt3JwNUSaUAGXJTYF6NcYGRpi3accSCK/X8JkA==) for today wound up looking anything but clean:

- Used a 2D array instead of a HASHTABLE to represent the map, and _that_ made pretty much all the rest really **really** verbose
- I was only checking the locations above and to the left of the current one in my first BASINS attempt, but that clearly generated the wrong answer
- It took me a while before I decided a NEIGHBORS function would have helped with the overall readability of the whole solution -- very important, especially when things ain't working as they should, but go figure why I had not thought about that sooner
- Last but not least, I forgot I already had AREF-OR in my utilities file, for safely accessing array values even with out-of-bounds indices; so, for part 1 I ended up sprinkling a bunch of OR / IGNORE-ERRORS forms all over the place, while for part 2, I wound up re-implementing AREF-OR as SAREF...

Still got my 2 stars home anyway, and _that_ alone is a reason to celebrate!

PS. As it turns out, part 2 could also be solved using disjoint sets / union find:

- Each location starts as a disjoint-set / basin of its own
- For each location, for each of its neighbors, if the neighbor height is not 9 then we join the two sets / basins
- For each set, we find the root element and start counting how many locations roll up to the same set / basin

And these counters, one per basin, actually represent the list of sizes we were looking for:
(defun basins (heights &aux
                       (basins (make-hash-table :test 'equal))
                       (sizes (make-hash-table :test 'eq)))
  (loop for p being the hash-keys of heights using (hash-value h)
        unless (= h 9) do (setf (gethash p basins) (make-dset h)))
  (loop for p being the hash-keys of basins using (hash-value ds) do
        (loop for np in (neighbors heights p)
              for nds = (gethash np basins)
              when nds do (dset-union ds nds)))
  (loop for ds being the hash-values of basins do (incf (gethash (dset:dset-find ds) sizes 0)))
  (hash-table-values sizes))
And just to be 100% sure, let's confirm we did not break anything:
> (time (test-run))
TEST-2021/09..
Success: 1 test, 2 checks.
Evaluation took:
  0.021 seconds of real time
  0.020646 seconds of total run time (0.017324 user, 0.003322 system)
  100.00% CPU
  49,747,608 processor cycles
  6,147,472 bytes consed
We didn't!

2021-12-08 (permalink)

Advent of Code: [2021/08](https://adventofcode.com/2021/day/8)

We managed to escape from the whale, but something is not quite working as it should:
As your submarine slowly makes its way through the cave system, you notice that the four-digit seven-segment displays in your submarine are malfunctioning; they must have been damaged during the escape. You'll be in a lot of trouble without them, so you'd better figure out what's wrong.
Each digit of a seven-segment display is rendered by turning on or off any of seven segments named a through g:
  0:      1:      2:      3:      4:
 aaaa    ....    aaaa    aaaa    ....
b    c  .    c  .    c  .    c  b    c
b    c  .    c  .    c  .    c  b    c
 ....    ....    dddd    dddd    dddd
e    f  .    f  e    .  .    f  .    f
e    f  .    f  e    .  .    f  .    f
 gggg    ....    gggg    gggg    ....

  5:      6:      7:      8:      9:
 aaaa    aaaa    aaaa    aaaa    aaaa
b    .  b    .  .    c  b    c  b    c
b    .  b    .  .    c  b    c  b    c
 dddd    dddd    ....    dddd    dddd
.    f  e    f  .    f  e    f  .    f
.    f  e    f  .    f  e    f  .    f
 gggg    gggg    ....    gggg    gggg
So, to render a `1`, only segments `c` and `f` would be turned on; the rest would be off. To render a `7`, only segments `a`, `c`, and `f` would be turned on.

Continues:
The problem is that the signals which control the segments have been mixed up on each display. The submarine is still trying to display numbers by producing output on signal wires a through g, but those wires are connected to segments randomly. Worse, the wire/segment connections are mixed up separately for each four-digit display! (All of the digits within a display use the same connections, though.)
>
So, you might know that only signal wires `b` and `g` are turned on, but that doesn't mean segments `b` and `g` are turned on: the only digit that uses two segments is `1`, so it must mean segments `c` and `f` are meant to be on. With just that information, you still can't tell which wire (`b`/`g`) goes to which segment (`c`/`f`). For that, you'll need to collect more information.
>
For each display, you watch the changing signals for a while, make a note of all ten unique signal patterns you see, and then write down a single four digit output value (your puzzle input). Using the signal patterns, you should be able to work out which pattern corresponds to which digit.
For example, here is what you might see in a single entry in your notes:
acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf
Each entry consists of ten unique signal patterns, a `|` delimiter, and finally the four digit output value. Within an entry, the same wire/segment connections are used (but you don't know what the connections actually are). The unique signal patterns correspond to the ten different ways the submarine tries to render a digit using the current wire/segment connections. Because `7` is the only digit that uses three segments, `dab` in the above example means that to render a `7`, signal lines `d`, `a`, and `b` are on. Because `4` is the only digit that uses four segments, `eafb` means that to render a `4`, signal lines `e`, `a`, `f`, and `b` are on.
_OK, I kind of get it, but what exactly are we supposed to do?_
In the output values, how many times do digits `1`, `4`, `7`, or `8` appear?
The following algorithm should get the job done:

- For each entry
- For each output value
- If its length is either `2`, `4`, `3`, or `7` (i.e. the number of segments on, for the digits: `1`, `4`, `7`, and `8` respectively), then it means the entry is trying to render one of those digits, so we add 1

However, let's parse our entries first:
(defun parse-entries (data)
  (loop for string in data
        for parts = (cl-ppcre:all-matches-as-strings "[a-z]+" string)
        collect (cons (subseq parts 0 10) (subseq parts 10))))
(defun inputs (entry) (car entry))
(defun outputs (entry) (cdr entry))
Now, the algorithm to solve part 1:
(defun part1 (entries)
  (loop for e in entries sum
        (loop for d in (outputs e) count (member (length d) '(2 4 3 7)))))
An easy part 1 is usually followed by a somewhat complicated part 2, and today is no exception:
Through a little deduction, you should now be able to determine the remaining digits.
So yeah, somehow we need to figure out which wire is connected to which segment, decode each output value, and add them up together:
For each entry, determine all of the wire/segment connections and decode the four-digit output values. What do you get if you add up all of the output values?
OK, before we actually figure out a way to find this wire / segment mapping, let's pretend we had it, and see what we would do with it.

Given the following wiring:
 dddd
e    a
e    a
 ffff
g    b
g    b
 cccc
Our mapping would look something like: `(#\d #\e #\a #\f #\g #\b #\c)`.

Now, if we _had_ this, we would go on and _decode_ the output signals:
(defun decode (mapping signals &aux (rez 0))
  (dolist (s signals)
    (let ((d (signal->digit mapping s)))
      (setf rez (+ (* rez 10) d))))
  rez)
Before looking at SIGNAL->DIGIT though, we are going to have to define a table mapping from digits to the list of segments which are on when the given digit is rendered; and instead of coding the list of segments as a list, we are actually going to use a bitmask instead (e.g. segment `a` -> `1`; segment `b` -> `2`; segment `c` -> `4`):
(defparameter *digits->segments* '((0 . #b1110111)
                                   (1 . #b0100100)
                                   (2 . #b1011101)
                                   (3 . #b1101101)
                                   (4 . #b0101110)
                                   (5 . #b1101011)
                                   (6 . #b1111011)
                                   (7 . #b0100101)
                                   (8 . #b1111111)
                                   (9 . #b1101111)))
And with this, to convert a signal to a digit all we have to do is:

- For each character of the signal -- DOVECTOR
- Find the segment it's connected to -- POSITION
- Set the _right_ bit in the bitmask -- DPB
- Look up the bitmask in the table above -- RASSOC
(defun signal->digit (mapping s &aux (segments-mask 0))
  (dovector (ch s)
    (let ((i (position ch mapping)))
      (setf segments-mask (dpb 1 (byte 1 i) segments-mask))))
  (car (rassoc segments-mask *digits->segments*)))


> (decode '(#\d #\e #\a #\f #\g #\b #\c) '("cdfeb" "fcadb" "cdfeb" "cdbaf"))
5353
Now, let's take another look at what exactly we are asked to do for part 2:
For each entry, determine all of the wire/segment connections and decode the four-digit output values. What do you get if you add up all of the output values?
If we gloss over the "determine all of the wire/segment connections" part, and consider the fact that we already know how to decode a message (once we have the mapping), it should be pretty easy to wire things together and find the answer for part 2:
(defun part2 (entries)
  (loop for e in entries for m = (find-mapping (inputs e))
        sum (decode m (outputs e))))
Now we have to figure out what FIND-MAPPING will actually do.

Well, even though the following algorithm produces the right answer (for my input at least), I honestly think I have just been super lucky, because the more I think about this, the more I question _why_ it would work.  But it does seem to work, so...

Anyway, we begin by saying that each segment could be linked to any wire:
0 -> abcdefg
1 -> abcdefg
2 -> abcdefg
3 -> abcdefg
4 -> abcdefg
5 -> abcdefg
6 -> abcdefg
From here, we process each signal pattern, from the _easy_ ones to the most difficult ones, and try to _refine_ our mapping until either of the following conditions happens:

- Each segment maps to a _different_ wire -- in which case we found our mapping!
- We cannot map a segment to a wire -- in which case we will have to backtrack!

Given a signal, first we figure out based on its length the different digits it could refer to; for example:

- `ab` uniquely refers to the digit: `1`
- `abc` uniquely refers to the digit: `7`
- `abcd` uniquely refers to the digit: `4`
- `abcde` could refer to any of the following digits: `2`, `3`, `5`
- `abcdef` could refer to any of the following digits: `0`, `6`, `9`
- `abcdefg` uniquely refers to the digit: `8`

Then, for each of these options (e.g. a single option in case of `ab`, three different options for `abcde`), we see which segments the signal uses, and refine our mapping as follows:

- If a segment is used by the current signal, we know it can only be linked to one of the wires used by the signal; we can then safely drop every other option for that segment (i.e. we INTERSECT the segment's current list of options with the signal's wires), as choosing any of them would break the constraint defined by the current signal
- If a segment is not used by the current signal instead, we know it will never be linked to any of the wires used by the signal; we can then safely remove all the signal's wires from the segment's list of options (i.e. we take the SET-DIFFERENCE of the two), as again, choosing any of them would break the constraint defined by the current signal

We then continue to the next signal until either we find a valid mapping, or we hit a dead end and have to step back.

Again:

- If each segment is linked to a single wire, we found our mapping
- If a segment, for whatever reason, winds up not being linked to any single wire, then we hit a dead end
- If we run out of signals (and none of the above holds true), we hit another dead end
- Otherwise, for each digit that the signal could render -- POSSIBLE-DIGITS
- And for each segment of this digit
- We update the list of possible options as described above (i.e. INTERSECTION for the segments which are on, and SET-DIFFERENCE for the remaining ones)
(defun find-mapping (signals)
  (labels ((recur (curr remaining)
             (cond ((loop for x in curr always (= (length x) 1))
                    (return-from find-mapping (mapcar #'car curr)))
                   ((loop for x in curr thereis (zerop (length x))) nil)
                   ((null remaining) nil)
                   (t (let ((s (first remaining)))
                        (dolist (d (possible-digits s))
                          (let ((segs (cdr (assoc d *digits->segments*))))
                            (recur
                              (loop for c in curr for i below 7
                                    collect (if (= (ldb (byte 1 i) segs) 1)
                                              (intersection c (coerce s 'list))
                                              (set-difference c (coerce s 'list))))
                              (rest remaining)))))))))
    (recur
      (loop repeat 7 collect (coerce "abcdefg" 'list))
      (sort (copy-seq signals) #'possible-digits<))))
But what are the digits that a signal could possibly try to render?  As mentioned above, we answer this by looking at the length of the signal:
(defparameter *length->digits* '((2 1)
                                 (3 7)
                                 (4 4)
                                 (5 2 3 5)
                                 (6 0 6 9)
                                 (7 8)))
(defun possible-digits (s) (cdr (assoc (length s) *length->digits*)))
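For example, a 5-segment-long signal could render a `2`, a `3`, or a `5`:
> (possible-digits "abcde")
(2 3 5)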
Last, we said we wanted to process the _easy_ signals first; here is the predicate function to pass to SORT to make sure signals with the least number of _renderable_ digits are processed first:
(defun possible-digits< (s1 s2)
  (< (length (possible-digits s1)) (length (possible-digits s2))))
And that's it!

Final plumbing:
(define-solution (2021 08) (entries parse-entries)
  (values (part1 entries) (part2 entries)))

(define-test (2021 08) (330 1010472))
Test run:
> (time (test-run))
TEST-2021/08..
Success: 1 test, 2 checks.
Evaluation took:
  0.005 seconds of real time
  0.005222 seconds of total run time (0.004867 user, 0.000355 system)
  100.00% CPU
  13,017,280 processor cycles
  2,160,512 bytes consed
PS. As it turns out, part 2 could also be solved by brute-force:

- Generate all the possible mappings (it's only seven wires!)
- For each entry, try all the mappings until one successfully decodes the inputs
> (time
    (let ((all-mappings (all-permutations (coerce "abcdefg" 'list)))
          (sum 0))
      (dolist (e (parse-entries (uiop:read-file-lines "src/2021/day08.txt")))
        (dolist (m all-mappings)
          (when (every (partial-1 #'signal->digit m) (inputs e))
            (return (incf sum (decode m (outputs e)))))))
      sum))
Evaluation took:
  0.384 seconds of real time
  0.370098 seconds of total run time (0.362925 user, 0.007173 system)
  96.35% CPU
  884,284,986 processor cycles
  2,194,816 bytes consed

1010472
WTF?! It runs pretty damn fast too!  I guess next time I am going to try to remember that sometimes an inefficient no-brainer beats an efficient solution that requires a big brain.

2021-12-07 (permalink)

Advent of Code: [2021/07](https://adventofcode.com/2021/day/7)

- A whale has decided to eat our submarine
- Lucky us, a swarm of crabs (each in its own tiny submarine...LOL) is willing to help
- They want to drill a hole in the ocean floor so we can escape through it
- They need some help to figure out the exact position where to drill

In particular, given the horizontal position of each crab, we are asked to align them and minimize the amount of fuel each tiny submarine needs to get there.

Parsing the input is a no-brainer; it's a comma-separated list of numbers, so the usual CL-PPCRE:ALL-MATCHES-AS-STRINGS, and MAPCAR with PARSE-INTEGER will do the trick:
(defun parse-crabs (data)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" (first data))))
Now, to minimize the amount of fuel required to align all the crabs, I am just going to brute-force it:

- For each point between the min and max crab positions
- Calculate the amount of fuel to move all the crabs there
- Minimize this
(defun minimize-fuel (crabs)
  (loop for p from (find-min crabs) to (find-max crabs) minimize
        (loop for c in crabs sum (abs (- c p)))))
For part 2, the situation changes a little bit:
As it turns out, crab submarine engines don't burn fuel at a constant rate. Instead, each change of 1 step in horizontal position costs 1 more unit of fuel than the last: the first step costs 1, the second step costs 2, the third step costs 3, and so on.
Continues:
Determine the horizontal position that the crabs can align to using the least fuel possible so they can make you an escape route! How much fuel must they spend to align to that position?
If a crab needs to move 5 steps, it will consume: 1 + 2 + 3 + 4 + 5 units of fuel; and [in general](https://en.wikipedia.org/wiki/Arithmetic_progression#Derivation), to move `n` steps it will consume: `n * (n + 1) / 2` units of fuel:
(defun minimize-fuel (crabs)
  (loop for p from (find-min crabs) to (find-max crabs) minimize
        (loop for c in crabs sum (let ((n (abs (- c p))))
                                   (floor (* n (1+ n)) 2)))))
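Quick sanity check of the formula at the REPL -- moving 5 steps should cost 1 + 2 + 3 + 4 + 5, i.e. 15 units of fuel:
> (/ (* 5 (1+ 5)) 2)
15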
Little bit of refactoring plus final plumbing:
(defun minimize-fuel (crabs distance-fun)
  (loop for p from (find-min crabs) to (find-max crabs) minimize
        (loop for c in crabs sum (funcall distance-fun c p))))

(define-solution (2021 07) (crabs parse-crabs)
  (values (minimize-fuel crabs (lambda (p1 p2) (abs (- p1 p2))))
          (minimize-fuel crabs (lambda (p1 p2 &aux (n (abs (- p1 p2))))
                                 (floor (* n (1+ n)) 2)))))

(define-test (2021 07) (359648 100727924))
And that's it:
> (time (test-run))
TEST-2021/07..
Success: 1 test, 2 checks.
Evaluation took:
  0.093 seconds of real time
  0.083847 seconds of total run time (0.081319 user, 0.002528 system)
  90.32% CPU
  215,462,807 processor cycles
  96,608 bytes consed
PS. In the heat of the moment, i.e. at something past 6 in the morning, I could not really remember the formula for the sum of an arithmetic progression; so what did I do, instead? [I simulated it!](https://topaz.github.io/paste/#XQAAAQC3AQAAAAAAAAAUHMind5jo50a72d6QS46YciZNqOxJy4V2WTiur+Qo5cWe6U9r6SyOJAjWcaSIJPsSzysjospBanW0NWsGyUEVOXTNYugyPi3UXtxVSE8V+6fKq9IN0ZrRch0UCsc5gmMVc2vkQkX2qH6njUoOmlI6Xm2D3GZKxJ+qvujI4z6GepPH5hvGuXJR4XqsTpmHnO9yonbOUFSylej8rV9O3jJvc7kMJNFPhxuzkK3ck//MJx9hg/flYZrAMJ/oKg+iWjJBqyliY44AHrsEmGNm2Aa4A7pc1ncIOIHxtWLrvtPRFIT/YnyyAA==), adding 1 unit of fuel at each step...
> (time (loop for p from (find-min crabs) to (find-max crabs) minimize
              (loop for c in crabs sum
                    (loop repeat (- (max c p) (min c p))
                              for f from 1 sum f))))
Evaluation took:
  18.734 seconds of real time
  10.576204 seconds of total run time (10.243097 user, 0.333107 system)
  56.45% CPU
  43,088,527,879 processor cycles
  16 bytes consed

100727924

2021-12-06 (permalink)

Advent of Code: [2021/06](https://adventofcode.com/2021/day/6)

Today we are asked to study a massive school of lanternfish; in particular, we are tasked to see how big this group will be, after 80 days:
Although you know nothing about this specific species of lanternfish, you make some guesses about their attributes. Surely, each lanternfish creates a new lanternfish once every 7 days.
>
However, this process isn't necessarily synchronized between every lanternfish - one lanternfish might have 2 days left until it creates another lanternfish, while another might have 4. So, you can model each fish as a single number that represents the number of days until it creates a new lanternfish.
>
Furthermore, you reason, a new lanternfish would surely need slightly longer before it's capable of producing more lanternfish: two more days for its first cycle.
So, to recap, we are given a comma-separated list of numbers, each representing the age of a fish, and we are asked to measure the size of the school after 80 days, knowing that:

- A lanternfish will produce a new lanternfish every 7 days
- A newly spawned lanternfish will require 9 days before it can produce a new lanternfish

As usual, let's start from the input (it's a comma-separated list of numbers):
(defun parse-ages (data)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d" (first data))))
With this, we can evolve the school of lanternfish as follows:

- For each fish in the school
- If it's 0 it means it can reproduce; we add the lanternfish itself back into the school (with the timer set to 6), and then we add a new lanternfish with its timer set to 8 (i.e. 9 days, counting the current one): it's a new fish, so it will have to wait a little longer
- Otherwise, we simply decrease the reproduction counter
- We do this 80 times
(defun part1 (ages)
  (let ((an (copy-seq ages)))
    (dotimes (_ 80 (length an))
      (setf an (uiop:while-collecting (next)
                 (dolist (a an)
                   (cond ((= a 0) (next 6) (next 8))
                         (t (next (1- a))))))))))
For part 2 instead:
How many lanternfish would there be after 256 days?
Before you try it, replacing 80 with 256 is not going to work -- you will run out of memory real quick!

The big realization here is that lanternfish with the same age are all alike, and they are all going to reproduce at the same time; we don't care which one comes first; we only care about when in the future they are going to reproduce.

Let's parse the input again; this time though, let's output a 9-element array, representing the number of fish reproducing in 0 days, 1 day, 2 days...
(defun parse-generations (data &aux
                               (generations (make-array 9 :initial-element 0))
                               (ages (cl-ppcre:all-matches-as-strings "\\d" (first data))))
  (loop for v in ages do (incf (aref generations (parse-integer v))))
  generations)
Evolving the school of lanternfish for a given number of days (so we can use this for part 1 as well) would then become a matter of:

- Rotating array values to the left (i.e. aging + reproduction)
- Increasing the `6-days` slot by the number of fish that just reproduced (i.e. they will be able to reproduce again in 6 days)
(defun evolve (generations days &aux (gg (copy-seq generations)))
  (dotimes (_ days (reduce #'+ gg))
    (let ((n (aref gg 0)))
      (dotimes (i 8)
        (setf (aref gg i) (aref gg (1+ i))))
      (setf (aref gg 8) n)
      (incf (aref gg 6) n))))
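Sanity check, using the example from the puzzle: the initial ages 3,4,3,1,2 correspond to the generations array #(0 1 1 2 1 0 0 0 0), and the puzzle says the school should count 5934 lanternfish after 80 days, and 26984457539 after 256:
> (evolve #(0 1 1 2 1 0 0 0 0) 80)
5934

> (evolve #(0 1 1 2 1 0 0 0 0) 256)
26984457539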
Final plumbing:
(define-solution (2021 06) (generations parse-generations)
  (values (evolve generations 80) (evolve generations 256)))

(define-test (2021 06) (371379 1674303997472))
And that's it:
> (time (test-run))
TEST-2021/06..
Success: 1 test, 2 checks.
Evaluation took:
  0.000 seconds of real time
  0.000528 seconds of total run time (0.000354 user, 0.000174 system)
  100.00% CPU
  1,277,524 processor cycles
  32,768 bytes consed

2021-12-05 (permalink)

Advent of Code: [2021/05](https://adventofcode.com/2021/day/5)
You come across a field of hydrothermal vents on the ocean floor! These vents constantly produce large, opaque clouds, so it would be best to avoid them if possible.
Uh oh!  It continues:
They tend to form in lines; the submarine helpfully produces a list of nearby lines of vents (your puzzle input) for you to review. For example:
0,9 -> 5,9
8,0 -> 0,8
9,4 -> 3,4
2,2 -> 2,1
7,0 -> 7,4
6,4 -> 2,0
0,9 -> 2,9
3,4 -> 1,4
0,0 -> 8,8
5,5 -> 8,2
The task for the day:
Consider only horizontal and vertical lines. At how many points do at least two lines overlap?
Let's start by reading our input; we are going to parse each line into a 4-element list, i.e. `(x1 y1 x2 y2)`:
(defun parse-lines (data)
  (mapcar #'parse-line data))

(defun parse-line (string)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" string)))
With this, the solution to our problem would be a simple:
(defun part1 (lines)
  (count-overlaps (remove-if #'diagonalp lines)))
"The evil is in the details" they use to say, so let's take a look at the details.  DIAGONALP simply checks if the points describing a line have the same `x` or `y` value:
(defun diagonalp (line)
  (destructuring-bind (x1 y1 x2 y2) line
    (and (not (= x1 x2)) (not (= y1 y2)))))
Inside COUNT-OVERLAPS instead:

- For each line in our input
- We generate all its points
- We store these inside a HASH-TABLE, keeping track of duplicates (as that represents our answer)
- A naive `(loop for xx from x1 to x2 ...)` would fail if `x1` is greater than `x2`; similarly, a naive `(loop for yy from y1 to y2 ...)` would fail if `y1` is greater than `y2`
- <=>, a.k.a. the spaceship operator, returns `-1` if the first value is smaller than the second, `+1` if greater, and `0` otherwise; this represents how much to move in a given direction
- Once we know how much in each direction to move, we just have to figure out the number of times we have to move -- `n` in the snippet below
(defun count-overlaps (lines &optional (grid (make-hash-table :test 'equal)))
  (dolist (l lines)
    (destructuring-bind (x1 y1 x2 y2) l
      (let ((dx (<=> x2 x1))
            (dy (<=> y2 y1))
            (n (max (abs (- x2 x1)) (abs (- y2 y1)))))
        (loop repeat (1+ n)
              for xx = x1 then (+ xx dx)
              for yy = y1 then (+ yy dy) do
              (incf (gethash (list xx yy) grid 0))))))
  (loop for o being the hash-values of grid count (> o 1)))
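<=> is one of my utilities and is not shown here; a minimal sketch consistent with the description above (for integers, it's equivalent to `(signum (- n1 n2))`):
(defun <=> (n1 n2)
  ;; -1 if N1 is smaller than N2, +1 if greater, 0 otherwise
  (cond ((< n1 n2) -1)
        ((> n1 n2) 1)
        (t 0)))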
For part 2 instead, we are asked to consider diagonal lines as well; we could not have been any luckier, given that all we have to do is feed COUNT-OVERLAPS with the original list of lines (and not just the horizontal/vertical ones like we did for part 1):
(defun part2 (lines) (count-overlaps lines))
Final plumbing:
(define-solution (2021 05) (lines parse-lines)
  (values (part1 lines) (part2 lines)))

(define-test (2021 05) (6397 22335))
And that's it:
> (time (test-run))
TEST-2021/05..
Success: 1 test, 2 checks.
Evaluation took:
  0.155 seconds of real time
  0.131351 seconds of total run time (0.107549 user, 0.023802 system)
  84.52% CPU
  356,617,473 processor cycles
  44,400,176 bytes consed
PS. This is the code I originally used to solve this:

- Part 1: MIN / MAX to figure out where to start and where to stop, when using `(loop for v from ... to ...)` kind of loops
- Part 2: consed up all the ranges of points, and iterated over them
;; Part 1
(loop with grid = (make-hash-table :test 'equal)
      for line in input for (x1 y1 x2 y2) = line
      when (or (= x1 x2) (= y1 y2)) do
      (loop for xx from (min x1 x2) to (max x1 x2) do
            (loop for yy from (min y1 y2) to (max y1 y2) do (incf (gethash (list xx yy) grid 0))))
      finally (return (loop for v being the hash-values of grid count (> v 1))))

;; Part 2
(loop with grid = (make-hash-table :test 'equal)
      for line in input for (x1 y1 x2 y2) = line do
      (loop for xx in (range x1 x2)
            for yy in (range y1 y2) do
            (incf (gethash (list xx yy) grid 0)))
      finally (return (count-if (partial-1 #'> _ 1) (hash-table-values grid))))

(defun range (v1 v2)
  (cond ((= v1 v2) (ncycle (list v1)))  ; circular list: repeats V1 forever, so the other coordinate drives the iteration
        ((< v1 v2) (loop for v from v1 to v2 collect v))
        (t         (loop for v from v1 downto v2 collect v))))
PPS. I fucking bricked my laptop yesterday while upgrading to Big Sur, so I had to sort that issue first before even starting to think about today's problem -- what a bummer!
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
  5   07:28:10  27193      0   07:41:09  24111      0
  4   00:30:52   2716      0   00:41:34   2704      0
  3   00:12:42   4249      0   00:48:51   5251      0
  2   00:05:42   3791      0   00:09:24   3731      0
  1   00:02:45   1214      0   00:07:16   1412      0

2021-12-04 (permalink)

Advent of Code: [2021/04](https://adventofcode.com/2021/day/4)

Today, we are playing Bingo with a giant squid (yes, apparently it is on the submarine with us!):
Bingo is played on a set of boards each consisting of a 5x5 grid of numbers. Numbers are chosen at random, and the chosen number is marked on all boards on which it appears. (Numbers may not appear on all boards.) If all numbers in any row or any column of a board are marked, that board wins. (Diagonals don't count.)
This is the kind of input we are expected to process:
7,4,9,5,11,17,23,2,0,14,21,24,10,16,13,6,15,25,12,22,18,20,8,19,3,26,1

22 13 17 11  0
 8  2 23  4 24
21  9 14 16  7
 6 10  3 18  5
 1 12 20 15 19

 3 15  0  2 22
 9 18 13 17  5
19  8  7 25 23
20 11 10 24  4
14 21 16 12  6

14 21 17 24  4
10 16 15  9 19
18  8 23 26 20
22 11 13  6  5
 2  0 12  3  7
I am going to create a new structure, BINGO, with slots to keep track of the numbers still left to draw, and each board; for each board representation instead, I am going to use a 5x5 2D array:
(defstruct (bingo (:conc-name nil))
  to-draw
  boards)

(defun parse-bingo (data)
  (let ((to-draw (extract-integers (car data)))
        (boards (parse-boards (cdr data))))
    (make-bingo :to-draw to-draw :boards boards)))

(defun parse-boards (data)
  (when data
    (cons
      (make-array '(5 5)
                  :initial-contents
                  (let ((lines (subseq data 1 6)))
                    (uiop:while-collecting (rows)
                      (dolist (s lines)
                        (let ((row (extract-integers s)))
                          (rows row))))))
      (parse-boards (subseq data 6)))))

(defun extract-integers (s)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" s)))
Now, let's play some bingo!

- For each number to be drawn
- For each board
- Let's mark the number on the board
- If it turns out to be a winning board, let's return its score:
(defun play (game)
  (with-slots (to-draw boards) game
    (dolist (n to-draw)
      (dolist (b boards)
        (when (mark-number b n)
          (return-from play (* (score b) n)))))))
MARK-NUMBER would:

- For each number of the input board
- Check if equal to the number just drawn, and if so, mark it (i.e. remove it)
- Finally return T if the move made the board a winning one
(defun mark-number (board n)
  (loop for i below 5 do
        (loop for j below 5
              when (eql (aref board i j) n) do
              (setf (aref board i j) nil)
              (return-from mark-number (board-won-p board i j)))))

(defun board-won-p (board last-marked-i last-marked-j)
  (or (loop for i below 5 never (aref board i last-marked-j))
      (loop for j below 5 never (aref board last-marked-i j))))
Lastly, the score function, i.e. the sum of all the remaining numbers of a given board:
(defun score (board)
  (loop for i below 25 for v = (row-major-aref board i) when v sum v))
For part 2 instead, we are asked to keep on playing until there are no boards left.  All we have to do is update PLAY as follows:

- Collect the scores of each winning board, instead of simply returning it
- Stop processing winning boards
(defun play (game)
  (uiop:while-collecting (scores)
    (with-slots (to-draw boards) game
      (dolist (n to-draw)
        (dolist (b boards)
          (when (mark-number b n)
            (scores (* (score b) n))
            (setf boards (remove b boards))))))))
Final plumbing:
(define-solution (2021 04) (game parse-bingo)
  (let ((scores (play game)))
    (values (first scores) (car (last scores)))))

(define-test (2021 04) (82440 20774))
And that's it:
> (time (test-run))
TEST-2021/04..
Success: 1 test, 2 checks.
Evaluation took:
  0.006 seconds of real time
  0.005100 seconds of total run time (0.004883 user, 0.000217 system)
  83.33% CPU
  15,920,633 processor cycles
  851,856 bytes consed
PS. If you are curious, [this](https://topaz.github.io/paste/#XQAAAQCcBwAAAAAAAAAUGQimgwpCHOTIZ4cAdwSqqsL7YgkUjovizS3Rs1wUzEvkHpHd8o1Y7gEL0qcj1QOhqpL5g1eXcPnMqtKzYd+JHJvk5TCuqeIZwbOUxyOKy+tnAnfUMPqoLtTHNC71/R6Sv4Ls2YYxCXO/8YKt36evcoX5GU0U8VRI0OwWzBYML+rGFdtD2Wgh8+vxKgohgOBuhIjO89Cyj0VFzH9aSXZkIiuK42nn0rL9Oz6eaYbjUjjV7VU1AqKc5EV43BJVefv7+4HYe3lCBRu+BqyO4PcGrUOYVR4npFsgz7gDvTlOgryFzKpD2E0Vlf6A+k9bRwheXOXGZMgS38+zHzWN8biiDw6u9l6gdfonmBkGDdTyTCB4i6+yJyZrVPhM0FBsHMPe/+D5wPudKuZ13Az6U3B+NyZi5i1p14Uob+U9hA15Yprs3LLfyvdac8MfNXf1QKNoSOEOmZ+EjxBkpHuPWDWsgbDDeJZ5Xr/WPiGZvWrJrBj5am+ohpzDckuRSWyCd4CZGQobh8xSeadJNb9kQ5zuqJRDQiJRxy5UTeKWJTONi3R1EXI1wudHfDOAHlxpri78iWY0XWbCU8HxArT8vLP8RM6A+IdlaMWV8V6YdA5USqjI4v5l/UVc3+k1h0UUcikLRf/C8xetTSJTIfUXQaY4cTHsxnEiB7Gt0GfONV+gkYGPpy+k/hzA+v5mIVUNxAR6YI8WoHuELRuJ/1sNo4oHplkOBrWrHcrLjnKlzJH5YlnNvg0P65FtPnFRkY7OsyHmYBf4KxilEtZzhKxJAkxKegEbY6fcTpXWVTsHISzNLh8sv8OikMGNXYdsklMi9p7KRSXjD5kr01EdP0L0/VuDRDZfdvidseczEawqgtz1EOrCpm1T6VLBu+OiblOolwhUocYqZdndLv9j1pU5LTIvuSa/s+ErqCwh6K36jUEP1E/ZAIKE6lxXXo/+f8ce) is the non-cleaned-up version of the code that I used to take my 2 stars home (see how I went overboard and used conditions to notify the runner about winning boards).

2021-12-03 (permalink)

Advent of Code: [2021/03](https://adventofcode.com/2021/day/3)

The submarine has been making some odd creaking noises today, and after we ask it to produce a diagnostic report (our input), we are tasked to _decode_ it and see if everything is all right (or not!):
The diagnostic report (your puzzle input) consists of a list of binary numbers which, when decoded properly, can tell you many useful things about the conditions of the submarine. The first parameter to check is the power consumption.
>
You need to use the binary numbers in the diagnostic report to generate two new binary numbers (called the gamma rate and the epsilon rate). The power consumption can then be found by multiplying the gamma rate by the epsilon rate.
>
Each bit in the gamma rate can be determined by finding the most common bit in the corresponding position of all numbers in the diagnostic report.
>
[...]
>
The epsilon rate is calculated in a similar way; rather than use the most common bit, the least common bit from each position is used.
We are not going to do anything crazy with our input; in fact, we are going to keep it as is (i.e. as a list of strings), and jump right into the core of our solution:

- We calculate the gamma rate...as instructed
- We calculate the epsilon rate starting from the gamma rate (i.e. it's just its complement)
- Parse the two strings into numbers, and multiply them
(defun part1 (strings)
  (let* ((gamma (gamma-rate strings))
         (epsilon (map 'string #'ch-complement gamma)))
    (* (parse-binary gamma) (parse-binary epsilon))))

(defun ch-complement (ch) (if (eq ch #\1) #\0 #\1))

(defun parse-binary (s) (parse-integer s :radix 2))
Note: we could have returned a number and not a string from GAMMA-RATE; but then, to calculate the epsilon rate, we would have had to mess with bits "a bit more" to account for any leading 0 (i.e. 0001, complemented, should be 1110, and not simply 0).
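For the record, here is what that extra bit messing could look like, had GAMMA-RATE returned a number instead (a sketch; EPSILON-RATE is a made-up name, and `n` is the number of bits in each input string):
(defun epsilon-rate (gamma n)
  ;; XOR-ing with an n-bit mask of 1s flips exactly the low n bits,
  ;; preserving the leading zeros a plain LOGNOT would mishandle
  (logxor gamma (1- (ash 1 n))))

> (epsilon-rate #b0001 4)
14 ; i.e. #b1110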

Inside GAMMA-RATE we collect the most common char at each position, and then coerce the result to a string:
(defun gamma-rate (strings &aux (n (length (first strings))))
  (coerce
    (uiop:while-collecting (bit!)
      (dotimes (i n)
        (bit! (most-common-ch-at strings i))))
    'string))

(defun most-common-ch-at (strings i)
  (loop for s in strings for ch = (aref s i)
        count (eq ch #\1) into ones
        finally (return (if (>= ones (/ (length strings) 2)) #\1 #\0))))
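Note how the `>=` in there makes ties resolve in favor of `#\1`:
> (most-common-ch-at '("10" "01" "00") 0)
#\0

> (most-common-ch-at '("10" "01") 0) ; one 1 and one 0: a tie
#\1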
Things are going to get messy for part 2:
Next, you should verify the life support rating, which can be determined by multiplying the oxygen generator rating by the CO2 scrubber rating.
>
Both the oxygen generator rating and the CO2 scrubber rating are values that can be found in your diagnostic report - finding them is the tricky part. Both values are located using a similar process that involves filtering out values until only one remains. Before searching for either rating value, start with the full list of binary numbers from your diagnostic report and consider just the first bit of those numbers. Then:
>
- Keep only numbers selected by the bit criteria for the type of rating value for which you are searching. Discard numbers which do not match the bit criteria.
- If you only have one number left, stop; this is the rating value for which you are searching.
- Otherwise, repeat the process, considering the next bit to the right.
>
The bit criteria depends on which type of rating value you want to find:
>
- To find oxygen generator rating, determine the most common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 1 in the position being considered.
- To find CO2 scrubber rating, determine the least common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 0 in the position being considered.
Thaaaaaaaaaat is a lot, with lots of details too; hopefully we are going to figure this out -- well, if you are reading this, it means I already "figured something out" ;-)

Let's start from the top:
(defun part2 (strings)
  (* (oxygen-generator-rating strings) (co2-scrubber-rating strings)))
Now, to calculate the oxygen generator rating, let's proceed as follows:

- Until there is only one string left
- Figure out what the most common char is -- at a given position
- Remove all the strings having a different char at that position
(defun oxygen-generator-rating (strings &optional (n (length (first strings))))
  (loop until (= (length strings) 1) for i below n
        for ch = (most-common-ch-at strings i) do
        (setf strings (remove ch strings :key (partial-1 #'aref _ i) :test-not 'eq)))
  (parse-binary (first strings)))
To calculate the CO2 scrubber rating instead, we just have to consider least common bits; this is done by wrapping MOST-COMMON-CH-AT inside a call to CH-COMPLEMENT (the rest is the same):
(defun co2-scrubber-rating (strings &optional (n (length (first strings))))
  (loop until (= (length strings) 1) for i below n
        for ch = (ch-complement (most-common-ch-at strings i)) do
        (setf strings (remove ch strings :key (partial-1 #'aref _ i) :test-not 'eq)))
  (parse-binary (first strings)))
Final plumbing:
(define-solution (2021 03) (data)
  (values (part1 data) (part2 data)))

(define-test (2021 03) (3985686 2555739))
And that's it:
> (time (test-run))
TEST-2021/03..
Success: 1 test, 2 checks.
Evaluation took:
  0.001 seconds of real time
  0.001470 seconds of total run time (0.000941 user, 0.000529 system)
  100.00% CPU
  3,440,150 processor cycles
  131,024 bytes consed
PS. My original solution was not as clean as this one: it was super messy, I coded it entirely in the REPL, i.e. there is not a single function in my solution (I am not proud of this, I am just stating a fact), and even though messy, it still took me forever to come up with; lastly, cherry on top, to actually get the answer for part 1, I manually converted an ARRAY of Ts and NILs into 1s and 0s!  Anyways, if you really, really want to take a look...[have at it](https://topaz.github.io/paste/#XQAAAQBxBwAAAAAAAAAUHMineAofMgXCOJWpWvfWz/ftuK+H3bIoU+EY9K2IiV45Fs/OqcTa6MWLBv78pFRraVXpcVqiedY9084OCmjcb/QJchZ9UEBYH48m0joS9Aw/nxLo/p6sl85X0fHkzN7M1g4UHvmCT6m5+RQoVi5WzXyVLb8BMgsVfyznajX7BnSwi1DY1pm/mdUrqcbzLt9S1/EJXnn20zKLNFUc8D7Z2GvFp03xL3z5WfIH+9bsPqUQr/7K2MDl21y6zGf0PPLzklkVeuxlkw7W2uXWL4clSS1jDWeSF1v3eiZ700d6El2Z3GzO3VmHKX24dPBwOrqJABV1tUILx7gUs/vmlmOHsSSBLtDMj1bmBC8bR12iX9oiz72csFMK/a5koBQ/mm1HpUShSuhVnevc1bGd0+HJfTqsV5LBRVVguP8fJcC4J+hiPcmv29AP8lM/4KUVH6UobmQLEyNR479SlukyI7cKuFPuO91W6TqvYHCKp810/eiS+28dL6RaktTLtEpYJRjLDSUMt5SyuYqAVGCKPHryDzeqHLvcwsngsZ7wKRIA3X/ZB9iJej0JlNMCYHxLV/FL0hfVWJoAJErxH+XQqATKi5ibPlo+ppSkj+kbNQejQq65jKdk2nkFiXMo5FjY4G/7SJj49Z9xywnPUqSy/NVX3OMUiJ43r8JSF/ypZvk=)

2021-12-02 (permalink)

Advent of Code: [2021/02](https://adventofcode.com/2021/day/2)
It seems like the submarine can take a series of commands like `forward 1`, `down 2`, or `up 3`:
>
- `forward X` increases the horizontal position by X units.
- `down X` increases the depth by X units.
- `up X` decreases the depth by X units.
>
Note that since you're on a submarine, down and up affect your depth, and so they have the opposite result of what you might expect.
>
[...]
>
Calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth?
OK, let's start by parsing our input, i.e. the list of instructions, into `<direction,unit>` pairs:

- CL-PPCRE:REGISTER-GROUPS-BIND to parse our instructions using regexps (we could have also used SPLIT-SEQUENCE)
- AS-KEYWORD to convert a string into a _keyword_, i.e. `:forward`, `:up`, and `:down`
(defun parse-instructions (data) (mapcar #'parse-instruction data))

(defun parse-instruction (string)
  (cl-ppcre:register-groups-bind ((#'as-keyword dir) (#'parse-integer delta))
      ("(\\w+) (\\d+)" string)
    (cons dir delta)))
(defun dir (i) (car i))
(defun delta (i) (cdr i))
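For example (assuming AS-KEYWORD, another utility, upcases its argument and interns it into the KEYWORD package):
> (parse-instruction "forward 5")
(:FORWARD . 5)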
Now to get the answer for part one:

- Iterate over the instructions
- If `:forward`, we move our horizontal position
- If `:up`, we reduce our depth
- If `:down`, we increase our depth
- Last, we return the product of our horizontal position and depth
(defun part1 (instructions &aux (horiz 0) (depth 0))
  (dolist (i instructions (* horiz depth))
    (ecase (dir i)
      (:forward (incf horiz (delta i)))
      (:up (decf depth (delta i)))
      (:down (incf depth (delta i))))))
For part 2 instead, the `:up` and `:down` instructions seem to assume a different meaning:
In addition to horizontal position and depth, you'll also need to track a third value, aim, which also starts at 0. The commands also mean something entirely different than you first thought:
>
- `down X` increases your aim by X units.
- `up X` decreases your aim by X units.
- `forward X` does two things:
>
- It increases your horizontal position by X units.
- It increases your depth by your aim multiplied by X.
>
Again note that since you're on a submarine, down and up do the opposite of what you might expect: "down" means aiming in the positive direction.
>
[...]
>
Using this new interpretation of the commands, calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth?
Nothing fancy, let's just implement this:

- Iterate over the instructions
- If `:forward`, we move our horizontal position and descend / ascend based on the current aim
- If `:up`, we increase our aim
- If `:down`, we decrease our aim
- Last, we return the product of our horizontal position and depth (note: these aim updates, and the depth update, have their signs flipped relative to the puzzle text; the two flips cancel out, so the final product is unchanged)
(defun part2 (instructions &aux (horiz 0) (depth 0) (aim 0))
  (dolist (i instructions (* horiz depth))
    (ecase (dir i)
      (:forward (incf horiz (delta i)) (decf depth (* (delta i) aim)))
      (:up (incf aim (delta i)))
      (:down (decf aim (delta i))))))
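For comparison, a more literal transcription of the puzzle text (a sketch; it produces the same `horiz * depth` product as PART2 above, since that version flips the sign of both the aim updates and the depth update, and the two flips cancel):
(defun part2-literal (instructions &aux (horiz 0) (depth 0) (aim 0))
  (dolist (i instructions (* horiz depth))
    (ecase (dir i)
      (:forward (incf horiz (delta i)) (incf depth (* (delta i) aim)))
      (:down (incf aim (delta i)))
      (:up (decf aim (delta i))))))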
Final plumbing:
(define-solution (2021 02) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

(define-test (2021 02) (1648020 1759818555))
And that's it:
> (time (test-run))
TEST-2021/02..
Success: 1 test, 2 checks.
Evaluation took:
  0.002 seconds of real time
  0.002600 seconds of total run time (0.001565 user, 0.001035 system)
  150.00% CPU
  6,082,778 processor cycles
  327,536 bytes consed
PS. My [original solution](https://topaz.github.io/paste/#XQAAAQD2AgAAAAAAAAAUGQimgx+p6PZhRh7uIO5BVe7pcMkiG6eOV22j2MYKuFs91c+ybCSTgFinLxYMDBog9+B+g/uA1iuHBXT5kPl5FbfHsjTt/UH6LeDhwtxS++LtE/cfL47hqbZilPQVUp7nWL834NkL1xo124ObhgTHG4Vaq0sDvXEWYVT61bdI9njY95YkUyYD8jeF6mdT+45tB+epQP+OgkkUyyWgSGXG3mXlBPAuGtfQ2d6F6X56fu266chYGK/yaubPY1GuajRiidCeYijfEAPoeMsgyIZcp6U4hgNM0D1M6vPzyThdMrcJtAwVid8BqQUE8ebzylpWag4uZPUQgHKdoZTgteopbArgU34gzK34OzvfXyvwL063kO+uz3rYxgePqqnFxROOQdSe0tseMzpTFeHv+lLiE+efMBoId4Ao8BsmGov79aT/8r1yLg==) parsed instructions into COMPLEX numbers (e.g. `down 4` into `#C(0 -4)`); this way, for part 1 all I had to do was summing all the deltas together; for part 2 instead, I had to check if the delta was horizontal or vertical, and adjust the aim accordingly.  I figured a more readable solution would be better, hence the rewrite.

2021-12-01 (permalink)

Advent of Code: [2021/01](https://adventofcode.com/2021/day/1)

- An elf tripped, and accidentally sent the sleigh keys flying into the ocean
- All of a sudden we found ourselves inside a submarine which has an antenna that hopefully we can use to track the keys
- The submarine performs a sonar sweep of the nearby sea floor (our input)
For example, suppose you had the following report:
>
199
200
208
210
200
207
240
269
260
263
>
This report indicates that, scanning outward from the submarine, the sonar sweep found depths of 199, 200, 208, 210, and so on.
We already have a utility function to parse a list of integers, PARSE-INTEGERS, so there isn't much we have to do here, except for calling it:
(parse-integers (uiop:read-file-lines "src/2021/day01.txt"))
Now, here is what we are tasked to do for part 1:
The first order of business is to figure out how quickly the depth increases, just so you know what you're dealing with - you never know if the keys will get carried into deeper water by an ocean current or a fish or something.
>
To do this, count **the number of times a depth measurement increases** from the previous measurement. (There is no measurement before the first measurement.) [...]
Nothing crazy; we simply do as told:
(defun part1 (numbers)
  (loop for (a b) on numbers
        when b count (< a b)))
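Running this against the example report from the puzzle (which contains 7 measurement increases):
> (part1 '(199 200 208 210 200 207 240 269 260 263))
7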
For part 2 instead:
Instead, consider sums of a three-measurement sliding window. [...]
Here we can re-use PART1 completely, except that we need to pass to it a different list of numbers, each representing the sum of a three-measurement sliding window; again, LOOP/ON makes for a very readable solution:
(defun part2 (numbers)
  (part1
    (loop for (a b c) on numbers
          when c collect (+ a b c))))
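Worth noting: since consecutive windows share two measurements (`a + b + c` vs `b + c + d`), comparing window sums reduces to comparing elements three positions apart; a sketch of this shortcut (it returns 5 on the example report, just like the two-step version):
(defun part2-alt (numbers)
  ;; (< (+ a b c) (+ b c d)) simplifies to (< a d)
  (loop for (a b c d) on numbers
        when d count (< a d)))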
Final plumbing:
(define-solution (2021 01) (numbers parse-integers)
  (values (part1 numbers) (part2 numbers)))

(define-test (2021 01) (1557 1608))
And that's it:
> (time (test-run))
TEST-2021/01..
Success: 1 test, 2 checks.
Evaluation took:
  0.001 seconds of real time
  0.001252 seconds of total run time (0.000647 user, 0.000605 system)
  100.00% CPU
  2,873,667 processor cycles
  163,808 bytes consed

2021-11-26 (permalink)

Vlime: Make swank_repl a prompt buffer

(This was posted on [vlime/pull/55](https://github.com/vlime/vlime/pull/55))

Alright, since these changes got merged into @phmarek's fork, and since _that_ branch is the one I am using these days, I would like to provide some feedback about this; also, please note that I am running this with Vim, so expect some of the following feedback not to be relevant to Neovim users.

**Performance**
I have no numbers to back this up, but I noticed that forms generating some non-trivial output are more likely to cause Vim to freeze if the REPL is made a prompt buffer than when it's not; don't get me wrong, it's a bit sluggish anyway, with or without these changes, but more so with the prompt buffer than without.

I have a [project](https://github.com/iamFIREcracker/adventofcode) with quite a few packages, and every time I touch the main .asd file and QL:QUICKLOAD it, Vim becomes very sluggish.

**Stealing focus**
The currently focused window seems to lose focus every time a new line of output is added to the prompt buffer; this causes all sorts of issues, especially if you were in insert mode when the focus got stolen.

I was messing around with a Web application of mine, and had the client fire a HEAD request every second (a request which got logged by the Web server on the standard output, i.e. on the REPL buffer); that alone seemed to make editing in a different buffer impossible (e.g. missing characters, being unable to move around the buffer), to the point that I had to disable the polling on the client!

I am not exactly sure what's going on, and I would not be surprised if this wasn't somehow related to the performance issue above; however, the moment I disabled this PR's changes, it all became responsive again.

**Current package**
While Vim itself might be the cause of the two problems mentioned above, this one is most definitely a bug with the current implementation.  Previously, forms typed into the `vlime_input` buffer would be evaluated inside the package specified by the closest preceding IN-PACKAGE form; for example, imagine you had the following buffer open:
1: (in-package :cl-user)
2:
5: (in-package :foo)
6:
Opening up an input buffer (i.e. `<localleader>si`) from line 2, and typing `*package*` into it would cause `#<PACKAGE "COMMON-LISP-USER">` to be logged to the REPL buffer; opening up the input buffer from line number 6 instead, and typing in the same would output `#<PACKAGE "FOO">`.  Well, with this pull request, it would always output `#<PACKAGE "COMMON-LISP-USER">`.

**Only works while in insert mode (minor, opinionated)**
I am not sure whether this is Vim's fault or the plugin's implementation, but evaluation from the prompt buffer only seems to work while in insert mode, i.e. pressing Enter while in insert mode seems to be the only way to trigger the evaluation.  This unfortunately clashes with some of my settings / how I am used to working; I can work around this of course, and re-wire my muscle memory, but I wanted to flag this anyway.

I use [delimitMate](https://github.com/Raimondi/delimitMate) to keep my parens balanced (yes, it's a poor man's solution, but it gets the job done most of the time); this means that if I want to type in the form:
          1         2
012345678901234567890123
(format t "Hello world")
By the time I type the closing quote at position 22, the closing parenthesis would be there already, so I would just leave insert mode and expect the form to be evaluated; however, as soon as I leave insert mode, a new line is added to the prompt buffer and then I am forced to type it all again, the closing parenthesis included, and then press `Enter`.

**Empty prompt (opinionated)**
Maybe it's just me, but the first time I entered insert mode while in the prompt buffer I started wondering what that initial space was for.  I thought it was a bug, having something to do with Vlime wrongly calculating the indentation level; but then I looked at the code and realized that that was simply how the prompt buffer was configured to look.

I think we should either set this to an empty string, or maybe the classic `> ` or `* `, as that would make it super clear what's going on.

**Next steps**
I am OK with tinkering with this a bit more, but here is what I think we should do:

- Understand if the performance issue mentioned above, as well as the focus stealing one, are getting experienced by everybody, i.e. Vim and Neovim users, or if it's just one man's problem (me...sigh); this will help us understand if switching to a prompt buffer for the REPL is a sound idea or not
- If it is, maybe put everything behind a config flag, so people can enable / disable it, at least until it's a bit more stable (not everybody wants to deal with this)
- Work around some of the usability issues mentioned above, like setting the current package, or changing the prompt string, etc.

Let me know what you guys think about this.

M.

2021-11-25 (permalink)

Building a _local_ system with :TRIVIAL-BUILD

(This was posted on [ceramic/trivial-build/issues/2](https://github.com/ceramic/trivial-build/issues/2))

Currently, there are at least two different ways of dealing with locally defined systems, i.e. systems defined in the working directory:

1. Create a symlink inside ~/quicklisp/local-projects (or ~/common-lisp), pointing to the current directory
2. Add the current directory to ASDF:\*CENTRAL-REGISTRY\* (`(pushnew '*default-pathname-defaults* asdf:*central-registry*)`)

Now, with the first option, everything works as expected: you call TRIVIAL-BUILD:BUILD, it spawns a new CL image, it runs ASDF:LOAD-SYSTEM in it, and since the current directory is inside ~/quicklisp/local-projects (or ~/common-lisp), your system will be loaded just fine.

Unfortunately the current implementation does not seem to play nicely with the second option, i.e. updating ASDF:\*CENTRAL-REGISTRY\*; that's because the newly spawned CL image, the one in which ASDF:LOAD-SYSTEM is called, has ASDF:\*CENTRAL-REGISTRY\* in its default state, i.e. without our customizations.

Is there any interest in supporting this use case, or would you rather push people to use option 1 instead?  Because if there is, here is how I _patched_ LOAD-AND-BUILD-CODE (note the line right above the ASDF:LOAD-SYSTEM call):

- I serialize the current value of ASDF:*CENTRAL-REGISTRY*, in reverse order, so that when the entries are later re-added with PUSH, the original order is preserved; note: this happens on the host image, i.e. the one where :CERAMIC is run
- For each entry, I push it back into ASDF:*CENTRAL-REGISTRY* -- this instead is run on the final app image, the newly spawned one
(defun load-and-build-code (system-name entry-point binary-pathname)
  "Return a list of code strings to eval."
  (list
   "(setf *debugger-hook* #'(lambda (c h) (declare (ignore h)) (uiop:print-condition-backtrace c) (uiop:quit -1)))"
   (format nil "(dolist (entry '~W) (push entry asdf:*central-registry*))" (reverse asdf:*central-registry*))
   (format nil "(asdf:load-system :~A)" system-name)
   (format nil "(setf uiop:*image-entry-point* #'(lambda () ~A))"
           entry-point)
   (format nil "(uiop:dump-image ~S :executable t
  #+sb-core-compression :compression #+sb-core-compression t)"
           binary-pathname)))
Let me know if you see any problems with this and, most importantly, if you find this somewhat interesting, in which case I would not mind creating a pull request for it.

Ciao,
M.

2021-11-23 (permalink)

Advent of Code: [2018/10](https://adventofcode.com/2018/day/10)

In today's problem, we are given a sky map: a list of stars, characterized by their position and velocity:
position=< 9,  1> velocity=< 0,  2>
position=< 7,  0> velocity=<-1,  0>
position=< 3, -2> velocity=<-1,  1>
We are also told that, after some time, the stars will align to form a message; that message will be the answer to our part 1.

Let's start by parsing the sky map; we are going to define a custom structure, STAR, having one slot for the star position and one for its velocity; we are also going to store all these stars into a simple list:
(defstruct (star (:conc-name nil)) pos vel)

(defun parse-sky (data)
  (mapcar #'parse-star data))

(defun parse-star (string)
  (destructuring-bind (x y vx vy)
      (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "-?\\d+" string))
    (make-star :pos (list x y) :vel (list vx vy))))
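For example, feeding the first line of the sample sky map to PARSE-STAR should give us back a STAR with position `(9 1)` and velocity `(0 2)`:
> (parse-star "position=< 9,  1> velocity=< 0,  2>")
#S(STAR :POS (9 1) :VEL (0 2))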
Now, how can we figure out when the stars are aligned?  My idea is to take a look at the bounding box containing all the stars: since the velocity of each star does not change, I would expect the size of this box to become smaller and smaller, until it starts getting bigger again.  The state of the sky when the bounding box is the smallest is probably the one in which all the stars are aligned; so dumping it on the standard output should hopefully show us the message!

FIND-MESSAGE accepts the sky map as input, and keeps on moving stars (see SKY-NEXT) while minimizing the bounding box (I am using the area of the bounding box, SKY-AREA, as a proxy for its size); lastly, SKY-MESSAGE is responsible for analyzing the final sky map, printing a `#` wherever a star is:
(defun find-message (input)
  (loop :for sky-prev = input :then sky
        :for sky = input :then (sky-next sky)
        :when (> (sky-area sky) (sky-area sky-prev))
        :return (sky-message sky-prev)))

(defun sky-next (sky) (mapcar #'star-move sky))

(defun star-move (star)
  (with-slots (pos vel) star
    (make-star :pos (mapcar #'+ pos vel) :vel vel)))

(defun sky-area (sky)
  (destructuring-bind ((x-min x-max) (y-min y-max)) (sky-box sky)
    (* (- x-max x-min) (- y-max y-min))))

(defun sky-message (sky)
  (destructuring-bind ((x-min x-max) (y-min y-max)) (sky-box sky)
    (let ((output (loop :repeat (1+ (- y-max y-min))
                        :collect (make-string (1+ (- x-max x-min)) :initial-element #\Space))))
      (loop :for star :in sky :for (x y) = (pos star)
            :for row = (nth (- y y-min) output)
            :do (setf (aref row (- x x-min)) #\#))
      (with-output-to-string (s)
        (format s "~%~{~a~^~%~}" output)))))

(defun sky-box (sky)
  (loop :for star :in sky :for (x y) = (pos star)
        :minimize x :into x-min
        :maximize x :into x-max
        :minimize y :into y-min
        :maximize y :into y-max
        :finally (return (list (list x-min x-max) (list y-min y-max)))))
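As a quick sanity check of the bounding box logic, the three sample stars above span x in [3, 9] and y in [-2, 1], for an area of 6 * 3 = 18:
> (let ((sky (parse-sky (list "position=< 9,  1> velocity=< 0,  2>"
                              "position=< 7,  0> velocity=<-1,  0>"
                              "position=< 3, -2> velocity=<-1,  1>"))))
    (list (sky-box sky) (sky-area sky)))
(((3 9) (-2 1)) 18)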
Let's give this a go:
(setq input (parse-sky (uiop:read-file-lines "src/2018/day10.txt")))
> (find-message input)
"
#####   #    #  #####      ###   ####   #       #####   ######
#    #  #    #  #    #      #   #    #  #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#####   ######  #####       #   #       #       #####   #####
#    #  #    #  #           #   #  ###  #       #       #
#    #  #    #  #           #   #    #  #       #       #
#    #  #    #  #       #   #   #    #  #       #       #
#    #  #    #  #       #   #   #   ##  #       #       #
#####   #    #  #        ###     ### #  ######  #       ######"
Good!  Now part 2:
Impressed by your sub-hour communication capabilities, the Elves are curious: exactly how many seconds would they have needed to wait for that message to appear?
We should be able to answer this quite easily, by slightly tweaking FIND-MESSAGE:

- Keep track of how much time has elapsed
- And return it as a second value (the call to 1- is to offset the fact that the first iteration is a dummy one, given that `sky-prev` and `sky` are the same)
(defun find-message (input)
  (loop :for time :from 0
        :for sky-prev = input :then sky
        :for sky = input :then (sky-next sky)
        :when (> (sky-area sky) (sky-area sky-prev))
        :return (values (sky-message sky-prev) (1- time))))
Final plumbing:
(define-solution (2018 10) (sky parse-sky)
  (find-message sky))

(define-test (2018 10) ("
#####   #    #  #####      ###   ####   #       #####   ######
#    #  #    #  #    #      #   #    #  #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#####   ######  #####       #   #       #       #####   #####
#    #  #    #  #           #   #  ###  #       #       #
#    #  #    #  #           #   #    #  #       #       #
#    #  #    #  #       #   #   #    #  #       #       #
#    #  #    #  #       #   #   #   ##  #       #       #
#####   #    #  #        ###     ### #  ######  #       ######" 10831))
And that's it:
> (time (test-run))
TEST-2018/10..
Success: 1 test, 2 checks.
Evaluation took:
  0.440 seconds of real time
  0.416292 seconds of total run time (0.406274 user, 0.010018 system)
  [ Run times consist of 0.029 seconds GC time, and 0.388 seconds non-GC time. ]
  94.55% CPU
  1,012,629,537 processor cycles
  309,925,952 bytes consed

Advent of Code: [2018/7](https://adventofcode.com/2018/day/7)

Given a list of steps and requirements about which steps are blocked by which other step or steps, we are tasked to find the order in which the steps are going to be completed.

Example:
Step C must be finished before step A can begin.
Step C must be finished before step F can begin.
Step A must be finished before step B can begin.
Step A must be finished before step D can begin.
Step B must be finished before step E can begin.
Step D must be finished before step E can begin.
Step F must be finished before step E can begin.
Visually, these requirements look like this:
  -->A--->B--
 /    \      \
C      -->D----->E
 \           /
  ---->F-----
And the order in which the steps are going to be completed is: `CABDFE` (in the event of multiple steps being unblocked, we should select based on the lexicographical order).

We are going to parse our input into an association list, mapping from steps to their dependencies:
(defun parse-instructions (data)
  (let (rez)
    (loop :for string :in data
          :for (step . dep) = (parse-instruction string)
          :for step-entry = (assoc step rez)
          :for dep-entry = (assoc dep rez)
          :if step-entry :do (push dep (cdr step-entry))
          :else :do (push (cons step (list dep)) rez)
          :unless dep-entry :do (push (cons dep nil) rez))
    rez))

(defun parse-instruction (string)
  (cl-ppcre:register-groups-bind ((#'parse-char dep step))
      ("Step (\\w) must be finished before step (\\w) can begin." string)
    (cons step dep)))
Feeding the example above into PARSE-INSTRUCTIONS should result in the following alist:
((#\E #\F #\D #\B)
 (#\D #\A)
 (#\B #\A)
 (#\F #\C)
 (#\C)
 (#\A #\C))
Now, to find the order in which the different steps are completed, we are going to proceed as follows:

- We sort the instructions, placing entries with the fewest number of dependencies (ideally, 0) first, or lexicographically in case of ties (see SORT-INSTRUCTIONS, and INSTRUCTION<)
- We pop the first item -- this is the next step to complete
- We unblock any other instruction that might be depending on the step we just completed (see REMOVE-DEPENDENCY)
- We do this until we run out of instructions
(defun part1 (instructions &aux completed)
  (loop
    (if (null instructions)
      (return (format nil "~{~C~}" (reverse completed)))
      (destructuring-bind ((step) . rest) (sort-instructions instructions)
        (setf completed (cons step completed)
              instructions (remove-dependency rest step))))))

(defun sort-instructions (instructions)
  (sort (copy-seq instructions) #'instruction<))

(defun instruction< (i1 i2)
  (destructuring-bind (step1 . deps1) i1
    (destructuring-bind (step2 . deps2) i2
      (or (< (length deps1) (length deps2))
          (and (= (length deps1) (length deps2))
               (char< step1 step2))))))

(defun remove-dependency (instructions dependency)
  (loop :for (step . deps) :in instructions
        :collect (cons step (remove dependency deps))))
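Running this against the example instructions (i.e. the alist shown earlier) should indeed spell out `CABDFE`:
> (part1 '((#\E #\F #\D #\B) (#\D #\A) (#\B #\A) (#\F #\C) (#\C) (#\A #\C)))
"CABDFE"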
Things are going to be a tiny bit more complicated for part 2:

- Each step takes a certain amount of time to complete: 60 seconds, plus a certain amount, depending on the step name (1 second for `A`, 2 seconds for `B`, ...)
- We are not alone in executing these instructions -- there are 4 elves helping us out, which means multiple steps can run in parallel

We are going to solve this by simulation; to do this, we are going to define a new structure, WORKERS, with the following slots:

- `'BUSY-FOR`, an array as big as the number of workers we have, to keep track of how much time each worker needs to complete the assigned step
- `'BUSY-WITH`, another array, this time keeping track of the step each worker is currently busy with
(defstruct (workers (:conc-name nil)
                    (:constructor make-workers%))
  busy-for
  busy-with)

(defun make-workers (n)
  (make-workers% :busy-for (make-array n :initial-element 0)
                 :busy-with (make-array n :initial-element nil)))
With this, the skeleton of our solution is pretty simple: we initialize our WORKERS state, and while there are workers..._working_...we do another tick of our simulation; when done, we simply return the elapsed time:
(defparameter *workers-count* 5)

(defun part2 (instructions)
  (let ((workers (make-workers *workers-count*))
        (time 0))
    (loop
      (setf instructions (workers-tick workers instructions))
      (if (every #'null (busy-with workers))
        (return time)
        (incf time)))))
Let's now take a look at the meat of this, WORKERS-TICK:

- We let all the workers progress on their assigned step, and keep track of the completed ones (as well as the workers that finished them)
- We keep the workers busy while there are available workers and unblocked instructions
- Finally, we return the updated list of instructions -- i.e. the instructions we could not assign, either because we ran out of workers, or because active workers were still working on some of their dependencies
(defun workers-tick (workers instructions)
  (with-slots (busy-for busy-with) workers
    (flet ((try-release-workers ()
             (loop :for i :from 0
                   :for left :across busy-for
                   :for step :across busy-with
                   :if (> left 1) :do (decf (aref busy-for i))
                   :else :collect i :and :do
                   (setf instructions (remove-dependency instructions step)
                         (aref busy-for i) 0
                         (aref busy-with i) nil)))
           (try-assign-work (available)
             (loop :while (and available instructions) :do
                   (let ((i (first available)))
                     (setf instructions (sort-instructions instructions)
                           (aref busy-for i) 0
                           (aref busy-with i) nil)
                     (destructuring-bind ((step . deps) . rest) instructions
                       (if (> (length deps) 0)
                         (return)
                         (setf (aref busy-for i) (step-time step)
                               (aref busy-with i) step
                               available (rest available)
                               instructions rest)))))))
      (let ((available (try-release-workers)))
        (try-assign-work available)
        instructions))))

(defparameter *step-duration* 60)

(defun step-time (step)
  (+ *step-duration* (1+ (- (char-code step) (char-code #\A)))))
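With *STEP-DURATION* set to `60` (note: the worked example in the puzzle text uses `0` instead, and 2 workers rather than 5), step `A` takes 61 seconds and step `Z` takes 86:
> (step-time #\A)
61
> (step-time #\Z)
86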
Final plumbing:
(define-solution (2018 7) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

(define-test (2018 7) ("GRTAHKLQVYWXMUBCZPIJFEDNSO" 1115))
And that's it:
> (time (test-run))
TEST-2018/07..
Success: 1 test, 2 checks.
Evaluation took:
  0.005 seconds of real time
  0.005701 seconds of total run time (0.004362 user, 0.001339 system)
  120.00% CPU
  13,260,235 processor cycles
  2,686,944 bytes consed

2021-11-22 (permalink)

Advent of Code: [2018/12](https://adventofcode.com/2018/day/12)

Today we are given a bunch of pots, each containing (or not) plants; and some notes, describing how these pots change over time, based on the state of the adjacent pots:
The pots are numbered, with 0 in front of you. To the left, the pots are numbered -1, -2, -3, and so on; to the right, 1, 2, 3.... Your puzzle input contains a list of pots from 0 to the right and whether they do (#) or do not (.) currently contain a plant, the initial state. (No other pots currently contain plants.) For example, an initial state of #..##.... indicates that pots 0, 3, and 4 currently contain plants.
>
Your puzzle input also contains some notes you find on a nearby table: someone has been trying to figure out how these plants spread to nearby pots. Based on the notes, for each generation of plants, a given pot has or does not have a plant based on whether that pot (and the two pots on either side of it) had a plant in the last generation. These are written as LLCRR => N, where L are pots to the left, C is the current pot being considered, R are the pots to the right, and N is whether the current pot will have a plant in the next generation. For example:
>
- A note like ..#.. => . means that a pot that contains a plant but with no plants within two pots of it will not have a plant in it during the next generation.
- A note like ##.## => . means that an empty pot with two plants on each side of it will remain empty in the next generation.
- A note like .##.# => # means that a pot has a plant in a given generation if, in the previous generation, there were plants in that pot, the one immediately to the left, and the one two pots to the right, but not in the ones immediately to the right and two to the left.
Here is how our input is going to look:
initial state: #..#.#..##......###...###

...## => #
..#.. => #
.#... => #
.#.#. => #
.#.## => #
.##.. => #
.#### => #
#.#.# => #
#.### => #
##.#. => #
##.## => #
###.. => #
###.# => #
####. => #
Anyways, as usual, let's get this input parsed; we are going to parse this into a list of 2 elements: in the first one, we are going to store the initial state of the pots; in the second, we are going to store all the notes describing how pots change over time.  I am going to be using a HASH-SET for both the initial state of the pots, as well as the set of rules; for the initial state, I am going to simply keep track of the positions of the pots that contain a plant; for the list of notes instead, I am only going to be storing the ones causing a plant to grow, or to stay unchanged (i.e. I am going to skip all the notes with a `.` on the right-hand side of the note...more on this later):
(defun parse-input (data)
  (list (parse-pots (first data)) (parse-notes (cddr data))))

(defun parse-pots (string)
  (let ((pots (make-hset nil)))
    (loop :for i :from 0
          :for ch :across (subseq string 15)
          :when (eq ch #\#) :do (hset-add i pots))
    pots))

(defun parse-notes (data)
  (let ((notes (make-hset nil :test 'equal)))
    (loop :for string :in data
          :for from = (subseq string 0 5) :for to = (char string 9)
          :when (eq to #\#) :do (hset-add from notes))
    notes))

(defun pots (input) (first input))
(defun notes (input) (second input))
Now, for part 1, we are asked to evolve our array of pots over time, and see how things look after 20 generations:
After one generation, only seven plants remain. The one in pot 0 matched the rule looking for ..#.., the one in pot 4 matched the rule looking for .#.#., pot 9 matched .##.., and so on.
>
In this example, after 20 generations, the pots shown as # contain plants, the furthest left of which is pot -2, and the furthest right of which is pot 34. Adding up all the numbers of plant-containing pots after the 20th generation produces 325.
>
After 20 generations, what is the sum of the numbers of all pots which contain a plant?
As with most Game of Life problems, we are going to _evolve_ our state a fixed number of times (i.e. `iterations`), and then inspect the final state to come up with the challenge's answer:

- We always start anew inside POTS-NEXT, as if all the pots were empty; this is the reason why we don't care about the notes resulting in the central pot becoming empty, but only about the ones resulting in a pot with a plant in it
- As we keep track of plants using a HASH-SET, and not an array, we need to find the min / max positions first, and then iterate for all the values in between
- The `(- ... 4)` and `(+ ... 4)` are there to make sure we catch changes happening at the _edges_
- SURROUNDING-POTS returns the state of the pots around a certain given position (this mostly makes up for the fact that we are using a HASH-SET instead of an array)
- SUM-PLANT-POSITIONS simply sums the positions of the pots with a plant (i.e. the answer to our problem)
(defun part1 (input iterations)
  (destructuring-bind (pots notes) input
    (dotimes (i iterations)
      (setf pots (pots-next pots notes)))
    (sum-plant-positions pots)))

(defun pots-next (pots notes)
  (let ((next (make-hset nil))
        (min (find-min (hset-values pots)))
        (max (find-max (hset-values pots))))
    (loop :for i :from (- min 4) :to (+ max 4)
          :for key = (surrounding-pots pots i)
          :when (hset-contains-p key notes) :do (hset-add i next))
    next))

(defun surrounding-pots (pots i)
  (let ((string (make-string 5 :initial-element #\.)))
    (loop :for j :from (- i 2) :to (+ i 2)
          :for k :from 0
          :when (hset-contains-p j pots)
          :do (setf (char string k) #\#))
    string))

(defun sum-plant-positions (pots)
  (loop :for i :being :the :hash-keys :of pots :sum i))
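For example, with plants in pots 0, 3, and 4 (and assuming MAKE-HSET accepts an initial list of elements, as suggested by its usage above), the pots surrounding position 4 would read as `.##..`:
> (surrounding-pots (make-hset '(0 3 4)) 4)
".##.."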
Let's now take a look at part 2:
After fifty billion (50000000000) generations, what is the sum of the numbers of all pots which contain a plant?
What?!  There has to be a loop involved...

I fed this to my FLOYD loop detector utility function, but unfortunately that kept on spinning forever:
(let ((input (parse-input (uiop:read-file-lines "src/2018/day12.txt"))))
  (flet ((next (pots)
           (pots-next pots (notes input))))
    (floyd #'next (pots input))
    (error "Never gonna give you up!")))
Well, as it turns out, after a few generations the array of pots assumes a certain configuration; a configuration which keeps on sliding to the right hand side, over and over again:
120: #.##...#.##.##.##.##.##.##...
121: .#.##...#.##.##.##.##.##.##..
122: ..#.##...#.##.##.##.##.##.##.
123: ...#.##...#.##.##.##.##.##.##
So how can we detect that?  Well, we could change SUM-PLANT-POSITIONS to count _relative_ to the minimum position of a pot with a plant, instead of 0; and then tell FLOYD to use that to detect a loop:
(defun sum-plant-positions (pots &optional (center 0))
  (loop :for i :being :the :hash-keys :of pots :sum (- i center)))

> (let ((input (parse-input (uiop:read-file-lines "src/2018/day12.txt"))))
    (flet ((next (pots)
             (pots-next pots (notes input)))
           (key (pots)
             (let ((min (find-min (hset-values pots))))
               (loop :for i :being :the :hash-keys :of pots :sum (- i min)))))
        (floyd #'next (pots input) :key #'key)
        (list 'LOOP 'DETECTED)))
(LOOP DETECTED)
Once we have found our loop, all that is left to do is figure out the number of cycles left to be done, how much SUM-PLANT-POSITIONS increases at each cycle, and then do some math to put together our solution:
(defun part2 (input iterations)
  (flet ((next (pots)
           (pots-next pots (notes input)))
         (key (pots)
           (let ((min (find-min (hset-values pots))))
             (sum-plant-positions pots min))))
    (destructuring-bind (cycles-at cycle-size pots-new)
        (floyd #'next (pots input) :key #'key)
      (let* ((base (sum-plant-positions pots-new))
             (pots-new-next (pots-next pots-new (notes input)))
             (per-cycle-increment (- (sum-plant-positions pots-new-next) base))
             (cycles (- iterations cycles-at)))
        (assert (= cycle-size 1))
        (+ base (* per-cycle-increment cycles))))))
Final plumbing:
(define-solution (2018 12) (input parse-input)
  (values (part1 input 20) (part2 input 50000000000)))
And that's it:
> (time (test-run))
TEST-2018/12..
Success: 1 test, 2 checks.
Evaluation took:
  0.060 seconds of real time
  0.108866 seconds of total run time (0.107327 user, 0.001539 system)
  181.67% CPU
  139,759,679 processor cycles
  33,689,536 bytes consed

2021-11-19 (permalink)

Advent of Code: [2018/18](https://adventofcode.com/2018/day/18)

Game of life kind of problem today:
On the outskirts of the North Pole base construction project, many Elves are collecting lumber.
>
The lumber collection area is 50 acres by 50 acres; each acre can be either open ground (.), trees (|), or a lumberyard (#). You take a scan of the area (your puzzle input).
>
Strange magic is at work here: each minute, the landscape looks entirely different. In exactly one minute, an open acre can fill with trees, a wooded acre can be converted to a lumberyard, or a lumberyard can be cleared to open ground (the lumber having been sent to other projects).
This is how our input is going to look:
.#.#...|#.
.....#|##|
.|..|...#.
..|#.....#
#.#|||#|#|
...#.||...
.|....|...
||...#|.#|
|.||||..|.
...#.|..|.
The only difference is that our input is going to be bigger, i.e. 50x50 instead of 10x10; let's parse this into a 2D array:
(defun parse-area (data)
  (let ((rows (length data))
        (cols (length (first data))))
    (make-array (list rows cols) :initial-contents data)))
Before taking a look at the rules governing how the area updates, minute after minute, let's see what exactly we are asked to do:
After 10 minutes, there are 37 wooded acres and 31 lumberyards. Multiplying the number of wooded acres by the number of lumberyards gives the total resource value after ten minutes: 37 * 31 = 1147.
>
What will the total resource value of the lumber collection area be after 10 minutes?
So we need to:

- Figure out how the area is going to look after `10` minutes
- Count the number of acres with trees
- Count the number of acres with lumber
- Multiply the two together
(defun part1 (area iterations)
  (dotimes (i iterations)
    (setf area (area-next area)))
  (resource-value area))
To see how a given area is going to look after a minute, we will have to:

- For each row
- For each column in the current row
- Inspect the surrounding 8 acres and count which ones are open, which ones have trees, and which ones have lumber
- Update the state of the current acre accordingly

Note: all the updates happen at the same time, so we need to make sure the original area is not updated as we process it.

As for the update rules instead:
- An open acre will become filled with trees if three or more adjacent acres contained trees. Otherwise, nothing happens.
- An acre filled with trees will become a lumberyard if three or more adjacent acres were lumberyards. Otherwise, nothing happens.
- An acre containing a lumberyard will remain a lumberyard if it was adjacent to at least one other lumberyard and at least one acre containing trees. Otherwise, it becomes open.
All of the above, i.e. rules plus algorithm, nicely translates into the following, long-winded, function:
(defun area-next (area)
  (let* ((rows (array-dimension area 0))
         (cols (array-dimension area 1))
         (next (make-array (list rows cols))))
    (loop :for r :below rows :do
          (loop for c :below cols :do
                (loop :with open = 0 :and trees = 0 :and lumby = 0
                      :for rr :from (max (1- r) 0) :upto (min (1+ r) (1- rows)) :do
                      (loop :for cc :from (max (1- c) 0) :upto (min (1+ c) (1- cols))
                            :unless (and (= rr r) (= cc c)) :do
                            (ecase (aref area rr cc)
                              (#\. (incf open))
                              (#\| (incf trees))
                              (#\# (incf lumby))))
                      :finally (setf (aref next r c)
                                     (ecase (aref area r c)
                                       (#\. (if (>= trees 3) #\| #\.))
                                       (#\| (if (>= lumby 3) #\# #\|))
                                       (#\# (if (and (>= lumby 1) (>= trees 1))
                                              #\# #\.)))))))
    next))
At this point, all that's left to do is calculate the resource value of a given area:
(defun resource-value (area)
  (loop :for i :below (array-total-size area) :for ch = (row-major-aref area i)
        :count (char= ch #\|) :into trees
        :count (char= ch #\#) :into lumby
        :finally (return (* trees lumby))))
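A tiny self-contained check (RESOURCE-VALUE repeated from above): a 2x2 area with two wooded acres and one lumberyard has a resource value of 2 * 1 = 2:

```lisp
;; RESOURCE-VALUE repeated from above, for a standalone check
(defun resource-value (area)
  (loop :for i :below (array-total-size area)
        :for ch = (row-major-aref area i)
        :count (char= ch #\|) :into trees
        :count (char= ch #\#) :into lumby
        :finally (return (* trees lumby))))

(resource-value #2A((#\| #\#)
                    (#\| #\.))) ; => 2
```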
Now, off to part 2:
What will the total resource value of the lumber collection area be after 1000000000 minutes?
_gulp_

Well, it turns out this _game_ is cyclic, i.e. after some time a given area configuration will keep on re-appearing over and over again; this means all we have to do is:

- Figure out when it starts cycling
- Figure out how big the cycle is
- Figure out how many iterations are left once we remove the full cycles

Easy peasy, especially if you already bumped into a similar problem before, and had an implementation of [Floyd's cycle detection algorithm](https://en.wikipedia.org/wiki/Cycle_detection) handy:
(defun part2 (area)
  (destructuring-bind (cycles-at cycle-size area-new)
      (floyd #'area-next area :test 'equalp)
    (let ((remaining (nth-value 1 (floor (- 1000000000 cycles-at) cycle-size))))
      (part1 area-new remaining))))
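The FLOYD helper isn't shown in this post; what follows is a minimal sketch consistent with how PART2 calls it. Note: instead of the constant-memory tortoise and hare, it keeps a hash table of visited states (simpler, and fine at this problem's scale); the assumed contract is that it returns a list of (cycle-start cycle-length state-at-cycle-start):

```lisp
;; Minimal stand-in for the FLOYD helper used by PART2 above.
;; Walks the sequence start, (next-fn start), ... and records the step at
;; which each state was first seen; the first repeat gives us the cycle.
(defun floyd (next-fn start &key (test 'eql))
  (let ((seen (make-hash-table :test test)))
    (loop :for i :from 0
          :for state = start :then (funcall next-fn state)
          :for previous = (gethash state seen)
          :when previous :return (list previous (- i previous) state)
          :do (setf (gethash state seen) i))))
```

For example, iterating `x -> 2x mod 10` from `1` produces `1 2 4 8 6 2 4 ...`, so the cycle starts at step 1 and is 4 states long.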
Final plumbing:
(define-solution (2018 18) (area parse-area)
  (values (part1 area 10) (part2 area)))

(define-test (2018 18) (549936 206304))
And that's it:
> (time (test-run))
TEST-2018/18..
Success: 1 test, 2 checks.
Evaluation took:
  0.877 seconds of real time
  0.826163 seconds of total run time (0.811340 user, 0.014823 system)
  94.18% CPU
  2,018,921,962 processor cycles
  47,953,424 bytes consed

2021-11-18 (permalink)

Advent of Code: [2018/23](https://adventofcode.com/2018/day/23)

This problem's input is a list of _nanobots_, each representing a 3-d location and a radius (more on this later):
pos=<0,0,0>, r=4
pos=<1,0,0>, r=1
pos=<4,0,0>, r=3
pos=<0,2,0>, r=1
pos=<0,5,0>, r=3
pos=<0,0,3>, r=1
pos=<1,1,1>, r=1
pos=<1,1,2>, r=1
pos=<1,3,1>, r=1
We are going to put this into a structure with two slots:

- The 3-d location, stored as a list
- The communication radius
(defstruct (nanobot (:type list)
                    :conc-name)
  pos r)
(defun x (pos) (first pos))
(defun y (pos) (second pos))
(defun z (pos) (third pos))

(defun parse-nanobots (data)
  (mapcar #'parse-nanobot data))

(defun parse-nanobot (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer x y z r))
      ("pos=<(-?\\d+),(-?\\d+),(-?\\d+)>, r=(\\d+)" string)
    (make-nanobot :pos (list x y z) :r r)))
Now, let's take a look at the ask for part 1:
Find the nanobot with the largest signal radius. How many nanobots are in range of its signals?
OK, easy enough:
(defun part1 (bots)
  (let ((strongest (find-max bots :key #'r)))
    (count-if (partial-1 #'nanobot-contains-p strongest (pos _)) bots)))

(defun nanobot-contains-p (nb pos)
  (<= (manhattan-distance (pos nb) pos) (r nb)))
Now, things are going to get messy with part 2:
Find the coordinates that are in range of the largest number of nanobots. What is the shortest manhattan distance between any of those points and 0,0,0?
OK, let's figure out what the bounding box of all the bots is, and then look for the point with the highest number of bots in range; except, the search space might be bigger than we thought:
> (let ((bots (parse-nanobots (uiop:read-file-lines "src/2018/day23.txt"))))
    (loop :for bot :in bots :for pos = (pos bot)
          :minimize (x pos) :into x-min
          :maximize (x pos) :into x-max
          :minimize (y pos) :into y-min
          :maximize (y pos) :into y-max
          :minimize (z pos) :into z-min
          :maximize (z pos) :into z-max
          :finally (return (list x-min x-max y-min y-max z-min z-max))))
(-151734092 226831435 -103272011 144820743 -114031252 186064394)
That, and the fact that there will be a lot of overlap (not only are the bots well spread out in 3-d space, they also have huge communication radii):
> (let ((bots (parse-nanobots (uiop:read-file-lines "src/2018/day23.txt"))))
    (loop :for bot :in bots
          :minimize (r bot) :into r-min
          :maximize (r bot) :into r-max
          :finally (return (list r-min r-max))))
(49770806 99277279)
We are going to need something cleverer than this, but what exactly?  Well, I originally solved this by [random walking the search space and drilling into the areas with the highest number of bots in range](https://github.com/iamFIREcracker/adventofcode/commit/75abd836d5502df6307c5c77ccc24b7d91d679e4#diff-e25508cbdc3137033eb7cc9d1a6d6c5f0a278b331eca80162a8defb74d5379a7L1255); however, that was not always guaranteed to work (yes, I got lucky for my second star), and after taking a good look at [r/adventofcode](https://reddit.com/r/adventofcode) and even reading the AoC creator [commenting](https://www.reddit.com/r/adventofcode/comments/aa9uvg/day_23_aoc_creators_logic/) on what he thought people should be implementing to solve this, I decided to give the "Octree scan" solution a try.

The basic idea is simple, and it relies on the recursive 3-d space subdivision logic that [octrees](https://en.wikipedia.org/wiki/Octree) implement when efficiently organizing 3-d objects in the space; what follows is an overview of the algorithm that we are going to implement to solve this:

- We start with the biggest bounding box possible, and we count the number of bots in range (all)
- Then we split this box in 8 equal parts, and count the number of bots in range of each
- We keep on doing this until the cubes become 3-d points
- We are going to need a way to prune non-optimal branches as soon as possible, or chances are we are going to get stuck in this loop for quite some time (more on this later)

Let's start off by defining a BOX structure and creating one instance covering all the bots in our input:
(defstruct (box (:type list)
                (:constructor make-box%))
  range-x range-y range-z)

(defun make-box (x-min x-max y-min y-max z-min z-max)
  (let* ((box (make-box% :range-x (list x-min x-max)
                         :range-y (list y-min y-max)
                         :range-z (list z-min z-max)))
         (volume (box-volume box)))
    (assert (or (= volume 0) (= (log volume 2) (floor (log volume 2)))))
    box))

(defun bots-box (bots)
  (loop :for bot :in bots :for pos = (pos bot)
        :minimize (x pos) :into x-min
        :maximize (x pos) :into x-max
        :minimize (y pos) :into y-min
        :maximize (y pos) :into y-max
        :minimize (z pos) :into z-min
        :maximize (z pos) :into z-max
        :finally (return (let* ((largest (max (- x-max x-min)
                                              (- y-max y-min)
                                              (- z-max z-min)))
                                (side (make-pow2 largest)))
                           (make-box x-min (+ x-min side)
                                     y-min (+ y-min side)
                                     z-min (+ z-min side))))))

(defun make-pow2 (number)
  (expt 2 (ceiling (log number 2))))
(Note: we make sure the box is actually a cube, and that each side is a power of 2, so we know the splitting will always be _even_)
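A quick sanity check of MAKE-POW2 (repeated from above so the snippet stands alone):

```lisp
;; Rounds a positive number up to the nearest power of two
(defun make-pow2 (number)
  (expt 2 (ceiling (log number 2))))

(make-pow2 5)   ; => 8
(make-pow2 300) ; => 512
```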

Now, while walking through the search space, we would like to pick our next cube to analyze based on:

- Number of nanobots in range of the box: the higher, the better, as that's what we want to maximize, eventually
- Volume of the current box: the smaller, the better, as we want to converge to a solution (optimal or not) as soon as possible so we can use that to prune branching
- Distance from origin: again, the smaller, the better

We are going to create a new structure for this, STATE, and customize its constructor as follows:
(defstruct (state (:type list)
                  (:constructor make-state%)
                  :conc-name)
  box num-nanobots volume distance)

(defun make-state (bots box)
  (let ((x (caar box))
        (y (caadr box))
        (z (caaddr box)))
    (make-state% :box box
                 :num-nanobots (count-in-range bots box)
                 :volume (box-volume box)
                 :distance (manhattan-distance (list x y z) (list 0 0 0)))))
There are a lot of helper functions used here, so let's take a look at their implementations; COUNT-IN-RANGE first:
(defun count-in-range (nanobots box)
  (count-if (partial-1 #'nanobot-overlaps-p _ box) nanobots))
It relies on another helper function, NANOBOT-OVERLAPS-P, which calculates how far the nanobot is from the box, and returns T if that distance (a Manhattan distance) is smaller than or equal to the nanobot's radius:
(defun nanobot-overlaps-p (nb box)
  (loop :with distance = 0
        :for (v-min v-max) :in box
        :for v :in (pos nb)
        :if (< v v-min) :do (incf distance (- v-min v))
        :else if (> v v-max) :do (incf distance (- v v-max))
        :never (> distance (r nb))))
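A quick check (NANOBOT-OVERLAPS-P and two accessor equivalents repeated here so the snippet stands alone; remember that, with `:type list` used everywhere, a nanobot is just a `(pos r)` list and a box a list of three ranges):

```lisp
;; Standalone equivalents of the list-backed structure accessors
(defun pos (nb) (first nb))
(defun r (nb) (second nb))

;; NANOBOT-OVERLAPS-P repeated from above
(defun nanobot-overlaps-p (nb box)
  (loop :with distance = 0
        :for (v-min v-max) :in box
        :for v :in (pos nb)
        :if (< v v-min) :do (incf distance (- v-min v))
        :else :if (> v v-max) :do (incf distance (- v v-max))
        :never (> distance (r nb))))

;; The bot at 4,0,0 with radius 3 just reaches the unit cube at the
;; origin: its closest corner, 1,0,0, is exactly 3 away...
(nanobot-overlaps-p '((4 0 0) 3) '((0 1) (0 1) (0 1))) ; => T
;; ...while with radius 2 it would not reach it
(nanobot-overlaps-p '((4 0 0) 2) '((0 1) (0 1) (0 1))) ; => NIL
```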
Lastly, calculating a box's volume should be pretty straightforward:
(defun box-volume (box)
  (destructuring-bind ((x-min x-max) (y-min y-max) (z-min z-max)) box
    (* (- x-max x-min) (- y-max y-min) (- z-max z-min))))
Let's now take a look at the meat of this whole thing:

- We create the bounding box
- Initialize a sorted queue with it
- We start popping items from it until it's empty
- Now, so long as we have not yet found _one_ solution (it could be the optimal one, or a sub-optimal one), or the current box still has a chance of containing a better solution (i.e. it has at least as many nanobots in range as our current best)
- We split the box in 8, add those cubes to the priority queue, rinse and repeat
- Now, if we cannot split the cube further (i.e. it's a 3-d point), then we check for the number of nanobots covering this location, and if better than our best solution so far, we update that and carry on
(defun part2 (bots)
  (let ((box (bots-box bots))
        (best))
    (loop :with queue = (make-hq :predicate #'state-better-p)
          :initially (hq-insert queue (make-state bots box))
          :until (hq-empty-p queue)
          :for state = (hq-pop queue)
          :do (when (or (not best) (>= (num-nanobots state) (num-nanobots best)))
                (if (box-point-p (box state))
                  (when (or (not best) (state-better-p state best))
                    (setf best state))
                  (dolist (subbox (box-subdivisions (box state)))
                    (hq-insert queue (make-state bots subbox)))))
          :finally (return (distance best)))))
Easy, huh?! Yeah...

Let's take a look at those functions that we have never seen before; let's begin with STATE-BETTER-P, and as a reminder, this is the logic that we are going to try to implement:

- Number of nanobots in range of the box: the higher, the better, as that's what we want to maximize, eventually
- Volume of the current box: the smaller, the better, as we want to converge to a solution (optimal or not) as soon as possible so we can use that to prune branching
- Distance from origin: again, the smaller, the better
(defun state-better-p (state other)
  (destructuring-bind (num-nanobots volume distance) (rest state)
    (destructuring-bind (o-num-nanobots o-volume o-distance) (rest other)
      (or (> num-nanobots o-num-nanobots)
          (and (= num-nanobots o-num-nanobots)
               (or (< volume o-volume)
                   (and (= volume o-volume)
                        (< distance o-distance))))))))
BOX-POINT-P, instead, checks whether the current box is actually a point (i.e. each range is `0` units wide):
(defun box-point-p (box)
  (loop :for (min max) :in box
        :always (= min max)))
BOX-SUBDIVISIONS, finally, takes a box / cube and generates all its 8 subdivisions. One thing to note here: we check the volume of the box first, and if it's `1` we cannot split it evenly any further, so we go ahead and return 8 new boxes, each covering one vertex of the cube; otherwise, we find the mid point of each range and create the sub-cubes accordingly:
(defun box-subdivisions (box)
  (unless (box-point-p box)
    (if (= (box-volume box) 1)
      (loop :for (x y z) :in (box-vertices box)
            :collect (make-box x x y y z z))
      (destructuring-bind ((x-min x-max) (y-min y-max) (z-min z-max)) box
        (let ((x-mid (+ x-min (floor (- x-max x-min) 2)))
              (y-mid (+ y-min (floor (- y-max y-min) 2)))
              (z-mid (+ z-min (floor (- z-max z-min) 2))))
          (loop :for (x-min x-max) :on (list x-min x-mid x-max)
                :when x-max :append
                (loop :for (y-min y-max) :on (list y-min y-mid y-max)
                      :when y-max :append
                      (loop :for (z-min z-max) :on (list z-min z-mid z-max)
                            :when z-max :collect (make-box x-min x-max
                                                           y-min y-max
                                                           z-min z-max)))))))))

(defun box-vertices (box)
  (destructuring-bind (x-range y-range z-range) box
    (loop :for x :in x-range :append
          (loop :for y :in y-range :append
                (loop :for z :in z-range :collect (list x y z))))))
And that should be pretty much it.

Final plumbing:
(define-solution (2018 23) (bots parse-nanobots)
  (values (part1 bots) (part2 bots)))

(define-test (2018 23) (433 107272899))
Et voilà!
> (time (test-run))
TEST-2018/23..
Success: 1 test, 2 checks.
Evaluation took:
  0.132 seconds of real time
  0.115931 seconds of total run time (0.105885 user, 0.010046 system)
  87.88% CPU
  305,851,279 processor cycles
  1,768,384 bytes consed

2021-11-12 (permalink)

To get started with Sketch (the Common Lisp graphics framework inspired by the Processing language), you will have to install SDL2 and some of its add-ons.

On a Mac, the most convenient way to do this is by using Homebrew:
brew install sdl2
brew install sdl2_image
brew install sdl2_ttf
With that done, you should now be able to load Sketch's examples just fine:
(ql:quickload :sketch)
(ql:quickload :sketch-examples)
(make-instance 'sketch-examples:brownian)
Note: as mentioned in the issue [OSX Can't run hello world example #29](https://github.com/vydd/sketch/issues/29), when trying this out on SBCL you will have to wrap the MAKE-INSTANCE call inside SDL2:MAKE-THIS-THREAD-MAIN, or most likely you will be presented with the following:
debugger invoked on a SDL2::SDL-ERROR in thread
#<THREAD "SDL2 Main Thread" RUNNING {10056843D3}>:
  SDL Error: NSWindow drag regions should only be invalidated on the Main Thread!

The current thread is not at the foreground,
SB-THREAD:RELEASE-FOREGROUND has to be called in #<SB-THREAD:THREAD "main thread" waiting on: #<WAITQUEUE Anonymous condition variable {10056847B3}> {1007C501B3}>
for this thread to enter the debugger.

debugger invoked on a SB-SYS:INTERACTIVE-INTERRUPT @7FFF5DB4C58E in thread
#<THREAD "main thread" RUNNING {1007C501B3}>:
  Interactive interrupt at #x7FFF5DB4C58E.
restarts (invokable by number or by possibly-abbreviated name):
  0: [CONTINUE] Return from SB-UNIX:SIGINT.
  1: [ABORT   ] Exit debugger, returning to top level.
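Concretely, the wrapped call would look something like this (a sketch based on the workaround from the issue above; I believe SDL2:MAKE-THIS-THREAD-MAIN takes the function to run on the main thread):

```lisp
(ql:quickload :sketch)
(ql:quickload :sketch-examples)

;; run the example on the main thread, as macOS requires for windowing
(sdl2:make-this-thread-main
 (lambda ()
   (make-instance 'sketch-examples:brownian)))
```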
Alternatively, you can always switch to [CCL](https://ccl.clozure.com/) (another Common Lisp implementation) when playing with Sketch, as it does not seem to suffer from the same thread-related issue that SBCL does.

2021-11-10 (permalink)

Advent of Code: [2018/20](https://adventofcode.com/2018/day/20)

We find ourselves in an area made up entirely of rooms and doors; the thing is, the Elves don't have a map of this area; instead, they hand us a list of instructions we can use to build the map ourselves:

- `^` is the "start-of-instructions" marker
- `$` is the "end-of-instructions" marker
- `N` means you can go north (i.e. a door exists between the _current_ room, and the one _north_ to it)
- `E` means you can go east
- `S` means you can go south
- `W` means you can go west
- `(` marks a _fork_, meaning that multiple sub-paths can be started from the current room
- `|` marks the end of a sub-path
- `)` marks the end of the fork

A couple of examples will hopefully help in understanding this; this simple list of instructions, `^WNE$`, can be used to build the following map:
#####
#.|.#
#-###
#.|X#
#####
This longer list of instructions instead, `^ENWWW(NEEE|SSE(EE|N))$`, can be used to create the following map:
#########
#.|.|.|.#
#-#######
#.|.|.|.#
#-#####-#
#.#.#X|.#
#-#-#####
#.|.|.|.#
#########
Before figuring out how to create the map from the instructions, it's probably best to take a look at what we are asked to do for part 1, so here it goes:
What is the largest number of doors you would be required to pass through to reach a room? That is, find the room for which the shortest path from your starting location to that room would require passing through the most doors; what is the fewest doors you can pass through to reach it?
OK, it seems like we are going to have to actually navigate this map.  Let's parse the list of instructions into a HASH-SET of locations (i.e. COMPLEX numbers) recording where the doors are; to do so, we are going to use a _stack_ of positions, so we can keep track of every fork we set foot on; in particular:

- We skip the `^`
- If the current char is `$`, we stop and return the set of doors we went through
- If the current char is `(`, a new fork begins, so push the current position into the stack
- If the current char is `|`, a sub-path just ended, so let's go back to the room of the most recent fork
- If the current char is `)`, we visited all the sub-paths of a fork, so pop an item from the stack, and start from there
- Otherwise, i.e. one of `NESW`, move into the adjacent room and record the door we used to get there
(defun parse-doors (data &aux (string (first data)) (doors (make-hset nil)))
  (loop :with pos = 0 :with pos-stack = (list pos)
        :for ch :across string
        :do (ecase ch
              ((#\^) nil)
              ((#\$) (return doors))
              ((#\() (push pos pos-stack))
              ((#\)) (setf pos (pop pos-stack)))
              ((#\|) (setf pos (first pos-stack)))
              ((#\N) (hset-add (+ pos (complex 0 1)) doors) (setf pos (+ pos (complex 0 2))))
              ((#\E) (hset-add (+ pos (complex 1 0)) doors) (setf pos (+ pos (complex 2 0))))
              ((#\S) (hset-add (- pos (complex 0 1)) doors) (setf pos (- pos (complex 0 2))))
              ((#\W) (hset-add (- pos (complex 1 0)) doors) (setf pos (- pos (complex 2 0))))))
  doors)
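A quick check, assuming PARSE-DOORS and the HSET helpers above are loaded: starting at `#C(0 0)`, the first move of `^WNE$` (the `W`) should record a door one step west of the starting room:

```lisp
(let ((doors (parse-doors (list "^WNE$"))))
  (hset-contains-p #C(-1 0) doors)) ; => T
```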
All that's left to do now is to explore the complete map, figure out which room is the furthest away from us, in terms of number of doors to go through, and then return that number of doors; we can do this with an unbounded BFS:

- start point: `0,0` (or `(complex 0 0)`)
- no goal state / function -- it will stop when there are no more states to process
- from a given location, you can move into the adjacent room if a door exists in between -- see NEIGHBORS
(defparameter *deltas* '(#C(0 1) #C(1 0) #C(0 -1) #C(-1 0)))

(defun neighbors (doors pos)
  (loop :for d :in *deltas*
        :for door-pos = (+ pos d)
        :when (hset-contains-p door-pos doors)
        :collect (+ pos d d)))

(let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors)))))
  (reduce #'max (hash-table-values cost-so-far)))
For part 2, instead:
Okay, so the facility is big.
>
How many rooms have a shortest path from your current location that pass through at least 1000 doors?
Easy:
(let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors)))))
  (count-if (partial-1 #'>= _ 1000) (hash-table-values cost-so-far)))
Let's wire everything together:
(define-solution (2018 20) (doors parse-doors)
  (let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors)))))
    (values
      (reduce #'max (hash-table-values cost-so-far))
      (count-if (partial-1 #'>= _ 1000) (hash-table-values cost-so-far)))))

(define-test (2018 20) (3835 8520))
And that's it:
> (time (test-run))
TEST-2018/20..
Success: 1 test, 2 checks.
Evaluation took:
  0.029 seconds of real time
  0.028120 seconds of total run time (0.018474 user, 0.009646 system)
  96.55% CPU
  67,925,979 processor cycles
  6,870,096 bytes consed

2021-11-09 (permalink)

Advent of Code: [2018/06](https://adventofcode.com/2018/day/6)

Today, we are given a bunch of coordinates (i.e. X,Y locations), representing what could either be dangerous or safe places to land on -- and yes, we are falling.

This is a sample input:
1, 1
1, 6
8, 3
3, 4
5, 5
8, 9
We are going to parse this into _tagged_ coordinates, i.e. coordinates with a tag attached (more on this later); as usual, we will represent coordinates as COMPLEX numbers:
(defvar *tag* 0)

(defun parse-points (data)
  (let ((*tag* 0))
    (mapcar #'parse-point data)))

(defun parse-point (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer col row))
      ("(\\d+), (\\d+)" string)
    (make-point (incf *tag*) row col)))

(defun make-point (tag row col) (cons tag (make-coord row col)))
(defun tag (coord) (car coord))
(defun (setf tag) (value coord) (setf (car coord) value))
(defun coord (point) (cdr point))

(defun make-coord (row col) (complex col (- row)))
(defun row (coord) (- (imagpart coord)))
(defun col (coord) (realpart coord))
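Storing each coordinate as a COMPLEX number (column as the real part, negated row as the imaginary part) makes the accessors round-trip nicely; repeated from above as a self-contained check:

```lisp
;; Coordinate helpers repeated from above
(defun make-coord (row col) (complex col (- row)))
(defun row (coord) (- (imagpart coord)))
(defun col (coord) (realpart coord))

(row (make-coord 3 8)) ; => 3
(col (make-coord 3 8)) ; => 8
```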
For part 1, we are going to assume these to be _dangerous_ coordinates, and we are going to try and find the one with the largest finite area of closest locations (in terms of Manhattan distance).  How?
Using only the Manhattan distance, determine the area around each coordinate by counting the number of integer X,Y locations that are closest to that coordinate (and aren't tied in distance to any other coordinate).
>
Your goal is to find the size of the largest area that isn't infinite.
Here is how we are going to solve this:

- We find the bounding box, i.e. the smallest rectangle containing all the coordinates of the input -- see BOUNDING-BOX
- For each X,Y point of this bounding box
- We find the input coordinate which is _closest_ to that, being careful to ignore any point which is equally distant to 2 or more coordinates -- see CLOSEST-POINT
- Then, if we are _not_ on the border (see ON-THE-BORDER-P), we mark the location with the tag of the closest input coordinate (we are not actually marking it, but rather counting, but you get the idea)
- Finally, we find the input coordinate with the highest number of _closest_ locations

In code:
(defun part1 (points &aux (bbox (bounding-box points)))
  (destructuring-bind (row-min col-min row-max col-max) bbox
    (let ((sizes (make-hash-table)))
      (loop :for row :from row-min :to row-max :do
            (loop :for col :from col-min :to col-max
                  :for test := (make-coord row col)
                  :for closest := (closest-point test points)
                  :do (if (on-the-border-p test bbox)
                        (setf (gethash (tag closest) sizes) most-negative-fixnum)
                        (incf (gethash (tag closest) sizes 0)))))
      (loop :for size :being :the :hash-values :of sizes :using (hash-key tag)
            :when tag :maximize size))))

(defun bounding-box (points)
  (loop :for p :in points
        :for row := (row (coord p)) :for col := (col (coord p))
        :minimizing row :into row-min
        :maximizing row :into row-max
        :minimizing col :into col-min
        :maximizing col :into col-max
        :finally (return (list row-min col-min row-max col-max))))

(defun closest-point (test points)
  (let (closest closest-distance)
    (dolist (p points)
      (let ((distance (manhattan-distance test (coord p))))
        (when (or (not closest-distance) (<= distance closest-distance))
          (if (or (not closest-distance) (< distance closest-distance))
            (setf closest (list p) closest-distance distance)
            (push p closest)))))
    (when (= (length closest) 1)
      (first closest))))

(defun on-the-border-p (point bbox)
  (destructuring-bind (row-min col-min row-max col-max) bbox
    (or (= (row point) row-min) (= (row point) row-max)
        (= (col point) col-min) (= (col point) col-max))))
For part 2, we are going to assume these input locations to be _safe_ to land on instead, so we are going to try and find the region near as many coordinates as possible:
For example, suppose you want the sum of the Manhattan distance to all of the coordinates to be less than 32. For each location, add up the distances to all of the given coordinates; if the total of those distances is less than 32, that location is within the desired region.
[...]
What is the size of the region containing all locations which have a total distance to all given coordinates of less than 10000?
- For each X,Y location inside the same bounding box as part 1
- Sum all the distances with respect of all the input coordinates -- see TOTAL-DISTANCE
- Increment a counter if this sum is less than 10000
(defun part2 (points &aux (bbox (bounding-box points)))
  (destructuring-bind (row-min col-min row-max col-max) bbox
    (loop :for row :from row-min :to row-max :summing
          (loop :for col :from col-min :to col-max
                :for test := (make-coord row col)
                :for total-distance := (total-distance test points)
                :counting (< total-distance 10000)))))

(defun total-distance (test points)
  (loop :for p :in points
        :summing (manhattan-distance test (coord p))))
Final plumbing:
(define-solution (2018 6) (points parse-points)
  (values (part1 points) (part2 points)))

(define-test (2018 6) (3894 39398))
And that's it:
> (time (test-run))
TEST-2018/06..
Success: 1 test, 2 checks.
Evaluation took:
  0.661 seconds of real time
  0.618545 seconds of total run time (0.585166 user, 0.033379 system)
  93.65% CPU
  1,521,383,260 processor cycles
  14,120,976 bytes consed

2021-11-08 (permalink)

Advent of Code: [2018/04](https://adventofcode.com/2018/day/4)

December 1st is right around the corner, and we better get in "advent-of-code"-shape real quick; and what better way to do that than re-working some of my oldest solutions, i.e. the ones from 2018, when I was just trying to get started with Common Lisp?!  2018, day 4 it is!

For today's challenge we are tasked with sneaking inside a supply closet so we can fix our suit; however, the closet is protected by a guard, so we need to figure out a way in without getting caught.  Lucky for us, someone has been keeping tabs on the guards' shifts (i.e. their sleeping schedule, our input), and by the look of it guards fall asleep quite often, so hopefully we can figure out the best time to sneak in.

This is the kind of input that we are given:
[1518-11-01 00:00] Guard #10 begins shift
[1518-11-01 00:05] falls asleep
[1518-11-01 00:25] wakes up
[1518-11-01 00:30] falls asleep
[1518-11-01 00:55] wakes up
[1518-11-01 23:58] Guard #99 begins shift
[1518-11-02 00:40] falls asleep
[1518-11-02 00:50] wakes up
[1518-11-03 00:05] Guard #10 begins shift
[1518-11-03 00:24] falls asleep
[1518-11-03 00:29] wakes up
[1518-11-04 00:02] Guard #99 begins shift
[1518-11-04 00:36] falls asleep
[1518-11-04 00:46] wakes up
[1518-11-05 00:03] Guard #99 begins shift
[1518-11-05 00:45] falls asleep
[1518-11-05 00:55] wakes up
We are going to parse all this into an ALIST indexed by guard ID; each value will be another ALIST indexed by minute (note: for each day, we are told to analyze the `00:00`-`00:59` time range only); each value there, in turn, will be the list of days on which that guard was asleep during that specific minute.  The example above would be parsed into the following:
((99 (54 "1518-11-05") (53 "1518-11-05") (52 "1518-11-05") (51 "1518-11-05")
  (50 "1518-11-05") (39 "1518-11-04") (38 "1518-11-04") (37 "1518-11-04")
  (36 "1518-11-04") (49 "1518-11-05" "1518-11-02")
  (48 "1518-11-05" "1518-11-02") (47 "1518-11-05" "1518-11-02")
  (46 "1518-11-05" "1518-11-02") (45 "1518-11-05" "1518-11-04" "1518-11-02")
  (44 "1518-11-04" "1518-11-02") (43 "1518-11-04" "1518-11-02")
  (42 "1518-11-04" "1518-11-02") (41 "1518-11-04" "1518-11-02")
  (40 "1518-11-04" "1518-11-02"))
 (10 (28 "1518-11-03") (27 "1518-11-03") (26 "1518-11-03") (25 "1518-11-03")
  (54 "1518-11-01") (53 "1518-11-01") (52 "1518-11-01") (51 "1518-11-01")
  (50 "1518-11-01") (49 "1518-11-01") (48 "1518-11-01") (47 "1518-11-01")
  (46 "1518-11-01") (45 "1518-11-01") (44 "1518-11-01") (43 "1518-11-01")
  (42 "1518-11-01") (41 "1518-11-01") (40 "1518-11-01") (39 "1518-11-01")
  (38 "1518-11-01") (37 "1518-11-01") (36 "1518-11-01") (35 "1518-11-01")
  (34 "1518-11-01") (33 "1518-11-01") (32 "1518-11-01") (31 "1518-11-01")
  (30 "1518-11-01") (24 "1518-11-03" "1518-11-01") (23 "1518-11-01")
  (22 "1518-11-01") (21 "1518-11-01") (20 "1518-11-01") (19 "1518-11-01")
  (18 "1518-11-01") (17 "1518-11-01") (16 "1518-11-01") (15 "1518-11-01")
  (14 "1518-11-01") (13 "1518-11-01") (12 "1518-11-01") (11 "1518-11-01")
  (10 "1518-11-01") (9 "1518-11-01") (8 "1518-11-01") (7 "1518-11-01")
  (6 "1518-11-01") (5 "1518-11-01")))
Doing the actual parsing turned out way hairier than I originally anticipated, but whatever; anyways, at a high level:

- Sort entries lexicographically (yes, the input is out of order)
- Recursively
- Parse guard ID -- PARSE
- Parse "falls-asleep" time and day -- FILL-IN-SCHEDULE
- Parse "wakes-up" time and day -- FILL-IN-SCHEDULE
- For each minute in the time range
- Update the schedule accordingly -- ADD-ASLEEP-TIME-DAY
(defun parse-schedule (data)
  (let (schedule)
    (labels ((parse (remaining)
               (let ((guard (parse-schedule-guard (pop remaining))))
                 (when guard (fill-in-schedule guard remaining))))
             (fill-in-schedule (guard remaining)
               (if (not (parse-schedule-timeday (first remaining)))
                 (parse remaining)
                 (let ((falls-asleep (parse-schedule-timeday (pop remaining)))
                       (wakes-up (parse-schedule-timeday (pop remaining))))
                   (loop :with day = (cdr falls-asleep)
                         :for time :from (car falls-asleep) :below (car wakes-up)
                         :do (add-asleep-time-day guard time day))
                   (fill-in-schedule guard remaining))))
             (add-asleep-time-day (guard time day)
               (let ((entry (assoc guard schedule)))
                 (unless entry
                   (setf entry (list guard))
                   (push entry schedule))
                 (let ((st-entry (assoc time (sleeptable entry))))
                   (unless st-entry
                     (setf st-entry (list time))
                     (push st-entry (sleeptable entry)))
                   (push day (days st-entry))))))
      (parse (sort (copy-seq data) #'string<)))
    schedule))

(defun parse-schedule-guard (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer guard))
      ("Guard #(\\d+) begins shift" string)
    guard))

(defun parse-schedule-timeday (string)
  (cl-ppcre:register-groups-bind (date (#'parse-integer time))
      ("(\\d{4}-\\d{2}-\\d{2}) 00:(\\d\\d)] (falls asleep|wakes up)" string)
    (cons time date)))
The rest is a bunch of selectors for writing / pulling data into / out of the schedule:
(defun guard (schedule-entry) (car schedule-entry))
(defun sleeptable (schedule-entry) (cdr schedule-entry))
(defun (setf sleeptable) (value schedule-entry) (setf (cdr schedule-entry) value))

(defun minute (sleeptable-entry) (car sleeptable-entry))
(defun days (sleeptable-entry) (cdr sleeptable-entry))
(defun (setf days) (value sleeptable-entry) (setf (cdr sleeptable-entry) value))
Now, let's get to the juicy part:
Strategy 1: Find the guard that has the most minutes asleep. What minute does that guard spend asleep the most?
We find the schedule entry with the highest count of minutes asleep first; then, in that entry, we look for the _minute_ entry in the sleep table with the highest count of days; finally, we do the multiplication:
(defun part1 (schedule)
  (let ((entry (find-max schedule :key #'minutes-asleep)))
    (* (guard entry) (sleepiest-minute entry))))

(defun minutes-asleep (entry)
  (reduce #'+ (sleeptable entry)
          :key #'(lambda (e) (length (cdr e)))))
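Given the representation above, a schedule entry looks like `(guard (minute . days) ...)`; so, for a hypothetical entry where guard 10 was asleep on minute 24 for two days and on minute 5 for one day:
> (minutes-asleep '(10 (24 . ("1518-11-03" "1518-11-02")) (5 . ("1518-11-01"))))
3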

(defun sleepiest-minute (entry)
  (multiple-value-bind (st num-days-asleep)
      (find-max (sleeptable entry)
                :key #'(lambda (st)
                        (length (days st))))
    (values (minute st) num-days-asleep)))
For part 2 instead:
Strategy 2: Of all guards, which guard is most frequently asleep on the same minute?
We find the entry with the highest number of days associated with any given minute (note: we don't want the minute itself but the number of days the guard fell asleep during that minute, which is why we are using NTH-VALUE here); then we call SLEEPIEST-MINUTE on that entry; we do the multiplication last:
(defun part2 (schedule)
  (let ((entry (find-max schedule
                         :key #'(lambda (entry)
                                 (nth-value 1 (sleepiest-minute entry))))))
    (* (guard entry) (sleepiest-minute entry))))
And that's it:
> (time (test-run))
TEST-2018/04..
Success: 1 test, 2 checks.
Evaluation took:
  0.005 seconds of real time
  0.005837 seconds of total run time (0.004618 user, 0.001219 system)
  120.00% CPU
  13,727,746 processor cycles
  720,224 bytes consed

2021-11-07 (permalink)

Abusing Common Lisp feature expressions to comment things out! (another follow up) @vim @commonlisp

The simple syntax rules that I mentioned in a previous [.plan entry](https://matteolandi.net/plan.html#day-2021-10-29) turned out to break quite easily in case of nested parentheses; so I decided to try a different approach instead, and after playing with Vim's syntax regions a bit, I came up with the following:
syntax region lispExcludedFormWithNil
            \ matchgroup=lispExcludedContentStart
            \ start="#+nil ("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedContentStop
            \ end=")"
            \ contains=@lispBaseListCluster
highlight link lispExcludedContentStart lispComment
highlight link lispExcludedContentStop lispComment
With this I was able to deal with nested parentheses just fine; however, if there were `highlight` commands defined for any of the elements contained inside the `@lispBaseListCluster` cluster (and of course there are!), then Vim would end up coloring those elements; what I wanted instead was for the whole region to look like a `lispComment`.  This is where I got stuck; this is where I started questioning the whole approach altogether, and where I ended up asking for [help](https://www.reddit.com/r/vim/comments/qmn052/override_colors_based_on_the_surrounding_syntax/) on the /r/vim sub-reddit.

Anyways, long story short, one reply made me realize that I did not actually need all the definitions included inside the `@lispBaseListCluster` cluster, especially since I would then need to figure out a way to mute their colors; so what I did instead was create a new, simpler but recursive syntax region to cater for my _excluded_ lists:
syntax region lispExcludedList
            \ contained
            \ matchgroup=lispExcludedListOpenParen
            \ start="("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedListCloseParen
            \ end=")"
            \ contains=lispExcludedList
highlight link lispExcludedList lispComment
highlight link lispExcludedListOpenParen lispComment
highlight link lispExcludedListCloseParen lispComment
I then updated my original `lispExcludedFormWithNil` definition to _contain_ `lispExcludedList` instead of the `@lispBaseListCluster` cluster:
syntax region lispExcludedFormWithNil
            \ matchgroup=lispExcludedContentStart
            \ start="#+nil ("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedContentStop
            \ end=")"
            \ contains=lispExcludedList
highlight link lispExcludedFormWithNil lispComment
highlight link lispExcludedContentStart lispComment
highlight link lispExcludedContentStop lispComment
And that's it; with this, it all started to work as expected:
#+nil (list 1 (list 2 3) 4)
                          ^
                          |
                          `- `lispComment` syntax up until this position

#+nil (list 1 (list 2 (list 3 4) 5))
                                   ^
                                   |
                                   `- `lispComment` syntax up until this position
If I then were to add `lispExcludedFormWithNil` to the `@lispBaseListCluster`:
syn cluster lispBaseListCluster add=lispExcludedFormWithNil
Then this would work even in case the _excluded_ expression was found nested inside another expression / list:
(list #+nil (list 1 (list 2 (list 3 4) 5)) 6)
                                         ^ ^
                                         | |
                                         | `- this is highlighted as `lispNumber`
                                         `- `lispComment` syntax up until this position
Sweet!

So how is my Lisp syntax file looking?  There you go:
...

" Excluded forms {{{

syntax region lispExcludedList
            \ contained
            \ matchgroup=lispExcludedListOpenParen
            \ start="("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedListCloseParen
            \ end=")"
            \ contains=lispExcludedList

highlight link lispExcludedList lispComment
highlight link lispExcludedListOpenParen lispComment
highlight link lispExcludedListCloseParen lispComment
highlight link lispExcludedContentStart lispComment
highlight link lispExcludedContentStop lispComment

function! s:createLispExcludedSyntaxRegion(regionName, regionPrefix) abort "{{{
    execute 'syntax region' a:regionName
                \ 'matchgroup=lispExcludedContentStart'
                \ 'start="' . a:regionPrefix . '("'
                \ 'skip="|.\{-}|"'
                \ 'matchgroup=lispExcludedContentStop'
                \ 'end=")"'
                \ 'contains=lispExcludedList'
    execute 'highlight link' a:regionName 'lispComment'
    execute 'syntax cluster lispBaseListCluster add=' . a:regionName
endfunction " }}}


" ... with NIL {{{

syntax match lispExcludedElementWithNil /\v\#\+nil [^(`][^ ]+/
highlight link lispExcludedElementWithNil lispComment
syn cluster lispBaseListCluster add=lispExcludedElementWithNil

call s:createLispExcludedSyntaxRegion("lispExcludedFormWithNil", "#+nil ")
call s:createLispExcludedSyntaxRegion("lispExcludedQuotedFormWithNil", "#+nil '")
call s:createLispExcludedSyntaxRegion("lispExcludedQuasiQuotedFormWithNil", "#+nil `")
call s:createLispExcludedSyntaxRegion("lispExcludedSharpsignedDotFormWithNil", "#+nil #\.")

" }}}
" ... with #:_description_ {{{

syntax match lispExcludedElementWithDescription /\v\#\+\#:[^ ]+ [^(`][^ ]+/
highlight link lispExcludedElementWithDescription lispComment
syn cluster lispBaseListCluster add=lispExcludedElementWithDescription

call s:createLispExcludedSyntaxRegion("lispExcludedFormWithDescription", "#+#:[^ ]\\+ ")
call s:createLispExcludedSyntaxRegion("lispExcludedQuotedFormWithDescription", "#+#:[^ ]\\+ '")
call s:createLispExcludedSyntaxRegion("lispExcludedQuasiQuotedFormWithDescription", "#+#:[^ ]\\+ `")
call s:createLispExcludedSyntaxRegion("lispExcludedSharpsignedDotFormWithDescription", "#+#:[^ ]\\+ #\.")

" }}}

" }}}
PS. Yes, I created a script function, `createLispExcludedSyntaxRegion`, to make the process of creating new syntax regions a tiny bit less hairy.

2021-11-05 (permalink)

(Trying to) Speed up Common Lisp Replit REPLs startup time -- or how to both install Quicklisp and cache *.fasl files in the current directory @commonlisp @replit

All of my Common Lisp Replit REPLs have the following defined inside '.replit-files/init.lisp' (that's the script I am loading when starting up SBCL):
#-quicklisp
(let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
                                       (user-homedir-pathname))))
  (if (probe-file quicklisp-init)
      (load quicklisp-init)
      (progn
        (load (merge-pathnames "quicklisp.lisp" *load-truename*))
        (funcall (find-symbol "INSTALL" (find-package "QUICKLISP-QUICKSTART"))))))
This pretty much translates to checking if '~/quicklisp/setup.lisp' already exists; if it does, it means we already have Quicklisp set up, so we go ahead and load that file; otherwise, we load 'quicklisp.lisp' from the local directory (a copy of the file available on [quicklisp.org](https://www.quicklisp.org/beta/)), and then install Quicklisp altogether.

After this, with Quicklisp installed, all that is left to do is load up our app (the following is an example from a [recent REPL of mine](https://github.com/iamFIREcracker/cl-face-mask-ce-verifier/blob/master/build/replit.lisp)):
(ql:quickload '("face-mask-ce-verifier-web" "swank-tunnel"))
(fmcv-web:start :web-interface "0.0.0.0")
Now, as you can imagine, the following two main activities will be slowing down the REPL startup time:

1. Bootstrapping Quicklisp, i.e. downloading it, installing it, fetching the latest dist
2. Loading / compiling the application system, and all its dependencies

Now, doing this once is perfectly fine, but can we configure things so that we don't have to do it over and over again, whenever Replit decides your REPL needs to be put to sleep or moved off to a new instance?

The answer to that question is of course: "yes!"; basically all we have to do is make sure that all the dynamically generated artifacts needed by the REPL are saved somewhere inside the REPL workspace (yes, your REPL workspace will pretty much always be kept around).

Alright; let's start off by changing our Quicklisp install script as follows:
;;; Quicklisp (installed in the current directory)
#-quicklisp
(let ((quicklisp-init (merge-pathnames ".quicklisp/setup.lisp" *default-pathname-defaults*)))
  (if (probe-file quicklisp-init)
      (load quicklisp-init)
      (progn
        (load (merge-pathnames "quicklisp.lisp" *load-truename*))
        (funcall (find-symbol "INSTALL" (find-package "QUICKLISP-QUICKSTART"))
                 :path (merge-pathnames ".quicklisp/" *default-pathname-defaults*)))))
Here, we look for Quicklisp's setup script inside '.quicklisp/setup.lisp' instead of the canonical '~/quicklisp/setup.lisp' (note how we are using *DEFAULT-PATHNAME-DEFAULTS* instead of `(user-homedir-pathname)`); we also pass `:path (merge-pathnames ".quicklisp/" *default-pathname-defaults*)` to `QUICKLISP-QUICKSTART:INSTALL`, effectively telling it to install everything inside the local directory, '.quicklisp'.  Nice!

Now, to change the location where ASDF stores compiled files and make sure it's inside the current workspace, we will have to play with ASDF's [Configuration DSL](https://common-lisp.net/project/asdf/asdf/Output-Configuration-DSL.html) a bit; it took me some Googling around ([1](https://gitlab.common-lisp.net/asdf/asdf/-/issues/22), [2](https://www.mail-archive.com/asdf-devel@common-lisp.net/msg06372.html), [3](https://stackoverflow.com/questions/25323394/asdf-output-redirection), and [4](https://www.reddit.com/r/lisp/comments/23azqf/need_help_setting_asdf_cache_output_directory/)) and quite a few tries before I got this to work; but ultimately, this is what I came up with:
;;; Locally cached FASLs
(let ((cache-dir (merge-pathnames ".common-lisp/" *default-pathname-defaults*)))
  (asdf:initialize-output-translations
    `(:output-translations (t (,cache-dir :implementation))
      :disable-cache
      :ignore-inherited-configuration)))
Again, you can read more about the DSL in the link above, but in general:

1. Using `:implementation` forces ASDF to use separate directories, inside '.common-lisp', based on the current Lisp implementation; this way, should we change the REPL to use CCL instead of SBCL, it would compile everything from scratch, instead of trying to load the artifacts generated while we were using the other implementation
2. Using `:disable-cache` tells ASDF to stop caching compiled files under the default location (i.e. '$XDG_CACHE_HOME/common-lisp'); this is the behavior that the previous rule is meant to override, so disabling it will hopefully make sure the default caching mechanism won't get in our way
3. Using `:ignore-inherited-configuration` tells ASDF to ignore any other _inherited_ rule; now, I am not entirely sure what this is for, but ASDF was complaining that I should use either `:inherit-configuration` or `:ignore-inherited-configuration` when calling `INITIALIZE-OUTPUT-TRANSLATIONS`, so here it is

Anyways, restart the REPL, and you should now finally see the '.quicklisp' and '.common-lisp' local directories being put to good use:
~/Face-mask-CE-marking-verifier$ ls .quicklisp/
asdf.lisp     dists       quicklisp   tmp
client-info.sexp  local-projects  setup.lisp

~/Face-mask-CE-marking-verifier$ ls .common-lisp/
sbcl-2.1.2.nixos-linux-x64
And that's it!  Until the next time...

2021-11-03 (permalink)

TIL: You cannot use plain HTTP with a .dev domain; Google will force an HTTPS redirect! https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/

2021-11-02 (permalink)

* Add new page for `lg` to matteolandi.net

+ Common Lisp / Replit: if we install dependencies inside ./ instead of ~/quicklisp, would Replit cache them between restarts?

? The new version of Vlime (the one supporting JSON protocol) made the REPL editable, but it's a little bit cumbersome to interact with it -- you got to press enter while in insert mode, and if you leave insert mode early, that's it... no way to undo that


Today I woke up to realize that I could not deploy anything to my VPS anymore:
task path: /Users/matteolandi/Workspace/matteolandi.net/system/ansible/roles/matteolandi-index/tasks/main.yml:4
fatal: [matteolandi.net]: FAILED! => {"changed": false, "msg": "Error pulling matteolandi.net:5000/matteolandi-index - code: None message: unable to ping re
gistry endpoint https://matteolandi.net:5000/v0/\nv2 ping attempt failed with error: Get https://matteolandi.net:5000/v2/: x509: certificate has expired or
is not yet valid\n v1 ping attempt failed with error: Get https://matteolandi.net:5000/v1/_ping: x509: certificate has expired or is not yet valid"}
 ____________
< PLAY RECAP >
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

matteolandi.net            : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
Figured it had something to do with the recent [Let's Encrypt root certificate expiry](https://techcrunch.com/2021/09/21/lets-encrypt-root-expiry), so I went on and upgraded my OS certificates (yeah, that plus all the other packages I had not upgraded in a while):
yum update -y
Then I tried to re-deploy again; it did not succeed, but at least the error message was different:
task path: /Users/matteolandi/Workspace/matteolandi.net/system/ansible/roles/docker-login/tasks/main.yml:1
fatal: [matteolandi.net]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7)
or docker-py (Python 2.6)) on centos-512mb-fra1-01.localdomain's Python /usr/bin/python. Please read module documentation and install in the appropriate loc
ation. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpret
er, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: cannot import name certs"}
 ____________
< PLAY RECAP >
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

matteolandi.net            : ok=33   changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
First I upgraded `pip`:
pip install --upgrade pip
Then uninstalled a bunch of borked packages:
pip uninstall docker requests urllib3
And re-installed them again:
pip install --upgrade urllib3 requests docker
And after this, it all went back to normal... Crisis averted!

2021-10-31 (permalink)

* While releasing `cg`, the created GH release is named `refs/tags/...` -- turns out I was setting the `name` input to `github.ref`: https://github.com/iamFIREcracker/cg/commit/c0c3ef5b17be40872c67709f445dcbc66c1936c2

+ migrate `ap` to GitHub actions -- see `cg`

+ migrate `adventofcode` to GitHub actions -- see `cg`

+ migrate `lg` to GitHub actions -- see `cg`

? migrate `plan-convert` to GitHub actions -- see `cg`

? migrate `xml-emitter` to GitHub actions -- see `cg`

2021-10-29 (permalink)

Abusing Common Lisp feature expressions to comment things out! (a follow up) @vim @commonlisp

I posted a link to my previous [.plan entry](https://matteolandi.net/plan.html#day-2021-10-23) on [/r/Common_Lisp](https://www.reddit.com/r/Common_Lisp/comments/qfgci7/abusing_common_lisp_feature_expressions_to/) the other day, and a couple of things came out.

First off, an evil person could add `:nil` to *FEATURES* and let all hell break loose:
#+nil is not the best way of doing this because, in theory, an evil person can put :nil in *features* and then all hell will break lose. Use #+(or) instead, which is immune to this sort of trickery.
Uh oh...
> (or #+nil "WTF?!" "Hello, world!")
"Hello, world!"

> (pushnew :nil *features*)
(:NIL :QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX
 :OS-UNIX :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT
 :ANSI-CL :BSD :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O
 :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS
 :SB-THREAD :SB-UNICODE :SBCL :UNIX)

> (or #+nil "WTF?!" "Hello, world!")
"WTF?!"

> (setf *features* (delete :nil *features*))
(:QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX :OS-UNIX
 :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL :BSD
 :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O
 :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS
 :SB-THREAD :SB-UNICODE :SBCL :UNIX)
Then someone else suggested using [uninterned symbols](http://www.lispworks.com/documentation/HyperSpec/Body/02_dhe.htm) instead, as that will not only keep you safe from the previously mentioned problem, but also give you the opportunity to _document_ why the specific form / element got intentionally excluded:
I use #+#:buggy or any other word to give a tiny piece of documentation as to why it’s commented out. Sometimes I’ll write #+#:pedagogical-implementation to provide a pedagogical, easy-to-understand version of an expression that has been made obscure through optimizations.
Again, let's try this out:
> (or #+#:excluded "WTF?!" "Hello, world!")
"Hello, world!"

> (pushnew #:excluded *features*)
; in: PUSHNEW #:EXCLUDED
;     (LET* ((#:ITEM #:EXCLUDED))
;       (SETQ *FEATURES* (ADJOIN #:ITEM *FEATURES*)))
;
; caught WARNING:
;   undefined variable: #:EXCLUDED
;
; compilation unit finished
;   Undefined variable:
;     #:EXCLUDED
;   caught 1 WARNING condition

debugger invoked on a UNBOUND-VARIABLE @5351833B in thread
#<THREAD "main thread" RUNNING {1001548143}>:
  The variable #:EXCLUDED is unbound.
restarts (invokable by number or by possibly-abbreviated name):
  0: [CONTINUE   ] Retry using #:EXCLUDED.
  1: [USE-VALUE  ] Use specified value.
  2: [STORE-VALUE] Set specified value and use it.
  3: [ABORT      ] Exit debugger, returning to top level.

((LAMBDA ()))
   source: (LET* ((#:ITEM #:EXCLUDED))
             (SETQ *FEATURES* (ADJOIN #:ITEM *FEATURES*)))
0] 3

> (pushnew :excluded *features*)
(:EXCLUDED :QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX
 :OS-UNIX :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT
 :ANSI-CL :BSD :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O
 :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS
 :SB-THREAD :SB-UNICODE :SBCL :UNIX)

> (or #+#:excluded "WTF?!" "Hello, world!")
"Hello, world!"

> (setf *features* (delete :excluded *features*))
(:QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX :OS-UNIX
 :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL :BSD
 :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O
 :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS
 :SB-THREAD :SB-UNICODE :SBCL :UNIX)
Alright then, I think I like the _uninterned symbol_ approach better, so I am going to update my Vim files as follows:
diff --git a/.vim/vimrc b/.vim/vimrc
index 69c5db4..6ca6151 100644
--- a/.vim/vimrc
+++ b/.vim/vimrc
diff --git a/.vim/after/syntax/lisp.vim b/.vim/after/syntax/lisp.vim
index 5a2859c..cdf3c03 100644
--- a/.vim/after/syntax/lisp.vim
+++ b/.vim/after/syntax/lisp.vim
@@ -855,7 +921,7 @@ function! s:vim_sexp_mappings() " {{{
         " Check if it's a #+nil marker
         execute "normal \<Plug>(sexp_move_to_prev_element_head)"
         normal yW
-        if @@ =~? "#+nil"
+        if @@ =~? "#+#:"
             let should_add_comment = 0
         endif
         let @@ = reg_save
@@ -864,7 +930,7 @@ function! s:vim_sexp_mappings() " {{{
         " Do the needed
         if should_add_comment
             " Insert the comment marker, and move back to the element / form
-            execute "normal! i#+nil "
+            execute "normal! i#+#:excluded "
             execute "normal \<Plug>(sexp_move_to_next_element_head)"
         else
             " Move back, and the delete forward
diff --git a/.vim/after/syntax/lisp.vim b/.vim/after/syntax/lisp.vim
index 5a2859c..cdf3c03 100644
--- a/.vim/after/syntax/lisp.vim
+++ b/.vim/after/syntax/lisp.vim
@@ -11,3 +11,11 @@ syn cluster lispBaseListCluster add=lispExcludedForm
 syntax match lispExcludedElement /\v\#\+nil [^(][^ ]+/
 highlight link lispExcludedElement lispComment
 syn cluster lispBaseListCluster add=lispExcludedElement
+
+syntax match lispExcludedFormWithDescription /\v\#\+\#:[^ ]+ \([^)]+\)/
+highlight link lispExcludedFormWithDescription lispComment
+syn cluster lispBaseListCluster add=lispExcludedFormWithDescription
+
+syntax match lispExcludedElementWithDescription /\v\#\+\#:[^ ]+ [^(][^ ]+/
+highlight link lispExcludedElementWithDescription lispComment
+syn cluster lispBaseListCluster add=lispExcludedElementWithDescription
Right on!

2021-10-27 (permalink)

? @read take a look at how mongosh implemented _awaiting_ for promises without the explicit `await` statement -- https://github.com/mongodb-js/mongosh/tree/main/packages/async-rewriter2#next-gen-async-rewriter

? CL/1AM: push tests into a custom special variable, say *MY-TESTS*, and bind 1AM:*TESTS* to *MY-TESTS* when actually running the tests (this way only the current project's tests will be executed)

2021-10-23 (permalink)

Abusing Common Lisp feature expressions to comment things out! @vim @commonlisp

Today I realized that I can easily comment out a form by placing a `#+nil` in front of it, and this turned out to be quite useful and productive, especially when trying things out at the REPL.

Let's take a look at an example (this was taken from a [recent](https://github.com/iamFIREcracker/cl-face-mask-ce-verifier) small project of mine):
(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q (fmcv:nb-search q))))
    (with-page (:title "Face Mask CE Verifier")
      ...
The bit to focus your attention on is the second line, which, in words, pretty much translates to:
when `q`, the request query string, is non-NIL, call FMCV:NB-SEARCH with it, and assign the result to `bodies`; otherwise, initialize `bodies` to NIL
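AND is short-circuiting, and returns its last argument when all the arguments are non-NIL; that is exactly what makes this idiom work:
> (and nil (error "Never evaluated"))
NIL

> (and "q" :some-result)
:SOME-RESULT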
Now, while working on this, I realized FMCV:NB-SEARCH could sometimes take up to 3 seconds to return, and since there wasn't much I could do about it (i.e. it was the external service the function was calling behind the scenes that was taking so long to reply), I decided to make the whole development experience a tiny bit more enjoyable by introducing a little bit of caching:

- call the API once
- save the response somewhere
- change the logic to read from the cached response until I was done with everything else
(defvar *cached-bodies* (fmcv:nb-search "00"))

(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q *cached-bodies*)))
    (with-page (:title "Face Mask CE Verifier")
The above just works, 100%; however, it also makes it difficult to quickly switch between implementations.  For example, should I want to hit the external API again, I would have to put the FMCV:NB-SEARCH function call back in, and that would be a lot of typing /s

This is where [feature expressions](http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec/Body/sec_24-1-2-1.html) could come to the rescue:

- Keep both *cached-bodies* and the FMCV:NB-SEARCH call in place
- Want to use the cached response?  Place a `#+nil` in front of the FMCV:NB-SEARCH call
(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q *cached-bodies* #+nil (fmcv:nb-search q))))
    (with-page (:title "Face Mask CE Verifier")
- Want to hit the external service instead?  Place a `#+nil` in front of *cached-bodies*
(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q #+nil *cached-bodies* (fmcv:nb-search q))))
    (with-page (:title "Face Mask CE Verifier")
Note: `#+` is a reader macro that operates on the next two expressions found in the stream; if the first expression (a feature expression) is satisfied, i.e. found in *FEATURES*, then the macro emits the second expression; otherwise, it swallows it.  Now, `nil` (read as the keyword :NIL) is normally not in *FEATURES*, so basically we are telling the reader to swallow the second expression, always!
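A quick (hypothetical) REPL session confirms this, assuming of course that nobody has pushed :NIL into *FEATURES*:
> (list 1 #+nil 2 3)
(1 3)

> '(#+nil "skipped" "kept")
("kept")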

Pretty cool uh?!

_All the Vim haters should stop reading here_

I was so excited by this that I ended up creating two Vim mappings to automate this:

- `gce` to add (or remove) a `#+nil` in front of the current element (_element_ in the [vim-sexp](https://github.com/guns/vim-sexp) sense)
- `gcf` to add (or remove) a `#+nil` in front of the current form
" Comment out the current form / element
function! ToggleCommonLispComment(toggle_form) abort "{{{
    " Move at the start of the current element / form; if on an open paren
    " already, do nothing
    if getline('.')[col('.') - 1] != "("
        let motion = ""
        if a:toggle_form
            let motion = "\<Plug>(sexp_move_to_prev_bracket)"
        else
            let motion = "\<Plug>(sexp_move_to_prev_element_head)"
        endif
        execute "normal " . motion
    endif

    " Figure out if we need to add a comment, or remove the existing one
    let should_add_comment = 1
    let save_cursor = getcurpos()
    let reg_save = @@
    " Move back to the previous element
    " Yank it
    " Check if it's a #+nil marker
    execute "normal \<Plug>(sexp_move_to_prev_element_head)"
    normal yW
    if @@ =~? "#+nil"
        let should_add_comment = 0
    endif
    let @@ = reg_save
    call setpos('.', save_cursor)

    " Do the needed
    if should_add_comment
        " Insert the comment marker, and move back to the element / form
        execute "normal! i#+nil "
        execute "normal \<Plug>(sexp_move_to_next_element_head)"
    else
        " Move back, and the delete forward
        execute "normal \<Plug>(sexp_move_to_prev_element_head)"
        normal dW
    endif
endfunction
" }}}
nmap <silent><buffer>   gce  :call ToggleCommonLispComment(0)<CR>
nmap <silent><buffer>   gcf  :call ToggleCommonLispComment(1)<CR>
It's a bit hack-y to be honest, but whatever!

Also, you can even change the relevant syntax file (e.g. ~/.vim/after/syntax/lisp.vim) and make these regions look like comments:
syntax match lispExcludedForm /\v\#\+nil \([^)]+\)/
highlight link lispExcludedForm lispComment
syn cluster lispBaseListCluster add=lispExcludedForm

syntax match lispExcludedElement /\v\#\+nil [^(][^ ]+/
highlight link lispExcludedElement lispComment
syn cluster lispBaseListCluster add=lispExcludedElement
What a joy!

Until next time...

2021-10-22 (permalink)

? Can I get a Replit instance, open a ssh connection to my VPS, so that I can then use my VPS as jump server to connect back into the REPL?

2021-10-20 (permalink)

How to verify the CE marking on your face mask?!

First off, what does the CE marking mean? From [Wikipedia](https://en.wikipedia.org/wiki/CE_marking):
The CE mark on a product indicates that the manufacturer or importer of that product affirms its compliance with the relevant EU legislation and the product may be sold anywhere in the European Economic Area (EEA). It is a criminal offence to affix a CE mark to a product that is not compliant or offer it for sale.
It continues:
The marking does not indicate EEA manufacture or that the EU or another authority has approved a product as safe or conformant. The EU requirements may include safety, health, and environmental protection. If stipulated in any EU product legislation, assessment by a Notified Body or manufacture according to a certified production quality system may be required. Where relevant, the CE mark is followed by the registration number of the notified body involved in conformity assessment.
So, for example, when you read `CE-2233` on a product, `CE` represents the CE marking (duh), while `2233` refers to the ID of the notified body involved in the conformity assessment.

Now off to the verification part.

Note: the steps listed below will help you verify that the notified body involved with the assessment is..._legit_, i.e. can actually issue the specific conformity assessment certificate; however, they won't help with verifying that the specified notified body has indeed run a conformity assessment on a given product (i.e. the product seller could lie about this, but as the Wikipedia article states, that is considered a criminal offense and, as such, punishable by law).

- To sell face masks in the EU, you need the CE marking
- For this matter, notified bodies need to be competent with the '2016/425
  Personal protective equipment' legislation
- They also need to be entitled to issue specific conformity assessment modules for 'Equipment providing respiratory system protection' products (e.g. face masks)
- Luckily for us, an online database exists,
  [NANDO](https://ec.europa.eu/growth/tools-databases/nando/index.cfm), storing
  all this information about all the registered bodies

So, without further ado, navigate to: [Nando](https://ec.europa.eu/growth/tools-databases/nando/index.cfm) > [Free search](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=search.main).

Take the CE numerical value (i.e. the `2233` part of `CE-2233`), populate the `Keyword On Notified body number` field with it, and press the `Search` button.

Now `Ctrl+F` for the CE numerical value (i.e. the `2233` part of `CE-2233`), and if you are lucky enough, you should find a match inside the `Body type` column:
NB 2233 [GÉPTESZT Termelőeszközöket Felülvizsgáló és Karbantartó Kft.](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=directive.nb&body_type=NB&refe_cd=NANDO_INPUT_166341) Hungary
Click on the link in the second column to open up the page listing addresses, contact details, and all the legislations that the notified body has scoped to assess and certify against.
Body :
>
GÉPTESZT Termelőeszközöket Felülvizsgáló és Karbantartó Kft.
Jablonka u. 79.
1037 Budapest
Country : Hungary
>
Phone : +36 1 250 3531
Fax : +36 1 250 3531
>
Email : gepteszt@gepteszt.hu
Website : www.gepteszt.hu
>
Notified Body number : 2233
>
Last approval date : 13/04/2010
>
--
>
Legislation
>
- Regulation (EU) 2016/425 Personal protective equipment [HTML](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=notification.html&ntf_id=309049&version_no=1) [PDF](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=notification.pdf&dir_id=155501&ntf_id=309049)
Search for the `Personal protective equipment` entry inside the Legislation table, and click on the link labeled `HTML`.  On this page you will find all the details of what the notified body has been certified for.

You should first confirm that there is an entry for `Equipment providing respiratory system protection`; without it, even a notified body certified for PPE cannot engage in CE certification of protective masks.  Then, you will have to confirm the conformity assessment modules that the notified body is entitled to issue; in particular, we are looking for:

- [Module B: EU type-examination](https://support.ce-check.eu/hc/en-us/articles/360019298431-Conformity-Assessment-Module-B)
- [Module C2: Supervised product checks at random intervals](https://support.ce-check.eu/hc/en-us/articles/360019507611-Conformity-Assessment-Module-C2-)
- [Module D: Quality assurance of the production process](https://support.ce-check.eu/hc/en-us/articles/360019191712-Conformity-Assessment-Modules-D-D1)

Note: manufacturers can only enter the EU market legally after obtaining a Module B + Module C2 certificate, or a Module B + Module D certificate.
Products:
>
- Equipment providing respiratory system protection
- Protective Equipment against falls from heights
>
Procedures:
>
- EU type-examination
- Quality assurance of the production process
- Supervised product checks at random intervals
>
Articles / Annexes
>
- Annex V
- Annex VIII
- Annex VII
By the looks of the information shown above, it appears that this notified body has all it takes to properly run conformity assessments against face masks... Hurray!
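That B + C2 / B + D rule can be captured by a tiny predicate (a sketch with made-up names, not an official tool):

```javascript
// Hypothetical helper: does a notified body's set of conformity
// assessment modules allow it to certify PPE for the EU market?
// Per the rule above: Module B, plus either Module C2 or Module D.
function canCertifyPpe(modules) {
  const set = new Set(modules);
  return set.has('B') && (set.has('C2') || set.has('D'));
}

canCertifyPpe(['B', 'C2']); // true
canCertifyPpe(['C2', 'D']); // false: Module B is mandatory
```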

Links:

- [Guide to using NANDO website to identify Notified Bodies PPE](https://yewtu.be/watch?v=ceCAceuDOhE)
- [Mascherine Ffp2, ecco i codici certificati CE non a norma e lo strumento di verifica Ue](https://quifinanza.it/info-utili/video/mascherine-ffp2-covid-codici-certificati-ce-non-a-norma/471686/)
- [How to Verify CE Certification?](https://verifyfull.com/how-to-verify-ce-certification%EF%BC%9F/)

2021-10-15 (permalink)

Today I fucking figured out how to make Vim's * / # not suck! @vim
" Star search plugin {{{
"
" Few things that this plugin does, and that others might not:
"
" 1. Works in both normal, and visual mode
" 2. Plays nice with smartcase search -- vim's builtin * / # mappings do not
"    support that
" 3. Does not automatically jump to the _first_ next occurrence; instead, moves
"    the cursor to the beginning of the current occurrence so that if for
"    whatever reason you decided to swap search direction with `N`, you would
"    be taken to the first previous occurrence (and not to the beginning of the
"    current word
"
" Normal mode:
"
" - Save the word under the cursor -- running `expand('<cword>')` while
"   searching actually cuases the cursor to move around one char forward, and
"   if you happened to start from the end of the word, that
"   'move-one-char-ahead' behavior would actually move the cursor over to the
"   next word, causing `expand('<cword>')` to return something unexpected
"   instead
" - Saves the current view (so we can restore it later)
" - Triggers the word-boundary-d search
" - Restores to the previous view -- effectively undoing the first jump
" - Checks if the cursor wasn't already at the beginning of a match, and if
"   not, runs `b` to move at the beginning of the current word
nnoremap <Plug>NormalStarSearchForward
            \ :let star_search_cword = expand('<cword>')<CR>
            \ :let star_search_view = winsaveview()<CR>
            \ /\<<C-R>=escape(star_search_cword, '*\')<CR>\><CR>
            \ :call winrestview(star_search_view)<CR>
            \ :execute "normal! " . (getline('.')[col('.') - 1:] !~# '^' . star_search_cword ? "b" : "")<CR>
nnoremap <Plug>NormalStarSearchBackward
            \ :let star_search_cword = expand('<cword>')<CR>
            \ :let star_search_view = winsaveview()<CR>
            \ ?\<<C-R>=escape(star_search_cword, '*\')<CR>\><CR>
            \ :call winrestview(star_search_view)<CR>
            \ :execute "normal! " . (getline('.')[col('.') - 1:] !~# '^' . star_search_cword ? "b" : "")<CR>
nmap * <Plug>NormalStarSearchForward
nmap # <Plug>NormalStarSearchBackward

" Visual mode:
"
" - Saves the current view (so we can restore it later)
" - Triggers the search -- without word-boundaries...you can use normal the
"   normal mode alternative for that
" - Restores to the previous view -- effectively undoing the first jump
xnoremap <Plug>VisualStarSearchForward
            \ :<C-U>let star_search_view = winsaveview()<CR>
            \ /<C-R>=escape(GetCurrentSelection(visualmode()), '*\')<CR><CR>
            \ :call winrestview(star_search_view)<CR>
xnoremap <Plug>VisualStarSearchBackward
            \ :<C-U>let star_search_view = winsaveview()<CR>
            \ ?<C-R>=escape(GetCurrentSelection(visualmode()), '*\')<CR><CR>
            \ :call winrestview(star_search_view)<CR>
xmap * <Plug>VisualStarSearchForward
xmap # <Plug>VisualStarSearchBackward

" }}}
There is a lot of copy-pasta in there, but it gets the job done, and given the amount of time I have already spent on this (today, and in the past even), I think it's time to move on!

? git: switch back to per-branch stashes, but register a pre-push hook to abort if any unpushed commit is tagged as SAVEPOINT -- https://gist.github.com/GUI/676bcb25389cd01d47828fddcf37e1f2

+ change Vim/lisp setup so that the REPL is actually started in a different terminal (via Dispatch maybe) and does not take real estate

2021-10-07 (permalink)

? @A @read [Making a Game in Janet, Part 2: Judging Janet](https://ianthehenry.com/posts/janet-game/judging-janet/) -- self-modifying tests, [unified tests](https://www.mercurial-scm.org/wiki/WritingTests), [expect tests](https://blog.janestreet.com/testing-with-expectations/)

? @A @read [Clojure unique way of debugging](https://www.reddit.com/r/Clojure/comments/q31wpo/clojure_unique_way_of_debugging/) -- environment, capture, replay

2021-10-06 (permalink)

* Added GitHub Actions to `cg` -- bye bye TravisCI -- [cg/pull/8](https://github.com/iamFIREcracker/cg/pull/8)

+ See if it's possible to enhance [40ants/setup-lisp](https://github.com/40ants/setup-lisp) to support Windows

2021-09-30 (permalink)

`ScratchThis`, a Vim command to make the current buffer a `scratch` one (see `:h special-buffers` for additional info):
function! s:ScratchThis() abort
    setlocal buftype=nowrite bufhidden=delete noswapfile
endfunction

command! ScratchThis call s:ScratchThis()

2021-09-15 (permalink)

Nice little vim/fugitive gem: use `!` to open a command prompt with the file under the cursor.
nnoremap ! :! <C-R><C-F><Home><Right>
It's very handy if you want to delete an untracked file (see https://github.com/tpope/vim-fugitive/issues/23).

netrw comes with an identical mapping too!

2021-09-13 (permalink)

* finished reading [Tools for Thought: The History and Future of Mind-Expanding Technology](https://www.amazon.com/Tools-Thought-History-Mind-Expanding-Technology/dp/0262681153/ref=sr_1_1?dchild=1&keywords=tools+for+thought&qid=1631599675&s=books&sr=1-1)

2021-09-10 (permalink)

? book: The Dream Machine https://www.amazon.com/Dream-Machine-M-Mitchell-Waldrop/dp/1732265119

2021-09-08 (permalink)

Notes about reverse engineering Basecamp's Recording pattern

Before we begin: I have never used RoR nor programmed in Ruby, so apologies if some of the things mentioned are inaccurate or completely wrong; I simply got interested in the pattern, and tried my best to understand how it could have been implemented.  So bear with me, and of course feel free to suggest edits.

Main concepts:

- Buckets are collections of records, i.e. _recordings_, of different types, e.g. `Todo`, `Todoset`
- Recordings are just pointers, pointing to the actual data record, i.e. the _recordables_
- Recordables are immutable records, i.e. every time something changes, a new _recordable_ record is created and the relevant recording is updated to point to it
- Events are created to link together 2 recordables (the current version and the previous one), the recording, the bucket, and the author of the change
- This enables data versioning
- This enables data auditing (at the recording level, at the bucket level, and at the user level)
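A minimal, purely illustrative JavaScript sketch of these concepts (none of the names below are Basecamp's):

```javascript
// Recordables are immutable snapshots; recordings are mutable pointers.
let nextId = 1;

function createRecordable(data) {
  // every change produces a brand new, frozen record
  return Object.freeze({ id: nextId++, ...data });
}

function createRecording(recordable) {
  // the recording only stores a pointer to the current recordable
  return { id: nextId++, recordableId: recordable.id };
}

const v1 = createRecordable({ type: 'Todo', content: 'Buy milk' });
const todoRecording = createRecording(v1);

// "Updating" the todo: create a new recordable and repoint the recording;
// v1 sticks around untouched, which is what enables versioning/auditing.
const v2 = createRecordable({ type: 'Todo', content: 'Buy oat milk' });
todoRecording.recordableId = v2.id;
```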

It all started with this [tweet](https://twitter.com/dhh/status/962156053461843968?lang=en):
Probably the single most powerful pattern in BC3 ❤️. Really should write it up at one point.
This is what a Basecamp URL looks like: `/1234567890/projects/12345/todos/67890`

In it:

- `1234567890` is the account ID -- it identifies a company's instance, it represents a tenant
- `12345` is the bucket ID, pointing to a record whose `bucketable_type` is `Project`
- `67890` is the recording ID, pointing to a record whose `recordable_type` is `Todo`

Bucket is not a project, but links to one (project is the _bucketable_ entity):
class Bucket < ActiveRecord::Base
  ...
  belongs_to :account
  delegated_type :bucketable, types: %w[ Project ] # this adds bucketable_id, bucketable_type to the schema ... or `belongs_to :bucketable`
  has_many :recordings, dependent: :destroy
  ...
end
Recording is not a todo, but links to one (todo is the _recordable_ entity):
class Recording < ActiveRecord::Base
  ...
  belongs_to :bucket, touch: true
  belongs_to :creator, class_name: 'Person', default: -> { Current.person }
  ...
  delegated_type :recordable, types: %w[ Todo Todolist Todoset Dock ] # this adds recordable_id and recordable_type to the schema

  delegate :account, to: :bucket
end

module Recordable
  extend ActiveSupport::Concern
  included do
    has_many :recordings, as: :recordable # same as specified inside `Recording` as `delegated_type`
  end
end
Recordables are immutable, and do not usually have all the created-at, updated-at metadata...recordings do!  Also, multiple recordings can point to the same recordable, hence the `has_many :recordings` inside the `Recordable` concern.

Recordings have parents, and children; it almost becomes a graph database, with a tree structure:

- recording.recordable_type: Todo
- recording.parent.recordable_type: Todolist
- recording.parent.parent.recordable_type: Todoset
- recording.parent.parent.parent.recordable_type: Dock
- recording.parent.parent.parent.children.map(&:recordable_type): Chat::Transcript, Message::Board, Todoset, Schedule, Questionnaire, Vault

The above, i.e. the tree structure, is most likely implemented via the `Tree` concern:
module Recording::Tree
  extend ActiveSupport::Concern

  included do
    belongs_to :parent, class_name: 'Recording', optional: true
    has_many :children, class_name: 'Recording', foreign_key: :parent_id
  end
end
Note: I don't assume the above to be correct, but it should get the job done; I question whether it would make sense to add a _type_ field on this parent/child relation, but maybe it's not strictly required (i.e. you always know that one todo's parent is a todolist).

As stated above, recordings are pointers, almost like symlinks: when you update a todo, you don't actually update the record; you create a new one, and update the recording to point to it.  Since URLs are keyed accordingly (i.e. they use recording IDs, and not recordable ones), the URL does not change, but the actual todo the recording points to does.

Controllers then look something like this:
class TodosController < ApplicationController
  def create
    @recording = @bucket.record new_todo
  end

  def update
    @recording.update! recordable: new_todo
  end

  private
    def new_todo
      Todo.new params.require(:todo).permit(:content, :description, :starts_on, :due_on)
    end
end
While this is what `Bucket.record` looks like:
class Bucket < ActiveRecord::Base
  ...
  def record(recordable, children: nil, parent: nil, status: :active, creator: Current.person, **options)
    transaction do
      recordable.save!
      options.merge!(recordable: recordable, parent: parent, status: status, creator: creator)

      recordings.create!(options).tap do | recording |
        Array(children).each do | child |
          record child, parent: recording, status: status, creator: creator
        end
      end
    end
  end
end
This makes things like "Copy message to a new project" way more efficient: previously you had to set up a new job that literally copied the message, the comments, and the events onto the new project; now, all you have to do is set up a new recording pointing to the existing _immutable_ recordable.

For example:

- Project has messages -- messages are recordables
- Messages have comments -- comments are recordables
- When you copy a message from one project to another, you will have to create a recording not just for the message to be copied, but for all its existing comments as well (that's what `children` inside `Bucket.record` is for)

Note: `children` inside `Bucket.record` is expected to be a list of recordables, and not recordings; this means to pass `children` to `Bucket.record` we will have to do something along the lines of: `message.recording.children.map(&:recordable)`.

Note: `Bucket.record` (at least the version above, which was [shown in the presentation](https://www.youtube.com/watch?v=tmWlm0M6JSk&t=3060s)) does not seem to recurse more than one level, which means you can only create recordings for the parent entity and its direct children, e.g., messages and comments, but not `Todoset`s `Todolist`s _and_ `Todo`s.

Note: is this more efficient, really?  Even though you will not have to copy the immutable records (i.e. the recordables), you will still have to recreate the same recordings tree, right?

After you copy a recording, you should end up with something like this:
>> Recording.find(111111111).recordable.id
=> 33333333333333

>> Recording.find(222222222).recordable.id
=> 33333333333333
Again, two pointers, i.e. recordings, pointing to the same immutable record, i.e. the recordable.

Note: a recordable has many recordings, not just one; this is because the same recordable object can be pointed to by two different recordings.

This Recording pattern works in tandem with `Event`s:

- They link to the newer version of the recordable, and the previous one
- They belong to a `Recording`: this way you can easily see all the times a recording was updated
- They belong to a `Bucket`: this way you can easily see all the times a bucket was updated
- They belong to a `Person`: this way you can easily see all the changes done by one user
- They have one `Detail`, i.e. a hash containing some additional metadata linked to the current event
class Event < ActiveRecord::Base
  belongs_to :recording, required: false
  belongs_to :bucket
  belongs_to :creator, class_name: 'Person'

  has_one :detail, dependent: :delete, required: true

  include Notifiable...Requested
  ...

end
and with the `Eventable` concern:
module Recording::Eventable
  extend ActiveSupport::Concern
  ...
  included do
    has_many :events, dependent: :destroy

    after_create :track_created
    after_update :track_updated
    after_update_commit :forget_adoption_tracking, :forget_events
  end

  def track_event(action, recordable_previous: nil, **particulars)
    Event.create! \
      recording: self, recordable: recordable, recordable_previous: recordable_previous,
      bucket: bucket, creator: Current.person, action: action,
      detail: Event::Detail.new(particulars)
  end
  ...
end
`track_event` here is the generic method, but more specific ones exist as well for the more common use cases, so we don't have to manually plumb arguments.

We track an event every single time an eventable is created, or updated:
    after_create :track_created
    after_update :track_updated
The private method `track_created` simply delegates to `track_event`:
  private
    def track_created
      track_event :created, status_was: status
    end
A recordable has many events; again, the same recordable could be referenced by multiple recordings, and since events are created when recordings are created, that's how you end up with multiple events.

OK, great, nice, but what schema would enable such a pattern?  Again, not 100% accurate, but hopefully close enough:

- Buckets (id, bucketable_type, bucketable_id, ...)
- Recordings (id, bucket_id, recordable_type, recordable_id, parent_id, ...)
- Events (id, bucket_id, recording_id, recordable_id, previous_recordable_id, user_id, ...)

Note: recordables are _untyped_ here, but the type information is actually one join-table away, i.e. it's available inside the linked recording

- Projects (id, ...)
- Messages (id, ...)

And this seems to be it really.

Note: this pattern heavily relies on Rails delegated types, i.e. sugar on top of polymorphic associations (see [rails/rails#39341](https://github.com/rails/rails/pull/39341)), to automatically traverse the recording table and get to the recordable entry.  My first thought was: how will this perform?  How is the extra join required to fetch any linked entity going to affect performance?  If it's working for Basecamp, hopefully it works decently enough for other applications as well, but I guess it also depends on how tightly the different entities are linked together.

Note: in the end, this is not that different from 'time-based versioning' (read: [Keeping track of graph changes using temporal versioning](https://medium.com/neo4j/keeping-track-of-graph-changes-using-temporal-versioning-3b0f854536fa)), with recordings being the _identity_ nodes, and recordables being the _state_ objects.

Some references:

- [Basecamp 3 Unraveled](https://www.youtube.com/watch?v=tmWlm0M6JSk&t=3060s) -- where an overview of the Bucket/Recording pattern is given
- [On Writing Software Well #3: Using globals when the price is right](https://www.youtube.com/watch?v=D7zUOtlpUPw&t=653s) -- where `Eventable` concern, and the `Event` model are shown
- [On Writing Software Well #5: Testing without test damage or excessive isolation](https://www.youtube.com/watch?v=5hN6OZDyQtk&t=419s) -- where the `Recording` model is shown
- [GoRails: Recording pattern (Basecamp 3)](https://gorails.com/forum/recording-pattern-basecamp-3)

2021-09-06 (permalink)

* `gem install`-d all the things again. I guess the latest `brew upgrade` that I ran last week might have broken a few things (and while at it, I also `rvm install --default ruby-3.0.0`)

? implement rainbow-blocks for Vim (see: https://twitter.com/lisperati/status/1434114082903367681). A simple hack would be to define 5/6 syntax rules for `(...)` blocks and configure them so that they can only exist within an outer block, i.e. block1, block2 only inside block1, block3 only inside block2.

2021-08-30 (permalink)

* finished reading [Grokking Simplicity](https://www.amazon.it/Grokking-Simplicity-Software-Functional-Thinking/dp/1617296201/ref=sr_1_1?__mk_it_IT=%C3%85M%C3%85%C5%BD%C3%95%C3%91&dchild=1&keywords=grokking+simplicity&qid=1630331790&sr=8-1)


Grokking Simplicity notes

This page of the book says it all:
We have learned the skills of professionals
>
Since we're at the end of the book, let's look back at how far we've come and do a high-level listing of the skills we've learned. These are the skills of professional functional programmers. They have been chosen for their power and depth.
>
Part 1: Actions, Calculations, and Data:
>
- Identifying the most problematic parts of your code by distinguishing actions, calculations, and data
- Making your code more reusable and testable by extracting calculations from actions
- Improving the design of actions by replacing implicit inputs and outputs with explicit ones
- Implementing immutability to make reading data into a calculation
- Organizing and improving code with stratified design
>
Part 2: First-class abstractions
>
- Making syntactic operations first-class so that they can be abstracted in code
- Reasoning at a higher level using functional iteration and other functional tools
- Chaining functional tools into data transformation pipelines
- Understanding distributed and concurrent systems by timeline diagrams
- Manipulating timelines to eliminate bugs
- Mutating state safely with higher-order functions
- Using reactive architecture to reduce coupling between cause and effects
- Applying the onion architecture (Interaction, Domain, Language) to design services that interact with the world
And here is the list of primitives presented in the book for working with timelines.

Serialize tasks with `Queue`:
function Queue(worker) {
  var queue_items = [];
  var working = false;

  function runNext() {
    if(working)
      return;
    if(queue_items.length === 0)
      return;
    working = true;
    var item = queue_items.shift();
    worker(item.data, function(val) {
      working = false;
      setTimeout(item.callback, 0, val);
      runNext();
    });
  }

  return function(data, callback) {
    queue_items.push({
      data: data,
      callback: callback || function(){}
    });
    setTimeout(runNext, 0);
  };
}
Example:
function calc_cart_worker(cart, done) {
  calc_cart_total(cart, function(total) {
    update_total_dom(total);
    done(total);
  });
}

var update_total_queue = Queue(calc_cart_worker);
Serialize tasks and skip duplicate work with `DroppingQueue`:
function DroppingQueue(max, worker) {
  var queue_items = [];
  var working = false;

  function runNext() {
    if(working)
      return;
    if(queue_items.length === 0)
      return;
    working = true;
    var item = queue_items.shift();
    worker(item.data, function(val) {
      working = false;
      setTimeout(item.callback, 0, val);
      runNext();
    });
  }

  return function(data, callback) {
    queue_items.push({
      data: data,
      callback: callback || function(){}
    });
    while(queue_items.length > max)
      queue_items.shift();
    setTimeout(runNext, 0);
  };
}
Example:
function calc_cart_worker(cart, done) {
  calc_cart_total(cart, function(total) {
    update_total_dom(total);
    done(total);
  });
}

var update_total_queue = DroppingQueue(1, calc_cart_worker);
Synchronize (i.e. cut) multiple timelines with `Cut`:
function Cut(num, callback) {
  var num_finished = 0;
  return function() {
    num_finished += 1;
    if(num_finished === num)
      callback();
  };
}
Example:
var done = Cut(3, function() {
  console.log("3 timelines are finished");
});

done();
done();
done(); // only this one logs
Make sure a function is called only once, with `JustOnce`:
function JustOnce(action) {
  var alreadyCalled = false;
  return function(a, b, c) {
    if(alreadyCalled) return;
    alreadyCalled = true;
    return action(a, b, c);
  };
}
Example:
function sendAddToCartText(number) {
  sendTextAjax(number, "Thanks for adding something to your cart. Reply if you have any questions!");
}

var sendAddToCartTextOnce = JustOnce(sendAddToCartText);

sendAddToCartTextOnce("555-555-5555-55"); // only this call goes through
sendAddToCartTextOnce("555-555-5555-55");
sendAddToCartTextOnce("555-555-5555-55");
sendAddToCartTextOnce("555-555-5555-55");
Encapsulate state inside _reactive_ `ValueCell`s:
function ValueCell(initialValue) {
  var currentValue = initialValue;
  var watchers = [];
  return {
    val: function() {
      return currentValue;
    },
    update: function(f) {
      var oldValue = currentValue;
      var newValue = f(oldValue);
      if(oldValue !== newValue) {
        currentValue = newValue;
        forEach(watchers, function(watcher) {
          watcher(newValue);
        });
      }
    },
    addWatcher: function(f) {
      watchers.push(f);
    }
  };
}
Example:
var shopping_cart = ValueCell({});

function add_item_to_cart(name, price) {
  var item = make_cart_item(name, price);
  shopping_cart.update(function(cart) {
    return add_item(cart, item);
  });
  var total = calc_total(shopping_cart.val());
  set_cart_total_dom(total);
  update_tax_dom(total);
}

shopping_cart.addWatcher(update_shipping_icons);
Derived values can instead be implemented with `FormulaCell`s:
function FormulaCell(upstreamCell, f) {
  var myCell = ValueCell(f(upstreamCell.val()));
  upstreamCell.addWatcher(function(newUpstreamValue) {
    myCell.update(function(currentValue) {
      return f(newUpstreamValue);
    });
  });
  return {
    val: myCell.val,
    addWatcher: myCell.addWatcher
  };
}
Example:
var shopping_cart = ValueCell({});

var cart_total = FormulaCell(shopping_cart, calc_total);

function add_item_to_cart(name, price) {
  var item = make_cart_item(name, price);
  shopping_cart.update(function(cart) {
    return add_item(cart, item);
  });
}

shopping_cart.addWatcher(update_shipping_icons);
cart_total.addWatcher(set_cart_total_dom);
cart_total.addWatcher(update_tax_dom);

Location sharing alternative to the Google built-in one @idea

- It would be fun
- You would not have to share data with Google (why would anyone share it with you instead? My friends and family probably would!)

Streamlining answering the donors questionnaire @idea

"Which cities have you visited during the last 4 weeks?"
"Which countries have you visited over the last 6 months?"

These are some questions donors get asked over and over again before they are cleared for the donation, and it always catches me off guard.  Can this information be automatically pulled from Google?

What about my wife's data instead, would I have access to it?  During COVID, most of the time it's not just about the places you have visited, but also the places the people you live with have...

2021-08-27 (permalink)

The power of Lisp is its own worst enemy.
[The Lisp Curse](http://www.winestockwebdesign.com/Essays/Lisp_Curse.html)

2021-08-23 (permalink)

* finished reading [The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture](https://www.amazon.com/Friendly-Orange-Glow-Untold-Cyberculture/dp/1101871555/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1629734304&sr=1-1)

2021-06-28 (permalink)

* finished reading [10 PRINT CHR$(205.5+RND(1)); : GOTO 10](https://www.amazon.it/dp/0262526743?tag=duc01-21&linkCode=osi&th=1&psc=1)

2021-06-19 (permalink)

Remote live coding a Clack application with Swank, and Ngrok

Today I would like to show you how to create a very simple CL Web application with Clack, and how to use Swank and [Ngrok](https://ngrok.com/) to enable remote [live coding](https://en.wikipedia.org/wiki/Live_coding).

Here is what we are going to wind up with:

- Clack application running on `localhost:5000` -- i.e. the Web application
- Swank server running on `localhost:4006` -- i.e. the "live coding" enabler
- Ngrok tunnel to remotely expose `localhost:4006` (just in case you do not have SSH access into the server the app is running on)

As usual, let's start off by naming all the systems we are depending on:

- `clack`, Web application environment for CL
- `ngrok`, CL wrapper for installing and running Ngrok
- `swank`, what enables remote "live coding"
- `uiop`, utilities that abstract over discrepancies between implementations

Now, `ngrok` has not been published to Quicklisp yet, so you will have to manually download it first:
$ git clone https://github.com/40ants/ngrok.git
$ sbcl --noinform
> ...
...and then make sure it is QL:QUICKLOAD-able:
(pushnew '(merge-pathnames (parse-namestring "ngrok/")
             *default-pathname-defaults*)
           asdf:*central-registry*)
Now you can go ahead and load all the required dependencies:
(ql:quickload '("clack" "ngrok" "swank" "uiop"))
Let's take a look at the MAIN function. It starts the Web application, the Swank server, and Ngrok:
(defun main ()
  (clack)
  (swank)
  (ngrok))
For the Web application:

- We define a special variable, *HANDLER*, to hold the Clack Web application handler
- We start the app, bind it on `localhost:5000`, and upon receiving an incoming request we delegate to SRV
- Inside SRV, we delegate to APP (more to this later)
- Inside APP, we finally process the request and return a dummy message back to the client
(defvar *handler* nil)

(defun clack () (setf *handler* (clack:clackup #'srv :address "localhost" :port 5000)))

(defun srv (env) (app env))

(defun app (env)
  (declare (ignore env))
  '(200 (:content-type "text/plain") ("Hello, Clack!")))
You might be wondering: "why not invoke CLACK:CLACKUP with #'APP? Why the middle-man, SRV?"  Well, that's because Clack dispatches to whichever function was passed in at the time CLACK:CLACKUP was called, and because of that, any subsequent re-definitions of that function would not be picked up by the framework.  The solution? Add a level of indirection: so long as you never change SRV, and always change APP instead, you should be all right!
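The same indirection trick, sketched in JavaScript terms (illustrative names, nothing to do with Clack's actual API):

```javascript
// The framework captures a handler reference once, at registration time;
// so we hand it a stable wrapper (srv) that looks up the current app on
// every call, making redefinitions of app visible immediately.
let app = (env) => 'Hello, Clack!';

const srv = (env) => app(env); // registered once, never redefined

srv({}); // 'Hello, Clack!'

// "Live coding": redefine app; srv picks it up on the very next request.
app = (env) => 'Hello, Matteo!';
srv({}); // 'Hello, Matteo!'
```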

Let's now take a look at how to set up a Swank server.  First we ask the user for a secret with which to [secure](https://github.com/slime/slime/issues/286) the server (i.e. it will accept incoming connections only from clients that know such secret), then we write the secret onto ~/.slime-secret (yes, it's going to overwrite the existing file!), and last we actually start the server:
(defun getenv-or-readline (name)
  "Gets the value of the environment variable, or asks the user to provide
   a value for it."
  (or (uiop:getenv name)
      (progn
        (format *query-io* "~a=" name)
        (force-output *query-io*)
        (read-line *query-io*))))

(defvar *slime-secret* (getenv-or-readline "SLIME_SECRET"))
(defparameter *swank-port* 4006)

(defun swank ()
  (write-slime-secret)
  (swank:create-server :port *swank-port* :dont-close t))

(defun write-slime-secret ()
  (with-open-file (stream "~/.slime-secret" :direction :output :if-exists :supersede)
    (write-string *slime-secret* stream)))
Last but not least, Ngrok: we read the authentication token, so Ngrok knows the account the tunnel needs to be associated with, and then we finally start the daemon by specifying the port the Swank server is listening to (as that's what we want Ngrok to create a tunnel for) and the authentication token:
(defvar *ngrok-auth-token* (getenv-or-readline "NGROK_AUTH_TOKEN"))

(defun ngrok () (ngrok:start *swank-port* :auth-token *ngrok-auth-token*))
Give the MAIN function a go, and you should be presented with a similar output:
> (main)
Hunchentoot server is started.
Listening on localhost:5000.
;; Swank started at port: 4006.
 <INFO> [10:13:43] ngrok setup.lisp (install-ngrok) -
  Ngrok already installed, changing authtoken
 <INFO> [10:13:44] ngrok setup.lisp (start) - Starting Ngrok on TCP port 4006
 <INFO> [10:13:44] ngrok setup.lisp (start) -
  Tunnnel established! Connect to the tcp://6.tcp.eu.ngrok.io:12321
"tcp://6.tcp.eu.ngrok.io:12321"
Let's test the Web server:
$ curl http://localhost:5000
Hello, Clack!
It's working.  Now open Vim/Emacs, connect to the Swank server (i.e. host: `6.tcp.eu.ngrok.io`, port: `12321`), and change the Web handler to return something different:
(defun app (env)
  (declare (ignore env))
  '(200 (:content-type "text/plain") ("Hello, Matteo!")))
Hit the Web server again, and this time it should return a different message:
$ curl http://localhost:5000
Hello, Matteo!
Note how we did not have to bounce the application; all we had to do was re-define the request handler...that's it!

Happy remote live coding!

PS. The above is also available on [GitHub](https://github.com/iamFIREcracker/cl-clack-swank-ngrok), and on [Replit](https://replit.com/@iamFIREcracker/cl-clack-swank-ngrok).

2021-06-17 (permalink)

* Opened [cl-cookbook/pull/385](https://github.com/LispCookbook/cl-cookbook/pull/385): Don't wrap SWANK:CREATE-SERVER inside BT:MAKE-THREAD

2021-06-11 (permalink)

? That will never work: Netflix https://www.amazon.ca/That-Will-Never-Work-Netflix/dp/0316530204

? No Rules Rules: Netflix and the Culture of Reinvention https://www.amazon.ca/No-Rules-Netflix-Culture-Reinvention/dp/1984877860

? book: Designing Data-intensive applications https://www.amazon.ca/dp/1449373321/ref=cm_sw_r_wa_awdb_imm_CPN7ADPYR3CCM6EDZFMC

? book: Microservices Patterns https://www.amazon.ca/dp/1617294543?ref=ppx_pop_mob_ap_share

? book: Grokking algorithms https://www.amazon.ca/dp/1617292230/ref=cm_sw_r_wa_api_glt_i_FCXTQ5MH3YVCRVKBMHYR

2021-06-07 (permalink)

* finished reading [Object-Oriented Programming in COMMON LISP: A Programmer's Guide to CLOS](https://www.amazon.com/Object-Oriented-Programming-COMMON-LISP-Programmers/dp/0201175894)

2021-06-02 (permalink)

Web scraping for fun and profit^W for a registration to get my COVID-19 jab

Rumor has it that in a few days everyone in Tuscany, irrespective of their age, will finally be able to register for their first shot of the vaccine; that means "click day", i.e. continuously polling the registration website until your registration category opens up.  I am sure a little bird will eventually announce when the service opens, i.e. the exact time of the day, so there is no real need to stay on the lookout for the whole day; still, I figured I could use this as an excuse to do a little bit of Web scraping in Common Lisp, so here it goes.

Dependencies first:
(ql:quickload "drakma")         ; to fire HTTP requests
(ql:quickload "local-time")     ; to keep track of when the scraping last happened
(ql:quickload "split-sequence") ; to split the list of SMS recipients
(ql:quickload "st-json")        ; well...you guessed it
(ql:quickload "uiop")           ; read env variables
The next step is writing our MAIN loop:

- Wait for categories to be activated: new categories can be added, or existing ones can be activated
- Send an SMS to notify me about newly activated categories (so I can pop the website open, and try to register)
- Repeat
(defun main ()
  (loop
    (with-simple-restart (continue "Ignore the error")
      (wait-for-active-categories)
      (send-sms))))
WAIT-FOR-ACTIVE-CATEGORIES is implemented as a loop that:

- Checks if categories have been activated, in which case the list of recently activated ones is returned
- Otherwise it saves the current timestamp inside *LAST-RUN-AT* and then goes to sleep for a little while
(defvar *last-run-at* nil "When the categories scraper last run")

(defun wait-for-active-categories ()
  (loop :when (categories-activated-p) :return it
        :do (progn
              (setf *last-run-at* (local-time:now))
              (random-sleep))))
Let's now dig into the category processing part.  First we define a special variable, *CATEGORIES-SNAPSHOTS*, to keep track of all the past snapshots of categories scraped so far; next we fetch the categories from the remote website and see if any of them got activated since the last scraping; last, we push the latest snapshot into *CATEGORIES-SNAPSHOTS* and do some _harvesting_ to make sure all these snapshots don't eventually eat up all the memory:
(defvar *categories-snapshots* nil
  "List of categories snapshots -- handy to see what changed over time.")

(defun categories-activated-p ()
  (let ((categories (fetch-categories)))
    (unless *categories-snapshots*
      (push categories *categories-snapshots*))
    (prog1 (find-recently-activated categories)
      (push categories *categories-snapshots*)
      (harvest-categories-snapshots))))
Before fetching the list of categories we are going to define a few utilities: a condition signaling that HTML was returned instead of JSON (usually a sign that the current session has expired):
(define-condition html-not-json-content-type () ())
The URLs for the registration page (this will be used later, when generating the SMS) and for the endpoint returning the list of categories:
(defparameter *prenotavaccino-home-url* "https://prenotavaccino.sanita.toscana.it")
(defparameter *prenotavaccino-categories-url* "https://prenotavaccino.sanita.toscana.it/api/index")
A cookie jar to create a _persistent_ connection with the website under scraping:
(defvar *cookie-jar* (make-instance 'drakma:cookie-jar)
  "Cookie jar to hold session cookies when interacting with Prenotavaccino")
With all this defined, fetching the list of categories becomes as easy as firing a request to the endpoint and confirming that the response actually contains JSON and not HTML: if JSON, parse it and return it; otherwise, simply try again.  The idea behind "trying again" is that when the response contains HTML, it will most likely contain a redirect to the home page along with a newly created session cookie, and since the same cookie jar is reused, the simple fact that we processed that response should be enough to make the next API call succeed:
(defun parse-json (response)
  (let* ((string (flexi-streams:octets-to-string response)))
    (st-json:read-json string)))

(defun fetch-categories ()
  (flet ((fetch-it ()
           (multiple-value-bind (response status headers)
               (drakma:http-request *prenotavaccino-categories-url* :cookie-jar *cookie-jar*)
             (declare (ignore status))
             (let ((content-type (cdr (assoc :content-type headers))))
               (if (equal content-type "text/html")
                   (error 'html-not-json-content-type)
                 response))))
         (parse-it (response)
           (st-json:getjso "categories" (parse-json response))))
    (parse-it
      (handler-case (fetch-it)
        (html-not-json-content-type (c)
          (declare (ignore c))
          (format t "~&Received HTML instead of JSON. Retrying assuming the previous session expired...")
          (fetch-it))))))
Note: we retry only once, so if two consecutive responses contain HTML instead of JSON, the debugger will pop open.

Finding all the recently activated categories consists of the following steps:

- for each category just scraped
- compare it with the same category in the previous snapshot
- a category is considered _recently_ activated if it is active now and either it was not present in the previous snapshot, or it was present but inactive, or its title has changed (yes, sometimes the title of a category is updated to advertise that people of a different age can now sign up)

Anyways, all the above translates to the following:
(defun category-name (cat) (st-json:getjso "idCategory" cat))
(defun category-title (cat) (st-json:getjso "title" cat))
(defun category-active-p (cat)
  (and (eql (st-json:getjso "active" cat) :true)
       (eql (st-json:getjso "forceDisabled" cat) :false)))

(defun find-recently-activated (categories)
  (flet ((category-by-name (name categories)
           (find name categories :key #'category-name :test #'equal)))
    (let ((prev-categories (first *categories-snapshots*)))
      (loop :for cat :in categories
            :for cat-prev = (category-by-name (category-name cat) prev-categories)
            :when (and (category-active-p cat)
                       (or (not cat-prev)
                           (not (category-active-p cat-prev))
                           (not (equal (category-title cat-prev) (category-title cat)))))
            :collect cat))))
Harvesting is pretty easy: define a maximum length threshold for the list of snapshots and when the list grows bigger than that, start popping items from the back of the list as new values are pushed to the front:
(defparameter *categories-snapshots-max-length* 50
  "Maximum numbers of categories snapshots to keep around")

(defun harvest-categories-snapshots ()
  (when (> (length *categories-snapshots*) *categories-snapshots-max-length*)
    (setf *categories-snapshots*
          (subseq *categories-snapshots* 0 *categories-snapshots-max-length*))))
Note: I could have removed "consecutive duplicates" from the list, or prevented these from getting stored in the list to begin with, but I am going to leave this as an exercise for the reader ;-)

Two pieces of the puzzle are still missing: RANDOM-SLEEP, and SEND-SMS.  For RANDOM-SLEEP we decide the minimum number of seconds the scraper should sleep for, and then add some _randomness_ to it (as if the remote site cared whether we tried to act like a _human_, but let's do it anyway):
(defparameter *sleep-seconds-min* 60
  "Minimum number of seconds the scraper will sleep for")
(defparameter *sleep-seconds-jitter* 5
  "Adds a pinch of randomness -- see RANDOM-SLEEP")

(defun random-sleep ()
  (sleep (+ *sleep-seconds-min* (random *sleep-seconds-jitter*))))
For the SMS part instead, we are going to use Twilio; first we define all the parameters required to send SMSs:

- Account SID
- Auth token
- API URL
- From number
- To numbers (space-separated values)
(defun getenv-or-readline (name)
  "Gets the value of the environment variable, or asks the user to provide
  a value for it."
  (or (uiop:getenv name)
      (progn
        (format *query-io* "~a=" name)
        (force-output *query-io*)
        (read-line *query-io*))))

(defvar *twilio-account-sid* (getenv-or-readline "TWILIO_ACCOUNT_SID"))
(defvar *twilio-auth-token* (getenv-or-readline "TWILIO_AUTH_TOKEN"))
(defvar *twilio-messages-api-url*
  (format nil "https://api.twilio.com/2010-04-01/Accounts/~a/Messages.json" *twilio-account-sid*))
(defvar *twilio-from* (getenv-or-readline "TWILIO_FROM_NUMBER"))
(defvar *twilio-to-list*
  (split-sequence:split-sequence #\Space (getenv-or-readline "TWILIO_TO_NUMBERS")))
Next we assemble the HTTP request, fire it, and do some error checking so as to signal an error in case we failed to deliver the message to any of the _to_ numbers specified inside *TWILIO-TO-LIST*:
(defun send-sms (&optional (body (sms-body)))
  (flet ((send-it (to)
           (drakma:http-request *twilio-messages-api-url*
                                :method :post
                                :basic-authorization `(,*twilio-account-sid* ,*twilio-auth-token*)
                                :parameters `(("Body" . ,body)
                                              ("From" . ,*twilio-from*)
                                              ("To" . ,to)))))
    (let (failed)
      (dolist (to *twilio-to-list*)
        (let* ((jso (parse-json (send-it to)))
               (error-code (st-json:getjso "error_code" jso)))
          (unless (eql error-code :null)
            (push jso failed))))
      (if failed
          (error "Failed to deliver some SMSs: ~a" failed)
          t))))
Last but not least, the SMS body; we want to include in the message which categories got activated since the last time the scraper ran, and to do so we:

- Take the latest snapshot from *CATEGORIES-SNAPSHOTS*
- Temporarily set *CATEGORIES-SNAPSHOTS* to its CDR, as if the latest snapshot hadn't been recorded yet
- Call FIND-RECENTLY-ACTIVATED, effectively pretending we were inside CATEGORIES-ACTIVATED-P trying to figure out whether anything got recently activated (dynamic variables are fun, aren't they?!)
(defun sms-body ()
  (let* ((categories (first *categories-snapshots*))
         (*categories-snapshots* (cdr *categories-snapshots*)))
    (format nil "Ora attivi: ~{~A~^, ~} -- ~a"
            (mapcar #'category-title
                    (find-recently-activated categories))
            *prenotavaccino-home-url*)))
Note: "Ora attivi:" is Italian for "Now active:"

And that's it: give `(main)` a go in the REPL, wait for a while, and eventually you should receive an SMS informing you which category got activated!
> (find "LastMinute" (first *categories-snapshots*)
        :key #'category-name
        :test #'string=)
#S(ST-JSON:JSO
   :ALIST (("idCategory" . "LastMinute") ("active" . :TRUE)
           ("forceDisabled" . :FALSE) ("start" . "2021-06-02")
           ("end" . "2021-07-31") ("title" . "Last<br/>Minute")
           ("subtitle"
            . "<b>ATTENZIONE.</b> Prenotazione degli slot rimasti disponibili nelle prossime 24H.")
           ("message" . "Il servizio aprirà alle ore 19:00 di ogni giorno")
           ("updateMaxMinYear" . :TRUE)))

> (sms-body)
"Ora attivi: Last<br/>Minute -- https://prenotavaccino.sanita.toscana.it"
Happy click day, Italians!

2021-05-31 (permalink)

TIL: Untracked files are stored in the third parent of a stash commit.
Untracked files are stored in the third parent of a stash commit. (This isn't actually documented, but is pretty obvious from The commit which introduced the -u feature, 787513..., and the way the rest of the documentation for git-stash phrases things... or just by doing git log --graph stash@{0})
You can view just the "untracked" portion of the stash via:
git show stash@{0}^3
or, just the "untracked" tree itself, via:
git show stash@{0}^3:
or, a particular "untracked" file in the tree, via:
git show stash@{0}^3:<path/to/file>
Source: [stackoverflow](https://stackoverflow.com/questions/12681529/in-git-is-there-a-way-to-show-untracked-stashed-files-without-applying-the-stas#:~:text=Stash%20entries%20can%20be%20made,as%20part%20of%20the%20diff)
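The whole flow can be reproduced in a throw-away repository; here is a sketch (it assumes `git` is on your PATH and is recent enough to support `git stash push -u`, i.e. 2.13+):

```shell
# Create a throw-away repo with one tracked file and one untracked file
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

echo tracked > tracked.txt
git add tracked.txt && git commit -qm initial

echo change >> tracked.txt   # dirty the tracked file
echo secret > untracked.txt  # leave a file untracked

# Stash everything, untracked files included
git stash push -q -u -m demo

# The untracked file lives in the stash commit's third parent
content=$(git show 'stash@{0}^3:untracked.txt')
echo "$content"  # prints: secret
```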

2021-05-27 (permalink)

A few days ago Replit announced support for [every programming language](https://twitter.com/Replit/status/1396915485325860868)!

I had never heard of [NixOS](https://nixos.org/) before (that's what enables them to support "every programming language"), but I decided to give it a go anyway, and after a bit of a struggle I was able to put together the following Repls:

- [Common Lisp](https://replit.com/@iamFIREcracker/Common-Lisp) -- of course the first one had to be one about CL
- [Common Lisp w/ SSL](https://replit.com/@iamFIREcracker/Common-Lisp-with-SSL) -- it turns out you have to mess with `LD_LIBRARY_PATH` to get SSL to work with `:cl+ssl`
- [Game of Life in Common Lisp](https://replit.com/@iamFIREcracker/Common-Lisp-greater-Game-of-Life) -- why not...
- [10 PRINT.BAS](https://replit.com/@iamFIREcracker/10PRINTBAS) -- ever heard of the famous one-line Commodore 64 BASIC program to generate mazes?  They even wrote a whole [book](https://10print.org/) about it!

This is pretty cool, well done Replit!

This is the famous one-line Commodore 64 BASIC program to generate random mazes (the Commodore 64 uses the [PETSCII](https://www.c64-wiki.com/wiki/PETSCII) character set, not ASCII):
10 PRINT CHR$(205.5+RND(1)); : GOTO 10
While this is a Common Lisp equivalent:
(loop (princ (aref "/\\" (random 2))))
If the above runs a little bit too fast in the REPL (it most likely will, these days), then you can try this other alternative:
(loop
  (finish-output)
  (princ (aref "/\\" (random 2)))
  (sleep (/ 1 30)))
Happy retro-hacking!

2021-05-21 (permalink)

Example of how to use [AbortController](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) to cancel an action when input changes:
let currentJob = Promise.resolve();
let currentController;

function showSearchResults(input) {
  if (currentController) currentController.abort();
  currentController = new AbortController();
  const { signal } = currentController;

  currentJob = currentJob
    .finally(async () => {
      try {
        startSpinner();
        const response = await fetch(getSearchUrl(input), { signal });
        await displayResults(response, { signal });
      } catch (err) {
        if (err.name === 'AbortError') return;
        displayErrorUI(err);
      } finally {
        stopSpinner();
      }
    });
}
Source: [twitter](https://twitter.com/jaffathecake/status/1395682545090633729)
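The core mechanics the snippet leans on can be exercised in isolation; here is a minimal sketch (it assumes a runtime where `AbortController` is a global, e.g. Node 15+ or any modern browser):

```javascript
// A controller hands out a signal; calling abort() flips signal.aborted
// and fires the signal's 'abort' event, which fetch() and friends listen to.
const controller = new AbortController();
const { signal } = controller;

const events = [];
signal.addEventListener('abort', () => events.push('aborted'));

console.log(signal.aborted); // false
controller.abort();
console.log(signal.aborted); // true
console.log(events);         // [ 'aborted' ]
```

Calling `abort()` again is a no-op: the event fires at most once per controller, which is why the search handler above creates a fresh `AbortController` per invocation.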

2021-05-16 (permalink)

Configuring lightline.vim to display the quickfix's title

In case anyone is interested in displaying the quickfix title while on quickfix buffers (it works with location-list buffers too), all you have to do is load the following into the runtime:
let g:lightline = {
    \ 'component': {
    \   'filename': '%t%{exists("w:quickfix_title")? " ".w:quickfix_title : ""}'
    \   },
    \ }
Before:
NORMAL  [Quickfix List] | -
And after:
NORMAL  [Quickfix List] :ag --vimgrep --hidden --smart-case --nogroup --nocolor --column stream-name | -
Related link: https://www.reddit.com/r/vim/comments/ej2bvx/change_the_quickfix_title/

Lack's Redis session store does not appear to be thread-safe

Steps to reproduce:

- Load the necessary dependencies:
(ql:quickload '(:clack :lack :lack-session-store-redis :cl-redis))
- Setup a dummy "Hello, World" application:
(defparameter *app*
  (lambda (env)
    (declare (ignore env))
    '(200 (:content-type "text/plain") ("Hello, World"))))
- Setup the middleware chain, one that uses Redis as session storage:
(setf *app*
      (lack:builder
        (:session
         :store (lack.session.store.redis:make-redis-store :connection (make-instance 'redis:redis-connection
                                                                                      :host #(0 0 0 0)
                                                                                      :port 6379
                                                                                      :auth "auth")))
        *app*))
- Start the app:
> (defvar *handler* (clack:clackup *app*))
Hunchentoot server is started.
Listening on 127.0.0.1:5000.
- Run `wrk` against it:
$ wrk -c 10 -t 4 -d 10 http://127.0.0.1:5000
Running 10s test @ http://127.0.0.1:5000
  4 threads and 10 connections
Expected behavior:

- The test runs successfully, and the number of requests per second is reported

Actual behavior:

- Lots of errors are signaled; from unexpected errors:
Thread: 7; Level: 1

Redis error: NIL

ERR Protocool error: invalid multibulk length
   [Condition of type REDIS:REDIS-ERROR-REPLY]

Restarts:
  R 0. ABORT - abort thread (#<THREAD "hunchentoot-worker-127.0.0.1:53815" RUNNING {100154F693}>)

Frames:
  F 0.  ((:METHOD REDIS:EXPECT ((EQL :STATUS))) #<unused argument>) [fast-method]
  F 1.  (REDIS::SET "session:fdf15f46896842bcd84ef2e87d2321e97419f70e" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
  F 2.  ((:METHOD LACK.MIDDLEWARE.SESSION.STORE:STORE-SESSION (LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE T T)) #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "..
  F 3.  (LACK.MIDDLEWARE.SESSION::FINALIZE #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "session" :EXPIRES NIL :SERIALIZER #<FUNCTION #1=(LAMBDA (LACK.MIDDLEWARE.S..
  F 4.  ((LAMBDA (LACK.MIDDLEWARE.BACKTRACE::ENV) :IN "/Users/matteolandi/Workspace/lack/src/middleware/backtrace.lisp") (:REQUEST-METHOD :GET :SCRIPT-NAME "" :PATH-INFO "/" ...))
... to unexpected end of file:
Thread: 8; Level: 1

Redis error: end of file on #<FLEXI-STREAMS:FLEXI-IO-STREAM {100AF94323}>
   [Condition of type REDIS:REDIS-CONNECTION-ERROR]

Restarts:
  R 0. RECONNECT - Try to reconnect and repeat action.
  R 1. ABORT     - abort thread (#<THREAD "hunchentoot-worker-127.0.0.1:53818" RUNNING {1008949253}>)

Frames:
  F 0.  (REDIS::SET "session:5b391bd4699e86f5b6e757451572c0a3ba825557" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
  F 1.  ((:METHOD LACK.MIDDLEWARE.SESSION.STORE:STORE-SESSION (LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE T T)) #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "..
  F 2.  (LACK.MIDDLEWARE.SESSION::FINALIZE #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "session" :EXPIRES NIL :SERIALIZER #<FUNCTION #1=(LAMBDA (LACK.MIDDLEWARE.S..
  F 3.  ((LAMBDA (LACK.MIDDLEWARE.BACKTRACE::ENV) :IN "/Users/matteolandi/Workspace/lack/src/middleware/backtrace.lisp") (:REQUEST-METHOD :GET :SCRIPT-NAME "" :PATH-INFO "/" ...))
  F 4.  ((:METHOD HUNCHENTOOT:ACCEPTOR-DISPATCH-REQUEST (CLACK.HANDLER.HUNCHENTOOT::CLACK-ACCEPTOR T)) #<CLACK.HANDLER.HUNCHENTOOT::CLACK-ACCEPTOR (host 127.0.0.1, port 5000)> #<HUNCHENTOOT:REQUEST {10071E47B..
  F 5.  ((:METHOD HUNCHENTOOT:HANDLE-REQUEST (HUNCHENTOOT:ACCEPTOR HUNCHENTOOT:REQUEST)) #<CLACK.HANDLER.HUNCHENTOOT::CLACK-ACCEPTOR (host 127.0.0.1, port 5000)> #<HUNCHENTOOT:REQUEST {10071E47B3}>) [fast-met..
A similar error was mentioned on `cl-redis`'s GitHub space ([cl-redis/issues#19](https://github.com/vseloved/cl-redis/issues/19)), to which the author replied that one of the reasons why this might happen is trying to share the same connection between multiple threads.  So I took a look at the implementation of the Redis store, and it looks like it is indeed sharing the same Redis connection between different HTTP requests (i.e. multiple threads).

The following seems to fix the problem, though I would like to hear @fukamachi's view on this before submitting a pull request:
diff --git a/src/middleware/session/store/redis.lisp b/src/middleware/session/store/redis.lisp
index 9e489d9..3ccfaec 100644
--- a/src/middleware/session/store/redis.lisp
+++ b/src/middleware/session/store/redis.lisp
@@ -54,9 +54,8 @@
 (defun redis-connection (store)
   (check-type store redis-store)
   (with-slots (host port auth connection) store
-    (unless (redis::connection-open-p connection)
-      (setf connection
-            (open-connection :host host :port port :auth auth)))
+    (setf connection
+          (open-connection :host host :port port :auth auth))
     connection))

 (defmacro with-connection (store &body body)

2021-05-06 (permalink)

Is it possible to imagine a future where “concert programmers” are as common a fixture in the worlds auditoriums as concert pianists? In this presentation Andrew will be live-coding the generative algorithms that will be producing the music that the audience will be listening too. As Andrew is typing he will also attempt to narrate the journey, discussing the various computational and musical choices made along the way. A must see for anyone interested in creative computing.
Andrew Sorensen Keynote: "The Concert Programmer" - OSCON 2014 ([link to video](https://www.youtube.com/watch?v=yY1FSsUV-8c))

2021-05-03 (permalink)

Calling MAKE-INSTANCE with a class name which is only known at run-time

In CL, how can you instantiate a class (i.e. call MAKE-INSTANCE) if all you have is a string representing its name (and the package it was defined in)?
> (intern "chat.channels.turbo::turbo-channel")
|chat.channels.turbo::turbo-channel|

> (intern (string-upcase "chat.channels.turbo::turbo-channel"))
|CHAT.CHANNELS.TURBO::TURBO-CHANNEL|

> (read-from-string "chat.channels.turbo::turbo-channel")
CHAT.CHANNELS.TURBO::TURBO-CHANNEL
At first I gave INTERN a go (I am not exactly sure why, but that's where my mind immediately went), but irrespective of the case of the string I fed into it, it would end up creating symbols wrapped in escape characters (i.e. `|`s).  READ-FROM-STRING on the other hand seemed to get the job done, though I would like to avoid using it as much as possible, especially when the string being read is generated _externally_ (e.g. sent over an HTTP request).

Well, as it turns out, if you have the strings in `package:symbol` format, then you can parse out the package and symbol names and use FIND-PACKAGE and FIND-SYMBOL to get the symbol you are looking for:
> (find-symbol (string-upcase "turbo-channel") (find-package (string-upcase "chat.channels.turbo")))
CHAT.CHANNELS.TURBO::TURBO-CHANNEL
:INTERNAL

> (make-instance *)
#<CHAT.CHANNELS.TURBO::TURBO-CHANNEL {10016892F3}>
PS. @zulu.inuoe on Discord was kind enough to post this utility function that he uses:
this is a thing I use but uhh no guarantees. it also doesn't handle escape characters like | and \
(defun parse-entry-name (name)
  "Parses out `name' into a package and symbol name.

 eg.
  symbol          => nil, symbol
  package:symbol  => package, symbol
  package::symbol => package, symbol
"
  (let* ((colon (position #\: name))
         (package-name (when colon (subseq name 0 colon)))
         (name-start (if colon
                         (let ((second-colon (position #\: name :start (1+ colon))))
                           (if second-colon
                               (prog1 (1+ second-colon)
                                 (when (position #\: name :start (1+ second-colon))
                                   (error "invalid entry point name - too many colons: ~A" name)))
                               (1+ colon)))
                         0))
         (symbol-name (subseq name name-start)))
    (values package-name symbol-name)))

> (multiple-value-bind (package-name symbol-name)
      (parse-entry-name "chat.channels.turbo::turbo-channel")
    (find-symbol (string-upcase symbol-name)
                 (if package-name (find-package (string-upcase package-name)) *package*)))
CHAT.CHANNELS.TURBO::TURBO-CHANNEL
:INTERNAL

2021-05-02 (permalink)

Splitting routes into packages, with Caveman

If you use [caveman](https://github.com/fukamachi/caveman) to build your Web applications, sooner or later you will realize that it will not let you create an <APP> instance in one package and use DEFROUTE in another.  Why?  Because the library internally keeps a HASH-TABLE mapping <APP> instances to the package that was active at the time each instance was created; since DEFROUTE looks up the <APP> instance in this map using *PACKAGE* as the key, if the current package is different from the one active when <APP> was created then no <APP> instance will be found, an error will be signalled, and you will be prompted to do something about it:
(defmethod initialize-instance :after ((app <app>) &key)
  (setf (gethash *package* *package-app-map*) app))

(defun find-package-app (package)
  (gethash package *package-app-map*))
Of course I was not the first one bumping into this (see [caveman/issues/112](https://github.com/fukamachi/caveman/issues/112)), but unfortunately the one workaround listed in the thread did not seem to work: it's not a matter of carrying the <APP> instance around (which might or might not be annoying), but rather of _changing_ *PACKAGE* before invoking DEFROUTE so that it can successfully look up the <APP> instance inside *PACKAGE-APP-MAP* at macro expansion time.

But clearly you cannot easily change *PACKAGE* before executing DEFROUTE, or otherwise all the symbols used from within the route definition will most likely be unbound at runtime (unless of course such symbols are also accessible from the package in which the <APP> instance was created).  So where do we go from here?  Well, if we cannot _override_ *PACKAGE* while calling DEFROUTE from within a different package, then maybe what we can do is _add_ *PACKAGE* to *PACKAGE-APP-MAP* and make sure it maps to the <APP> instance we already created!
(defun register-routes-package (package)
  (setf (gethash package caveman2.app::*package-app-map*) *web*))
Add a call to REGISTER-ROUTES-PACKAGE right before your first call to DEFROUTE and you should be good to go:
(in-package :cl-user)
(defpackage chat.routes.sessions
  (:use :cl :caveman2 :chat.models :chat.views :chat.routes))
(in-package :chat.routes.sessions)
(register-routes-package *package*)

(defroute "/" ()
  (if (current-user)
    (redirect-to (default-chatroom))
    (render :new :user)))
Ciao!

2021-04-26 (permalink)

On adding Webpack assets to a server-side rendered view (in Common Lisp)

First you need to instruct `webpack` to save compilation metadata, such as the paths of the generated assets, into a file in a well-known location:
// webpack.config.js

...
class MetaInfoPlugin {
  constructor(options) {
    this.options = { filename: 'meta.json', ...options };
  }

  apply(compiler) {
    compiler.hooks.done.tap(this.constructor.name, stats => {
      const files = {};
      for (let filename of Object.keys(stats.compilation.assets)) {
        const parts = filename.split(/\./);
        const extension = parts[parts.length - 1];
        if (!files[extension]) {
          files[extension] = [ filename ];
        } else {
          files[extension].push(filename);
        }
      }
      const metaInfo = {
        files,
      };
      const json = JSON.stringify(metaInfo);
      return new Promise((resolve, reject) => {
        fs.writeFile(this.options.filename, json, 'utf8', error => {
          if (error) {
            reject(error);
            return;
          }
          resolve();
        });
      });
    });
  }
}
...

  plugins: [
    ...
    new MetaInfoPlugin({ filename: 'static/dist/meta.json' }),
  ],
...
Run `webpack` and confirm the file is getting generated successfully:
$ cat static/dist/meta.json | json_pp
{
   "files" : {
      "css" : [
         "app-26d385c021c0729b47ba.css"
      ],
      "js" : [
         "app-8f5ff9499dfe4b9fa887.js",
         "vendor~app-300b96fb7678a6c52a50.js"
      ],
      "map" : [
         "app-26d385c021c0729b47ba.css.map",
         "app-8f5ff9499dfe4b9fa887.js.map",
         "vendor~app-300b96fb7678a6c52a50.js.map"
      ]
   }
}
Next, somewhere inside your server application, parse the metadata file:
(defparameter *webpack-meta* (merge-pathnames #P"dist/meta.json" *static-directory*))
(defparameter *webpack-assets* (st-json:read-json (uiop:read-file-string *webpack-meta*)))
...define a function to generate proper HTML to include the assets:
(defun webpack-assets ()
  (with-html
    (loop for css in (st-json:getjso* "files.css" *webpack-assets*)
          do (:link :rel "stylesheet" :href (format nil "/dist/~A" css) :data-turbo-track "reload"))
    (loop for js in (st-json:getjso* "files.js" *webpack-assets*)
          do (:script :src (format nil "/dist/~A" js) :data-turbo-track "reload"))))
...and use it:
(defmacro with-page ((&key (title "Chat!")) &body body)
  `(with-html-string
    (:doctype)
    (:html
     (:head
      (:title ,title)
      (webpack-assets))
     (:body ,@body))))
That's it!
<!DOCTYPE html>
<html lang=en>
 <head>
  <meta charset=UTF-8>
  <title>Chat!</title>
  <link rel=stylesheet
        href=/dist/app-26d385c021c0729b47ba.css
        data-turbo-track=reload>
  <script src=/dist/app-8f5ff9499dfe4b9fa887.js
          data-turbo-track=reload></script>
  <script
          src=/dist/vendor~app-300b96fb7678a6c52a50.js
          data-turbo-track=reload></script>
 </head>
 <body>
 ...

2021-04-16 (permalink)

* finished reading [Smalltalk Best Practice Patterns](https://www.amazon.it/Smalltalk-Best-Practice-Patterns-Kent-dp-013476904X/dp/013476904X)

2021-04-07 (permalink)

Skip "symbol" regions when matching parentheses

Lisp symbols can contain parentheses when surrounded by vertical bars (i.e. `|Symbol-Name|`), and in case of such _unbalanced_ parentheses matchparen.vim would fail to highlight the "right" pair:
(foo |)-close|)
|     `-- highlighted
`-- cursor position
After "symbol" is added to the list of regions to test against, matchparen.vim will highlight the _expected_ closing pair:
(foo |)-close|)
|             `-- highlighted
`-- cursor position
I know it's very unlikely for one to define a symbol with such a name, but should that ever happen, this fix will make matchparen.vim behave properly.

Anyways, I opened [PR#8079](https://github.com/vim/vim/pull/8079) for this...let's see what Bram and the others think about it.

Looking into a 3-month-old PR that I opened against vlime to get it to properly handle char literals like `#\(` or `#\)`

First things first: _disable_ vim-lispindent, or otherwise you are going to start doubting yourself when you find yourself unable to reproduce the issue.  Open .vim/pack/iamFIREcracker/start/vim-lispindent/plugin/lispindent.vim, and replace the following lines:
if !empty(g:lispindent_filetypes)
  augroup lispindent_filetypes
    autocmd!
    execute 'autocmd FileType ' . g:lispindent_filetypes . ' call lispindent#init()'
  augroup END
endif
with:
if !empty(g:lispindent_filetypes)
  " augroup lispindent_filetypes
  "   autocmd!
  "   execute 'autocmd FileType ' . g:lispindent_filetypes . ' call lispindent#init()'
  " augroup END
endif
Restart vim, have vlime take over lisp files indentation, and you should be good to go (i.e. able to reproduce the issue the PR is trying to fix).

Now, let's have a look at one of the comments which were left on the PR:
There are some more searchpairpos uses; shouldn't they all use this (or a similar) skip argument? Perhaps we should create one or two helper functions?
These are the functions which make use of `searchpairpos()`:

- `vlime#ui#CurTopExprPos()`
- `vlime#ui#CurOperator()`
- `vlime#ui#SurroundingOperator()`
- `vlime#ui#ParseOuterOperators(3)`
- `vlime#ui#CurArgPos()`

Let's load the following into a buffer and see how each of the above behave:
(defun hello-world()
  (format t "Hello, world!"))

(defun open-parenthesis-p (ch)
  "Some documentation containing unbalanced ("
  (char= ch #\())

(defun close-parenthesis-p (ch)
  "Some documentation containing unbalanced )"
  (char= ch #\)))
Placing the cursor onto the `#\(` character literal, and running `echo vlime#ui#CurTopExprPos('begin')` correctly logs: `[4, 1]`; however, if we pass in `'end'` we get `[10, 17]` instead of the expected `[6, 17]`.  Similarly, if we place the cursor onto the `#\)` character literal and run `echo vlime#ui#CurTopExprPos('end')` we get: `[10, 17]`, but if we use `'begin'` we get `[4, 1]` instead of `[8, 1]`.  Switching to our `searchpairpos()` wrapper seems to be fixing the problem (see [6fe69f3](https://github.com/vlime/vlime/commit/6fe69f330999e392493a3d7164a3b93ca614bb58)).

Placing the cursor onto the `#\(` character literal, and running `echo vlime#ui#CurOperator()` returns an empty string, and interestingly enough adding `skip` to `searchpairpos()` does not seem to make much difference.  Why? Because the function calls `vlime#ui#CurExpr()` which returns `()` instead of the expected `(char= ch #\())`.  Why is `vlime#ui#CurExpr()` returning `()`? Because we are re-using the same logic of matchparen.vim, which intentionally does _not_ try to skip over certain syntax regions if the position the cursor was in at the time the search operation was issued happened to be within one of those syntax regions already (i.e. it will highlight parentheses inside comments).

Question: is this the expected behavior, or should `vlime#ui#CurExpr()` return the actual surrounding expression instead?

In any case, since we have to deal with incomplete expressions anyway, I am proposing that we stop calling `vlime#ui#CurExpr()` here, and simply use the `searchpairpos()` wrapper instead (see [debdd92](https://github.com/vlime/vlime/commit/debdd924d8af2112f3d8a7afad0fe96a0e39f6f9)).

Off to `vlime#ui#SurroundingOperator()`; placing the cursor onto the closing parenthesis right after the `#\(` character literal and calling this function outputs an empty string instead of `char=`.  The same happens if we place the cursor at the closing parenthesis right after the `#\)` character literal of the second function definition.  Using our `searchpairpos()` wrapper here seems to fix the problem (see [dac541a](https://github.com/vlime/vlime/commit/dac541a49414152353516e3c10d7f604479f6aea)).

The next one is `vlime#ui#ParseOuterOperators()`; placing the cursor onto the closing parenthesis right after the `#\(` character literal and calling `vlime#ui#ParseOuterOperators(3)` returns `[['', -1, [6, 15]], ['char=', 2, [6, 3]], ['', 1, [5, 45]]]` instead of the expected `[['char=', 3, [6, 3]], ['defun', 7, [4, 1]]]`.  Again, using our `searchpairpos()` wrapper fixes the problem (see [f8a8777](https://github.com/vlime/vlime/commit/f8a8777eafb3a0cd0a47eeec50dddea2d7278cd7)).

Lastly, for `vlime#ui#CurArgPos()`, let's use these function definitions instead:
(defun open-parenthesis-yoda-p (ch)
  "Some documentation containing unbalanced ("
  (char= #\( ch))

(defun close-parenthesis-yoda-p (ch)
  "Some documentation containing unbalanced )"
  (char= #\) ch))
Placing the cursor onto the `ch` symbol right after the `#\(` character literal and calling `vlime#ui#CurArgPos()` outputs `0` instead of the expected `2`.  Similarly, doing the same from the `ch` symbol right after the `#\)` character literal outputs `5` instead of `2`.  To fix this, not only did we have to use our `searchpairpos()` wrapper, but we also had to change the _parser_ to try to deal with character literals (see [5ea8f32](https://github.com/vlime/vlime/commit/5ea8f32fe74591c2cf42c09adf5629e440ce0027)).

2021-04-06 (permalink)

Smalltalk is one of the simplest, smallest programming languages in the world. Thus, it is supremely easy to learn. Its syntax can fit on a post card!
[Syntax on a Post Card](https://richardeng.medium.com/syntax-on-a-post-card-cb6d85fabf88).

PS. What follows is the transcription of that postcard:
exampleWithNumber: x

"A method that illustrates every part of Smalltalk method syntax
except primitives. It has unary, binary, and keyword messages,
declares arguments and temporaries, accesses a global variable
(but not an instance variable), uses literals (array, character,
symbol, string, integer, float), uses the pseudo variables
true, false, nil, self, and super, and has sequence, assignment,
return and cascade. It has both zero argument and one argument blocks."

    |y|
    true & false not & (nil isNil) ifFalse: [self halt].
    y := self size + super size.
    #($a #a "a" 1 1.0)
        do: [:each | Transcript show: (each class name);
                                 show: ' '].
    ^ x < y
PPS. Yes, lately I got curious about Smalltalk, mostly because of its highly interactive development experience; having said that, I am not really sold on the idea of being forced into using an IDE to fully _experience_ the language, but who knows, maybe I am going to give it a try...

a system is something you compile and load into the runtime, while a package is something that you access to get a hold of the functions and variables and classes named by symbols inside that package
and:
QL doesn't install packages, it downloads ASDF systems
ASDF then loads these systems, which might create packages in the Lisp image
These were taken from Discord, a while back.  Surely packages and systems are among the most confusing things new Common Lisp programmers need to get accustomed to.

Casually chatting on Discord (Lisp/#web) on the reasons why one would try to use React even for a 100% no-JavaScript site:
Yeah, exactly. You can be as dynamic or static as you want, but what I was specifically referring to was to use webpack or whatever else to render the React pages into Plain Old HTML, you can put that on a dumb static file server and it'd Just Work, with no JS required on the client.
You could, on top of that, or instead of it, do as you hinted at before, and have a React engine running server-side which renders React->HTML on-the-fly for a client request. You might choose to do this if you need dynamic (at the time of request) data, but don't want to offload the rendering & IO (to fetch the data) calls to the client
Finally, on top of these options, you can of course serve 'Normal' React (well, compiled into Plain Old Javascript) which is rendered client-side
and if the question is "Why overcomplicate this", it's because you have a lot of freedom of where to put the work, while keeping the same rendering code regardless
It kind of makes sense if you think about it:

- You use React because it will hopefully make it easier to maintain the different pages of the site, even if it's going to be 100% JavaScript free
- In the meantime, you/the team get acquainted with the piece of technology that most likely you will end up using when adding some sprinkles of JavaScript to the client -- because it will happen...sooner or later you will want to add that to your "app"

Few pointers:

- [Server side rendering with React and NodeJs](https://aparnajoshi.netlify.app/server-side-rendering-with-react-and-nodejs)
- [How to Generate Static Websites with React](https://www.cloudsavvyit.com/5418/how-to-generate-static-websites-with-react/)

? read book: Gödel, Escher, Bach

? read book: Liminal Thinking

2021-04-05 (permalink)

* finished reading [Structure and Interpretation of Computer Programs](https://mitpress.mit.edu/sites/default/files/sicp/index.html)

2021-03-25 (permalink)

About playing with remote Node.js REPLs; I [recently](https://matteolandi.net/plan.html#day-2021-03-24) said:
I am sure there might be better and more idiomatic ways of dealing with this, but at least it unblocks me for now.
Well, it turns out _there_ is a better way to make sure socket messages are communicated within a given asynchronous context; all you need is a combination of [`Stream.Transform`](https://nodejs.org/api/stream.html#stream_class_stream_transform) and [`Readable.prototype.pipe`](https://nodejs.org/api/stream.html#stream_readable_pipe_destination_options).  From [GitHub](https://github.com/nodejs/node/issues/37866#issuecomment-805578727):
The socket is created before outside of your async scope therefore it's clearly not bound to it.
The repl instance is created within the async scope but it seems it's not modeled as async resource. It operates in caller context which are synchronous calls via socket stream (which is in turn an EventEmitter). So this ends up in the Async context of the socket...
>
You could try to use a `Transform` stream which passes the data through but switches the async context:
Two snippets then follow; first, the custom `Transform` class:
const { Transform } = require("stream");
const asyncHooks = require("async_hooks"); // needed for AsyncResource below
class MyTransformStream extends Transform {
  constructor(options) {
    super(options);
    this._res = new asyncHooks.AsyncResource("MyTransformStream");
  }
  _transform(chunk, encoding, cb) {
    this._res.runInAsyncScope(cb, null, null, chunk);
  }
}
and how to use it to "properly" setup the remote REPL:
const myStream = new MyTransformStream();
var repl = require("repl").start({
  input: socket.pipe(myStream), // created within your async scope so ALS propagation works
  output: socket,
});
Amazing, I can now carry on with my REPL experiments.

2021-03-24 (permalink)

Alright, I think I figured a way to make [this](https://matteolandi.net/plan.html#day-2021-03-23) work by using a combination of `AsyncResource.bind` and `Function.prototype.bind`.  First I use `AsyncResource.bind` to force the `socket.on` callback to run inside the "right" asynchronous context, and then I bind the first argument of that _hooked_ callback (i.e. `thisArg`); lastly, I call the _original_ `socket.on` implementation, passing the _hooked_ callback into it.
net
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      const onOriginal = socket.on;
      socket.on = function onHooked(data, callback) {
        let callbackHooked = asyncHooks.AsyncResource.bind(callback, "REPL");
        // The result of `AsyncResource.bind` is a function expecting
        // its first argument to be `thisArg`, so to properly finish up
        // the wiring we need to bind it before actually using the transformed
        // callback
        callbackHooked = callbackHooked.bind(this, this);
        return onOriginal.apply(socket, [data, callbackHooked]);
      };

      const repl = require("repl").start({ input: socket, output: socket });
      repl.context.ctx = ctx;
      debug("Created repl", ctx.getStore());
    });
  })
  .listen(8081);
debug("Listening on port 8081");
And with this, `AsyncLocalStorage` is not losing its context anymore.

I am sure there might be better and more idiomatic ways of dealing with this, but at least it unblocks me for now.

2021-03-23 (permalink)

I tried to update to the latest version of Node, but no luck, it's still _broken_:
$ node --version
v14.16.0
$ node async_bug.js
Listening on port 8081
For the first snippet, the REPL one, from the remote client's terminal:
$ nc localhost 8081
> ctx.getStore()
undefined
For the second snippet, the one without the REPL, the one with `socket.on(...)` only:
$ node async_bug.js
Listening on port 8081
Connected 1
[undefined] on-data from-1

Connected 2
[undefined] on-data from-2
I read through the whole [#33723](https://github.com/nodejs/node/issues/33723) thread, and realized that if I create an instance of `AsyncResource` when the connection is established, and run the `socket.on` handler within the context of that resource, then the asynchronous state does not get lost any longer:
net
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      const res = new asyncHooks.AsyncResource("REPL");
      debug("Connected", ctx.getStore());
      socket.on("data", (data) => {
        res.runInAsyncScope(
          () => {
            debug(`[${ctx.getStore()}]`, "on-data", data.toString());
          },
          this,
          data
        );
      });
    });
  })
  .listen(8081);
debug("Listening on port 8081");
on the server's stdout:
$ node async_bug.js
Listening on port 8081
Connected 1
[1] on-data from-1

Connected 2
[2] on-data from-2
I quickly hacked the first snippet again, the REPL one, and implemented a man-in-the-middle solution where the socket's input is pumped into the REPL's stdin from inside an `AsyncResource` context, and it's kinda working (i.e. the asynchronous context is _not_ lost, but of course all the evaluation results are dumped on the server's terminal):
net
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      const res = new asyncHooks.AsyncResource("REPL");
      const repl = require("repl").start({ input: null, output: null });
      socket.on("data", (data) => {
        res.runInAsyncScope(
          () => {
            repl.input.emit("data", data);
          },
          this,
          data
        );
      });
      repl.context.ctx = ctx;
      debug("Created repl", ctx.getStore());
    });
  })
  .listen(8081);
debug("Listening on port 8081");
If I run this I get:
$ node async_bug.js
Listening on port 8081
> Created repl 1
ctx.getStore()
1
> Created repl 2
ccttx..etStore(())
2
>
2
Clearly the _wiring_ left something to be desired, but at least the context was not lost.

I guess I can work around that ([eventemitter-asyncresource](https://www.npmjs.com/package/eventemitter-asyncresource) maybe?!), but I wonder if it wasn't somehow expected for the created REPL to be bound to the async context of the request.

2021-03-22 (permalink)

`AsyncLocalStorage` losing state when used with `net.Server` and `repl.REPLServer`

I am poking around a tiny wrapper around `repl` to make it possible for someone to interact with a running Node.js instance remotely, but when I tried to use `AsyncLocalStorage` to attach some state to the active session, such state gets lost as soon as the `REPLServer` takes over.

Save the following into a file:
var asyncHooks = require("async_hooks");
var fs = require("fs");
var util = require("util");
var net = require("net");

function debug(...args) {
  fs.writeFileSync(1, `${util.format(...args)}\n`, { flag: "a" });
}

var ctx = new asyncHooks.AsyncLocalStorage();
var reqId = 0;

net
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      var repl = require("repl").start({
        input: socket,
        output: socket,
      });
      repl.context.ctx = ctx;
      debug("Created repl", ctx.getStore());
    });
  })
  .listen(8081);
debug("Listening on port 8081");
and run it:
$ node remote-repl.js
Listening on port 8081
Now from a different terminal, connect to port `8081` to get your remote REPL:
$ nc localhost 8081
>
Now, accessing `ctx` (well, its content) from the REPL should return the ID of the REPL itself (i.e. an increasing number), but `undefined` is returned instead:
> ctx
AsyncLocalStorage {
  kResourceStore: Symbol(kResourceStore),
  enabled: true
}
> ctx.getStore()
undefined
I suspect it might have something to do with the underlying socket or `EventEmitter`, because when I try to access the content of `ctx` from within a `socket.on('data'...)` block, I still get `undefined`.
net
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      debug("Connected", ctx.getStore());
      socket.on("data", (data) => {
        debug(`[${ctx.getStore()}]`, "on-data", data.toString());
      });
    });
  })
  .listen(8081);
I filed a bug for this, [node/#37866](https://github.com/nodejs/node/issues/37866), so let's wait to hear back from the core developers.

2021-03-19 (permalink)

for each desired change, make the change easy (warning: this may be hard), then make the easy change
Kent Beck, on [Twitter](https://twitter.com/KentBeck/status/250733358307500032)

2021-03-14 (permalink)

? add page on Netmap to matteolandi.net

2021-03-12 (permalink)

Dynamic variables in Node.js (part 2)

The Node.js implementation of dynamic variables that I presented few weeks ago was a bit...lacking, as it did not allow an _external_ actor to interact with (and change) the dynamic environment of a _running_ application.

Imagine if your application exposed a REPL you could [remotely connect to](https://nodejs.org/en/docs/guides/debugging-getting-started/#enabling-remote-debugging-scenarios); now, if you had all the application settings loaded into the dynamic environment, wouldn't it be amazing if you could change those while the application is running and immediately see the effects of such change?  So you do all that: connect to the REPL, run `var { env } = require('dynamic_variables.js')`, only to discover that the actual environment is empty...

That's a bit of a bummer, but it also kind of makes sense if you think about it: all the dynamic bindings are done asynchronously, and when you connect to the REPL you are starting off a completely unrelated execution context, and because of that: a) the dynamic environment turns out to be empty, b) there is no way for you to change it and give the application access to the newly created bindings.

To _fix_ this, we will have to change the internal representation of a dynamic environment, and define it in terms of a stack of _frames_, each containing a set of bindings: a _global_ frame, containing all the _top-level_ bindings (i.e. the ones happening outside of any asynchronous context, when the application boots up); and then a list of _dynamic_ frames to keep track of all the asynchronous bindings.

If we want an _external_ actor to interact with the running application and change its dynamic environment, we will also need a way of _mutating_ existing bindings, living in one of the existing frames, without having to create a new one.  In particular, when run by the external actor this new bind operation is going to affect the global frame; but when run inside an asynchronous context it's going to affect the frames in which those bindings were most recently defined / re-defined.  Hopefully the following Common Lisp snippet will help you understand what I am talking about:
(defparameter *x* 5)

(defun rebind ()
  (setf *x* 15))

(defun test1 ()
  "We create a new _frame_ for *x*, so all the SETF operations affect that"
  (let ((*x* 10))
    (rebind)
    (assert (= *x* 15)))
  (assert (= *x* 5)))

(test1)
(assert (= *x* 5))

(defun test2 ()
  "LET here does not create a new _frame_ for *x*, so SETF ends up actually
  changing the _global_ value"
  (let ((whatever 10))
    (rebind)
    (assert (= *x* 15)))
  (assert (= *x* 15)))

(test2)
(assert (= *x* 15))

(defun test3 ()
  "We don't even create a new _frame_ for *x*...so this behaves like TEST2"
  (rebind)
  (assert (= *x* 15)))

(setf *x* 5)
(test3)
(assert (= *x* 15))
Alright then, let's get to it.  We said the new `DynamicEnvironment` was going to need to keep track of _global_ bindings, as well as _dynamic_ ones; for the first we are going to be using an instance of `Bindings` (i.e. a wrapper around `Map`), while for dynamic ones we will be using an array of `Bindings` all wrapped up inside an instance of `AsyncLocalStorage`:
var DynamicEnvironment = function (...flatBindings) {
  this.globalFrame = new Bindings(parseKVPairs(flatBindings));
  this.dynamicFrames = new AsyncLocalStorage();
};
Note 1: we are not passing `[]` to the `AsyncLocalStorage` constructor because reading it back from an _external_ process will result in an _empty_ context, and if we have to deal with `undefined` values anyway then why bother initializing it in the first place?!
Note 2: the `Bindings` type used here is a little bit different from the one I showed [last time](https://matteolandi.net/plan.html#day-2021-02-28); it has a new `Bindings.prototype.has` method to check if a binding for a given variable exists, its `Bindings.prototype.set` method is now a _destructive_ one (i.e. it mutates the current object), it implements the [iteration protocol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols) (it's going to come in handy later on):
var Bindings = function (kvpairs) {
  this.data = new Map(kvpairs);
};
Bindings.prototype.get = function (name) {
  if (!this.has(name)) {
    throw new Error(`Dynamic variable, unbound: '${name.toString()}'`);
  }
  return this.data.get(name);
};
Bindings.prototype.has = function (name) {
  assert.ok(name, `Dynamic variable name, invalid: ${name.toString()}`);
  return this.data.has(name);
};
Bindings.prototype.set = function (key, value) {
  return this.data.set(key, value);
};
Bindings.prototype[Symbol.iterator] = function () {
  return this.data[Symbol.iterator]();
};
Looking up a binding inside this new environment now translates to finding first the frame, dynamic or global, that _last_ defined a binding for that variable, and then pull the binding value out of it.
DynamicEnvironment.prototype.get = function (name) {
  return this.findBindingFrameOrGlobal(name).get(name);
};
DynamicEnvironment.prototype.findBindingFrameOrGlobal = function (name) {
  let frame;
  for (let each of this.dynamicFrames.getStore() || []) {
    if (each.has(name)) {
      frame = each;
      break;
    }
  }
  return frame || this.globalFrame;
};
On the topic of setting a new binding: we previously said we wanted to enable users to _create_ new bindings (i.e. create a new frame), as well as _update_ existing ones, as that would enable an external actor to interact with a running application.  To do that, instead of creating two separate methods, we will be having `DynamicEnvironment.prototype.set` check the arguments it receives and figure out what to do based on its length: if it receives an even number of arguments, it means the user specified new bindings only with no body function, in which case the method will proceed and update existing frames; if on the other hand the method is provided with a body function (i.e. odd number of arguments), then a new frame with the specified bindings is going to be added to the stack of frames, and the body function will be executed inside that new dynamic context:
DynamicEnvironment.prototype.set = function (...args) {
  const updatesExistingFrames = args.length % 2 === 0;
  if (updatesExistingFrames) {
    for (let [key, value] of parseKVPairs(args)) {
      this.findBindingFrameOrGlobal(key).set(key, value);
    }
  } else {
    const kvpairs = parseKVPairs(args.slice(0, args.length - 1));
    const body = args[args.length - 1];
    const bindings = new Bindings(kvpairs);
    return this.dynamicFrames.run(
      [bindings, ...(this.dynamicFrames.getStore() || [])],
      body
    );
  }
};
Lastly I figured it would be nice to make it possible to iterate over all the bindings of a dynamic environment, so the following implements that:
DynamicEnvironment.prototype[Symbol.iterator] = function () {
  const bindings = new Map(this.globalFrame);
  for (let frame of (this.dynamicFrames.getStore() || []).reverse()) {
    for (let [key, value] of frame) {
      bindings.set(key, value);
    }
  }
  return bindings[Symbol.iterator]();
};
Alright, let's test this out and see if it works.  We are going to create a timer, and inside of it we are going to be logging the content of the dynamic environment:
var env = new DynamicEnvironment('debug', false);

function timer() {
  console.log(env.get('debug'))
}

var tid = setInterval((...args) => timer(...args), 2000);
(You should now start seeing a bunch of `false` messages getting logged in the REPL, every 2 seconds)

Next we are going to listen for incoming connections on port `5001`, and every time we receive one, set up a _remote_ REPL to let the external actor play with the running environment:
var net = require("net");
var repl = require("repl");

function createREPL(socket) {
  console.log("Connection received; creating a REPL!");
  const remote = repl.start({
    prompt: "remote> ",
    input: socket,
    output: socket,
    useGlobal: true,
  });
}

var server = net.createServer((...args) => createREPL(...args));
server.listen(5001, "localhost");
console.log("Remote REPL started on port 5001");
Let's poke around and see what we can do with it:
$ nc localhost 5001
remote> env
DynamicEnvironment {
  globalFrame: Bindings { data: Map(1) { 'debug' => false } },
  dynamicFrames: AsyncLocalStorage {
    kResourceStore: Symbol(kResourceStore),
    enabled: false
  }
}
It looks like we can get hold of the dynamic environment (well, its global frame at least); let's try and change it:
remote> env.set('debug', true)
If all worked as expected, you should now be seeing a series of `true` messages getting logged in the REPL; _that_ means we were successfully able to poke at the application's dynamic environment from the outside.

"You could have easily done that with global variables" one could argue, and they would be right, we could have.  Let's mix in some more dynamic bindings, to show what the system is capable of.

First we reset the environment and add an additional binding, then we change our timer to create some dynamic bindings and finally print the whole dynamic environment:
env.set('logfile', '/tmp/server.log', 'debug', false);

function timer() {
  env.set("logfile", "/tmp/different.log", "date", new Date(), () =>
    setTimeout(() => {
      console.log(env[Symbol.iterator]());
    }, 500)
  );
}
The messages logged in the REPL should now look like the following:
...
[Map Entries] {
  [ 'debug', false ],
  [ 'logfile', '/tmp/different.log' ],
  [ 'date', 2021-03-12T19:37:59.963Z ]
}
...
Let's go back to the remote REPL again, and change the value of the `debug` binding again:
remote> env.set('debug', true)
Wait a couple of seconds, and the messages logged on the REPL should now look like this:
...
[Map Entries] {
  [ 'debug', true ],
  [ 'logfile', '/tmp/different.log' ],
  [ 'date', 2021-03-12T19:40:54.317Z ]
}
...
As we can see the dynamic environment from within the `setTimeout` handler properly reflects the changes done from the _remote_ REPL, as well as the ones triggered by the dynamic bind operation.

And that's it!  I have pushed all the above on [GitHub](https://github.com/iamFIREcracker/dynamic_variables.js), and in there I also added an [example](https://github.com/iamFIREcracker/dynamic_variables.js/tree/master/examples/express) showing how dynamic variables can be used within an Express.js application.

Till next time.

2021-03-11 (permalink)

During World War II, the Navy realized it was losing a lot of aircraft and could better armor its planes to increase their survival. After analyzing where its planes had suffered the most damage, it determined that it needed to reinforce the planes’ wingtips, central body and elevators.
But a statistician named Abraham Wald argued otherwise. He thought the Navy should reinforce the armor of the planes’ nose, engines and mid-body. But why would he suggest that when the planes were taking more damage to the wingtips, central body and elevators? Because in reality, they weren’t. The planes getting shot in the nose area, engines and mid-body were being destroyed from the damage and weren’t making it back to be analyzed.
The Navy thought it had discovered where its planes were suffering the most damage. Instead, it had discovered where its planes could take damage without being destroyed. It wasn’t looking at the whole sample set.
From [Finding the Missing Bullet Holes](https://onebiteblog.com/finding-the-missing-bullet-holes/)

2021-03-05 (permalink)

Am I the only one having a hard time understanding the implementation of Data-Directed Programming, presented in SICP?

A little bit of background first; we are shown how to _tag_ data ([2.4.2 Tagged data](https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-17.html#%_sec_2.4.2)):
(define (attach-tag type-tag contents)
  (cons type-tag contents))
(define (type-tag datum)
  (if (pair? datum)
      (car datum)
      (error "Bad tagged datum -- TYPE-TAG" datum)))
(define (contents datum)
  (if (pair? datum)
      (cdr datum)
      (error "Bad tagged datum -- CONTENTS" datum)))
...how to implement type predicates:
(define (rectangular? z)
  (eq? (type-tag z) 'rectangular))
(define (polar? z)
  (eq? (type-tag z) 'polar))
...how to implement _packages_:
(define (real-part-rectangular z) (car z))
(define (imag-part-rectangular z) (cdr z))
(define (magnitude-rectangular z)
  (sqrt (+ (square (real-part-rectangular z))
           (square (imag-part-rectangular z)))))
(define (angle-rectangular z)
  (atan (imag-part-rectangular z)
        (real-part-rectangular z)))
(define (make-from-real-imag-rectangular x y)
  (attach-tag 'rectangular (cons x y)))
(define (make-from-mag-ang-rectangular r a)
  (attach-tag 'rectangular
              (cons (* r (cos a)) (* r (sin a)))))

;; similar definitions for the "polar" representation
...and how to implement _generic_ selectors:
(define (real-part z)
  (cond ((rectangular? z)
         (real-part-rectangular (contents z)))
        ((polar? z)
         (real-part-polar (contents z)))
        (else (error "Unknown type -- REAL-PART" z))))
(define (imag-part z)
  (cond ((rectangular? z)
         (imag-part-rectangular (contents z)))
        ((polar? z)
         (imag-part-polar (contents z)))
        (else (error "Unknown type -- IMAG-PART" z))))
(define (magnitude z)
  (cond ((rectangular? z)
         (magnitude-rectangular (contents z)))
        ((polar? z)
         (magnitude-polar (contents z)))
        (else (error "Unknown type -- MAGNITUDE" z))))
(define (angle z)
  (cond ((rectangular? z)
         (angle-rectangular (contents z)))
        ((polar? z)
         (angle-polar (contents z)))
        (else (error "Unknown type -- ANGLE" z))))
Clearly it's not OK for REAL-PART, IMAG-PART, or any other _generic_ selector to require change every time a new implementation is added to the system, and this is where "Data-Directed Programming" comes to the rescue.

The idea ([2.4.3 Data-Directed Programming and Additivity](https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-17.html#%_sec_2.4.3)) is to construct a table mapping operations (e.g. REAL-PART, IMAG-PART) and types (e.g. POLAR, RECTANGULAR) to the actual function in charge of fulfilling the user need, and of course to provide a way to pluck the specific implementation out of the table.  In particular:

- `(put <op> <type> <item>)` will install `<item>` in the table, indexed by `<op>` and `<type>`
- `(get <op> <type>)` will look up the `<op>,<type>` entry in the table

Note: we are not given the actual implementations of PUT and GET.
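For what it's worth, a toy implementation of such a two-dimensional table is trivial; here is a sketch in JavaScript (my guess at PUT/GET's behavior, not SICP's code), keying a `Map` on the serialized `(op, type)` pair:

```javascript
// A toy operation/type table -- SICP leaves PUT and GET unimplemented,
// so this is just one possible guess at their behavior.
const table = new Map();

// Serialize (op, type) into a single Map key
const key = (op, type) => JSON.stringify([op, type]);

function put(op, type, item) {
  table.set(key(op, type), item);
}

function get(op, type) {
  return table.get(key(op, type)); // undefined when no entry exists
}

// Mirroring a couple of the rectangular package registrations
put("real-part", ["rectangular"], (z) => z[0]);
put("make-from-real-imag", "rectangular", (x, y) => ["rectangular", [x, y]]);

console.log(get("real-part", ["rectangular"])([3, 4])); // → 3
```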

With this in mind, we can then re-define the rectangular package as follows:
(define (install-rectangular-package)
  ;; internal procedures
  (define (real-part z) (car z))
  (define (imag-part z) (cdr z))
  (define (make-from-real-imag x y) (cons x y))
  (define (magnitude z)
    (sqrt (+ (square (real-part z))
             (square (imag-part z)))))
  (define (angle z)
    (atan (imag-part z) (real-part z)))
  (define (make-from-mag-ang r a)
    (cons (* r (cos a)) (* r (sin a))))
  ;; interface to the rest of the system
  (define (tag x) (attach-tag 'rectangular x))
  (put 'real-part '(rectangular) real-part)
  (put 'imag-part '(rectangular) imag-part)
  (put 'magnitude '(rectangular) magnitude)
  (put 'angle '(rectangular) angle)
  (put 'make-from-real-imag 'rectangular
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'rectangular
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
Here is where things get a bit confusing: why are we _registering_ REAL-PART for the `(rectangular)` type, and not simply for `'rectangular`, as we are doing for MAKE-FROM-REAL-IMAG?

The only explanation I could think of is that we are giving the `<type>` argument of the PUT call different meanings:

- For REAL-PART, `<type>` represents the list of the types of the arguments expected by the registered procedure (i.e. one argument, of type RECTANGULAR)
- For MAKE-FROM-REAL-IMAG instead, `<type>` represents the type of the _returned_ instance

And why would we do that?  Because otherwise it would not be possible to dispatch to the "Right" implementation in case of generic functions with the same argument types (the implementations of MAKE-FROM-REAL-IMAG inside both the rectangular and the polar packages expect 2 arguments of type NUMBER).

Anyways, a similar package for the polar representation is presented, and then finally we are shown the implementation of APPLY-GENERIC, the procedure responsible for invoking the "Right" procedure based on the _types_ of the arguments of the dispatched action:
(define (apply-generic op . args)
  (let ((type-tags (map type-tag args)))
    (let ((proc (get op type-tags)))
      (if proc
          (apply proc (map contents args))
          (error
            "No method for these types -- APPLY-GENERIC"
            (list op type-tags))))))
Here more doubts come to my mind: how can we use this with MAKE-FROM-REAL-IMAG?

Clearly we cannot simply run `(apply-generic 'make-from-real-imag 4 2)`, as that would fail when trying to apply TYPE-TAG to `(4 2)`.  I thought, maybe we pass `(attach-tag 'rectangular (list 4 2))` to APPLY-GENERIC, but then `(map contents args)` would evaluate to `((4 2))` and that is incompatible with the registered procedure, which expects two numbers and not a list of 2 numbers, right?

So where do we go from here?  There _has_ to be a way, but I just cannot find it.

Posted this on [/r/sicp](https://www.reddit.com/r/sicp/comments/ly7wj3/datadirected_programming_and_how_to_invoke/), let's see if anyone can help me out.

2021-02-28 (permalink)

Dynamic variables in Node.js

For those of you unfamiliar with it, Common Lisp has this concept of "dynamic variables", which are nothing more than _global_ variables with _dynamic_ scope; but what does it mean for a variable to have dynamic scope?  It means you can bind a new value to it, and all the _subsequent_ accesses to that variable (within the scope of the binding operation) will return _that_ new value instead of the _previous_ one.  Let's take a look at an example, as I hope it will make things a little bit easier to understand (the asterisks around a variable name indeed mark the variable as dynamic / special):
(defvar *x* 5)

(defun foo ()
  *x*)

(defun bar ()
  (let ((*x* 42))
    (foo)))

> *x*
5

> (foo)
5

> (bar)
42

> *x*
5
Step by step explanation:

- `*x*` was initially bound to `5`, so when we access it we get `5` back -- all good
- We call FOO, which simply returns the value of `*x*`, and without much surprise we get `5` back -- so all good
- We call BAR, which binds `*x*` to `42` before calling FOO and returning its return value, and interestingly enough we now get back `42` instead of `5` -- there it is, the _dynamic_ scope I was talking about earlier
- Lastly, we access `*x*` and we get `5` back -- we are outside BAR's new binding scope, so the value of `*x*` has been restored to its previous value

Let's go back to the definition that I gave earlier: _global_ variables, with _dynamic_ scope.  I know that the first rule of global variables is: "thou shalt not use global variables", but it's just that _sometimes_ they appear to be the right tool for the job, especially in the context of Web applications; think about the following use cases:

- getting a hold of the currently logged in user
- automatically adding the content of the `X-Request-Id` header to each log trace
- querying the _right_ database, based on the logged in user's tenant

How would you implement these?  Either you shove all this information into a context object, and pass it around, _everywhere_; or maybe you forget about all the bad things you read about global variables, and consciously and carefully agree to use them where it really matters, where it really makes a difference.
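To make the context-object alternative concrete, here is a small sketch (all function names are made up for illustration) of what "pass it around, everywhere" ends up looking like:

```javascript
// Hypothetical call chain: every function must accept and forward ctx,
// even the ones that never use it themselves.
function handleRequest(ctx) {
  return renderDashboard(ctx);
}

function renderDashboard(ctx) {
  // renderDashboard does not care about ctx, but must forward it anyway
  return loadWidgets(ctx);
}

function loadWidgets(ctx) {
  // only here, a few levels down, do we actually need the request id
  return `widgets for request ${ctx.requestId}`;
}

handleRequest({ requestId: 'abc-123' }); // → 'widgets for request abc-123'
```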

Anyways, enough talking about global variables; I do not want this to be an essay about their pros and cons (I am sure the Internet is full of material and opinions on the subject); instead, let's go back to our original goal: trying to implement dynamic variables in Node.js.

It all begins with the definition of a type representing a bag of bindings (we are going to wrap the standard `Map` type for this):
var Bindings = function (kvpairs) {
  this.data = new Map(kvpairs);
};

> new Bindings().data
Map(0) {}

> new Bindings([['a', 1]]).data
Map(1) { 'a' => 1 }

> new Bindings([['a', 1], ['b', 2]]).data
Map(2) { 'a' => 1, 'b' => 2 }
(Don't worry about that _ugly_ syntax for now, we will deal with it later)

Getting a binding (i.e. getting the value bound to a variable), should be as easy as calling `Map.prototype.get` (plus some additional logic to validate user input):
Bindings.prototype.get = function (name) {
  assert.ok(name, `Dynamic variable name, invalid: ${name}`);
  if (!this.data.has(name)) {
    throw new Error(`Dynamic variable, unbound: '${name}'`);
  }
  return this.data.get(name);
};
First we check that the name of the variable is indeed _valid_, then we confirm that a binding for that variable actually exists, and finally we return the value bound to that variable.  Let's play with it and confirm it's all working fine:
var bb = new Bindings([['a', 1], ['b', 2]])

> bb.get('a')
1

> bb.get('b')
2

> bb.get('c')
Uncaught Error: Dynamic variable, unbound: 'c'
    at getBinding (repl:4:11)

> bb.get()
Uncaught AssertionError [ERR_ASSERTION]: Dynamic variable name, invalid: undefined
    at getBinding (repl:2:10)
    at repl:1:1
    at Script.runInThisContext (vm.js:131:18)
    at REPLServer.defaultEval (repl.js:472:29)
    at bound (domain.js:430:14)
    at REPLServer.runBound [as eval] (domain.js:443:12)
    at REPLServer.onLine (repl.js:794:10)
    at REPLServer.emit (events.js:326:22)
    at REPLServer.EventEmitter.emit (domain.js:486:12)
    at REPLServer.Interface._onLine (readline.js:337:10) {
  generatedMessage: false,
  code: 'ERR_ASSERTION',
  actual: undefined,
  expected: true,
  operator: '=='
}
Lastly, to set new bindings, we will create a new `Bindings` object and initialize it with the existing bindings and the newly defined ones..._merged_ together:
Bindings.prototype.set = function (kvpairs) {
  return new Bindings([...this.data, ...kvpairs]);
};
Again, let's play with this to confirm that it's all working as expected:
var bb = new Bindings([['a', 1], ['b', 2]])

> bb.set([['c', 3]]).data
Map(3) { 'a' => 1, 'b' => 2, 'c' => 3 }

> bb.set([['a', 3]]).data
Map(2) { 'a' => 3, 'b' => 2 }

> bb.data
Map(2) { 'a' => 1, 'b' => 2 }
All good, great!  Before we dive into the details of the implementation of dynamic variables, let's first implement a couple of functions which will come in handy down the line.  The first one is there to simplify the _syntax_ for creating new bindings: we don't want users to specify new bindings via nested lists (i.e. `[['a', 1], ['b', 2]]`); instead, we would like them to use a _flattened_ list (i.e. `['a', 1, 'b', 2]`):
function parseKVPairs(flatBindings) {
  assert.ok(
    flatBindings.length % 2 === 0,
    `Bindings arguments, expected even number of elements, but got: ${flatBindings.length}`
  );

  const kvpairs = [];
  for (var i = 0; i < flatBindings.length; i += 2) {
    kvpairs.push(flatBindings.slice(i, i + 2));
  }
  return kvpairs;
};
Nothing crazy about this: we first confirm that the number of bindings is indeed even, and then wrap every pair of adjacent elements into a nested list.  Let's give it a go:
> parseKVPairs([])
[]

> parseKVPairs(['a', 1])
[ [ 'a', 1 ] ]

> parseKVPairs(['a', 1, 'b', 2])
[ [ 'a', 1 ], [ 'b', 2 ] ]
Perfect!  The second utility function might look a bit cryptic at first, simply because I have yet to show you the problem it tries to solve, but hopefully it will all make sense soon.  We feed it a list of elements representing a flattened list of bindings followed by a callback function, and expect it to return a list whose first element is the list of key-value pairs created from the flattened bindings, and whose second element is the given callback function:
function parseDynamicEnvironmentSetArguments(args) {
  assert.ok(args, `Function arguments, invalid: ${args}`);
  assert.ok(
    args.length % 2 === 1,
    `Function arguments, expected odd number of elements, but got: ${args.length}`
  );

  const kvpairs = parseKVPairs(args.slice(0, args.length - 1));
  const body = args[args.length - 1];

  return [kvpairs, body];
};
Let's test it out:
> parseDynamicEnvironmentSetArguments([() => {}])
[ [], [Function (anonymous)] ]

> parseDynamicEnvironmentSetArguments(['a', 1, () => {}])
[ [ [ 'a', 1 ] ], [Function (anonymous)] ]

> parseDynamicEnvironmentSetArguments(['a', 1, 'b', 2, () => {}])
[ [ [ 'a', 1 ], [ 'b', 2 ] ], [Function (anonymous)] ]
Alright, it's all working as expected; with all of this defined and taken care of, it's time we took a look at a possible implementation of a _dynamic environment_, i.e. an environment holding a bunch of dynamic variables.

The biggest challenge in implementing dynamic variables in Node.js is figuring out a way to persist _state_ across changes of asynchronous context: you bind `*x*` to `5`, invoke `setTimeout`, and when the callback is invoked you expect `*x*` to still be bound to `5`.  Similarly, if two asynchronous operations happen to re-bind the same dynamic variable, you don't want any of them to step on each other's toes.
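To see why a plain global variable with save/overwrite/restore is not enough, here is a small sketch (names are mine): two concurrent "rebinds" interleave, and the callbacks end up reading whatever value happened to be in the global at the time the timers fired, not the value of their own binding:

```javascript
// Naive rebinding over a plain global: save the old value, overwrite,
// restore when the body settles.  Correct for synchronous code,
// broken as soon as rebinds overlap in time.
let x = 5;

function withX(value, body) {
  const saved = x;
  x = value;
  return Promise.resolve(body()).finally(() => { x = saved; });
}

// Helper: run fn after ms milliseconds, resolving with its return value
const delayed = (ms, fn) => new Promise((r) => setTimeout(() => r(fn()), ms));

function demo() {
  return Promise.all([
    withX(42, () => delayed(10, () => x)), // we'd expect 42...
    withX(52, () => delayed(20, () => x)), // ...and 52
  ]);
}

// demo() resolves to [52, 5]: the first callback sees the second binding,
// and the second callback sees the already-restored value.
```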

Luckily for us, the Node.js core team has been working on this _problem_ for quite some time now, and you can see the result of their effort in the [`async_hooks`](https://nodejs.org/api/async_hooks.html) module.  I am not going to bore you with its implementation details (mostly because I am not familiar with it myself), but for what we are trying to achieve here, all we need to know is that:

- Each piece of running code (_user_ code) can have an ID attached, identifying its asynchronous execution context
- Each piece of running code (_user_ code) can have another ID attached, identifying the asynchronous context that _triggered_, directly or indirectly, the current one (i.e. if you create three nested promises, each callback, when executed, will probably have a different `asyncId` value but the same `triggerAsyncId`)
- There is a low-level API, [`createHook`](https://nodejs.org/api/async_hooks.html#async_hooks_async_hooks_createhook_callbacks), that can be used to get notified when an asynchronous execution context is created or destroyed; with it, one could think of attaching some _payload_ to the current execution context, and then exposing another API for user code to access it
- There is a high-level API, [`AsyncLocalStorage`](https://nodejs.org/api/async_hooks.html#async_hooks_class_asynclocalstorage), that shields the user from all the above complexity, and offers a simple way of running user code with a given piece of _payload_ attached to the current execution context

It goes without saying that `AsyncLocalStorage` is what we will use to implement our dynamic environment:

- Getting a binding translates to getting a hold of the current execution context's payload (i.e. the bindings), and returning whichever value is currently bound to the given variable name
- Setting a binding translates to creating a new set of bindings, attaching it to the current execution context, and running user code within it -- old bindings will be automatically restored after user code (synchronous or asynchronous) has finished running

Alright, let's get our hands dirty.  Let's start by creating a new type for the dynamic environment:
var { AsyncLocalStorage } = require("async_hooks");

var DynamicEnvironment = function (...flatBindings) {
  this.ctx = new AsyncLocalStorage();
  this.ctx.enterWith(new Bindings(parseKVPairs(flatBindings)));
};
Here all we do is create the asynchronous context object (i.e. an instance of `AsyncLocalStorage`), and then initialize it with some user-defined bindings (e.g. `'a', 1, 'b', 2`).  Let's give it a go and see what happens when we call the constructor (note: `ctx.getStore()` is how you access the _payload_ of the current asynchronous context):
> new DynamicEnvironment().ctx.getStore()
Bindings { data: Map(0) {} }

> new DynamicEnvironment('a', 1).ctx.getStore()
Bindings { data: Map(1) { 'a' => 1 } }

> new DynamicEnvironment('a', 1, 'b', 2).ctx.getStore()
Bindings { data: Map(2) { 'a' => 1, 'b' => 2 } }
Let's now define a method to get the value of a specific binding (note how `Bindings`, our previously defined type, is doing all the heavy lifting here):
DynamicEnvironment.prototype.get = function (name) {
  return this.ctx.getStore().get(name);
};

var env = new DynamicEnvironment('a', 1, 'b', 2)

> env.get('a')
1

> env.get('b')
2

> env.get('c')
Uncaught Error: Dynamic variable, unbound: 'c'
    at Bindings.get (repl:4:11)
    at DynamicEnvironment.get (repl:2:30)

> env.get()
Uncaught AssertionError [ERR_ASSERTION]: Dynamic variable name, invalid: undefined
    at Bindings.get (repl:2:3)
    at DynamicEnvironment.get (repl:2:30)
    at repl:1:5
    at Script.runInThisContext (vm.js:131:18)
    at REPLServer.defaultEval (repl.js:472:29)
    at bound (domain.js:430:14)
    at REPLServer.runBound [as eval] (domain.js:443:12)
    at REPLServer.onLine (repl.js:794:10)
    at REPLServer.emit (events.js:326:22)
    at REPLServer.EventEmitter.emit (domain.js:486:12) {
  generatedMessage: false,
  code: 'ERR_ASSERTION',
  actual: undefined,
  expected: true,
  operator: '=='
}
The last piece of the puzzle is a means of setting a new binding (or bindings) and running some user code within the scope of these new bindings; thanks to `Bindings`, `AsyncLocalStorage`, and the cryptic `parseDynamicEnvironmentSetArguments` I showed you before, this could not have been any easier to implement:
DynamicEnvironment.prototype.set = function (...args) {
  const [kvpairs, body] = parseDynamicEnvironmentSetArguments(args);
  const bindings = this.ctx.getStore().set(kvpairs);
  return this.ctx.run(bindings, body);
};
First we parse the function arguments into key-value pairs plus the callback inside of which the new bindings will be active; then we create a new `Bindings` object merging the new bindings with any existing ones; lastly we tell `AsyncLocalStorage` to _do its magic_ (i.e. attach the new bindings to the execution context, and run the user code).  Let's try this out, and see if it works or not:
async function test(body) {
  try {
    await body();
    console.log("A-OK!!!");
  } catch (err) {
    console.error(err);
  }
}

> test(async () => {
  var env = new DynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => foo());
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
});
Promise { <pending> }
A-OK!!!
It seems like it _is_ indeed working; but what if we added some asynchronous operations within the scope of the `set` call?
> test(async () => {
  var env = new DynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return new Promise((resolve) => {
        setTimeout(() => resolve(foo()), 2000);
      });
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
});
Promise { <pending> }
A-OK!!!
Still working, great!  What about multiple asynchronous operations at the same time?
> test(async () => {
  var env = new DynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return Promise.all([
        foo(),
        env.set(
          "x",
          52,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 1000);
            })
        ),
        env.set(
          "x",
          72,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 2000);
            })
        ),
      ]);
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.deepEqual(await bar(), [42, 52, 72]);
  assert.equal(env.get("x"), 5);
});
Promise { <pending> }
A-OK!!!
It works, and by the looks of it we were indeed able to implement "dynamic variables" in Node.js.

I added all the above into a new repository, [`dynamic_variables.js`](https://github.com/iamFIREcracker/dynamic_variables.js), so feel free to play with it in your REPLs and do let me know if anything breaks for you.

Also, it's worth remembering that `async_hooks` is still considered _experimental_, so its API might suddenly change with a new release of Node.js; that, and the fact that the current implementation might still contain some nasty bugs causing your dynamic bindings to get lost across switches of execution context.  This might not be a big deal if you were just messing around with this, or if you were only planning to use it to enhance your logging capabilities; but if you were planning anything more _serious_ than that, like selecting the "right" database connection based on the logged-in user's tenant, then I would strongly recommend that you test as many execution paths as possible and confirm that no binding gets lost in the process.  You know, it works...until it doesn't!

PS. For educational purposes, here is a different implementation of a dynamic environment, one that does _not_ use `AsyncLocalStorage` to keep track of re-binds (it uses a stack of _active_ bindings instead) and that, because of this, will most surely fail our expectations in case of multiple nested, asynchronous re-binds:
var UnsafeDynamicEnvironment = function (...flatBindings) {
  this.snapshots = [new Bindings(parseKVPairs(flatBindings))];
};
UnsafeDynamicEnvironment.prototype.get = function (name) {
  return this._getActiveSnapshot().get(name);
};
UnsafeDynamicEnvironment.prototype.set = function (...args) {
  const [kvpairs, body] = parseDynamicEnvironmentSetArguments(args);
  const bindings = this._getActiveSnapshot().set(kvpairs);
  return this._runWithBindings(bindings, body);
};
UnsafeDynamicEnvironment.prototype._getActiveSnapshot = function () {
  return this.snapshots[this.snapshots.length - 1];
};
UnsafeDynamicEnvironment.prototype._runWithBindings = async function (bindings, body) {
  this.snapshots.push(bindings);
  try {
    return await body();
  } finally {
    this.snapshots.pop();
  }
};

> test(async () => {
  var env = new UnsafeDynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => foo());
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
});
Promise { <pending> }
A-OK!!!

> test(async () => {
  var env = new UnsafeDynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return new Promise((resolve) => {
        setTimeout(() => resolve(foo()), 2000);
      });
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
});
Promise { <pending> }
A-OK!!!

> test(async () => {
  var env = new UnsafeDynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return Promise.all([
        foo(),
        env.set(
          "x",
          52,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 1000);
            })
        ),
        env.set(
          "x",
          72,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 2000);
            })
        ),
      ]);
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.deepEqual(await bar(), [42, 52, 72]);
  assert.equal(env.get("x"), 5);
});
Promise { <pending> }
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal:

[
  42,
  72,
  52
]

should loosely deep-equal

[
  42,
  52,
  72
]
    at repl:34:14
    at async test (repl:3:9) {
  generatedMessage: true,
  code: 'ERR_ASSERTION',
  actual: [Array],
  expected: [Array],
  operator: 'deepEqual'
}

2021-02-26 (permalink)

? book: [The Web Application Hacker's Handbook: Finding and Exploiting Security Flaws](https://www.amazon.it/Web-Application-Hackers-Handbook-Exploiting/dp/1118026470)

2021-02-19 (permalink)

+ book: [Smalltalk Best Practice Patterns](https://www.amazon.it/Smalltalk-Best-Practice-Patterns-Kent-dp-013476904X/dp/013476904X)

? book: [Refactoring](https://martinfowler.com/books/refactoring.html)


If your Mac refuses to go to sleep, use `pmset -g assertions` to see who's preventing this from happening...and kill it -- the other day it was one of my browser tabs, and only God knows what it was doing in the background...welcome to 2021!

Relevant read: [Sleepless Mac](https://stuff-things.net/2017/05/10/sleepless-mac/)

2021-02-14 (permalink)

* finished reading [The Common Lisp Condition System](https://www.apress.com/gp/book/9781484261330)


The "earmuff notation" (extracted from The Common Lisp Condition System)

The "earmuff notation" in CL is the conventional notation for dynamic variables.

Here is a very poor and concise _refresher_ (I should have used _reminder_ there) on how dynamic variables work in CL:
(defvar *x* 5)

(defun bar ()
  *x*)

(defun foo ()
  (let ((*x* 42))
    (bar)))

> (foo)
42

> (bar)
5

On reporting custom conditions (extracted from The Common Lisp Condition System)

Condition objects are printed in their unreadable form if the dynamic variable `*print-escape*` is bound to T.  However, when that same variable is bound to NIL, printing a condition object causes its _report function_ to be invoked.  That report function accepts a pair of arguments: the condition object itself and the stream that the report should be written to.  This means that the report function is capable of querying the condition object for all the information that is needed for generating a proper report; it is good style for condition reporting functions to be self-contained, which means not to depend on any Lisp data other than the condition object itself.

On CERROR, and its signature (extracted from The Common Lisp Condition System)

An interesting feature of CERROR is that the optional arguments passed to the function are used _both_ for constructing the condition object _and_ as format arguments for the passed format string.  Let us analyze the following form:
 (cerror "Continue after signaling a SIMPLE-ERROR ~
          with arguments: ~S ~S ~S ~S"
         'simple-error
         :format-control "A simple error signaled with ~A."
         :format-arguments '(42))
Executing this form will cause the system to signal a SIMPLE-ERROR; we expect the report of that condition object to be `A simple error signaled with 42.`.  In addition, we expect a new restart to be bound around the error site; the restart will have a longer report form, reading `Continue after signaling a SIMPLE-ERROR with arguments: ` and then listing all the optional arguments that CERROR was called with.

On the condition-restart association (extracted from The Common Lisp Condition System)

Let's take a look at the following code snippet:
Debug> (let ((*print-escape* nil))
         (format t "~&~{;; ~W~%~}" (compute-restarts)))

;; Return to level 2 of the debugger.
;; Retry using SETG.
;; Use specified function
;; Return to level 1 of the debugger.
;; Supply a new value for *X*.
;; Retry evaluating the form.
;; Return to the top level.
NIL

Debug> (let ((*print-escape* nil))
         (format t "~&~{;; ~W~%~}" (compute-restarts *debugger-condition*)))

;; Return to level 2 of the debugger.
;; Retry using SETG.
;; Use specified function
;; Return to level 1 of the debugger.
;; Retry evaluating the form.
;; Return to the top level.
NIL
Let us differentiate these two lists of restarts via SET-DIFFERENCE, to find restarts which appear in the first list but not in the second:
Debug> (let ((*print-escape* nil))
         (format t "~&~{;; ~W~%~}"
                   (set-difference
                     (compute-restarts)
                     (compute-restarts *debugger-condition*)))
;; Supply a new value for *X*.
NIL
We can see that calling COMPUTE-RESTARTS _without_ passing it the UNDEFINED-FUNCTION condition has caused one more restart to appear -- the missing STORE-VALUE restart, with the report `Supply a new value for *X*.`, which was bound by the STORE-VALUE form we originally evaluated.  From this observation, we may infer that there must be some sort of relationship between condition objects and restart objects that is checked by COMPUTE-RESTARTS and FIND-RESTART, among other functions.  This relationship in CL is named the _condition-restart association_.

On programmatically calling MACROEXPAND (extracted from The Common Lisp Condition System)

We can see that the macro definition has been expanded to include a special _environment_ object inside its lambda list; this object is not directly passed to it by the programmer, but is usually provided by the Lisp implementation as part of calling its macroexpander.  While we will not describe environment objects in detail, we will note their traits which are most relevant to us for the matter at hand.  This environment argument is available only in macros; it holds, among other things, information about local macro definitions -- information which is important for passing into the MACROEXPAND calls of our new functions.  Since these calls must perform macroexpansion of their own, it is important that they receive this environment argument to be able to expand macros correctly.  That is why RESTART-CASE passed the argument to EXPAND-RESTART-CASE, which -- in turn -- passes it into our yet-undefined functions RESTART-CASE-SIGNALING-FORM-P and RESTART-CASE-EXPAND-SIGNALING-FORM.

Refresher on generalized booleans (extracted from the Common Lisp Condition System)

In the second test, the function returned a non-NIL value, which in CL is "as good as" returning true.  CL has generalized booleans, which means that anything that is not NIL counts as a true value, even though the symbol T is designated to be the "canonical" representation of truth -- the standard type BOOLEAN, after all, is defined as `(member nil t)`.

2021-02-12 (permalink)

? Look at OSC52 and see if I can make `cb` use it instead of piping data through the SSH tunnel. Some useful links: https://medium.com/free-code-camp/tmux-in-practice-integration-with-system-clipboard-bcd72c62ff7b, https://github.com/fcpg/vim-osc52/blob/master/plugin/osc52.vim

? [The UNIX-Haters Handbook](https://www.amazon.com/UNIX-Haters-Handbook-UNIX-Haters-line/dp/1568842031) @book

? [ASK YOUR DEVELOPER - how to harness the power of software developers and win in the 21st century](https://www.askyourdeveloper.com/) @book


From Hacker News: [Is Y Combinator worth it?](https://news.ycombinator.com/item?id=24616649)
Would you disqualify companies that had bootstrapped successfully, were profitable, and did not take any outside money?
Not a judging question - I understand the value of a good investor's due diligence and backing in improving the odds of sane leadership. But it's interesting to think how that can impact a successful, self-funded company's access to talent.
Someone replied:
As a founder, self funding makes a lot of sense, as you avoid dilution and aren’t beholden to a group of people expecting 10x growth every year.
But as a candidate I’d be cautious about taking a job at a self funded startup. They’re probably cash constrained so the base salary might not be as much. And if there is equity, I would want management beholden to people expecting 10x growth.
Is the second paragraph true though?
As an engineer I would love to work for a company that makes money and pays me from that money.
Exactly, freedom from investors, chained to your customers.  The conversation goes on:
Curious, why does it matter if your paycheck comes from VC money or revenue money? In the end the same amount comes to your bank account.
And eventually someone steps in and explains how things work:
Because VCs work by pushing for high returns on some of their investments, not low returns on all of their investments. They therefore push for business decisions to force high growth, when there can be a better chance of success with a slow burn in some situations. So you end up taking business risks to chase high growth, instead of simply focusing on a niche market to stay profitable, and having a flexible timeframe for growth.
As an engineer, this does hit the paycheck directly - if they go hire more devs to speed product features to market while at the same time increasing sales and marketing... it can change from a bootstrapped model with no end to the runway into a situation where you do have a runway, and it is counted in months, not years. Instead of steady pay with plenty of time to deliver new product, I'm being pushed to deliver quickly, and if we fail, we're unemployed.
And yes, we probably get lower salaries for that model because we are taking less risk. But steady, lower stress work is desired by some of us, and a small bootstrapped group delivers it better than a VC.
The above finds me in absolute agreement -- and I could not have explained it in simpler terms.

2021-02-10 (permalink)

While this was happening, the big sharks sensed an opportunity; larger trading professionals joined the fight but only for their own profit. BlackRock and eight other Wallstreet titans made a combined 16 billion dollars on GameStop in a matter of days. You see? It wasn't just the redditors that made money, and that's a key point in the story: Wallstreet always finds a way to win.
[Reddit vs Wallstreet - GameStop, The Movie](https://www.youtube.com/watch?v=YFQ-v1jCpF0)

2021-02-05 (permalink)

I wanted to fix some of the compilation warnings in my AdventOfCode-solutions-in-Common-Lisp [project](https://iamFIREcracker.github.com/adventofcode), and this one in particular caught my attention.

Given this function:
(defun count-hits (map delta-col delta-row)
  (loop with (rows cols) = (array-dimensions map)
        for row below rows by delta-row
        for col = 0 then (mod (+ col delta-col) cols)
        count (char= (aref map row col) #\#)))
Try and evaluate in the REPL (SBCL/2.0.11 in my case) and you should get the following style warnings back:
; in: DEFUN COUNT-HITS
;     (LOOP AOC/2020/03::WITH (AOC/2020/03::ROWS
;                              AOC/2020/03::COLS) = AOC/2020/03::DIMS
;           AOC/2020/03::FOR AOC/2020/03::ROW AOC/2020/03::BELOW AOC/2020/03::ROWS AOC/2020/03::BY AOC/2020/03::DELTA-ROW
;           AOC/2020/03::FOR ...)
; --> BLOCK LET SB-LOOP::LOOP-DESTRUCTURING-BIND DESTRUCTURING-BIND
; --> SB-INT:BINDING* LET* IF
; ==>
;   NIL
;
; caught STYLE-WARNING:
;   The binding of #:LOOP-LIMIT-1 is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
;
; caught STYLE-WARNING:
;   The binding of SB-C::DIVISOR is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
;   caught 2 STYLE-WARNING conditions
WARNING: redefining AOC/2020/03::COUNT-HITS in DEFUN
COUNT-HITS
I can avoid the implicit call to DESTRUCTURING-BIND (LOOP-DESTRUCTURING-BIND to be precise, as that is what the `with (rows cols) = ...` expression gets expanded into) by replacing ARRAY-DIMENSIONS with two calls to ARRAY-DIMENSION:
(defun count-hits (map delta-col delta-row)
  (loop with rows = (array-dimension map 0)
        with cols = (array-dimension map 1)
        for row below rows by delta-row
        for col = 0 then (mod (+ col delta-col) cols)
        count (char= (aref map row col) #\#)))
Now _that_ seems to have solved the problem (i.e. no style warning anymore).

What if I wrapped the original LOOP expression into a DESTRUCTURING-BIND form in which I initialize `rows` and `cols`?
(defun count-hits (map delta-col delta-row)
  (destructuring-bind (rows cols) (array-dimensions map)
    (loop for row below rows by delta-row
          for col = 0 then (mod (+ col delta-col) cols)
          count (char= (aref map row col) #\#))))
This version, too, does **not** cause the compiler to throw any style warnings.  What exactly is going on?

Let's try and find the minimal snippet to reproduce it:
(defvar arr (make-array '(2 5) :initial-contents '((1 2 3 4 5)
                                                   (6 7 8 9 10))))
(defun test-1 (arr)
  (loop with (rows cols) = (array-dimensions arr)
        for r below rows do
        (loop for c below cols do
              (print (aref arr r c)))))
When I send the above to the REPL I get:
; in: DEFUN TEST-1
;     (LOOP AOC/2020/03::WITH (AOC/2020/03::ROWS
;                              AOC/2020/03::COLS) = (ARRAY-DIMENSIONS
;                                                    AOC/2020/03::ARR)
;           AOC/2020/03::FOR AOC/2020/03::R AOC/2020/03::BELOW AOC/2020/03::ROWS
;           DO (LOOP AOC/2020/03::FOR AOC/2020/03::C AOC/2020/03::BELOW AOC/2020/03::COLS
;                    DO (PRINT
;                        (AREF AOC/2020/03::ARR AOC/2020/03::R AOC/2020/03::C))))
; --> BLOCK LET SB-LOOP::LOOP-DESTRUCTURING-BIND DESTRUCTURING-BIND
; --> SB-INT:BINDING* LET* IF
; ==>
;   NIL
;
; caught STYLE-WARNING:
;   The binding of SB-C::Y is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
;
; caught STYLE-WARNING:
;   The binding of SB-C::Y is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
;   caught 2 STYLE-WARNING conditions
Perfect.  Let's try and expand that LOOP expression, _recursively_, and see if anything stands out:
(BLOCK NIL
  (LET ((#:LOOP-DESTRUCTURE-689 (ARRAY-DIMENSIONS ARR)))
    (DECLARE)
    (LET* ((#:G692
            (SB-C::CHECK-DS-LIST/&REST #:LOOP-DESTRUCTURE-689 0 2
                                       '(&OPTIONAL ROWS COLS
                                         . #:LOOP-IGNORED-691)))
           (ROWS
            (IF #:G692
                (LET ((#:G693 (CAR #:G692)))
                  (PROGN (SETQ #:G692 (CDR #:G692)) #:G693))))
           (COLS
            (IF #:G692
                (LET ((#:G694 (CAR #:G692)))
                  (PROGN (SETQ #:G692 (CDR #:G692)) #:G694))))
           (#:LOOP-IGNORED-691 #:G692))
      (DECLARE (IGNORE #:LOOP-IGNORED-691))
      (LET ((#:LOOP-LIMIT-690 ROWS) (R 0))
        (DECLARE (IGNORABLE R)
                 (TYPE (AND NUMBER REAL) R)
                 (IGNORABLE #:LOOP-LIMIT-690)
                 (TYPE (AND NUMBER REAL) #:LOOP-LIMIT-690))
        (TAGBODY
         SB-LOOP::NEXT-LOOP
          (IF (>= R #:LOOP-LIMIT-690)
              (GO SB-LOOP::END-LOOP))
          (BLOCK NIL
            (LET ((#:LOOP-LIMIT-695 COLS) (C 0))
              (DECLARE (IGNORABLE C)
                       (TYPE (AND NUMBER REAL) C)
                       (IGNORABLE #:LOOP-LIMIT-695)
                       (TYPE (AND NUMBER REAL) #:LOOP-LIMIT-695))
              (TAGBODY
               SB-LOOP::NEXT-LOOP
                (IF (>= C #:LOOP-LIMIT-695)
                    (GO SB-LOOP::END-LOOP))
                (PRINT (AREF ARR R C))
                (SETQ C (1+ C))
                (GO SB-LOOP::NEXT-LOOP)
               SB-LOOP::END-LOOP)))
          (SETQ R (1+ R))
          (GO SB-LOOP::NEXT-LOOP)
         SB-LOOP::END-LOOP)))))
I am afraid the compiler is right after all:

- LOOP's destructuring bind does not signal an error if the list you are destructuring against does not have enough elements; it simply binds all the _extra_ variables to NIL
- This means that both `rows` and `cols` can be NIL
- `rows`, however, is used after the keyword `:below`, and because of that it is expected to be a NUMBER
- But NIL is not a valid NUMBER, hence the warning

Technically the compiler is right, but what a pain...
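The first bullet is easy to confirm at the REPL; a minimal sketch (any variable missing a corresponding element in the destructured list is silently bound to NIL):

```lisp
(loop with (a b) = (list 1)
      repeat 1
      collect (list a b))
;; => ((1 NIL)) -- `b` was silently bound to NIL
```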

Anyways, I reported this as a bug to sbcl-bugs@lists.sourceforge.net; let's see whether they agree it's a bug or not.

2021-02-02 (permalink)

"Great flight up... Gotta work on that landing"
SpaceX commentator, at the end of the [Starship SN9 flight test](https://mobile.twitter.com/SpaceX/status/1356699321840721920)

2021-01-20 (permalink)

any legit use case for PROG2 though? It seems very ad-hoc-y
Here is a _legit_ use case for PROG2:
(prog2 (prologue)
    (something-that-returns-a-value)
  (epilogue))
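As a reminder of the semantics (and the reason this pattern works): PROG2 evaluates all of its forms in order, but returns the value of its **second** form:

```lisp
(prog2
    (print 'prologue) ; evaluated first, value discarded
    (+ 1 2)           ; the value of this form is what PROG2 returns
  (print 'epilogue))  ; evaluated last, value discarded
;; prints PROLOGUE and EPILOGUE, and evaluates to 3
```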
It turns out there is a library, `:cl-advice`, that exploits PROG2 in an almost identical way (check this out on [GitHub](https://github.com/szos/cl-advice/blob/3d8653da763ab67b8abda1dba8c7783806a10e64/cl-advice.lisp#L137-L142)):
(prog2 (when ,before
         (apply ,before ,dispatch-args))
       (if ,around
         (apply ,around ,dispatch-args)
         (apply ,main ,dispatch-args))
       (when ,after (apply ,after ,dispatch-args)))
Seen this way, PROG2 is a non-object-oriented `:before`/`:after` combination.

2021-01-17 (permalink)

Advent of Code: [2015/25](https://adventofcode.com/2015/day/25)

The task for today, i.e. Christmas, is to boot up the weather machine -- no idea what that is -- and all we have to do is read a code from the instruction manual and enter it on the console.  Except we don't know where the manual is, so we need to generate the sequence of codes first.

We are given the rules of how this sequence has to be constructed:
   | 1   2   3   4   5   6
---+---+---+---+---+---+---+
 1 |  1   3   6  10  15  21
 2 |  2   5   9  14  20
 3 |  4   8  13  19
 4 |  7  12  18
 5 | 11  17
 6 | 16

   |    1         2         3         4         5         6
---+---------+---------+---------+---------+---------+---------+
 1 | 20151125  18749137  17289845  30943339  10071777  33511524
 2 | 31916031  21629792  16929656   7726640  15514188   4041754
 3 | 16080970   8057251   1601130   7981243  11661866  16474243
 4 | 24592653  32451966  21345942   9380097  10600672  31527494
 5 |    77061  17552253  28094349   6899651   9250759  31663883
 6 | 33071741   6796745  25397450  24659492   1534922  27995004
First, let's begin by parsing the input to know the position in this _diagonal_ sequence where we will find the code to boot up the weather machine.
(defun parse-target-position (lines)
  (cl-ppcre:register-groups-bind ((#'parse-integer row col))
      ("row (\\d+), column (\\d+)" (first lines))
    (list row col)))
Next, let's implement a generator of all the positions of this _diagonal_ sequence:

- Return `(1 1)` the first time
- Then move `1` up, and `1` right
- If the current row is `0`, we need to wrap around; otherwise we have the next position of the sequence
- Repeat
(defun next-grid-position (&optional prev)
  (if (not prev)
    (list 1 1)
    (let ((next (mapcar #'+ prev (list -1 1))))
      (if (= (first next) 0)
        (list (second next) 1)
        next))))
Next, let's implement a generator of all the values of this _diagonal_ sequence (i.e. all the possible codes):

- Start with `20151125`
- Then multiply it by `252533` and take the remainder from dividing that by `33554393`
- Repeat
(defun next-code (&optional prev)
  (if (not prev) 20151125 (mod (* prev 252533) 33554393)))
With these two generators, finding the solution to part 1 should be pretty straightforward:

- Generate the next position
- Generate the next value
- If the position we are at is the one we are looking for, then the current value is our answer
(defun part1 (target)
  (loop for pos = (next-grid-position) then (next-grid-position pos)
        for code = (next-code) then (next-code code)
        when (equal pos target) return code))
No part 2 for Dec 25, so here goes the final plumbing:
(define-solution (2015 25) (pos parse-target-position)
  (values (part1 pos)))

(define-test (2015 25) (2650453))
And that's it:
> (time (test-run))
TEST-2015/25.
Success: 1 test, 1 check.
Evaluation took:
  1.226 seconds of real time
  1.189050 seconds of total run time (1.169468 user, 0.019582 system)
  [ Run times consist of 0.026 seconds GC time, and 1.164 seconds non-GC time. ]
  96.98% CPU
  2,821,032,228 processor cycles
  1,175,386,480 bytes consed

2021-01-16 (permalink)

Advent of Code: [2015/23](https://adventofcode.com/2015/day/23)
Little Jane Marie just got her very first computer for Christmas from some unknown benefactor. It comes with instructions and an example program, but the computer itself seems to be malfunctioning. She's curious what the program does, and would like you to help her run it.
The problem description goes on by giving you details about the six instructions that this computer supports (i.e. `hlf`, `tpl`, `inc`, `jmp`, `jie`, and `jio`), the number of registers it has (two, `a` and `b`), how programs are run, and when they should halt (when they jump to a position _outside_ the program).

Given the input program, what's the value of the `b` register after the program has finished running?  Looks like we are going to write a little emulator here...

First off, we are going to need an instruction pointer, two registers, `a` and `b`, and some utility functions to operate on these registers:
(defparameter *ip* 0)
(defparameter *a* 0)
(defparameter *b* 0)

(defun reg (name) (if (string= name "a") *a* *b*))
(defun (setf reg) (value name)
  (if (string= name "a") (setf *a* value) (setf *b* value)))
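Note that thanks to the SETF function defined above, `(reg name)` behaves like any other _place_, so INCF and friends work on it too:

```lisp
(setf (reg "a") 10) ; *a* is now 10
(incf (reg "a") 5)  ; *a* is now 15
(reg "b")           ; => 0, assuming *b* was left untouched
```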
The next step is to _parse_ our program:

- Split each instruction into its _operator_ and its _operands_
- Based on the value of the _operator_, return a function implementing the required logic, and taking care of updating the instruction pointer accordingly
(defun parse-instruction (line)
  (destructuring-bind (rator . rands) (cl-ppcre:split ",? " line)
    (cond ((string= rator "hlf")
           (lambda ()
             (divf (reg (first rands)) 2)
             (incf *ip*)))
          ((string= rator "tpl")
           (lambda ()
             (mulf (reg (first rands)) 3)
             (incf *ip*)))
          ((string= rator "inc")
           (lambda ()
             (incf (reg (first rands)) 1)
             (incf *ip*)))
          ((string= rator "jmp")
           (lambda ()
             (incf *ip* (parse-integer (first rands)))))
          ((string= rator "jie")
           (lambda ()
             (if (evenp (reg (first rands)))
               (incf *ip* (parse-integer (second rands)))
               (incf *ip*))))
          ((string= rator "jio")
           (lambda ()
             (if (= (reg (first rands)) 1)
               (incf *ip* (parse-integer (second rands)))
               (incf *ip*))))
          (t (error "Unable to parse line: ~A" line)))))

(defun parse-program (lines) (map 'vector #'parse-instruction lines))
All that is left to do is run the program until it _exits_, and, as requested, return the value of register `b`:
(defun run-program (a b program)
  (let ((*ip* 0) (*a* a) (*b* b))
    (loop while (array-in-bounds-p program *ip*)
          do (funcall (aref program *ip*)))
    *b*))

(defun part1 (program) (run-program 0 0 program))
For part 2, we are asked to run the program again, this time with register `a` initialized to `1` instead of `0`:
(defun part2 (program) (run-program 1 0 program))
Final plumbing:
(define-solution (2015 23) (program parse-program)
  (values (part1 program) (part2 program)))

(define-test (2015 23) (184 231))
And that's it:
> (time (test-run))
TEST-2015/23..
Success: 1 test, 2 checks.
Evaluation took:
  0.000 seconds of real time
  0.000733 seconds of total run time (0.000613 user, 0.000120 system)
  100.00% CPU
  1,797,379 processor cycles
  0 bytes consed

Advent of Code: [2015/24](https://adventofcode.com/2015/day/24)
It's Christmas Eve, and Santa is loading up the sleigh for this year's deliveries. However, there's one small problem: he can't get the sleigh to balance. If it isn't balanced, he can't defy physics, and nobody gets presents this year. No pressure.
We are given a list of the weights of each package, and we are asked to split them into three groups so that each group weighs the same; well, not just that: of course, we also want to find the combination that:

- Has the fewest **number** of packages in the first group
- (and in case of ties, the one that) Has the smallest "quantum entanglement" (i.e. the product of the weights of the packages in the group)

The input is a set of lines, each representing the weight of a package; a simple call to PARSE-INTEGERS should do just fine.
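(PARSE-INTEGERS is one of the repository's utility functions; its actual definition is not shown in this entry, but it can be sketched as follows, assuming one integer per line:)

```lisp
;; Hypothetical sketch of PARSE-INTEGERS -- the real utility
;; in the repository may differ:
(defun parse-integers (lines)
  (mapcar #'parse-integer lines))
```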

Now back to the actual problem solution.  Since we are asked to find the configuration that minimizes the number of packages in the first group, I figured I could simply focus on that group alone, and find our solution as follows:

- Try and look for all the packages whose weight is equal to the _target_ value (i.e. total weight divided by `3`)
- If none is found, try and look for all pairs of packages whose total weight is equal to the target value
- If none is found, try and look for all triplets of packages whose total weight is equal to the target value

And so on, and so forth, until one or more solutions are found; at which point, I would calculate the "quantum entanglement" of these _winning_ configurations, and return the minimum value.


First let's take care of the easy parts: finding the target weight, and calculating the quantum entanglement of a configuration of packages:
(defun target-weight (weights groups) (/ (reduce #'+ weights) groups))
(defun quantum-entanglement (group) (reduce #'* group))
Now we have to find the configuration for the perfect balance; we know we will iteratively try to find configurations with an increasing number of packages.  But how do we find a configuration of `n` packages whose total weight adds up to a specific value?  With a bit of recursion and backtracking:

- If we can select only one package, see if one with the target value exists in the pool of remaining packages, and if it does, then we found ourselves a partial solution
- Otherwise we have two options: add the first available package to the current solution and see if, with the remaining packages, it's possible to find a solution that adds up to the target value minus the weight of the selected package; or skip the first available package and try with the next one

This (and some other details that I did not bother to mention) translates into the following function:
(defun find-perfect-balance (weights groups
                                     &aux (target (target-weight weights groups)))
  (labels ((recur (n target remaining)
             (cond ((= n 1) (when-let ((found (find target remaining)))
                                  (list (list found))))
                   ((null remaining) '())
                   (t (append
                        (recur n target (rest remaining))
                        (loop for rest in (recur (1- n)
                                                 (- target (first remaining))
                                                 (rest remaining))
                              collect (cons (first remaining) rest)))))))
    (loop for n from 1 for solutions = (recur n target weights)
          when solutions return (reduce #'min solutions
                                        :key #'quantum-entanglement))))
With this, all we have to do for part 1 is call FIND-PERFECT-BALANCE with the input weights and the number of groups we want to divide our packages into:
(defun part1 (weights) (find-perfect-balance weights 3))
For part 2, we are asked to repeat the exercise with `4` groups instead of `3` -- turns out we forgot about the trunk of the sleigh:
(defun part2 (weights) (find-perfect-balance weights 4))
Final plumbing:
(define-solution (2015 24) (weights parse-integers)
  (values (part1 weights) (part2 weights)))

(define-test (2015 24) (11266889531 77387711))
And that's it:
> (time (test-run))
TEST-2015/24..
Success: 1 test, 2 checks.
Evaluation took:
  0.017 seconds of real time
  0.017210 seconds of total run time (0.016366 user, 0.000844 system)
  100.00% CPU
  41,322,015 processor cycles
  294,912 bytes consed
Note: I made a big assumption here, which is that once I find a possible solution for the first group, I take it for granted that the remaining packages can be split into 2 groups (3 for part 2), each adding up to the target value.  This assumption got me my 2 stars today, but I guess I got lucky ;-)

2021-01-15 (permalink)

Advent of Code: [2015/22](https://adventofcode.com/2015/day/22)

Another day, another RPG to play (err, implement); this time however, the game logic is a bit more complicated than the one we implemented for [2015/21](https://matteolandi.net/plan.html#day-2021-01-13), with lots of tiny details that, if you get them wrong, will keep you from getting your 2 stars for the day.

In this game the player does not have any equipment, only spells; spells cost mana; spells can protect you, deal damage to the boss, or both; spells can have an _effect_ that lasts more than one turn.

Let's have a look at the list of spells available:
- Magic Missile costs `53` mana. It instantly does `4` damage.
- Drain costs `73` mana. It instantly does `2` damage and heals you for `2` hit points.
- Shield costs `113` mana. It starts an effect that lasts for `6` turns. While it is active, your armor is increased by `7`.
- Poison costs `173` mana. It starts an effect that lasts for `6` turns. At the start of each turn while it is active, it deals the boss `3` damage.
- Recharge costs `229` mana. It starts an effect that lasts for `5` turns. At the start of each turn while it is active, it gives you `101` new mana.
I decided to parse this catalog into a LIST of LISTs; LIST of _structured_ types actually, but let's not get lost in the details:
(defstruct (spell (:type list)) name cost damage heal effect)

(defparameter *spells* '(("Magic Missile" 53  4 0)
                         ("Drain"         73  2 2)
                         ("Shield"        113 0 0 (6 0   0 7))
                         ("Poison"        173 0 0 (6 0   3 0))
                         ("Recharge"      229 0 0 (5 101 0 0))))

(defstruct (effect (:type list)) turns mana damage armor)
Let's have a look at the game, and focus on the player first (i.e. you).  The `player` has some _life_ points, _mana_, and certain _effects_:
(defstruct (player (:type list)) life mana effects)
By looking at the list of _active_ effects it should be quite easy to calculate one's total armor, mana, and damage:
(defun player-armor-effect (player)
  (reduce #'+ (player-effects player) :key #'effect-armor))

(defun player-mana-effect (player)
  (reduce #'+ (player-effects player) :key #'effect-mana))

(defun player-damage-effect (player)
  (reduce #'+ (player-effects player) :key #'effect-damage))
One's total mana is equal to the mana the player has, plus all the mana given by any active effect:
(defun player-total-mana (player)
  (+ (player-mana-effect player)
     (player-mana player)))
Similarly, one's total _dealable_ damage is equal to the total damage of any active effect, plus the damage of the spell they might be about to cast:
(defun player-total-damage (player spell)
  (+ (player-damage-effect player) (spell-damage spell)))
What's left is generating a new `player` object as a result of playing a turn (i.e. casting a spell, or defending an attack):

- Update life points based on the healing power of the spell, or the boss attack
- Update mana taking into account _recharging_ effects, plus the cost of the spell to cast
- Update the list of active effects:
(defun life-next (life spell-heal damage) (- (+ life spell-heal) damage))

(defun mana-next (mana spell-cost) (- mana spell-cost))

(defun effects-next (effects spell-effect)
  (let ((effects (loop for (turns . rest) in effects
                       when (> (1- turns) 0) collect (cons (1- turns) rest))))
    (if spell-effect
      (cons spell-effect effects)
      effects)))

(defun player-next (player damage
                           &optional (spell  '("Dummy" 0 0 0 nil))
                           &aux (player (copy-seq player)))
  (setf (player-life player) (life-next (player-life player)
                                        (spell-heal spell)
                                        damage)
        (player-mana player) (mana-next (player-total-mana player)
                                        (spell-cost spell))
        (player-effects player) (effects-next (player-effects player)
                                              (spell-effect spell)))
  player)
Now...let's move on to the boss logic: we will first define a structured type for the boss, and some parsing functions to extract our boss specs from our input:
(defstruct (boss (:type list)) life damage)

(defun parse-boss (lines)
  (mapcar #'parse-integer
          (cl-ppcre:all-matches-as-strings "\\d+"
                                           (format nil "~{~A ~}" lines))))
The amount of damage the boss can deal is equal to their damage points minus the player's armor; however, since we play by the boss' rules, the boss is always going to deal at least `1` damage point, irrespective of the player's armor:
(defun boss-attack (boss player)
  (max 1 (- (boss-damage boss) (player-armor-effect player))))
Updating the boss state after a turn is pretty straightforward:
(defun boss-next (boss damage)
  (let ((boss (copy-seq boss)))
    (setf (boss-life boss) (- (boss-life boss) damage))
    boss))
Alright, now it's time to actually implement the game:

- A _turn_ consists of the player casting a spell **and** the boss attacking us
- There might be multiple _castable_ spells at a given time; we should pick one, cast it, and see where this takes us
- If after the boss' move, the player turns out to be dead, then we should backtrack and try with a different spell
- The list of _castable_ spells is obtained by looking at the player's mana, and the already active effects
- Note: you can cast a spell whose effect is already active, if and only if this is the last turn the effect is active
(defstruct (state (:type list) (:conc-name)) player boss)

(defun not-enough-mana-p (mana spell) (> (spell-cost spell) mana))

(defun effect-in-use-p (effects spell)
  (loop for (turns . rest) in effects
        thereis (and (equal (rest (spell-effect spell)) rest)
                     (> turns 1))))

(defun castable-spells (mana effects)
  (loop for spell in *spells*
        unless (or (not-enough-mana-p mana spell)
                   (effect-in-use-p effects spell))
        collect spell))

(defun player-turn (state spell
                           &aux (player (player state)) (boss (boss state)))
  (let ((boss-damage (player-total-damage player spell)))
    (cons (list (player-next player 0 spell) (boss-next boss boss-damage))
          (spell-cost spell))))

(defun all-player-turns (state &aux (player (player state)))
  (loop for spell in (castable-spells (player-total-mana player) (player-effects player))
        collect (player-turn state spell)))

(defun boss-turn (state)
  (let ((player (player state)) (boss (boss state)))
    (let ((boss-damage (player-damage-effect player))
          (player-damage 0))
      (when (< boss-damage (boss-life boss))
        (setf player-damage (boss-attack boss player)))
      (list (player-next player player-damage) (boss-next boss boss-damage)))))

(defun player-alive-p (player) (> (player-life player) 0))

(defun play (state)
  (loop for (next . cost) in (all-player-turns state)
        for next-next = (boss-turn next)
        when (player-alive-p (player next-next))
        collect (cons next-next cost)))
Lastly, the recursive function responsible for playing the game, and backtracking when it reaches a dead end:
(defun part1 (player boss &aux (best 10000))
  (labels ((recur (state cost-so-far)
             (cond ((player-wins-p state) (setf best (min best cost-so-far)))
                   ((>= cost-so-far best) nil)
                   (t
                     (loop for (next . cost) in (play state)
                           do (recur next (+ cost-so-far cost)))))))
    (recur (list player boss) 0)
    best))
Easy uh?! Anyway, for part 2 we are asked to play a slightly different version of the game, where the player loses 1 life point at the beginning of his turn:

- LIFE-IS-HARD takes care of inflicting the damage to the player
- PLAY-HARD stops the game the moment the player has no more life points (note: we need this to happen before any effect takes place)
- PART2 is equal to PART1, except that it calls PLAY-HARD instead of PLAY
(defun life-is-hard (state &aux (state (copy-seq state)))
  (setf (player state) (copy-seq (player state)))
  (decf (player-life (player state)))
  state)

(defun play-hard (state &aux (state (life-is-hard state)))
  (when (player-alive-p (player state))
    (play state)))

(defun part2 (player boss &aux (best 10000))
  (labels ((recur (state cost-so-far)
             (cond ((player-wins-p state) (setf best (min best cost-so-far)))
                   ((>= cost-so-far best) nil)
                   (t
                     (loop for (next . cost) in (play-hard state)
                           do (recur next (+ cost-so-far cost)))))))
    (recur (list player boss) 0)
    best))
Final plumbing:
(define-solution (2015 22) (boss parse-boss)
  (values (part1 (list 50 500 nil) boss) (part2 (list 50 500 nil) boss)))

(define-test (2015 22) (900 1216))
And that's it:
> (time (test-run))
TEST-2015/22..
Success: 1 test, 2 checks.
Evaluation took:
  0.252 seconds of real time
  0.248577 seconds of total run time (0.212851 user, 0.035726 system)
  [ Run times consist of 0.004 seconds GC time, and 0.245 seconds non-GC time. ]
  98.81% CPU
  581,845,084 processor cycles
  124,414,480 bytes consed

2021-01-13 (permalink)

Advent of Code: [2015/21](https://adventofcode.com/2015/day/21)

Today, we treat ourselves to a simple RPG to play with:

- It's you versus the "boss"
- The boss starts with 100 hit points and with some pre-defined attack power and armor (all this comes from the input)
- You start with 100 hit points, bare-handed, and without armor
- There is a shop selling weapons, armors, and rings, and each item has a cost (this is not in the input, but part of the description, so I assume it to be the same for everybody)
- To fight, you need a weapon, and you can optionally wear an armor, and one or two rings
- The game is played in turns -- you start first -- and the amount of damage dealt is equal to your attack power minus the defendant's armor

What's the minimum you have to spend, to beat the boss?

Let's start with the shop: I am going to use 3 different parameters (one for the weapons, one for the armors, and one for the rings), and inside I will store items as LISTs having the name of the item as first element, its cost as second, its damage as third, and its armor as fourth:
(defparameter *weapons* '(("Dagger"        8     4       0)
                          ("Shortsword"   10     5       0)
                          ("Warhammer"    25     6       0)
                          ("Longsword"    40     7       0)
                          ("Greataxe"     74     8       0)))

(defparameter *armors* '(("Leather"      13     0       1)
                         ("Chainmail"    31     0       2)
                         ("Splintmail"   53     0       3)
                         ("Bandedmail"   75     0       4)
                         ("Platemail"   102     0       5)))

(defparameter *rings* '(("Damage +1"    25     1       0)
                        ("Damage +2"    50     2       0)
                        ("Damage +3"   100     3       0)
                        ("Defense +1"   20     0       1)
                        ("Defense +2"   40     0       2)
                        ("Defense +3"   80     0       3)))
Then we will parse our boss specs:
Hit Points: 100
Damage: 8
Armor: 2
I am going to join the lines back into a single string (DEFINE-SOLUTION would already parse the input file into a list of strings, so I need to _undo_ that first), then extract all the numeric values from it:
(defun parse-boss (lines)
  (mapcar #'parse-integer
          (cl-ppcre:all-matches-as-strings "\\d+"
                                           (format nil "~{~A ~}" lines))))
Now, the solution I have in mind for "finding the minimum you have to spend to beat the boss" is pretty simple:

- generate all the possible combinations of items you can buy; for each...
- equip your player with those items, and play against the boss
- if you win, keep track of the amount you spent, and minimize it

Let's begin by generating all the possible combinations of items you can buy:

- you need at least one weapon -- or otherwise you are bound to lose, always
- you may or may not wear an armor -- can't have more though
- you may or may not wear one, or two rings -- you can have no rings, one ring, or two

If we consider _optionality_ as an item with no cost, and no damage nor armor, then the above nicely translates to the following function:
(defun all-items-combinations ()
  (loop for w in *weapons* append
        (loop for a in (cons (list "" 0 0 0) *armors*) append
              (loop for r1 in (cons (list "" 0 0 0) *rings*) append
                    (loop for r2 in (cons (list "" 0 0 0) *rings*)
                          collect (mapcar
                                    #'+
                                    (rest w)
                                    (rest a)
                                    (rest r1)
                                    (rest r2)))))))
Now, equipping our player with an items combination should be as simple as appending the overall items' damage and armor to the player's hit points (i.e. `(list 100 damage armor)`).

What about playing the game and see if the player won?

- Calculate how many turns it will take the player to defeat the boss (note: damage / armor stats don't change throughout the game)
- Calculate how many turns it will take the boss to defeat us
- We win if the number of turns it takes us to defeat the boss is smaller than or equal to the number of turns it takes the boss (note: we start first, hence the _or equal_ above)
(defun player-wins-p (player boss)
  (destructuring-bind (p-hits p-damage p-armor) player
    (destructuring-bind (b-hits b-damage b-armor) boss
      (let ((p-turns-to-win (/ b-hits (max (- p-damage b-armor) 1)))
            (b-turns-to-win (/ p-hits (max (- b-damage p-armor) 1))))
        (<= p-turns-to-win b-turns-to-win)))))
All there is left to do at this point is iterate all the combinations, play the game, see if we win, and minimize the cost of the game:
(defun part1 (boss)
  (loop for (cost damage armor) in (all-items-combinations)
        when (player-wins-p (list 100 damage armor) boss)
        minimize cost))
For part 2, the shopkeeper can trick you into buying anything they want, and we are asked to find the maximum we can spend and still be defeated by the boss!

That should be pretty easy:

- generate all the possible combinations of items you can buy; for each...
- equip your player with those items, and play against the boss
- if you lose, keep track of the amount you spent, and maximize it
(defun part2 (boss)
  (loop for (cost damage armor) in (all-items-combinations)
        unless (player-wins-p (list 100 damage armor) boss)
        maximize cost))
Final plumbing:
(define-solution (2015 21) (boss parse-boss)
  (values (part1 boss) (part2 boss)))

(define-test (2015 21) (91 158))
And that's it:
> (time (test-run))
TEST-2015/21..
Success: 1 test, 2 checks.
Evaluation took:
  0.004 seconds of real time
  0.003741 seconds of total run time (0.001263 user, 0.002478 system)
  100.00% CPU
  9,978,989 processor cycles
  622,384 bytes consed

2021-01-12 (permalink)

it's quite windy here its side down well though I good it's it's this is oh actually wasn't expected to be this windy hopefully you can actually hear what I'm saying okay great great so this is this is I think the most inspiring thing that I have ever seen and I just like to thank the SpaceX team and the the suppliers and the the people of Brook chic and Brownsville thank you for your support and just like wow what an incredible job by such a great team to build this incredible vehicle so it's like first of all want to stop that I'm just so so so so proud to work with such a great team and it's really ripping here by the way if you're watching this online it is like it was really windy so the the point of this this presentation and this is this event it is really there are two elements to it one is to inspire the public and get people excited about our future in space and and get people fired up about the future the you know what what there are so many things to worry about so many things to be concerned about there's there are many troubles in the world of course and we are important and we need to solve them but we also need things that make us excited to be alive that make us glad to wake up in the morning and be fired up about the future and think yeah the future is gonna be great you know and and this space exploration is one of those things and becoming a spacefaring civilization being out there among the stars this is one of the things that I know makes it makes me glad to be alive I think it makes many people glad to be alive it's one of the best things and this really weird face with a choice which future do you want do you want the future where we become a spacefaring civilization and are in many worlds and now out there among the stars or one where we are forever confined to earth and I say it is the first and I hope you agree with me yeah so so what what the critical breakthrough that's needed for us to become a spacefaring civilization is to make space 
travel like air travel so with with air travel you could be when you fly a plane you fly that plane many times I mean the risk of stating the obvious it really almost any motor transport whether it's a plane a car a horse the bicycle is reusable you use that motor transport many times and if you had to get a new plane every time you flew somewhere and even get have two planes for a return journey very few people could afford to fly or if you could use a car only once very few people could afford to drive a car so the critical breakthrough that's necessary is a rapidly reusable orbital rocket this is what this is basically the holy grail of space and the fundamental thing that's required
Elon Musk, Starship Update, September 2019 - https://www.youtube.com/watch?v=sOpMrVnjYeY

Advent of Code: [2015/20](https://adventofcode.com/2015/day/20)

Today it's about the elves delivering presents by hand, door to door, and us trying to figure out how many presents each house is going to get.  We have an infinite number of houses, numbered: `1`, `2`, `3`, `4`, `5`, and so on; then we have our elves, each with an assigned number, wandering around and delivering presents as described below:
The first Elf (number `1`) delivers presents to every house: `1`, `2`, `3`, `4`, `5`, ....
The second Elf (number `2`) delivers presents to every second house: `2`, `4`, `6`, `8`, `10`, ....
Elf number `3` delivers presents to every third house: `3`, `6`, `9`, `12`, `15`, ....
Knowing that each elf will deliver, at every house they visit, presents equal to **10 times** their number, what's the first house that is going to receive at least `36000000` presents?

Input parsing was a no-brainer: single line, one integer:
(defun parse-target-presents (lines) (parse-integer (first lines)))
Now, to solve this, I thought about a very naive brute-force:

- For each house, starting from `1`
- Find all the elves which are going to visit it (i.e. find all the _divisors_ of the house number)
- Sum them up, and break out of the loop as soon as the sum, times `10`, is bigger than or equal to the input number
(defun divisors (number)
  (loop for divisor from 1 upto (sqrt number)
        when (zerop (mod number divisor))
        if (= (/ number divisor) divisor) collect divisor
        else append (list divisor (/ number divisor))))

(defun part1-slow (target)
  (loop for n from 1 for div-sum = (reduce #'+ (divisors n))
        when (>= (* div-sum 10) target) return n))
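As a sanity check, house `9` from the puzzle statement is visited by elves `1`, `3`, and `9`, and should receive `130` presents:

```lisp
> (divisors 9)
(1 9 3)
> (* 10 (reduce #'+ (divisors 9)))
130
```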
Well, calculating all the divisors for all the house numbers turned out to be a very inefficient solution -- simple, but very inefficient:
> (time (part1-slow 36000000))
Evaluation took:
  15.396 seconds of real time
  15.168194 seconds of total run time (15.006065 user, 0.162129 system)
  [ Run times consist of 0.022 seconds GC time, and 15.147 seconds non-GC time. ]
  98.52% CPU
  35,411,623,928 processor cycles
  1 page fault
  183,443,696 bytes consed

831600
Interestingly, _simulating_ this actually made for an even simpler and definitely more efficient solution:

- For each elf
- For each house the elf is going to visit
- Increase the number of presents it will receive by the elf number times `10`
- At the end, find the first house that got _that_ many presents
(defun part1 (target &aux
                     (all-houses (/ target 10))
                     (houses (make-array all-houses :initial-element 0)))
  (loop for elf from 1 below all-houses do
        (loop for house from elf below all-houses by elf do
              (incf (aref houses house) (* elf 10))))
  (position-if (partial-1 #'>= _ target) houses))
With this:
> (time (part1 36000000))
Evaluation took:
  1.110 seconds of real time
  1.004395 seconds of total run time (0.905748 user, 0.098647 system)
  90.45% CPU
  2,553,495,191 processor cycles
  28,800,016 bytes consed

831600
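And as a smaller sanity check against the example numbers in the puzzle statement, house `8` (visited by elves `1`, `2`, `4`, and `8`, for `150` presents) should be the first to receive at least `130` presents:

```lisp
> (part1 130)
8
```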
For part 2, we are asked to answer the same question, except that this time each elf delivers presents equal to **11 times** their number, and stops after visiting `50` houses.  Thanks to CL's powerful LOOP macro, all we have to do is add a `repeat 50` to our inner loop (and bump the multiplier to `11`):
(defun part2 (target &aux
                     (all-houses (/ target 10))
                     (houses (make-array all-houses :initial-element 0)))
  (loop for elf from 1 below all-houses do
        (loop repeat 50
              for house from elf below all-houses by elf do
              (incf (aref houses house) (* elf 11))))
  (position-if (partial-1 #'>= _ target) houses))
Final plumbing:
(define-solution (2015 20) (target parse-target-presents)
  (values (part1 target) (part2 target)))

(define-test (2015 20) (831600 884520))
And that's it:
> (time (test-run))
TEST-2015/20..
Success: 1 test, 2 checks.
Evaluation took:
  1.190 seconds of real time
  1.178205 seconds of total run time (1.166144 user, 0.012061 system)
  98.99% CPU
  2,737,747,940 processor cycles
  57,600,032 bytes consed

2021-01-11 (permalink)

Advent of Code: [2015/19](https://adventofcode.com/2015/day/19)

Rudolph (the Red-Nosed reindeer) is sick, and Santa needs your help to cure him; what exactly are you supposed to do? Come up with a medicine.  We are given a molecule (i.e. a string of chars), and a bunch of _replacement_ rules (e.g. `H => OH`); our task for part 1 is to count all the _distinct_ molecules you can obtain by applying any of your _replacement_ rules to the given molecule.

As usual, let's begin by parsing our input:

- The replacement rules will be parsed into an ALIST
- The molecule, well, that would be a STRING on its own
(defun parse-replacement (string &aux (parts (cl-ppcre:split " " string)))
  (cons (first parts) (third parts)))

(defun parse-replacements (lines) (mapcar #'parse-replacement lines))

(defun parse-input (lines &aux (pos (position "" lines :test #'string=)))
  (cons (parse-replacements (subseq lines 0 pos))
        (nth (1+ pos) lines)))
As per the actual solution to the problem, the following approach should do just fine:

- For each replacement rule `(from . to)`
- For each match of `from` inside our input molecule, replace it with `to` and accumulate the result
- Finally remove all the duplicates, and count the number of distinct molecules you are left with
(defun mreplace (molecule from to pos)
  (format nil "~A~A~A"
          (subseq molecule 0 pos)
          to
          (subseq molecule (+ pos (length from)))))

(defun part1 (input)
  (destructuring-bind (replacements . molecule) input
      (length
        (remove-duplicates
          (loop for (from . to) in replacements append
                (loop for start in (cl-ppcre:all-matches from molecule) by #'cddr
                      collect (mreplace molecule from to start)))
          :test #'string=))))
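A quick REPL check with the example from the puzzle statement (molecule `HOH`, rules `H => HO`, `H => OH`, and `O => HH`), which should yield `4` distinct molecules (`HOOH`, `HOHO`, `OHOH`, and `HHHH`):

```lisp
> (part1 '((("H" . "HO") ("H" . "OH") ("O" . "HH")) . "HOH"))
4
```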
Things get a bit more complicated for part 2:
Given the available replacements and the medicine molecule in your puzzle input, what is the fewest number of steps to go from the origin molecule `e` to the medicine molecule?
I tried the very naive approach first, but it did not take me long to realize that replacement rules were recursive and that a basic _unbounded_ search would have taken **forever** to find all the possible solutions.

My next try was: start from the medicine molecule, then recursively apply rules _backward_ (e.g. given `(from . to)`, replace all the occurrences of `to` with `from`), until you get to `e`.  This too was taking forever, even to get to the _first_ solution...

I tried to sort replacement rules and process first the ones that were reducing our molecule the most, but that too did not seem to help much.

Then I thought: what if I processed the rules with the fewest matches first? My idea was that this could minimize the branching factor, to a point where I could actually find a solution -- and once I have a solution, I can find others by pruning the search space.

Apparently this heuristic worked: I was indeed able to generate a few solutions to the problem.  However, generating them **all** would still take forever.

Kind of hopeless, I let the program go for a while, logging each solution encountered so far, and interestingly enough it started logging the same number over and over again -- and guess what, that happened to be the right solution to the problem.

What's going on, exactly? I believe that my input (as well as others') is generated in such a way that all the possible solutions are indeed the same one (i.e. the _same_ in terms of number of steps to get from `e` to the medicine molecule).

Anyway, the solution I am presenting here assumes that all the solutions are equal, and will stop after the first one -- I know, it's not _generic_ in the sense that technically it does not solve the described problem, but it works well with few other users input ([1](https://topaz.github.io/paste/#XQAAAQCzAwAAAAAAAAAgmwAD0h7yydDUnDAkkUzzn2xSRG8GwalWWqou+BD+9mCfhiVF9YMUm1erCKIIOtVpJOTyxBy9CekYRr3r15uwhhFR0xa+k24YkOl2M/AFQHg2hFFyQ1DRqQVxPHOLvDY33XmfR8NUSqtn6caW7O4d5Tf90unTYACfaysDxiG6OopBeGSNGFaNAd85XU8e60kTaPTihi9orqjeHh2pqeU4S4j9pVYovdi/Iv4/Ggv3GNIeSXS7YFYdnwEcklMNIuc7vL6kpIaOGPThW7qW3RD7Q//HNtOd31s7l5XNBTsXuFncUzZeNM9SaF6NK0wSuIG5jO/TaSBVAelcB/Olu8/tkjlLAjyOUlAwyKXtAwVYg5W/iJ067tfw4AVblSdfhip9OVAhzsxD4zfSEUn9BaGOZ4n+Q68OgmysZOkeF0Pkjew7zwnXKyxegU6FrMvTJd/8HfKO6X+/AxF60++RYDqXJ/+DOBgA), and [2](https://topaz.github.io/paste/#XQAAAQDHAwAAAAAAAAAgmwAD0h7yydDUnDAkkUzzn2xSRG8GwalWWqou+BD+9mCfhiVF9YMUm1erCKIIOtVpJOTyxBy9CekYRr3r15uwhhFR0xa+k24YkOl2M/AFQHg2hFFyQ1DRqQVxPHOLvDY33XmfR8NUSqtn6caW7O4d5Tf90unTYACfaysDxiG6OopBeGSNGFaNAd85XU8e60kTaPTihi9orqjeHh2pqeU4S4j9pVYovdi/Iv4/Ggv3GNIeSXS7YFYdnwEcklMNIuc7vL6kpIaOGPThlaCjJRP6mX3vm2DiW3VjWFksMaEgXONssD3iNtoq3uQVPbFpSyovVwj942gIKq/iPaoqz7rOxll8LGYrcSpYhT+RHkDnxWSorTjt/juvXPYr4LaQMF5WwM9DVUa9c09HFRx466lmUEVLfHdKe3nJUsNzbGvQ47/MgLC0LEiv7i0dglb4KYq1MKyjzIUdWhGRuzlly5fJosTTAnB2WW2vWPf/RqnVAA==)), so I can live with that.
(defun number-of-matches (molecule replacement)
  (destructuring-bind (from . to) replacement
    (declare (ignore from))
    (length (cl-ppcre:all-matches to molecule))))

(defun part2 (input)
  (destructuring-bind (replacements . molecule) input
    (labels ((recur (molecule steps)
               (cond ((string= molecule "e") (return-from part2 steps))
                     (t
                       (setf replacements
                             (sort (copy-seq replacements) #'<
                                   :key (partial-1 #'number-of-matches molecule)))
                       (loop for (from . to) in replacements
                             for matches = (cl-ppcre:all-matches to molecule)
                             do (loop for start in matches by #'cddr
                                      do (recur (mreplace molecule to from start)
                                                (1+ steps))))))))
      (recur molecule 0))))
Final plumbing:
(define-solution (2015 19) (input parse-input)
  (values (part1 input) (part2 input)))

(define-test (2015 19) (576 207))
And that's it!
> (time (test-run))
TEST-2015/19..
Success: 1 test, 2 checks.
Evaluation took:
  0.242 seconds of real time
  0.239820 seconds of total run time (0.237884 user, 0.001936 system)
  99.17% CPU
  557,004,564 processor cycles
  41,759,200 bytes consed

2021-01-10 (permalink)

Advent of Code: [2015/16](https://adventofcode.com/2015/day/16)
Your Aunt Sue has given you a wonderful gift, and you'd like to send her a thank you card. However, there's a small problem: she signed it "From, Aunt Sue". You have 500 Aunts named "Sue".
You are very lucky because you have your "My First Crime Scene Analysis Machine" with you, and you can use it on the gift to detect what it's made of; with that (i.e. the scan result) and a list of all the things you can remember about each of the 500 aunt Sues, you hope you will be able to figure out which aunt sent you the gift.

The scan result is given -- not part of the _input_, but known upfront anyway -- and I decided to model it using an ALIST mapping each compound to the detected quantity:
(defparameter *mfcsam-message* '((:children . 3)
                                 (:cats . 7)
                                 (:samoyeds . 2)
                                 (:pomeranians . 3)
                                 (:akitas . 0)
                                 (:vizslas . 0)
                                 (:goldfish . 5)
                                 (:trees . 3)
                                 (:cars . 2)
                                 (:perfumes . 1)))
The actual input instead is the list of things we remember about each aunt:
Sue 1: goldfish: 9, cars: 0, samoyeds: 9
Sue 2: perfumes: 5, trees: 8, goldfish: 8
Sue 3: pomeranians: 2, akitas: 1, trees: 5
We are going to parse these into a single ALIST mapping aunts (i.e. their name), to the list of compounds we can remember for them (`:cl-ppcre` to the rescue):
(defun compound-name (string) (make-keyword (string-upcase string)))

(defun parse-compound (string)
  (cl-ppcre:register-groups-bind ((#'compound-name name) (#'parse-integer value))
      ("(\\w+): (\\d+)" string)
    (cons name value)))

(defun parse-compounds (string)
  (mapcar #'parse-compound
          (cl-ppcre:all-matches-as-strings "\\w+: \\d+" string)))

(defun parse-aunt-memories (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer n) (#'parse-compounds compounds))
      ("Sue (\\d+): (.*)" string)
    (cons n compounds)))
(defun aunt-name (aunt) (first aunt))
(defun aunt-compound (aunt name) (rest (assoc name (rest aunt))))

(defun parse-memories (lines) (mapcar #'parse-aunt-memories lines))
With all the input mangling taken care of, we need to figure out which aunt sent the gift to us; we are going to do that iteratively:

- For each compound from our scan result
- For each _remaining_ aunt
- See if what we remember of that aunt matches the result of the scan
- If it does, then _keep_ the current aunt in the set of candidates, otherwise, remove it

Note: the list of things we remember for each aunt is **not** complete (we don't remember everything), so when filtering out candidates, we only want to exclude those aunts that have a value for the given compound which is different from the scanned one (if we don't remember, we cannot make any assumptions, so we will keep that aunt in the list of candidates).

Anyway, all this _nicely_ translates to:
(defun absent-or-equal (value) (lambda (v) (or (not v) (= v value))))

(defun part1 (memoirs &aux (remaining memoirs))
  (loop for (name . value) in *mfcsam-message* do
        (setf remaining (remove-if-not
                          (absent-or-equal value)
                          remaining
                          :key (partial-1 #'aunt-compound _ name))))
  (aunt-name (first remaining)))
In part 2 we realize the scanner is not as _precise_ as we thought it was, and that for certain compounds it's only able to tell us if the detected amount is above / below a given threshold:
In particular, the cats and trees readings indicates that there are greater than that many (due to the unpredictable nuclear decay of cat dander and tree pollen), while the pomeranians and goldfish readings indicate that there are fewer than that many (due to the modial interaction of magnetoreluctance).
Here we create a new function, ABSENT-OR-MATCHES-WITH-RANGES, to properly read the new scan result; rest is pretty much unchanged:
(defun absent-or-matches-with-ranges (name value)
  #'(lambda (v)
     (or (not v)
         (case name
           ((:cats :trees) (> v value))
           ((:pomeranians :goldfish) (< v value))
           (t (= v value))))))
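A couple of REPL checks of the new predicate (remember: the MFCSAM detected `7` cats and `5` goldfish, so a matching aunt has _more_ than `7` cats and _fewer_ than `5` goldfish):

```lisp
> (funcall (absent-or-matches-with-ranges :cats 7) 8)
T
> (funcall (absent-or-matches-with-ranges :goldfish 5) 4)
T
> (funcall (absent-or-matches-with-ranges :goldfish 5) 5)
NIL
```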

(defun part2 (memoirs &aux (remaining memoirs))
  (loop for (name . value) in *mfcsam-message* do
        (setf remaining (remove-if-not
                          (absent-or-matches-with-ranges name value)
                          remaining
                          :key (partial-1 #'aunt-compound _ name))))
  (aunt-name (first remaining)))
Final plumbing:
(define-solution (2015 16) (memoirs parse-memories)
  (values (part1 memoirs) (part2 memoirs)))

(define-test (2015 16) (40 241))
And that's it!
> (time (test-run))
TEST-2015/16..
Success: 1 test, 2 checks.
Evaluation took:
  0.006 seconds of real time
  0.005962 seconds of total run time (0.003711 user, 0.002251 system)
  100.00% CPU
  15,882,844 processor cycles
  818,768 bytes consed

Advent of Code: [2015/17](https://adventofcode.com/2015/day/17)

The elves bought too much eggnog (`150` liters to be precise), and you are asked to help them put it into smaller containers (your input). How many different _combinations_ of containers can you come up with to fit all that eggnog?

The input is a list of integers, one per line, so I am just going to use my utility function PARSE-INTEGERS for this.

For the actual solution of this problem, I am going to generate all the possible combinations of containers which add up to `150`, in a recursive fashion:

- While you still have containers left to use and the remaining eggnog is greater than zero
- Try to use the first available container, and recurse
- Or skip the first container, and recurse
- When the remaining eggnog reaches zero, we found a possible solution -- save it somewhere
(defun solve (containers &aux solutions)
  (labels ((recur (containers target solution)
             (cond ((zerop target) (push solution solutions))
                   ((< target 0) nil)
                   ((null containers) nil)
                   (t (recur (rest containers) target solution)
                      (recur (rest containers)
                             (- target (first containers))
                             (cons (first containers) solution))))))
    (recur containers 150 nil)
    solutions))
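SOLVE hardcodes the `150` liters target, but if we parameterized it (a hypothetical SOLVE* variant, shown here only for illustration), we could verify the recursion against the example from the puzzle statement: `25` liters and containers of size `20`, `15`, `10`, `5`, and `5` should yield `4` combinations:

```lisp
(defun solve* (containers target &aux solutions)
  (labels ((recur (containers target solution)
             (cond ((zerop target) (push solution solutions))
                   ((< target 0) nil)
                   ((null containers) nil)
                   (t ;; skip the first container...
                      (recur (rest containers) target solution)
                      ;; ...or use it, and recurse on what's left
                      (recur (rest containers)
                             (- target (first containers))
                             (cons (first containers) solution))))))
    (recur containers target nil)
    solutions))

> (length (solve* '(20 15 10 5 5) 25))
4
```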
With this, the solution to part 1 simply becomes applying LENGTH to the result of invoking SOLVE with the given list of containers:
(defun part1 (containers) (length (solve containers)))
For part 2, we are asked to count the solutions that use the least number of containers:
(defun part2 (containers &aux (solutions (solve containers)))
  (let ((min (reduce #'min solutions :key #'length)))
    (count min solutions :key #'length)))
Final plumbing:
(define-solution (2015 17) (containers parse-integers)
  (values (part1 containers) (part2 containers)))

(define-test (2015 17) (1304 18))
And that's it!
> (time (test-run))
TEST-2015/17..
Success: 1 test, 2 checks.
Evaluation took:
  0.024 seconds of real time
  0.017904 seconds of total run time (0.006950 user, 0.010954 system)
  75.00% CPU
  56,414,728 processor cycles
  3,440,640 bytes consed

Advent of Code: [2015/18](https://adventofcode.com/2015/day/18)

Looks like we got a little [Game of life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) problem today:

- Given a grid of `1000x1000` lights -- for each, whether it's on or off
- Given a set of rules telling if a specific light should turn on or off based on the state of its 8 adjacent lights -- it turns out the given rules match Conway's

How many lights will be on, after 100 iterations?

Let's begin by parsing the input grid into a bag (i.e. HASH-SET) of coordinates, representing the lights which are on:
(defun parse-grid (lines &aux (grid (make-hset '())))
  (let ((row 1))
    (dolist (string lines grid)
      (loop for ch across string for col from 1
            when (char= ch #\#) do (hset-add (complex col row) grid))
      (incf row))))
With this, and thanks to my utility system, [:gameoflife](https://topaz.github.io/paste/#XQAAAQAbCQAAAAAAAAAUGQimgulVkMqJePdJ12sdAxC/GfcryUc62cNZO2Y9F49pcnga9KUiRymDklgNdDlqcv2dhjGOqX7N8ovjj8SaUTGvr5mN0/ulEICo5hK0220UqGzQVBykFN8pav/Pr4ITd4I3DKGyJRgy7/3T0ryWzgCloLL3GRvf/oikkYBmCALRghWKKt02J3vWavnPlqpnVW/0CYFBYT1CImg7lRpoFmbiwrr4lqIjkFF9HDELYMnxDSBE0fJlw3dgFW6rFaM4YMs0HVdk4wfjJRHUXLkZWm86TDvx+uDMnFWtXK7opquIKAqfRlRKnMzX1aTKqoNyJK/nZ4PRBS+eNkePip+rIxtmHT44/OMoMw+Ib0OR1tm/Wvc0JRb08vblxbpHcX0LLcGntGNYVBxdrKwjoDdRwe8bzTXy8x4o2tL8qyMSWHwsxyAsMwBJQCJSrGjWf0T4E9gkEPAyX8RiAJzcdL9UQDKNSyefq3naHa4/l+bFQlDabeLjVJYh/GDZpW/a5384/Vi/gIFqtFc/5Tlweqwvms+KrkC6ertm0ycdLUNug96wUPyZdA5LkQUb8sCujjboX6FNGfXgLclCf7vmKPj9osR4YueiK9a0DR1q2MvSF3JeFcv/5rLKlt3GY8dSpl9Rbe5fceaQ2pJ2ukOE1pFFqYzHnaSBr+Bf0PMpbba/VoViIzSHb+c3qM1/5PLCtjBPeCvazHVnOU4cI+N3nuzTRnoVItUWvIq4MWAVX6xtnpeRb/s+hq6VImuzAdgpyfUHBzlBq61pCK/qg5h9w+q7plvUXoB35EgDVikCU6wBFIlHz43SAZkaBHBxdNmpb7VKwuL0plBwewcZswzb9cyQnEkJo1Ec67BVaLc05c4mj/rGw+NbA5AL1glqawnAwQUmzKLxgb/sDbSfQcmEMH32YFW5ybZ8v9qkU9KL435TKdPfw8fMzt3teS1QBVFfpP246FC1C2g3clAOnwDNWKNeQ4BgoiXZlpK1S42WA7fM2yT+GC8//i0MgbL/tYtgI+Vl7ILQ9ALTARWez7Pjk9qZnPvA3pVeddhKzy++jwCXP4PP0nQjvoNrQnx5/eSMVasKElJQkV6G+7bfFGQDCh9NgiyD5/2/I+wuvlmCWmMDxCpBsiKhZhGq5rEgPxkAjJU2SxFtd2fLkrsSZJ8ZH9PGfhfTe6clCMmLmrotGQRXtb6LVaTVv3pnhuK+KxKUdzOT3AomFQ1wJyKAIbrMqbeUJrqNBTAKBnzLuEiBXW9HjLz7dJ/oN423vajgAWKyfmoRSIIbbLoozU8/bXZdhzEC3pO4qUAuJpCsQQcnnvb6zeq6ogzuppcq3VjSacH1VaU0czL6LFXVRBZQroPuRol2kHRnjoCpyw8ijSboqRkn117BvJhCOg4P//hNZK8=), that I created to model game of life dynamics, solving this problem should be pretty simple (the only difference with the _classic_ game of life is that here the grid is limited, hence my custom NEIGHBORS function to filter out of bounds positions):
(defparameter *nhood-deltas* '(#C(-1 1) #C(0 1) #C(1 1) #C(-1 0) #C(1 0) #C(-1 -1) #C(0 -1) #C(1 -1)))

(defun neighbors (pos)
  (loop for d in *nhood-deltas* for n = (+ pos d)
        when (and (<= 1 (realpart n) 100) (<= 1 (imagpart n) 100))
        collect n))

(defun part1 (grid)
  (dotimes (n 100 (hset-size grid))
    (setf grid (gol:next grid :neighbors #'neighbors))))
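Note how NEIGHBORS clips at the borders; a corner light, for example, only has three in-bounds neighbors:

```lisp
> (neighbors #C(1 1))
(#C(1 2) #C(2 2) #C(2 1))
```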
For part 2 we notice the lights placed at the four corners of the grid are _always_ on; how many lights will be on after 100 iterations?  All we have to do, is set those 4 corners on, after each iteration of the game:
(defun turn-corners-on (grid)
  (hset-add #C(1 1) grid)
  (hset-add #C(1 100) grid)
  (hset-add #C(100 1) grid)
  (hset-add #C(100 100) grid)
  grid)

(defun part2 (grid &aux (grid (turn-corners-on grid)))
  (dotimes (n 100 (hset-size grid))
    (setf grid (turn-corners-on (gol:next grid :neighbors #'neighbors)))))
Final plumbing:
(define-solution (2015 18) (grid parse-grid)
  (values (part1 grid) (part2 grid)))

(define-test (2015 18) (814 924))
And that's it!
> (time (test-run))
TEST-2015/18..
Success: 1 test, 2 checks.
Evaluation took:
  0.529 seconds of real time
  0.521719 seconds of total run time (0.515042 user, 0.006677 system)
  [ Run times consist of 0.009 seconds GC time, and 0.513 seconds non-GC time. ]
  98.68% CPU
  1,217,105,812 processor cycles
  280,039,008 bytes consed

2021-01-09 (permalink)

Advent of Code: [2015/15](https://adventofcode.com/2015/day/15)
Today, you set out on the task of perfecting your milk-dunking cookie recipe. All you have to do is find the right balance of ingredients.
Given a list of ingredients (each listing properties like `capacity`, `durability`, or `calories`), and knowing that there are only `100` teaspoons worth of ingredients left to complete the recipe, which ingredients are you going to use to maximize the _quality_ of the cookie?

As usual, we will start by parsing the list of ingredients (e.g `Sprinkles: capacity 5, durability -1, flavor 0, texture 0, calories 5`) into -- guess what -- a LIST of properties (note: `capacity` is always the first property, `durability` the second, and so on and so forth, so that makes the parsing a bit easier):
(defun parse-properties (string)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "-?\\d+" string)))
(defun calories (props) (nth 4 props))

(defun parse-ingredient (string)
  (cl-ppcre:register-groups-bind (name (#'parse-properties properties))
      ("(\\w+): (.*)" string)
    (cons name properties)))
(defun properties (x) (cdr x))

(defun parse-ingredients (lines) (mapcar #'parse-ingredient lines))
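For example, the Butterscotch line from the puzzle statement parses to:

```lisp
> (parse-ingredient
    "Butterscotch: capacity -1, durability -2, flavor 6, texture 3, calories 8")
("Butterscotch" -1 -2 6 3 8)
```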
We only have `4` ingredients, and a maximum `100` teaspoons to play with, so I am going to brute-force this.

The first step is to list all the possible recipes:

- How many teaspoons of `Sprinkles`? From `0` up to `100`
- How many teaspoons of `PeanutButter`? From `0` up to `100 - sprinkles` (we might have used some teaspoons for sprinkles already, so we need to account for that)
- How many teaspoons of `Frosting`? From `0` up to `100 - sprinkles - peanut`
- How many teaspoons of `Sugar`? `100 - sprinkles - peanut - frosting`

For each of these recipes, make a cookie out of it and calculate its _total score_ (i.e. product of all its _non-negative_ properties, ignoring `calories` for now).  The solution to part 1 will be the maximum of all these scores:
(defun make-cookie (ingredients quantities)
  (assert (= (reduce #'+ quantities) 100))
  (apply #'mapcar
         #'+
         (loop for i in ingredients for q in quantities
               collect (mapcar (partial-1 #'* q) (properties i)))))

(defun total-score (cookie)
  (reduce #'* (remove-if (partial-1 #'< _ 0) (subseq cookie 0 4))))

(defun part1 (ingredients &aux best)
  (labels ((recur (ingredients-left spoons-left recipe)
             (cond ((= ingredients-left 1) (total-score
                                             (make-cookie
                                               ingredients
                                               (cons spoons-left recipe))))
                   (t (loop for spoons from 0 to spoons-left
                            maximize (recur
                                       (1- ingredients-left)
                                       (- spoons-left spoons)
                                       (cons spoons recipe)))))))
    (recur (length ingredients) 100 '())))
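A quick check of TOTAL-SCORE using the example from the puzzle statement: `44` teaspoons of Butterscotch plus `56` of Cinnamon yield the property totals `68`, `80`, `152`, and `76` (plus `520` calories), for a score of `62842880`:

```lisp
> (total-score '(68 80 152 76 520))
62842880
```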
For part 2, we are asked to consider only those cookies that turn out to have `500` calories -- they want to use these as a meal, they say -- so all we have to do is change our base case condition and have it return the cookie score if the number of calories is `500`, or `0` otherwise (we will be maximizing the result, so we know `0` would not get in the way):
(defun part2 (ingredients)
  (labels ((recur (ingredients-left spoons-left recipe)
             (cond ((= ingredients-left 1)
                    (let ((cookie (make-cookie
                                    ingredients
                                    (cons spoons-left recipe))))
                      (if (= (calories cookie) 500)
                        (total-score cookie)
                        0)))
                   (t (loop for spoons from 0 to spoons-left
                            maximize (recur
                                       (1- ingredients-left)
                                       (- spoons-left spoons)
                                       (cons spoons recipe)))))))
    (recur (length ingredients) 100 '())))
Final plumbing:
(define-solution (2015 15) (ingredients parse-ingredients)
  (values (part1 ingredients) (part2 ingredients)))

(define-test (2015 15) (13882464 11171160))
And that's it!
> (time (test-run))
TEST-2015/15..
Success: 1 test, 2 checks.
Evaluation took:
  0.280 seconds of real time
  0.275511 seconds of total run time (0.230008 user, 0.045503 system)
  [ Run times consist of 0.005 seconds GC time, and 0.271 seconds non-GC time. ]
  98.57% CPU
  645,482,361 processor cycles
  241,477,936 bytes consed

2021-01-08 (permalink)

Advent of Code: [2015/13](https://adventofcode.com/2015/day/13)

Today, we are challenged with finding the _optimal seating arrangement_ that minimizes those awkward conversations.

Given a **round** table, a list of guests, and how much each of them would love / hate to sit next to every other guest on your list; what's the seating arrangement that maximises the happiness of the table?

Let's start by looking at the entries of the input we are expected to process:
Alice would lose 75 happiness units by sitting next to David.
Alice would gain 71 happiness units by sitting next to Eric.
We will first parse each entry into a 3-element list (e.g. `("Alice" "David" -75)`), and then store them all into an association list mapping each guest to how much happier / sadder they would be if they were to sit next to another guest:
(defun parse-note (string)
  (cl-ppcre:register-groups-bind (person1 action (#'parse-integer amount) person2)
      ("(\\w+) would (lose|gain) (\\d+) happiness units by sitting next to (\\w+)." string)
    (list person1 person2 (* amount (if (string= action "lose") -1 1)))))

(defun parse-notes (lines &aux notes)
  (loop for string in lines for (person1 person2 delta) = (parse-note string)
        for existing = (assoc person1 notes :test #'string=)
        if existing do (push (cons person2 delta) (rest existing))
        else do (push (list person1 (cons person2 delta)) notes))
  notes)
(defun all-people (notes) (mapcar #'first notes))
How can we find the optimal arrangement? Well, it turns out there are only 8 people in our guest list, so we might as well try all the possible arrangements and find the best one.

First we define functions to calculate the positive/negative delta in happiness, if two given people happened to be sitting next to one another...
(defun delta-happiness (notes person1 person2)
  (flet ((lookup (person1 person2)
           (let ((row (rest (assoc person1 notes :test #'string=))))
             (rest (assoc person2 row :test #'string=)))))
    (+ (lookup person1 person2) (lookup person2 person1))))
...and to calculate the total happiness associated with a specific seating arrangement:
(defun circular-table-happiness (notes table)
  (loop for prev = (car (last table)) then curr
        for curr in table
        sum (delta-happiness notes prev curr)))
With this, all there is left to do is generate all the possible seating arrangements and pick the best one (i.e. the one that maximises CIRCULAR-TABLE-HAPPINESS):
(defun part1 (notes)
  (let ((people (all-people notes)))
    (reduce #'max (all-permutations people)
            :key (partial-1 #'circular-table-happiness notes))))
For part 2, you realize you forgot to add your name to the guest list, so you are asked to find the new best seating arrangement, knowing that you would not make anyone happier/sadder by sitting next to them.

At first I thought about literally implementing this: add a new guest to the list, change LOOKUP inside DELTA-HAPPINESS to return `0` when called for the new guest, and try all the possible combinations (there would be `9!` of them, which is still OK); but then I realized that since nobody is positively or negatively affected by me sitting next to them, we can remove the new guest from the picture, and simply imagine the table not to be _circular_ at all!

Say you have this table arrangement: `A B C D A` (it's a circular table, hence the two `A`s), and you wanted to see what would happen to the happiness of the table if I, the new guest, were to sit in between `C` and `D` (i.e. `A B C @ D A`); well, since neither `C` nor `D` will be positively / negatively affected by me sitting next to them, we can simply calculate the happiness of the arrangement `D A B C`, without the wrapping-around part (in a sense, the table has now become linear).

So we don't have to add ourselves to the list, but we simply have to change how the happiness of the table is calculated!
(defun linear-table-happiness (notes table)
  (loop for prev in table for curr in (rest table)
        sum (delta-happiness notes prev curr)))

(defun part2 (notes)
  (let ((people (all-people notes)))
    (reduce #'max (all-permutations people)
            :key (partial-1 #'linear-table-happiness notes))))
Final plumbing:
(define-solution (2015 13) (notes parse-notes)
  (values (part1 notes) (part2 notes)))

(define-test (2015 13) (618 601))
And that's it!
> (time (test-run))
TEST-2015/13..
Success: 1 test, 2 checks.
Evaluation took:
  0.485 seconds of real time
  0.444067 seconds of total run time (0.440338 user, 0.003729 system)
  91.55% CPU
  1,117,039,360 processor cycles
  34,144,272 bytes consed

Advent of Code: [2015/14](https://adventofcode.com/2015/day/14)
This year is the Reindeer Olympics! Reindeer can fly at high speeds, but must rest occasionally to recover their energy. Santa would like to know which of his reindeer is fastest, and so he has them race.
This is the kind of input entries we will be dealing with:
Comet can fly 14 km/s for 10 seconds, but then must rest for 127 seconds.
We will parse each of these into a list containing:

- The name of the reindeer (not really needed, but whatever)
- How fast the reindeer is
- How long it can run for
- How long it has to rest for before start running again
(defun parse-reindeer (string)
  (cl-ppcre:register-groups-bind (name (#'parse-integer speed run-for rest-for))
      ("(\\w+) can fly (\\d+) km/s for (\\d+) seconds, but then must rest for (\\d+) seconds." string)
    (list name speed run-for rest-for)))

(defun parse-reindeers (lines) (mapcar #'parse-reindeer lines))
Each reindeer seems to have a _period_ (the sum of how long it can run and how long it must rest afterwards), and during each period it travels a distance equal to its speed multiplied by the run duration; with this, knowing how far a given reindeer has traveled in a given amount of time becomes a matter of figuring out the number of complete activity _cycles_ that fit into that time, multiplying by the distance covered per cycle, and then accounting for the final, partial cycle:
(defun traveled-distance (time reindeer)
  (destructuring-bind (speed run-for rest-for) (rest reindeer)
    (let* ((period (+ run-for rest-for))
           (cycles (floor time period))
           (remaining (mod time period)))
      (* speed (+ (* cycles run-for) (min remaining run-for))))))

(defun part1 (reindeers)
  (reduce #'max reindeers :key (partial-1 #'traveled-distance 2503)))
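As a sanity check against the example from the puzzle (Comet flies 14 km/s for 10 seconds, then rests for 127 seconds), after 1000 seconds Comet should have traveled 1120 km:

```lisp
;; 1000 seconds = 7 complete 137-second cycles (i.e. 70 seconds of flight),
;; plus 41 leftover seconds, of which only 10 are spent flying:
;; 14 * (7 * 10 + 10) = 1120
(traveled-distance 1000 '("Comet" 14 10 127)) ; => 1120
```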
For part 2, Santa asks not for the reindeer that traveled the farthest, but for the one that scored the highest number of points (every second, each leading reindeer gets `1` point).

I could not figure out a way to solve this _mathematically_ (I am not even sure such solution exists), so I went on and implemented a _simulation_ of this game:

- `scores` represents the score of each reindeer
- `distances` represents the distance each reindeer has traveled
- `run-fors` represents how long each reindeer has been running for (in the current running activity)
- `rest-fors` represents how long each reindeer has been resting for (in the current resting activity)
- for each simulation _tick_, see which reindeer can move (and have it move), which one needs to rest (and have it rest), and finally assign `1` point to the leading reindeer or reindeers
(defun part2 (reindeers)
  (loop repeat 2503
        with scores = (make-list (length reindeers) :initial-element 0)
        with distances = (make-list (length reindeers) :initial-element 0)
        with run-fors = (make-list (length reindeers) :initial-element 0)
        with rest-fors = (make-list (length reindeers) :initial-element 0)
        do (loop for (name speed run-for rest-for) in reindeers for index from 0
                 if (< (nth index run-fors) run-for)
                    do (incf (nth index distances) speed)
                       (incf (nth index run-fors))
                 else if (< (nth index rest-fors) rest-for)
                    do (incf (nth index rest-fors))
                 if (= (nth index rest-fors) rest-for)
                    do (setf (nth index run-fors) 0
                             (nth index rest-fors) 0))
        do (loop with furthest = (reduce #'max distances)
                 for d in distances for index from 0
                 when (= d furthest) do (incf (nth index scores)))
        finally (return (reduce #'max scores))))
Final plumbing:
(define-solution (2015 14) (reindeers parse-reindeers)
  (values (part1 reindeers) (part2 reindeers)))

(define-test (2015 14) (2660 1256))
And that's it!
> (time (test-run))
TEST-2015/14..
Success: 1 test, 2 checks.
Evaluation took:
  0.003 seconds of real time
  0.003228 seconds of total run time (0.003116 user, 0.000112 system)
  100.00% CPU
  7,435,484 processor cycles
  31,296 bytes consed

2021-01-07 (permalink)

Advent of Code: [2015/10](https://adventofcode.com/2015/day/10)

Today's task is to implement the [Look-and-say sequence](https://en.wikipedia.org/wiki/Look-and-say_sequence); in particular, given an initial list of digits, we are asked to "play" this game for 40 rounds, and to return the length of the result.

We start off by parsing the input into a list of digits (I am using `(cl-ppcre:split "" ...)` to split the input string into its single character values):
(defun parse-digits (lines)
  (map 'list #'parse-integer (cl-ppcre:split "" (first lines))))
Then we go ahead, and implement the rules of the game (the careful reader would have noticed by now that this is exactly the same as implementing [run-length encoding](https://en.wikipedia.org/wiki/Run-length_encoding)):
(defun look-and-say (digits)
  (loop with n = 1
        for (curr . rest) on digits for next = (car rest)
        if (eql curr next) do (incf n)
        else append (list n curr) and do (setf n 1)))

(defun play (digits times)
  (dotimes (n times digits)
    (setf digits (look-and-say digits))))
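A quick REPL check, starting from the classic `1` seed:

```lisp
(play '(1) 1) ; => (1 1)         "one 1"
(play '(1) 2) ; => (2 1)         "two 1s"
(play '(1) 3) ; => (1 2 1 1)     "one 2, then one 1"
(play '(1) 4) ; => (1 1 1 2 2 1)
```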
With this, the answer to part 1 becomes a simple:
(defun part1 (digits) (length (play digits 40)))
For part 2 instead, we are asked to apply the same process 50 times. Ok...
(defun part2 (digits) (length (play digits 50)))
Final plumbing:
(define-solution (2015 10) (digits parse-digits)
  (values (part1 digits) (part2 digits)))

(define-test (2015 10) (492982 6989950))
And that's it!
> (time (test-run))
TEST-2015/10..
Success: 1 test, 2 checks.
Evaluation took:
  0.367 seconds of real time
  0.363077 seconds of total run time (0.337332 user, 0.025745 system)
  [ Run times consist of 0.183 seconds GC time, and 0.181 seconds non-GC time. ]
  98.91% CPU
  844,084,929 processor cycles
  514,162,784 bytes consed

Advent of Code: [2015/11](https://adventofcode.com/2015/day/11)

Santa's password has expired, and we are asked to help him find the next one to use; we know Santa is a lazy guy, and will try _incrementing_ the current password one step at a time (i.e. `ay`, `az`, `ba`), until he finds one that matches certain requirements:
Passwords must include one increasing straight of at least three letters, like `abc`, `bcd`, `cde`, and so on, up to `xyz`. They cannot skip letters; abd doesn't count.
Passwords may not contain the letters `i`, `o`, or `l`, as these letters can be mistaken for other characters and are therefore confusing.
Passwords must contain at least two different, non-overlapping pairs of letters, like `aa`, `bb`, or `zz`.
I think a brute-force approach will do just fine:

- Generate the next password
- Check whether it violates any of the policies
- If it does, repeat; otherwise, there you have Santa's new password

Let's start by implementing functions that will aid with the generation of the next password (note: given how the next password has to be generated, I opted for reversing the characters of the password first; that way I could come up with a nice _recursive_ solution):
(defun next-char (ch)
  (if (char= ch #\z) #\a (code-char (1+ (char-code ch)))))

(defun next-password (password)
  (labels ((recur (password &aux
                           (ch (first password)) (ch-next (next-char ch)))
             (cond ((char/= ch-next #\a) (cons ch-next (rest password)))
                   (t (cons ch-next (recur (rest password)))))))
    (recur password)))
The next step is to implement functions to check the 3 different requirements (note: since the characters in my password are in reverse order, the first requirement, three _increasing_ straight characters now becomes checking for three _decreasing_ straight characters instead):
(defun three-decreasing-straight-p (password)
  (loop for ch1 in password for code1 = (char-code ch1)
        for ch2 in (cdr password) for code2 = (char-code ch2)
        for ch3 in (cddr password) for code3 = (char-code ch3)
        thereis (= code1 (1+ code2) (+ code3 2))))

(defun not-confusing-p (password)
  (not (some (lambda (ch) (find ch "ilo" :test #'char=)) password)))

(defun two-different-non-overlapping-pairs-p (password)
  (loop for (ch1 ch2 . rest) on password
        thereis (and (eql ch1 ch2)
                     (loop for (ch3 ch4 ch5) on rest
                           thereis (and (not (eql ch2 ch3))
                                        (eql ch3 ch4)
                                        (not (eql ch4 ch5)))))))

(defun password-valid-p (password)
  (and (three-decreasing-straight-p password)
       (not-confusing-p password)
       (two-different-non-overlapping-pairs-p password)))
Some final glue, and we should be all set:
(defun new-password (string)
  (flet ((string-to-password (string)
           (reverse (mapcar #'parse-char (cl-ppcre:split "" string))))
         (password-to-string (password)
           (format nil "~{~A~}" (reverse password))))
    (loop with curr = (string-to-password string)
          do (setf curr (next-password curr))
          when (password-valid-p curr) return (password-to-string curr))))
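Note: PARSE-CHAR is another helper from my utilities file; a minimal stand-in (behavior inferred from the usage above) could be:

```lisp
(defun parse-char (string)
  "Return the first character of STRING -- the CHAR analogue of PARSE-INTEGER."
  (char string 0))
```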
For part 2 we are asked to find the next new password after the one we just found, so a simple `(new-password part1)` will get the job done.

Final plumbing:
(define-solution (2015 11) (password first)
  (let ((part1 (new-password password)))
    (values part1 (new-password part1))))

(define-test (2015 11) ("vzbxxyzz" "vzcaabcc"))
And that's it!
> (time (test-run))
TEST-2015/11..
Success: 1 test, 2 checks.
Evaluation took:
  0.092 seconds of real time
  0.077239 seconds of total run time (0.075929 user, 0.001310 system)
  83.70% CPU
  212,365,920 processor cycles
  19,824,640 bytes consed

Advent of Code: [2015/12](https://adventofcode.com/2015/day/12)

Given a string with some JSON in it, what is the sum of all numbers in the document?

I believe you should be able to answer this with the use of regular expressions only, but since I never dealt with JSON in CL before, I figured I would use this as an opportunity to play with [some libraries](https://sabracrolleton.github.io/json-review), and ultimately decided to go with [st-json](https://marijnhaverbeke.nl/st-json/).

`:st-json` conveniently ships with READ-JSON-FROM-STRING, so parsing the input (i.e. a single JSON string) turned out to be ridiculously easy to do:
(defun parse-jso (lines)
  (st-json:read-json-from-string (first lines)))
Now that I have my JSON object in memory, I need a way to sum all of its numeric values (nested ones included); I am going to solve this recursively, as follows:

- If the current object is a STRING, then return `0`
- If the current object is a NUMBER, then return the object itself
- If the current object is a LIST, then recurse into each element, and sum the result
- Otherwise (it's an object), iterate all its values, recurse into them, and sum the result (note: MAPJSO is to JavaScript objects what MAPHASH is to HASH-TABLEs):
(defun part1 (jso)
  (labels ((recur (x)
             (cond ((stringp x) 0)
                   ((numberp x) x)
                   ((consp x) (reduce #'+ x :key #'recur))
                   (t (let ((sum 0))
                        (st-json:mapjso #'(lambda (k v)
                                           (declare (ignore k))
                                           (incf sum (recur v)))
                                        x)
                        sum)))))
    (recur jso)))
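For example, given that `:st-json` reads JSON arrays as CL lists and JSON objects as JSO instances, the examples from the puzzle check out:

```lisp
(part1 (st-json:read-json-from-string "[1,2,3]"))           ; => 6
(part1 (st-json:read-json-from-string "{\"a\":2,\"b\":4}")) ; => 6
```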
For part 2, we are told to ignore any object (and all of its children) which has any property with the value `"red"`.  This translates to the following (note the call to RETURN-FROM to break early out of MAPJSO):
(defun part2 (jso)
  (labels ((recur (x)
             (cond ((stringp x) 0)
                   ((numberp x) x)
                   ((consp x) (reduce #'+ x :key #'recur))
                   (t (let ((sum 0))
                        (st-json:mapjso #'(lambda (k v)
                                           (declare (ignore k))
                                           (when (equal v "red")
                                             (return-from recur 0))
                                           (incf sum (recur v)))
                                        x)
                        sum)))))
    (recur jso)))
Final plumbing:
(define-solution (2015 12) (jso parse-jso)
  (values (part1 jso) (part2 jso)))

(define-test (2015 12) (156366 96852))
That's it!
> (time (test-run))
TEST-2015/12..
Success: 1 test, 2 checks.
Evaluation took:
  0.017 seconds of real time
  0.012932 seconds of total run time (0.008439 user, 0.004493 system)
  76.47% CPU
  39,439,061 processor cycles
  1,189,232 bytes consed

2021-01-06 (permalink)

Advent of Code: [2015/09](https://adventofcode.com/2015/day/9)

Given a list of locations and the distances between every pair (of locations), what's the shortest distance you have to travel to visit them all?  This smells like the classic [TSP](https://en.wikipedia.org/wiki/Travelling_salesman_problem), and by the look of the input file it appears we are dealing with just 8 different locations, which means testing all the possible combinations will not take years to complete.

Anyway, as usual, let's start with parsing the input.  We are given a bunch of lines representing the distance between two locations (e.g. `London to Dublin = 464`), and we would like to store them in such a way that answering questions like: "what's the distance between a and b" would be easy and quick at the same time.

I am going to use an association list (the input is small and I don't think using a HASH-TABLE will make much of a difference) and since the distance between `a` and `b` will always be the same as the distance between `b` and `a`:

- When adding a new entry to the alist, we will use the location with the lexicographically smaller name as the key, and the other (plus the distance) as the value
- When querying for the distance between two locations, we will use the lexicographically smaller location name as alist key

Anyway, the above translates to:
(defun string-min (string1 string2)
  (if (string< string1 string2)
      (values string1 string2)
      (values string2 string1)))

(defun parse-connection (string)
  (cl-ppcre:register-groups-bind (from to (#'parse-integer distance))
      ("(\\w+) to (\\w+) = (\\d+)" string)
    (multiple-value-bind (a b) (string-min from to)
      (list a b distance))))

(defun parse-connections (lines &aux table)
  (loop for string in lines for (from to distance) = (parse-connection string)
        for existing = (assoc from table :test #'string=)
        if existing do (push (cons to distance) (rest existing))
        else do (push (list from (cons to distance)) table))
  table)
Like I said earlier, we are going to brute-force all the possible paths, and find the shortest one.  First we need a way to extract the list of locations from the alist of connections:
(defun all-cities (connections)
  (remove-duplicates
   (loop for (from . cities) in connections collect from append
         (loop for (to) in cities collect to))
   :test #'string=))
Then, all we have to do is:

- Generate all the possible routes
- For each, calculate the total travelled distance
- Find the shortest one, and return its total distance
(defun distance (connections from to)
  (multiple-value-bind (a b) (string-min from to)
    (let ((row (rest (assoc a connections :test #'string=))))
      (rest (assoc b row :test #'string=)))))

(defun route-distance (connections route)
  (loop for from in route and to in (rest route)
        sum (distance connections from to)))

(defun part1 (connections &aux (cities (all-cities connections)))
  (reduce #'min (all-permutations cities)
          :key (partial-1 #'route-distance connections)))
In part 2, Santa wants us to find the longest path instead (don't ask why).  Anyway, a simple change from `#'min` to `#'max` should do just fine:
(defun part2 (connections &aux (cities (all-cities connections)))
  (reduce #'max (all-permutations cities)
          :key (partial-1 #'route-distance connections)))
Final plumbing:
(define-solution (2015 9) (connections parse-connections)
  (values (part1 connections) (part2 connections)))

(define-test (2015 9) (141 736))
And that's it!
> (time (test-run))
TEST-2015/09..
Success: 1 test, 2 checks.
Evaluation took:
  0.404 seconds of real time
  0.203766 seconds of total run time (0.161870 user, 0.041896 system)
  50.50% CPU
  930,626,284 processor cycles
  34,111,488 bytes consed

2021-01-03 (permalink)

* Changed [vim-lispindent](https://github.com/iamFIREcracker/vim-lispindent/commit/1d34ede0a1a1a66d55e08215e0c12cb0042b681b) to use a customized `indentexpr` function instead of the combination of 'lisp' mode and `&lispwords`. Note: I did not feel like re-implementing `lispindent` in Vimscript, so what I did instead was to call it with the content of the current buffer, have it indent lines, nicely, and then extract the indentation level for the current line -- not the best in terms of performance, but it gets the job done!


Advent of Code: [2015/07](https://adventofcode.com/2015/day/7)

We are given a _circuit_ (i.e. wires, gates, and the values of some input signals), and we are asked to figure out the signal being transmitted on a specific wire, `a`.

As usual, let's begin by parsing our input:
123 -> x
456 -> y
x AND y -> d
x OR y -> e
x LSHIFT 2 -> f
y RSHIFT 2 -> g
NOT x -> h
NOT y -> i
Each line represents a connection between a signal and a wire (e.g. `123 -> x`), or a gate and a wire (e.g. `NOT y -> i`); signals are numeric inputs (e.g. `123`), while wires are _alphabetical_ ones (e.g. `x`); finally, gates can have one or two inputs (e.g. `NOT x` and `x LSHIFT 2` respectively).

My plan is to parse each line into a list having as elements:

- the output wire -- since gates can have 1 or 2 arguments, I figured I would move the output wire first in the list
- the boolean function the _connection_ is describing (i.e. a function implementing the specific boolean operation)
- the arguments of the function
(defun parse-provision (string)
  (cl-ppcre:register-groups-bind (in out)
      ("^(\\w+) -> (\\w+)" string)
    (list out #'identity in)))

(defun parse-and (string)
  (cl-ppcre:register-groups-bind (in n out)
      ("^(\\w+) AND (\\w+) -> (\\w+)" string)
    (list out #'logand in n)))

(defun parse-or (string)
  (cl-ppcre:register-groups-bind (in n out)
      ("^(\\w+) OR (\\w+) -> (\\w+)" string)
    (list out #'logior in n)))

(defun parse-lshift (string)
  (cl-ppcre:register-groups-bind (in n out)
      ("^(\\w+) LSHIFT (\\w+) -> (\\w+)" string)
    (list out #'ash in n)))

(defun parse-rshift (string)
  (cl-ppcre:register-groups-bind (in n out)
      ("^(\\w+) RSHIFT (\\w+) -> (\\w+)" string)
    (list out #'(lambda (in n) (ash in (- n))) in n)))

(defun parse-not (string)
  (cl-ppcre:register-groups-bind (in out)
      ("^NOT (\\w+) -> (\\w+)" string)
    (list out #'lognot in)))

(defun parse-instruction (string)
  (or (parse-provision string)
      (parse-and string)
      (parse-or string)
      (parse-lshift string)
      (parse-rshift string)
      (parse-not string)))

(defun parse-instructions (lines)
  (mapcar #'parse-instruction lines))
How are we going to measure the signal on a given wire (`a`, in our case)?  I think a _recursive_ solution will do:

- Find the instruction listing our target wire as its output
- Evaluate the left-hand side of the expression (i.e. first recursively get the signal at its input(s), then apply the boolean function)
- I suspect the input circuit was created in such a way that the same gate would need to be evaluated multiple times, so I am going to keep track of the already processed gates, and their output signal (i.e. memoization)
(defun signal-at (wire instructions &aux (memo (make-hash-table :test 'equal)))
  (labels ((recur (wire &aux (cached (gethash wire memo)))
             (if cached
               cached
               (setf (gethash wire memo)
                     (cond ((digit-char-p (char wire 0)) (parse-integer wire))
                           (t
                             (destructuring-bind (fun . args)
                                 (rest (assoc wire instructions :test #'string=))
                               (apply fun (mapcar #'recur args)))))))))
    (recur wire)))
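Running this against the sample circuit above, wire `d` should carry `123 AND 456`, i.e. `72`:

```lisp
(signal-at "d" (parse-instructions
                (list "123 -> x" "456 -> y" "x AND y -> d")))
;; => 72
```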
For part 2 we are asked to override wire `b` with whichever signal we measured on wire `a` for part 1, and then measure wire `a` again, and luckily for us, we did not have to change anything except for the input:
(defun prepare-part2 (instructions part1)
  (cons
    (list "b" #'identity (format nil "~A" part1))
    instructions))
Final plumbing:
(define-solution (2015 7) (instructions parse-instructions)
  (let ((part1 (signal-at "a" instructions)))
    (values part1
            (signal-at "a" (prepare-part2 instructions part1)))))

(define-test (2015 7) (46065 14134))
And that's it!
> (time (test-run))
TEST-2015/07..
Success: 1 test, 2 checks.
Evaluation took:
  0.066 seconds of real time
  0.030142 seconds of total run time (0.009823 user, 0.020319 system)
  45.45% CPU
  152,529,453 processor cycles
  1 page fault
  359,520 bytes consed

Advent of Code: [2015/08](https://adventofcode.com/2015/day/8)

In today's problem we are given a list of strings having specific characters _escaped_, and we are asked to calculate the difference, in bytes, between the escaped and unescaped versions of these strings:
"" is 2 characters of code (the two double quotes), but the string contains zero characters.
"abc" is 5 characters of code, but 3 characters in the string data.
"aaa\"aaa" is 10 characters of code, but the string itself contains six "a" characters and a single, escaped quote character, for a total of 7 characters in the string data.
"\x27" is 6 characters of code, but the string itself contains just one - an apostrophe ('), escaped using hexadecimal notation.
Our input will come in as a list of strings already, so no additional effort will be required to parse the input.

The next step is to calculate the length of a string, knowing that certain characters will come in escaped, and hence need to be _unescaped_ (i.e. `\\`, `\"`, and `\x` followed by 2 hexadecimal digits).

My idea was to keep an index, `index`, pointing to the next two characters in the string we should look at, `ch` and `ch1`, and:

- If `ch` is `\` and `ch1` is `x`, move `index` ahead `4` positions
- If `ch` is `\`, move `index` ahead `2` positions
- Otherwise, move `index` ahead `1` position

We count the number of times we do this (i.e. `sum 1`), and at the end we subtract `2` for the two surrounding quotes:
(defun length-unescaped (string &aux (index 0))
  (- (loop while (< index (length string))
           for ch = (char string index)
           for ch1 = (and (< (1+ index) (length string))
                          (char string (1+ index)))
           sum 1
           if (and (char= ch #\\) (char= ch1 #\x)) do (incf index 4)
           else if (char= ch #\\) do (incf index 2)
           else do (incf index 1))
     2))
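Testing this against the examples from the puzzle (mind the extra level of escaping required to type these string literals at the REPL):

```lisp
(length-unescaped "\"abc\"")        ; => 3
(length-unescaped "\"aaa\\\"aaa\"") ; => 7
(length-unescaped "\"\\x27\"")      ; => 1
```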
With this, calculating how much the literal string representation differs from the in-memory one should be as simple as:
(defun part1 (strings)
  (- (reduce #'+ strings :key #'length)
     (reduce #'+ strings :key #'length-unescaped)))
Part 2 instead asks us to escape the input strings first, and then calculate the difference between the escaped and unescaped versions (note: the unescaped version of a string, in this case, is the string itself, without modifications).

As it turns out, all we have to do while escaping a string, is:

- Surround the whole thing with quotes
- Escape `\` and `"`

This nicely translates to the following:
(defun length-escaped (string &aux (index 0))
  (+ (loop while (< index (length string)) for ch = (char string index)
           sum (case ch ((#\" #\\) 2) (t 1))
           do (incf index))
     2))

(defun part2 (strings)
  (- (reduce #'+ strings :key #'length-escaped)
     (reduce #'+ strings :key #'length)))
Final plumbing:
(define-solution (2015 8) (strings)
  (values (part1 strings) (part2 strings)))

(define-test (2015 8) (141 736))
And that's it!
> (time (test-run))
TEST-2015/08..
Success: 1 test, 2 checks.
Evaluation took:
  0.000 seconds of real time
  0.000818 seconds of total run time (0.000714 user, 0.000104 system)
  100.00% CPU
  1,876,498 processor cycles
  32,720 bytes consed

2021-01-02 (permalink)

? make vim-sexp _swap_ operations repeatable with `.`


Advent of Code: [2015/05](https://adventofcode.com/2015/day/5)

Given a list of strings, which ones are _nice_ and which ones are _naughty_?  A string is _nice_ if:
It contains at least three vowels (`aeiou` only), like `aei`, `xazegov`, or `aeiouaeiouaeiou`.
It contains at least one letter that appears twice in a row, like `xx`, `abcdde` (`dd`), or `aabbccdd` (`aa`, `bb`, `cc`, or `dd`).
It does not contain the strings `ab`, `cd`, `pq`, or `xy`, even if they are part of one of the other requirements.
There is not much we need to do to parse the input, as it will come in as a list of STRINGs already; the rest is implementing a function to check the above rules, and then counting which of the input strings test positive to this predicate.  Note: I am going to use regular expressions (i.e. `:cl-ppcre`) to implement the rule checking logic:
(defun nice-string-p (string)
  (and (>= (length (cl-ppcre:all-matches-as-strings "[aeiou]" string)) 3)
       (cl-ppcre:all-matches-as-strings "(\\w)\\1" string)
       (not (cl-ppcre:all-matches-as-strings "(ab|cd|pq|xy)" string))))

(defun part1 (strings)
  (count-if #'nice-string-p strings))
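The examples from the puzzle behave as expected; note that ALL-MATCHES-AS-STRINGS returns NIL when there is no match, which is what makes the AND above work as a generalized boolean:

```lisp
(nice-string-p "ugknbfddgicrmopn") ; => T   (three vowels, a double letter, no bad pairs)
(nice-string-p "aaa")              ; => T   (`aa` counts as the double letter)
(nice-string-p "jchzalrnumimnmhp") ; => NIL (no letter appears twice in a row)
```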
Part 2 is similar to part 1, except that the rule set is different:
It contains a pair of any two letters that appears at least twice in the string without overlapping, like `xyxy` (`xy`) or `aabcdefgaa` (`aa`), but not like `aaa` (`aa`, but it overlaps).
It contains at least one letter which repeats with exactly one letter between them, like `xyx`, `abcdefeghi` (`efe`), or even `aaa`.
Again, pretty easy to implement using regular expressions:
(defun improved-nice-string-p (string)
  (and (cl-ppcre:all-matches-as-strings "(.)(.).*\\1\\2" string)
       (cl-ppcre:all-matches-as-strings "(.).\\1" string)))

(defun part2 (strings)
  (count-if #'improved-nice-string-p strings))
Final plumbing:
(define-solution (2015 5) (strings)
  (values (part1 strings) (part2 strings)))

(define-test (2015 5) (238 1038736))
And that's it!
> (time (test-run))
TEST-2015/05..
Success: 1 test, 2 checks.
Evaluation took:
  0.012 seconds of real time
  0.011163 seconds of total run time (0.009753 user, 0.001410 system)
  91.67% CPU
  27,942,267 processor cycles
  524,160 bytes consed

Advent of Code: [2015/06](https://adventofcode.com/2015/day/6)

Simulation problem: given a 1000x1000 lights grid, and a set of instructions to turn certain lights on or off, or to toggle their state, how many lights will be turned on at the end, after having executed all the instructions?

We start off by parsing the list of instructions, which can be any of the following:

- `turn on 1,2 through 3,4`
- `turn off 5,6 through 7,8`
- `toggle 9,10 through 11,12`

I am going to be using regular expressions again (note: `?:` is the syntax for [non-capturing groups](https://stackoverflow.com/a/3513858)):
(defun parse-coordinate-pair (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer top left bottom right))
      ("(\\d+),(\\d+) through (\\d+),(\\d+)" string)
    (list top left bottom right)))

(defun parse-instruction (string)
  (cl-ppcre:register-groups-bind (action (#'parse-coordinate-pair pair))
      ("(?:turn )?(\\w+) (.+)" string)
    (cons (make-keyword (string-upcase action)) pair)))

(defun parse-instructions (lines)
  (mapcar #'parse-instruction lines))
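Note: MAKE-KEYWORD is a small utility (Alexandria ships an equivalent one); a minimal version would be:

```lisp
(defun make-keyword (name)
  "Intern NAME (a string designator) into the KEYWORD package."
  (intern (string name) :keyword))

(make-keyword "TOGGLE") ; => :TOGGLE
```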
This will take care of parsing instructions like the ones we have seen above, into:

- `(:on 1 2 3 4)`
- `(:off 5 6 7 8)`
- `(:toggle 9 10 11 12)`

The next step is to run the _simulation_.  I am going to use a two-dimensional ARRAY for the grid, storing T when the specific light is turned on, and NIL otherwise.
(defun make-lights-grid (instructions)
  (let ((grid (make-array '(1000 1000) :initial-element nil)))
    (loop for (action top left bottom right) in instructions do
          (loop for row from top upto bottom do
                (loop for col from left upto right
                      for already-lit = (aref grid row col) do
                      (ecase action
                        (:toggle (setf (aref grid row col) (not already-lit)))
                        (:on (setf (aref grid row col) t))
                        (:off (setf (aref grid row col) nil))))))
    grid))
The last step is to count the number of turned on lights (i.e. the number of T in the 2d array):
(defun part1 (instructions)
  (count t (array-elements (make-lights-grid instructions))))
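ARRAY-ELEMENTS is another helper from my utilities file, flattening an array into a list of its elements; a possible implementation (behavior inferred from the usage above):

```lisp
(defun array-elements (array)
  "Return the elements of ARRAY, in row-major order, as a list."
  (loop for i below (array-total-size array)
        collect (row-major-aref array i)))
```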
For part 2, it turns out each light has adjustable brightness (it's not just on and off), and the instructions assume a slightly different meaning:

- `:on` means increasing the brightness by 1
- `:off` means decreasing the brightness by 1, without going negative
- `:toggle` means increasing the brightness by 2

After having executed all the revised instructions, what's the total brightness of the grid?

We are still going to use a two-dimensional array for the grid, but this time we will store numbers in it (i.e. the brightness value of the specific light):
(defun make-improved-lights-grid (instructions)
  (let ((grid (make-array '(1000 1000) :initial-element 0)))
    (loop for (action top left bottom right) in instructions do
          (loop for row from top upto bottom do
                (loop for col from left upto right do
                      (ecase action
                        (:toggle (incf (aref grid row col) 2))
                        (:on (incf (aref grid row col) 1))
                        (:off (unless (zerop (aref grid row col))
                                (decf (aref grid row col) 1)))))))
    grid))

(defun part2 (instructions)
  (reduce #'+ (array-elements (make-improved-lights-grid instructions))))
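The part 2 rules can be double-checked against the puzzle's own examples (`turn on 0,0 through 0,0` adds 1 to the total brightness; `toggle 0,0 through 999,999` adds 2000000).  A quick Python sketch of the same grid logic, illustrative only and not part of the Lisp solution:

```python
def total_brightness(instructions, size=1000):
    # Part 2 rules: grid cells hold brightness values, not booleans
    grid = [[0] * size for _ in range(size)]
    for action, top, left, bottom, right in instructions:
        for row in range(top, bottom + 1):
            for col in range(left, right + 1):
                if action == "toggle":
                    grid[row][col] += 2
                elif action == "on":
                    grid[row][col] += 1
                else:  # "off": decrease brightness, without going negative
                    grid[row][col] = max(0, grid[row][col] - 1)
    return sum(map(sum, grid))

print(total_brightness([("on", 0, 0, 0, 0)]))          # 1
print(total_brightness([("toggle", 0, 0, 999, 999)]))  # 2000000
```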
Final plumbing:
(define-solution (2015 6) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))
(define-test (2015 6) (569999 17836115))
And that's it!
> (time (test-run))
TEST-2015/06..
Success: 1 test, 2 checks.
Evaluation took:
  0.698 seconds of real time
  0.593268 seconds of total run time (0.452138 user, 0.141130 system)
  84.96% CPU
  1,606,779,011 processor cycles
  112,239,392 bytes consed

2021-01-01 (permalink)

Advent of Code: [2015/04](https://adventofcode.com/2015/day/4)

AdventCoins mining!
To do this, he needs to find MD5 hashes which, in hexadecimal, start with at least five zeroes. The input to the MD5 hash is some secret key (your puzzle input, given below) followed by a number in decimal. To mine AdventCoins, you must find Santa the lowest positive number (no leading zeroes: 1, 2, 3, ...) that produces such a hash.
Unfortunately there is not much we can do here except for bruteforcing (and before you ask, I did **not** implement MD5 myself...I found a package that did it for me):
(defun parse-secret (lines) (first lines))

(defun five-leading-zeros-p (md5-hash)
  (and (zerop (aref md5-hash 0))
       (zerop (aref md5-hash 1))
       (zerop (ldb (byte 4 4) (aref md5-hash 2)))))

(defun part1 (secret)
  (loop for n from 1 for input = (format nil "~A~D" secret n)
        for hash = (md5:md5sum-string input)
        when (five-leading-zeros-p hash) return n))
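Since MD5:MD5SUM-STRING returns a vector of bytes rather than a hex string, "five hex zeroes" translates to: first two bytes zero, plus the high nibble of the third byte.  A quick cross-check of that nibble logic in Python (illustrative only, not part of the Lisp solution), using the `abcdef`/`609043` example pair from the puzzle text:

```python
import hashlib

def five_leading_zeros(digest: bytes) -> bool:
    # Five hex zeros == first two bytes are zero AND the high nibble
    # of the third byte is zero (digest[2] >> 4 mirrors the Lisp
    # (ldb (byte 4 4) ...) call).
    return digest[0] == 0 and digest[1] == 0 and digest[2] >> 4 == 0

h = hashlib.md5(b"abcdef609043")
print(five_leading_zeros(h.digest()))     # True
print(h.hexdigest().startswith("00000"))  # True
```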
Part 2 is pretty much the same, except that now we are asked to find a hash with six leading `0`s.
(defun six-leading-zeros-p (md5-hash)
  (and (zerop (aref md5-hash 0))
       (zerop (aref md5-hash 1))
       (zerop (aref md5-hash 2))))

(defun part2 (secret)
  (loop for n from 1 for input = (format nil "~A~D" secret n)
        for hash = (md5:md5sum-string input)
        when (six-leading-zeros-p hash) return n))
Final plumbing:
(define-solution (2015 4) (secret parse-secret)
  (values (part1 secret) (part2 secret)))

(define-test (2015 4) (254575 1038736))
And that's it!
> (time (test-run))
TEST-2015/04..
Success: 1 test, 2 checks.
Evaluation took:
  1.087 seconds of real time
  1.071366 seconds of total run time (1.061611 user, 0.009755 system)
  [ Run times consist of 0.020 seconds GC time, and 1.052 seconds non-GC time. ]
  98.53% CPU
  2,500,813,970 processor cycles
  455,259,792 bytes consed

2020-12-31 (permalink)

Advent of Code: [2015/02](https://adventofcode.com/2015/day/2)

We are given a list of present/box dimensions (e.g. `12x34x45`, with `12` being the length of the box, `34` being the width, and `45` the height), and we are asked to calculate how much paper is going to be required to wrap them all:
Fortunately, every present is a box (a perfect right rectangular prism), which makes calculating the required wrapping paper for each gift a little easier: find the surface area of the box, which is 2*l*w + 2*w*h + 2*h*l. The elves also need a little extra paper for each present: the area of the smallest side.
We start by parsing input lines into 3-element lists (I am using `:cl-ppcre` to do the splitting, but `:split-sequence` would have done just fine):
(defun parse-dimentions (string)
  (mapcar #'parse-integer (cl-ppcre:split "x" string)))

(defun parse-input (lines) (mapcar #'parse-dimentions lines))
Then we define a function to calculate the amount of required paper, apply it to each input set of dimensions, and sum the results.
(defun paper (dimensions)
  (destructuring-bind (l w h) dimensions
    (let ((lw (* l w))
          (wh (* w h))
          (hl (* h l)))
      (+ (+ (* 2 lw) (* 2 wh) (* 2 hl))
         (min lw wh hl)))))

(defun part1 (list-of-dimentions)
  (reduce #'+ list-of-dimentions :key #'paper))
What's a present box without some ribbon? For part 2 we are asked to calculate how much ribbon we will need, knowing that:
The ribbon required to wrap a present is the shortest distance around its sides, or the smallest perimeter of any one face.
Each present also requires a bow made out of ribbon as well; the feet of ribbon required for the perfect bow is equal to the cubic feet of volume of the present. Don't ask how they tie the bow, though; they'll never tell.
Again, we define a function to calculate how much ribbon is required for each box, apply it to each input set of dimensions, and sum the result:
(defun ribbon (dimensions)
  (destructuring-bind (l w h) dimensions
    (let ((lw (+ l w))
          (wh (+ w h))
          (hl (+ h l)))
      (+ (* 2 (min lw wh hl))
         (* l w h)))))

(defun part2 (list-of-dimentions)
  (reduce #'+ list-of-dimentions :key #'ribbon))
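Both formulas can be sanity-checked against the puzzle examples (a `2x3x4` box needs 58 square feet of paper and 34 feet of ribbon; a `1x1x10` box needs 43 and 14).  A minimal Python sketch, not part of the Lisp solution:

```python
def paper(l, w, h):
    sides = (l * w, w * h, h * l)
    return 2 * sum(sides) + min(sides)  # surface area + slack (smallest side)

def ribbon(l, w, h):
    perimeters = (2 * (l + w), 2 * (w + h), 2 * (h + l))
    return min(perimeters) + l * w * h  # smallest perimeter + bow (volume)

print(paper(2, 3, 4), ribbon(2, 3, 4))    # 58 34
print(paper(1, 1, 10), ribbon(1, 1, 10))  # 43 14
```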
Final plumbing:
(define-solution (2015 2) (list-of-dimensions parse-input)
  (values (part1 list-of-dimensions) (part2 list-of-dimensions)))

(define-test (2015 2) (1598415 3812909))
And that's it!
> (time (test-run))
TEST-2015/02..
Success: 1 test, 2 checks.
Evaluation took:
  0.004 seconds of real time
  0.003570 seconds of total run time (0.002333 user, 0.001237 system)
  100.00% CPU
  9,252,845 processor cycles
  360,400 bytes consed

Advent of Code: [2015/03](https://adventofcode.com/2015/day/3)

Here we are given a list of instructions Santa is going to follow while delivering presents to the village houses, and we are asked to figure out how many houses will receive at least one present (yes, following the instructions might bring Santa back to an already visited house).

Anyway, the input is a list of characters (i.e. `^` for north, `v` for south, `<` for west, and `>` for east), and I am going to parse these into COMPLEX numbers, like we did for day 1:
(defun parse-instruction (ch)
  (ecase ch (#\< #c(-1 0)) (#\^ #c(0 1)) (#\> #c(1 0)) (#\v #c(0 -1))))

(defun parse-input (data)
  (map 'list #'parse-instruction (first data)))
Next, starting from position `0`, we are going to move Santa around and keep track of the unique list of houses he visited (note: I could have used a HASH-TABLE, but given the input size, I figured a LIST and PUSHNEW would do just fine):
(defun part1 (instructions)
  (loop with pos = 0 and houses = (list 0)
        for d in instructions
        do (pushnew (incf pos d) houses)
        finally (return (length houses))))
For part 2, the elves decide to pair Santa with a Robot (starting at position `0` with Santa), and split instructions amongst them:

- First instruction to Santa
- Second instruction to the robot
- Third instruction to Santa...
(defun part2 (instructions)
  (loop with santa = 0 and robot = 0 and houses = (list 0)
        for d in instructions for santa-turn-p = t then (not santa-turn-p)
        if santa-turn-p do (pushnew (incf santa d) houses)
        else do (pushnew (incf robot d) houses)
        finally (return (length houses))))
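The complex-numbers-as-2d-coordinates trick ports nicely to Python, which has complex literals too.  A sketch reproducing the puzzle examples, with a hypothetical `workers` parameter of my own to generalize Santa-alone vs Santa-plus-Robot (not something from the puzzle text):

```python
DELTAS = {"<": -1 + 0j, ">": 1 + 0j, "^": 1j, "v": -1j}

def houses(instructions, workers=1):
    # One position per worker (Santa alone, or Santa plus the Robot),
    # with instructions dealt out round-robin.
    pos = [0j] * workers
    visited = {0j}
    for i, ch in enumerate(instructions):
        pos[i % workers] += DELTAS[ch]
        visited.add(pos[i % workers])
    return len(visited)

print(houses("^>v<"))           # 4
print(houses("^v^v^v^v^v", 2))  # 11
```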
Final plumbing:
(define-solution (2015 3) (instructions parse-input)
  (values (part1 instructions) (part2 instructions)))

(define-test (2015 3) (2565 2639))
And that's it!
> (time (test-run))
TEST-2015/03..
Success: 1 test, 2 checks.
Evaluation took:
  0.052 seconds of real time
  0.051087 seconds of total run time (0.048452 user, 0.002635 system)
  98.08% CPU
  120,408,717 processor cycles
  793,776 bytes consed

2020-12-30 (permalink)

Finally fixed some weird indentation problems with my Vim/CL setup

Every time one of my files included a `#\(`, _bad_ things would happen:

- matchit's `%` mappings would land me on the closing parenthesis immediately _after_ the expected one
- vim-sexp's `vaf` and `vaF` mappings would end up selecting _partial_ forms
- vlime would start suggesting positive indentation levels even for toplevel expressions

As it turns out, all these plugins use `searchpairpos()` behind the scenes to find pairs of matching parentheses, and since that function already supports skipping over configured syntax regions, I _simply_ had to figure out a way to tell it which regions to skip:

- matchit: I added `let b:match_skip = 's:comment\|string\|escape\|symbol'` to my vimrc
- vim-sexp: I had to create a [PR](https://github.com/guns/vim-sexp/pull/28)
- vlime: I had to create a [PR](https://github.com/vlime/vlime/pull/64)

No more `(char "(" 0)` hacks!

Advent of Code: [2015/01](https://adventofcode.com/2015/day/1)

We are given a string containing a bunch of `(` and `)` characters representing "move one floor up" and "move one floor down" instructions respectively:

- `(())` means go up two times, and then go down two times
- `(((` means go up 3 times
- `()(` means go up 1 time, then down 1 time, then again up 1 time

We are asked to find the floor we find ourselves at, after having followed all the instructions.

We start off by converting the input into a sequence of _floor movements_ (`(` becomes `1`, and `)` becomes `-1`):
(defun parse-instruction (ch)
  (ecase ch (#\( 1) (#\) -1)))

(defun parse-input (lines)
  (map 'list #'parse-instruction (first lines)))
With this, part 1 simply becomes a matter of summing all the read instructions together:
(defun part1 (instructions)
  (reduce #'+ instructions))
For part 2 instead, we are asked to find the first instruction (i.e. the instruction's _position_ in the list) that takes us to floor `-1`.  Not a big deal; instead of using REDUCE and +, we execute all the instructions inside a LOOP form so we can break out of it as soon as the current floor is `-1`:
(defun part2 (instructions &aux (floor 0))
  (loop for d in instructions and pos from 1
        do (incf floor d)
        when (= floor -1) return pos))
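The same early-exit logic, sketched in Python for comparison (the puzzle examples: `)` reaches the basement at position 1, and `()())` at position 5); illustrative only, not the Lisp solution:

```python
def first_basement(instructions):
    # 1-based position of the first instruction landing Santa on floor -1
    floor = 0
    for pos, ch in enumerate(instructions, start=1):
        floor += 1 if ch == "(" else -1
        if floor == -1:
            return pos

print(first_basement(")"))      # 1
print(first_basement("()())"))  # 5
```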
Final plumbing:
(define-solution (2015 1) (instructions parse-input)
  (values (part1 instructions) (part2 instructions)))

(define-test (2015 1) (138 1771))
And that's it!
> (time (test-run))
TEST-2015/01..
Success: 1 test, 2 checks.
Evaluation took:
  0.001 seconds of real time
  0.001664 seconds of total run time (0.000725 user, 0.000939 system)
  200.00% CPU
  3,820,178 processor cycles
  178,624 bytes consed

2020-12-29 (permalink)

TIL: the first non-NIL value produced by a LOOP `:thereis` clause is also going to be the return value of the LOOP form itself:
(loop for e in '(1 2 3 5 7)
      thereis (evenp e))
=>
T
What if you wanted to return the first _even_ element, and not just whether such an element exists?  You can use AND:
(loop for e in '(1 2 3 5 7)
      thereis (and (evenp e) e))
=>
2

2020-12-26 (permalink)

Is it possible to pass CHECK-TYPE a _parametric_ type? i.e. a type that depends on the run-time value of some other function argument?

An example: say I have a DIGITS-TO-NUMBER function accepting as arguments:

- the digits of the number -- i.e. list of integers, each less than...
- a given base

Is this _possible_? I think it should be, especially because we are talking about run-time checking, but maybe there is something else I am not properly taking into account.

Anyway, first I defined the predicate/type for this:
(defun list-of-digits-p (digits base)
  (and (every #'numberp digits)
       (every #'(lambda (d) (< d base)) digits)))

(deftype list-of-digits (base)
  (let ((predicate (gensym)))
    (setf (symbol-function predicate)
          #'(lambda (digits) (list-of-digits-p digits base)))
    `(and list (satisfies ,predicate))))
Then I tried to add this to DIGITS-TO-NUMBER, `(check-type digits (list-of-digits base))`, but if you have a look at what CHECK-TYPE expands to, it's clear my attempt would not work:
(DO ()
    ((TYPEP DIGITS '(LIST-OF-DIGITS BASE)))
  (SETF DIGITS
          (SB-KERNEL:CHECK-TYPE-ERROR 'DIGITS DIGITS '(LIST-OF-DIGITS BASE))))
I was able to achieve what I wanted with the following:
(unless (typep digits `(list-of-digits ,base)) (error "fuuuuu"))
But then I would need to re-implement the restart mechanism which is already in place when you use CHECK-TYPE, right?

Well, it turns out you cannot:
but, the type passed to check-type must be static
if you want a dynamic type, then use assert typep instead
also, satisfies must be a single symbol
This translates to:
(defun digits-to-number (digits &optional (base 10))
  "Return the non-negative integer in base `base` from the list of
  its digits (this function is the inverse of DIGITS). By default,
  the decimal base is assumed.

The order of the digits is such that the `k`th element of the list
refers to the coefficient of `base^k`. In other words, given the input list

    (c0 c1 c2 ... ck)

the returned integer will be constructed as follows:

    c0 + c1*base + c2*base^2 + ... + ck*base^k."
  (check-type base (integer 2))
  (dolist (d digits)
    (assert (typep d `(integer 0 ,(1- base)))))
  (loop :for d :in digits :and c = 1 :then (* c base)
        :sum (* d c)))
And yes, if you want the restart functionality, then you'll need to modify the code a bit.
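For reference, the same contract can be sketched in Python, with the run-time checks as plain assertions (least-significant digit first, as in the docstring above); this is my own illustration, not part of the original code:

```python
def digits_to_number(digits, base=10):
    # (c0 c1 ... ck) -> c0 + c1*base + c2*base^2 + ... + ck*base^k
    assert isinstance(base, int) and base >= 2
    assert all(isinstance(d, int) and 0 <= d < base for d in digits)
    number = 0
    for d in reversed(digits):  # most significant digit first
        number = number * base + d
    return number

print(digits_to_number([3, 2, 1]))          # 123
print(digits_to_number([1, 0, 1], base=2))  # 5
```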

2020-12-25 (permalink)

Advent of Code: [2020/25](https://adventofcode.com/2020/day/25)

Solution:

- Bruteforce the "loop-size" param for one of the 2 inputs
- _transform_ the other input with the found "loop-size"
(defparameter *magic* 20201227)

(defun transform (subject loop-size &aux (value 1))
  (dotimes (n loop-size value)
    (setf value (rem (* value subject) *magic*))))

(defun find-loop-size (target &aux (value 1))
  (loop for loop-size from 0
        when (= value target) return loop-size
        do (setf value (rem (* value 7) *magic*))))

(define-solution (2020 25) (integers parse-integers)
  (transform (first integers) (find-loop-size (second integers))))
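TRANSFORM is just modular exponentiation, so in Python the whole thing collapses onto the built-in three-argument `pow`.  A sketch against the puzzle's example key pair (loop sizes 8 and 11, encryption key 14897079); illustrative, not the Lisp solution:

```python
MAGIC = 20201227

def transform(subject, loop_size):
    # Equivalent to the Lisp repeated multiply-and-mod loop
    return pow(subject, loop_size, MAGIC)

def find_loop_size(target):
    value, loop_size = 1, 0
    while value != target:
        value = value * 7 % MAGIC
        loop_size += 1
    return loop_size

print(find_loop_size(5764801))                      # 8
print(transform(17807724, find_loop_size(5764801))) # 14897079
```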
And that's it, folks!
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
 25   01:04:57   3328      0   01:05:36   2677      0
 24   00:44:22   2518      0   00:53:15   1519      0
 23   02:41:43   4360      0   10:59:47   5948      0
 22   02:04:15   5221      0   04:08:58   4813      0
 21   04:55:08   5649      0   04:57:55   5338      0
 20   03:52:37   3838      0   05:16:07   1553      0
 19   08:52:01   8253      0   09:28:17   5746      0
 18   03:36:44   7136      0   04:16:03   6268      0
 17   01:46:03   4142      0   01:50:25   3668      0
 16   02:00:52   7462      0   04:35:01   7388      0
 15   01:47:30   7380      0   01:58:04   6301      0
 14   01:49:39   6897      0   07:32:55  11279      0
 13   01:33:50   8042      0   05:25:33   6909      0
 12   01:56:49   8063      0   02:02:13   6403      0
 11   02:34:06   9328      0   03:19:45   8054      0
 10   01:42:33  11666      0   02:18:21   6551      0
  9   01:36:51  11168      0   01:53:55  10491      0
  8   03:03:40  16452      0   03:20:49  13610      0
  7   03:28:16  12620      0   03:38:13  10016      0
  6   02:57:28  15332      0   03:02:12  13914      0
  5   01:35:54  10138      0   01:57:11  10141      0
  4   04:26:24  24047      0   06:03:48  21243      0
  3   01:38:15  11744      0   01:46:12  10883      0
  2   05:26:22  27699      0   05:42:43  26445      0
  1   05:13:06  20406      0   05:17:01  18468      0
PS. I started with AoC in 2018, to learn Common Lisp, and I have been using it since.
PPS. This is the first year that I managed to a) get all 50 stars by the 25th, and b) complete each problem within 24 hours.
PPPS. I did not know you had to rush and click on the link to get your second star, so basically my part 2 completion time tells you something about how fast/slow I am at reading things ;-)

2020-12-24 (permalink)

Advent of Code: [2020/24](https://adventofcode.com/2020/day/24)

Solution:

- [Pointy topped](https://www.redblobgames.com/grids/hexagons/) hex grids
- Parse instructions into a list of steps, i.e. COMPLEX numbers pointing to the next adjacent tile
- Part 1: iterate all the instructions, execute all the steps, and flip the state of the tile you find yourself in at the end
- Part 2: another game of life <3 -- my utility libraries (i.e. [:hset](https://topaz.github.io/paste/#XQAAAQBWAwAAAAAAAAAUGknC0wGe+rxWxGbsuM8XKlXe+zDDeRr4lNRd45fFnhvNSenyD67lPZMOacbfl1vMLbImXQNGkRGm0LFaP9G10YtnY4Zaf8bWX8Td4J0uic8xm16hYOSoVvk8BIkKChEqlP7kbfGlKN9jE/jRjtUMzeK1gyEVQNY+RUDoIYDJANFNu1SZhTUGuQnFKdMGTqLAvG13qGropmbb8XSlBVNxNYatfyPAWFxOvGvcPQ3csBNV9Lu0Ko6fp9yALeEc0SXrRItn9ra7r5RsK6PXuKHSlV9cX8P5+jc4z1RDqgFWSaH/tytsU+skUEWp/akDGwakswDIuMN3HNrjU9UP02bs3FFSMZPm9xkNui00uEY9X0Mv/03HzZiPbzu7dmgSy4su/3sSJpozm9pGsNndqDHw1+NWwNPlH1d76RN2/E3pUBUFKlfkvx5mADArsx1+w8YT48cODPJn1xhJ3oci4Etp5lws5JmT0SM4RKZeC37j1mJ5xfNk1pTCQh/Epa8PbQvCU8gzcIy0A4jgQc9Vz7lUMjyhVaGrxSD0ETVSJs4Ja9ZP7T+W6okQd/t4hTg=) and [:gameoflife](https://topaz.github.io/paste/#XQAAAQAbCQAAAAAAAAAUGQimgulVkMqJePdJ12sdAxC/GfcryUc62cNZO2Y9F49pcnga9KUiRymDklgNdDlqcv2dhjGOqX7N8ovjj8SaUTGvr5mN0/ulEICo5hK0220UqGzQVBykFN8pav/Pr4ITd4I3DKGyJRgy7/3T0ryWzgCloLL3GRvf/oikkYBmCALRghWKKt02J3vWavnPlqpnVW/0CYFBYT1CImg7lRpoFmbiwrr4lqIjkFF9HDELYMnxDSBE0fJlw3dgFW6rFaM4YMs0HVdk4wfjJRHUXLkZWm86TDvx+uDMnFWtXK7opquIKAqfRlRKnMzX1aTKqoNyJK/nZ4PRBS+eNkePip+rIxtmHT44/OMoMw+Ib0OR1tm/Wvc0JRb08vblxbpHcX0LLcGntGNYVBxdrKwjoDdRwe8bzTXy8x4o2tL8qyMSWHwsxyAsMwBJQCJSrGjWf0T4E9gkEPAyX8RiAJzcdL9UQDKNSyefq3naHa4/l+bFQlDabeLjVJYh/GDZpW/a5384/Vi/gIFqtFc/5Tlweqwvms+KrkC6ertm0ycdLUNug96wUPyZdA5LkQUb8sCujjboX6FNGfXgLclCf7vmKPj9osR4YueiK9a0DR1q2MvSF3JeFcv/5rLKlt3GY8dSpl9Rbe5fceaQ2pJ2ukOE1pFFqYzHnaSBr+Bf0PMpbba/VoViIzSHb+c3qM1/5PLCtjBPeCvazHVnOU4cI+N3nuzTRnoVItUWvIq4MWAVX6xtnpeRb/s+hq6VImuzAdgpyfUHBzlBq61pCK/qg5h9w+q7plvUXoB35EgDVikCU6wBFIlHz43SAZkaBHBxdNmpb7VKwuL0plBwewcZswzb9cyQnEkJo1Ec67BVaLc05c4mj/rGw+NbA5AL1glqawnAwQUmzKLxgb/sDbSfQcmEMH32YFW5ybZ8v9qkU9KL435TKdPfw8fMzt3teS1QBVFfpP246FC1C2g3clAOnwDNWKNeQ4BgoiXZlpK1S42WA7fM2yT+GC8//i0MgbL/tYtgI+Vl7ILQ9ALTARWez7Pjk9qZnPvA3pVeddhKzy++jwCXP4PP0nQjvoNrQnx5/eSMVasKElJQkV6G+7bfFGQDCh9NgiyD5/2/I+wuvlmCWmMDxCpBsiKhZhGq5rEgPxkAjJU2SxFtd2fLkrsSZJ8ZH9PGfhfTe6clCMmLmrotGQRXtb6LVaTVv3pnhuK+KxKUdzOT3AomFQ1wJyKAIbrMqbeUJrqNBTAKBnzLuEiBXW9HjLz7dJ/oN423vajgAWKyfmoRSIIbbLoozU8/bXZdhzEC3pO4qUAuJpCsQQcnnvb6zeq6ogzuppcq3VjSacH1VaU0czL6LFXVRBZQroPuRol2kHRnjoCpyw8ijSboqRkn117BvJhCOg4P//hNZK8=)) are finally paying off
(defparameter *steps*
  '(("nw"  #c(-1/2 3/4))
    ("ne" #c(1/2 3/4))
    ("e"  #c(1 0))
    ("se" #c(1/2 -3/4))
    ("sw" #c(-1/2 -3/4))
    ("w"  #c(-1 0))))

(defun parse-step (string)
  (second (assoc string *steps* :test #'string=)))

(defun parse-instruction (string)
  (remove nil (mapcar #'parse-step
                     (cl-ppcre:split "(ne|nw|se|sw|w|e)" string
                                     :with-registers-p t))))

(defun parse-instructions (data)
  (mapcar #'parse-instruction data))

(defun create-floor (instructions)
  (loop with flipped = (make-hset '())
        for steps in instructions for delta = (reduce #'+ steps)
        if (hset-contains-p delta flipped) do (hset-rem delta flipped)
        else do (hset-add delta flipped)
        finally (return flipped)))

(defparameter *neighbors-deltas*
  '(#c(-1/2 3/4) #c(1/2 3/4) #c(1 0) #c(1/2 -3/4) #c(-1/2 -3/4) #c(-1 0)))

(defun neighbors (pos)
  (loop for delta in *neighbors-deltas* collect (+ pos delta)))

(defun should-be-flipped-p (pos n state)
  (or (= n 2) (and (= n 1) (hset-contains-p pos state))))

(define-solution (2020 24) (instructions parse-instructions)
  (let ((floor (create-floor instructions)))
    (values (hset-size floor)
            (dotimes (n 100 (hset-size floor))
              (setf floor
                    (gol:next floor :neighbors #'neighbors
                              :should-be-alive-p #'should-be-flipped-p))))))
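A quick Python sanity check of the fractional step deltas, using the puzzle's own examples (`esew` lands on the tile south-east of the reference one, `nwwswee` comes back to the reference tile); illustrative only:

```python
import re

# Pointy-topped hex grid: each move is a complex delta, the same
# fractional offsets used in the Lisp *steps* table above.
STEPS = {"nw": -0.5 + 0.75j, "ne": 0.5 + 0.75j, "e": 1 + 0j,
         "se": 0.5 - 0.75j, "sw": -0.5 - 0.75j, "w": -1 + 0j}

def walk(line):
    # Two-letter directions first, so the "e" in "se" is not matched alone
    return sum(STEPS[step] for step in re.findall("ne|nw|se|sw|e|w", line))

print(walk("esew") == STEPS["se"])  # True: one tile south-east
print(walk("nwwswee") == 0)         # True: back to the reference tile
```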
PS. I woke up early today because I have errands to run (who doesn't, these days...), and figured I did not want to ruin today as well, especially after the ridiculous amount of time I spent solving yesterday's problem. Guess what? You wake up early, the problem is easy, and you end up with a new personal best:
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
 24   00:44:22   2518      0   00:53:15   1519      0

TIL: you cannot freely use `and`s inside LOOP `for` expressions...you need to be careful!
[SBCL] CL-USER> (loop for n upto 5 and square = (* n n) collect square)
(0 0 1 4 9 16)

[SBCL] CL-USER> (loop for n upto 5 for square = (* n n) collect square)
(0 1 4 9 16 25)

[SBCL] CL-USER> (loop for n in '(0 1 2 3 4 5) for square = (* n n) collect square)
(0 1 4 9 16 25)

[SBCL] CL-USER> (loop for n in '(0 1 2 3 4 5) and square = (* n n) collect square)
The value
  NIL
is not of type
  NUMBER
when binding SB-KERNEL::X
   [Condition of type TYPE-ERROR]

Restarts:
  R 0. ABORT - Exit debugger, returning to top level.

[SBCL] CL-USER> (loop for n upto 5 and m from 5 downto 0 collect (+ n m))
(5 5 5 5 5 5)

[SBCL] CL-USER> (loop for n in '(0 1 2 3 4 5) and m in '(5 4 3 2 1) collect (+ n m))
(5 5 5 5 5)
So it seems like you can use `and`s when chaining assignments of the _same type_ (i.e. sequence iterations with sequence iterations, number ranges with number ranges...).  From the Lisp Discord server:
AND does bindings and updates "in parallel", like LET/PSETF rather than sequentially like LET*/SETF, so you can't refer to the previous variable (you can in the update form, but it will refer to the previous value rather than the current one)

2020-12-23 (permalink)

Advent of Code: [2020/23](https://adventofcode.com/2020/day/23)

Solution:

- Create a _circular_ list -- let's thank Quickutil's [NCYCLE](http://quickutil.org/list?q=ncycle)
- Part 1 (play the _damn_ game)
- select the _next_ three cups
- find the destination cup -- it's a circular list, so MEMBER here will do just fine
- update the list -- I used PSETF to update multiple places...in _parallel_
- Part 2 (play the _damn_ game, again, with 1M cups, and this time stop after 10M moves)
- it's _clear_ our calls to MEMBER are the bottleneck we should try to get rid of
- how? create a HASH-TABLE mapping cup labels to their _places_ in the circular list
- fuss around with LIST pointers
(defun parse-cups (data)
  (reverse (digits (parse-integer (first data)))))

(defun play (cups moves &aux (cups-count (length cups))
                  (cups (ncycle cups))
                  (cells (make-hash-table)))
  (labels ((one-less (current)
             (if (> current 1) (1- current) cups-count))
           (destination (cups)
             (loop with (curr cup1 cup2 cup3) = cups
                   for target = (one-less curr) then (one-less target)
                   if (/= target cup1 cup2 cup3)
                   return (gethash target cells))))
    (loop repeat cups-count
          for ref = cups then (cdr ref) do
          (setf (gethash (car ref) cells) ref))
    (loop repeat moves
          for curr = cups for cup1 = (nthcdr 1 cups)
          for cup3 = (nthcdr 3 cups) for after-cup3 = (cdr cup3)
          for dest = (destination cups) for after-dest = (cdr dest) do
          (psetf (cdr curr) after-cup3
                 (cdr cup3) after-dest
                 (cdr dest) cup1
                 cups (cdr cup3))
          ;; propagate above changes into `cells`
          (psetf (gethash (car after-cup3) cells) (cdr curr)
                 (gethash (car after-dest) cells) (cdr cup3)
                 (gethash (car cup1) cells) (cdr dest))
          finally (return (gethash 1 cells)))))

(defun prepare-part2 (cups total-cups)
  (append cups (iota (- total-cups (length cups)) :start 10)))

(define-solution (2020 23) (data)
  (values (loop repeat 8 ;; skip cup with label 1
                for cup in (rest (play (parse-cups data) 100))
                collect cup into digits
                finally (return (parse-integer (format nil "~{~A~}" digits))))
          (let ((cups (play (prepare-part2 (parse-cups data) 1000000) 10000000)))
            (* (cadr cups)
               (caddr cups)))))
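The label-to-successor idea can also be sketched in Python with a plain array instead of a hash table of cons cells (an illustrative analogue, not the Lisp implementation); on the puzzle's `389125467` example it reproduces the expected `67384529` after 100 moves:

```python
def play(cups, moves):
    # succ[label] is the label of the cup clockwise of `label`:
    # an array-based analogue of the label -> cons-cell hash table.
    n = len(cups)
    succ = [0] * (n + 1)
    for a, b in zip(cups, cups[1:] + cups[:1]):
        succ[a] = b
    curr = cups[0]
    for _ in range(moves):
        c1 = succ[curr]
        c2 = succ[c1]
        c3 = succ[c2]
        dest = curr - 1 or n
        while dest in (c1, c2, c3):
            dest = dest - 1 or n
        succ[curr] = succ[c3]  # splice the three cups out...
        succ[c3] = succ[dest]  # ...and back in, right after dest
        succ[dest] = c1
        curr = succ[curr]
    return succ

succ = play([3, 8, 9, 1, 2, 5, 4, 6, 7], 100)
labels, cup = "", succ[1]
while cup != 1:
    labels, cup = labels + str(cup), succ[cup]
print(labels)  # 67384529
```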
How fast does this run?
[SBCL] AOC/2020/23> (time (solution-run))
Evaluation took:
  5.927 seconds of real time
  5.879336 seconds of total run time (5.767486 user, 0.111850 system)
  [ Run times consist of 0.058 seconds GC time, and 5.822 seconds non-GC time. ]
  99.19% CPU
  13,632,853,237 processor cycles
  130,224,736 bytes consed

25468379
474747880250
It's probably not the fastest thing in the world, but it gets the job done!

2020-12-22 (permalink)

Advent of Code: [2020/22](https://adventofcode.com/2020/day/22)

Solution:

- Nothing crazy, just do as told -- I thought about using a deque to implement a deck, but as it turns out a LIST will do just fine
- Part 1: pop, pop, compare, append, until one of the decks is empty
- Part 2: not as _simple_ as part 1, but close enough
- ALREADY-SEEN-P returns `t` if the current game (i.e. the two decks) has already been _played_ before
- MARK-AS-SEEN marks the current game as played
- PLAYER1-WINS-TURN-P is where most of the logic for part 2 lies (i.e. check if players have enough cards to recurse, then recurse or check the top of their decks)
- The rest is plumbing
(defun parse-decks (data)
  (let (groups current)
    (dolist (string (append data '("")))
      (if (string= string "")
        (setf groups (cons (reverse current) groups) current nil)
        (setf current (cons string current))))
    (list
      (mapcar #'parse-integer (rest (second groups)))
      (mapcar #'parse-integer (rest (first groups))))))

(defun play (decks)
  (loop with deck1 = (first decks) and deck2 = (second decks)
        while (and deck1 deck2)
        for c1 = (pop deck1) and c2 = (pop deck2)
        if (> c1 c2) do (setf deck1 (append deck1 (list c1 c2)))
        else do (setf deck2 (append deck2 (list c2 c1)))
        finally (return (values deck1 deck2))))

(defun play-recursive (decks)
  (labels ((already-seen-p (seen deck1 deck2)
             (gethash (list deck1 deck2) seen))
           (mark-as-seen (seen deck1 deck2)
             (setf (gethash (list deck1 deck2) seen) t))
           (player1-wins-turn-p (c1 deck1 c2 deck2)
             (if (or (< (length deck1) c1) (< (length deck2) c2))
               (> c1 c2)
               (recur (subseq deck1 0 c1) (subseq deck2 0 c2))))
           (recur (deck1 deck2)
             (loop with seen = (make-hash-table :test 'equal)
                   while (and deck1 deck2)
                   if (already-seen-p seen deck1 deck2) return deck1 else do
                   (mark-as-seen seen deck1 deck2)
                   (let ((c1 (pop deck1)) (c2 (pop deck2)))
                     (if (player1-wins-turn-p c1 deck1 c2 deck2)
                       (setf deck1 (append deck1 (list c1 c2)))
                       (setf deck2 (append deck2 (list c2 c1)))))
                   finally (return (values deck1 deck2)))))
    (recur (first decks) (second decks))))

(defun score (deck)
  (reduce #'+ (mapcar #'* (reverse deck) (iota (length deck) :start 1))))

(define-solution (2020 22) (decks parse-decks)
  (flet ((winner-score (deck1 deck2)
           (score (or (pr deck1) deck2))))
    (values (multiple-value-call #'winner-score (play decks))
            (multiple-value-call #'winner-score (play-recursive decks)))))
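As a sanity check on SCORE (bottom card worth 1x its value, the next 2x, and so on), the puzzle's example winning deck scores 306.  A quick Python equivalent, illustrative only:

```python
def score(deck):
    # deck is top card first; the bottom card is worth 1x its value,
    # the next one up 2x, ..., the top card len(deck)x
    return sum(pos * card for pos, card in enumerate(reversed(deck), start=1))

print(score([3, 2, 10, 6, 8, 5, 9, 4, 7, 1]))  # 306
```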
PS. I finally found a use case for [MULTIPLE-VALUE-CALL](http://clhs.lisp.se/Body/s_multip.htm)
PPS. I know this seemed easy, but it took me forever to solve part 2:

- I forgot to pass `:test 'equal` to my call to MAKE-HASH-TABLE -- making the "already played game" logic useless...
- Having fixed that, I still got the "already played game" logic wrong; I knew it was clearly stated in the text description, and I remember paying attention to it, but still, I ended up caching across all the played games!
- Lastly, when preparing the decks for the nested game, I had a typo that caused my algorithm to use all the remaining cards, and not just as many as the value of the card we had just read (i.e. I had `(subseq (rest deck1) 0 n1)` instead of `(subseq (rest deck1) 0 c1)`, where `c1` is the last read card, and `n1` is the size of the remaining deck); because of that, I started blaming the data structure I had used, and set off to re-implement everything with a deque (and halfway through the new implementation, I realized the silly mistake I had made...)

2020-12-21 (permalink)

Advent of Code: [2020/21](https://adventofcode.com/2020/day/21)

Solution:

- Parse food into a list having the list of ingredients as first element, and the list of allergens as second -- i.e. `'((mxmxvkd kfcds sqjhc nhms) (contains dairy, fish))`
- Then find the `ingredient -> allergen` mapping, using backtracking
- Start with all the foods, and an empty mapping
- For each remaining food, for each of its ingredients `i`, for each of its allergens `a`, see if `i -> a` is a valid mapping; if it is, update the foods by removing all occurrences of `i` and `a` and carry on; otherwise, backtrack
- A new mapping is valid if, for each remaining food, either `a` is not one of its allergens, or if it is, then `i` **has** to be in the list of ingredients
- The stop condition is when there are no more allergens to try
- Part 1: I had my FIND-MAPPING function return the list of foods without a clear mapping, so I went on and counted the remaining ingredients
- Part 2: do as told -- sort the mapping by the allergen name, and concatenate the ingredients
- PS. This naive solution could take forever to complete if you are not careful about selecting which of the remaining foods to try first; I thought it would make sense to first try the foods with the smallest number of ingredients/allergens (see SORT-FOODS), and apparently that did the trick
(defun parse-ingredients (string)
  (cl-ppcre:split " " string))

(defun parse-allergens (string)
  (cl-ppcre:split ", " string))

(defun parse-food (string)
  (cl-ppcre:register-groups-bind ((#'parse-ingredients ingredients)
                                  (#'parse-allergens allergens))
      ("(.*) \\(contains (.*)\\)" string)
    (list ingredients allergens)))

(defun ingredients (food) (first food))
(defun allergens (food) (second food))

(defun parse-foods (data)
  (mapcar #'parse-food data))

(defun find-mapping (foods)
  (labels ((sort-foods (foods)
             (sort foods #'<
                   :key (lambda (food)
                          (* (length (ingredients food))
                             (length (allergens food))))))
           (foods-valid-p (foods) foods)
           (update-foods (foods i a)
             (loop for (ii aa) in foods
                   collect (list (remove i ii :test #'string=)
                                 (remove a aa :test #'string=))
                   when (and (member a aa :test #'string=)
                             (not (member i ii :test #'string=))) return nil))
           (recur (foods mapping &aux (foods (sort-foods foods)))
             (cond ((not (foods-valid-p foods)) nil)
                   ((every #'null (mapcar #'allergens foods))
                    (return-from find-mapping (values mapping foods)))
                   (t
                     (loop for (ii aa) in foods do
                           (loop for i in ii do
                                 (loop for a in aa do
                                       (recur (update-foods foods i a)
                                              (cons (cons i a) mapping)))))))))
    (recur foods nil)))

(define-solution (2020 21) (foods parse-foods)
  (multiple-value-bind (mapping remaining) (find-mapping foods)
    (values (reduce #'+ (mapcar #'ingredients remaining) :key #'length)
            (format nil "~{~a~^,~}"
                    (mapcar #'first (sort mapping #'string< :key #'cdr))))))

Time to mop up [2020/16](https://matteolandi.net/plan.html#day-2020-12-16)'s solution.

Let's start with part 1:

- For each "nearby" ticket
- For each of its values
- See if it falls within the ranges of any of the defined rules -- and if not, sum these values
(defun value-valid-p (rules value)
  (loop for (name (from1 . to1) (from2 . to2)) in rules
        thereis (or (<= from1 value to1)
                    (<= from2 value to2))))

(defun ticket-errors (rules ticket)
  (loop for value across ticket unless (value-valid-p rules value) sum value))

(defun ticket-scanning-error-rate (doc)
  (reduce #'+ (nearby-tickets doc)
          :key (lambda (ticket) (ticket-errors (rules doc) ticket))))
It's that simple: no arrays, no trying to be clever, KISS!

Now let's move on to part 2:

- First we filter out any _invalid_ ticket
- Then we find the mapping; how? Backtracking
- Start with a _table_ mapping each rule to the ticket positions for which **all** the "nearby" tickets have valid values for that rule, and an empty field mapping
- For each remaining rule `n`, try to use that at one of the _compatible_ ticket positions `p` -- remove `n` from the list of remaining rules, and remove `p` from all the remaining rules _compatible_ positions
- If the new set of rules is invalid -- i.e. there is at least one rule without _compatible_ ticket positions -- it means we reached a dead end, and we need to backtrack
- If we run out of rules to try, it means we found our mapping
- Otherwise, repeat
- PS. To speed things up, of all the remaining rules we will try and evaluate the ones with the least number of _compatible_ ticket positions first
(defun ticket-valid-p (rules ticket)
  (loop for value across ticket always (value-valid-p rules value)))

(defun rules-to-ticket-positions (rules tickets &aux result)
  (loop for pos below (length rules) do
        (loop for (name (from1 . to1) (from2 . to2)) in rules
              for valid = (loop for ticket in tickets
                                for value = (aref ticket pos)
                                always (or (<= from1 value to1)
                                           (<= from2 value to2)))
              for existing = (assoc name result :test #'string=)
              when valid do
              (if existing
                (push pos (second existing))
                (push (list name (list pos)) result)))
        finally (return result)))

(defun find-mapping (rules tickets)
  (labels ((sort-rules (rules)
             (sort rules #'< :key (lambda (x) (length (second x)))))
           (rules-valid-p (rules)
             (loop for (name positions) in rules
                   always (> (length positions) 0)))
           (update-rules (rules n p)
             (loop for (name positions) in rules
                   when (string/= name n)
                   collect (list name (remove p positions))))
           (recur (rules mapping &aux (rules (sort-rules rules)))
             (cond ((not (rules-valid-p rules)) nil)
                   ((null rules) (return-from find-mapping mapping))
                   (t
                     (loop for (n pp) in rules do
                           (loop for p in pp do
                                 (recur (update-rules rules n p)
                                        (cons (cons n p) mapping))))))))
    (recur (rules-to-ticket-positions rules tickets) nil)))
That's it!
[SBCL] AOC/2020/16> (time (solution-run))
Evaluation took:
  0.005 seconds of real time
  0.004759 seconds of total run time (0.003597 user, 0.001162 system)
  100.00% CPU
  13,419,408 processor cycles
  425,056 bytes consed

27802
279139880759

2020-12-20 (permalink)

Advent of Code: [2020/20](https://adventofcode.com/2020/day/20)

Solution:

- Parse input tiles into LISTs having the id as first element, followed by the content of the tile (first row of data, second row of data...) -- `(list 1234 "...#....." ".#.#..#.." ...)`
- Define functions to rotate a tile counterclockwise, and to flip it -- these are enough to generate all the possible transformations...you just have to apply them in sequence: rotate, rotate-flip, rotate-flip-rotate, rotate-flip-rotate-flip...
- Part 1
- Backtracking solution
- place a tile in the top-left corner; what remaining tiles (and their transformations) can be placed to its right?
- try one; what remaining tiles (and their transformations) can be placed to its right?
- carry on until you can, back track when reaching a dead end (i.e. no more compatible tiles)
- Caveat: when placing a new tile you have to check not just the tile on its left, but also the one at its top
- Optimization: before running the algorithm I indexed all the input tiles (and their transformations) by their top border, left border, and top and left borders combined, hoping that would speed things up -- and I guess it did
- Part 2
- Re-assemble the image -- I got this part wrong initially, but only realized it at the end...
- Brute force
- For all the possible transformations of the sea monster pattern -- I accidentally added an extra character in one of its lines, and because of that my pattern matching algorithm was not matching anything at all...
- Generate all the crops of the image having the same size as the sea monster _pattern_ we are trying to match...
- Try to _match_ the pattern against these crops -- yes, I did indeed use regexps for this...I replaced all the spaces in the pattern with `.`s
- Lastly, count all the `#`s in the reassembled image, and subtract `15` times the number of matching sea monsters -- correct, I assumed that there would be no overlapping matches, and I guess I was right
(defun parse-tile-id (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer id))
      ("Tile (\\d+):" string)
    id))

(defun parse-tile (data)
  (cons (parse-tile-id (first data))
        (rest data)))

(defun id (tile) (car tile))
(defun data (tile) (cdr tile))
(defun data-top (tile) (first (data tile)))
(defun data-bottom (tile) (first (last (data tile))))
(defun data-left (tile) (data-bottom (rotate tile)))
(defun data-right (tile) (data-top (rotate tile)))

(defun parse-tiles (data)
  (let (groups current)
    (dolist (string (append data '("")))
      (if (string= string "")
        (setf groups (cons (reverse current) groups) current nil)
        (setf current (cons string current))))
    (mapcar #'parse-tile groups)))

(defun rotate (tile)
  (cons
    (id tile)
    (reverse
      (apply #'map 'list (lambda (&rest args) (format nil "~{~a~}" args))
             (data tile)))))

(defun flip (tile &aux flipped)
  (dolist (x (data tile) (cons (id tile) flipped))
    (push x flipped)))

(defun transformations (tile)
  (loop repeat 4
        for x = tile then (rotate x)
        collect x collect (flip x)))

(defun index-key (&key top left) (format nil "~a~a" top left))

(defun create-tiles-index (tiles &aux (index (make-hash-table :test 'equal)))
  (flet ((index-set (k v)
           (multiple-value-bind (existing existsp)
               (gethash k index)
             (if existsp
               (setf (gethash k index) (cons v existing))
               (setf (gethash k index) (list v))))))
    (dolist (tile tiles index)
      (dolist (ttile (transformations tile))
        (let ((top (data-top ttile))
              (left (data-left ttile)))
          (index-set (index-key) ttile)
          (index-set (index-key :top top) ttile)
          (index-set (index-key :left left) ttile)
          (index-set (index-key :top top :left left) ttile))))))

(defun reorder-tiles (tiles &aux (size (isqrt (length tiles))))
  (let ((index (create-tiles-index tiles))
        (image (make-array `(,size ,size) :initial-element nil)))
    (labels ((recur (used row col)
               (cond
                 ((not (array-in-bounds-p image row col))
                  (return-from reorder-tiles image))
                 (t
                   (let ((top (and (> row 0) (data-bottom (aref image (1- row) col))))
                         (left (and (> col 0) (data-right (aref image row (1- col))))))
                     (loop with key = (index-key :top top :left left)
                           for tile in (gethash key index)
                           unless (member (id tile) used) do
                           (setf (aref image row col) tile)
                           (if (array-in-bounds-p image row (1+ col))
                             (recur (cons (id tile) used) row (1+ col))
                             (recur (cons (id tile) used) (1+ row) 0))
                           (setf (aref image row col) nil)))))))
      (recur nil 0 0))))

(defparameter *sea-monster* (list
                              "..................#."
                              "#....##....##....###"
                              ".#..#..#..#..#..#..."))

(defun create-image (tiles)
  (let ((image (make-array '(96 96))))
    (loop for tile-row below 12 do
          (loop for tile-col below 12
                for tile = (aref tiles tile-row tile-col) do
                (loop for row from 0 for string in (subseq (data tile) 1 9)
                      for image-row = (+ (* tile-row 8) row) do
                      (loop for col from 0 for ch across (subseq string 1 9)
                            for image-col = (+ (* tile-col 8) col) do
                            (setf (aref image image-row image-col) ch)))))
    image))

(defun crop (image top left height width)
  (loop for row from top below (+ top height) collect
        (loop for col from left below (+ left width)
              collect (aref image row col) into characters
              finally (return (coerce characters 'string)))))

(defun tiles-match-p (tile1 tile2)
  (every (lambda (x1 x2) (cl-ppcre:scan x1 x2)) tile1 tile2))

(defun count-matches (tile image)
  (let ((height (length (data tile)))
        (width (length (first (data tile)))))
    (loop with (image-height image-width) = (array-dimensions image)
          for row from 0 below (- image-height height) sum
          (loop for col from 0 below (- image-width width)
                count (tiles-match-p (data tile)
                                     (crop image row col height width))))))

(define-solution (2020 20) (tiles parse-tiles)
  (let* ((tiles (reorder-tiles tiles))
         (image (create-image tiles)))
    (values
      (* (id (aref tiles 0 0))
         (id (aref tiles 0 11))
         (id (aref tiles 11 0))
         (id (aref tiles 11 11)))
      (- (loop for index below (* 96 96)
               count (char= (row-major-aref image index) #\#))
         (* 15
            (loop for tile in (transformations (cons 666 *sea-monster*))
                  for matches = (count-matches tile image)
                  when (> matches 0) return matches))))))
Not the _cleanest_ implementation (lots of hard-coded stuff!), but at that point I was just rushing for the second star -- and believe it or not, today I scored my best placement ever:
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
 20   03:52:37   3838      0   05:16:07   1553      0

2020-12-19 (permalink)

Advent of Code: [2020/19](https://adventofcode.com/2020/day/19)

Solution:

- Use input rules to generate a _massive_ regexp -- it took me a couple of hours to realize I could indeed solve this using regexps!
- Match the regexp against the input messages, and count the ones which test positive
- The regexp is created _recursively_
- The regexp for a given rule can be calculated by calculating the regexps of its sub-rules first, and _combining_ them (i.e. `a|b` where `a` is the regexp for the first sub-rule, and `b` is the regexp for the second one)
- The regexp for a given sub-rule can be calculated by _concatenating_ the regexps of its rules (i.e. `ab` where `a` is the regexp for the sub-rule's first rule, and `b` the regexp for its second)
- To make the matching operation more efficient we are compiling the regexp into a scanner first (i.e. `(cl-ppcre:create-scanner regexp)`)
- Part 2 makes two of the input rules recursive, and to keep my recursive solution from spinning forever, I added a `depth` parameter to my MAKE-REGEXP function and incremented it on every recursive call, giving up past a threshold -- a recursion limit of `20` seems to be enough to generate the right regexp for part 2
(defun parse-sub-rule (string)
  (if (find #\" string)
    (list (string (char string 1)))
    (mapcar #'parse-integer (cl-ppcre:split " " string))))

(defun parse-sub-rules (string)
  (let ((sub-rules (cl-ppcre:split " \\| " string)))
    (mapcar #'parse-sub-rule sub-rules)))

(defun parse-rule (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer id) rest)
      ("(\\d+): (.*)" string)
    (list id (parse-sub-rules rest))))

(defun id (rule) (car rule))
(defun sub-rules (rule) (cadr rule))

(defun parse-input (data)
  (let (groups current)
    (dolist (string (append data '("")))
      (if (string= string "")
        (setf groups (cons (reverse current) groups) current nil)
        (setf current (cons string current))))
    (list
      (mapcar #'parse-rule (second groups))
      (first groups))))

(defun rules (input) (car input))
(defun messages (input) (cadr input))

(defparameter *max-recursion* 20)

(defun make-regexp (rules)
  (let ((sub-rules-map (make-hash-table)))
    (dolist (rule rules)
      (setf (gethash (id rule) sub-rules-map) (sub-rules rule)))
    (labels ((recur (id depth &aux (sub-rules (gethash id sub-rules-map)))
               (cond
                 ((= depth *max-recursion*) "") ; too much recursion, give up!
                 ((not (numberp id)) id)        ; termination rule, return it.
                 (t
                   (format nil "(~{~A~^|~})"
                           (loop for sub-rule in sub-rules collect
                                 (format nil "~{~A~}"
                                         (loop for id in sub-rule
                                               collect (recur id (1+ depth))))))))))
      (format nil "^~A$" (recur 0 0)))))

(defun count-matches (data &aux (input (parse-input data)))
  (let* ((regexp (make-regexp (rules input)))
         (scanner (cl-ppcre:create-scanner regexp)))
    (count-if (lambda (s) (cl-ppcre:all-matches scanner s)) (messages input))))

(defun prepare-part2 (data)
  (loop for string in data
        if (string= string "8: 42") collect "8: 42 | 42 8"
        else if (string= string "11: 42 31") collect "11: 42 31 | 42 11 31"
        else collect string))

(define-solution (2020 19) (data)
  (values (count-matches data)
          (count-matches (prepare-part2 data))))
PS. My [original solution](https://topaz.github.io/paste/#XQAAAQBcEAAAAAAAAAAUGQimgulVPYrJhMVjyYOkoNMRrSkEDb+VhGpjZScfl0vQffPIveu6zou42QlYi3oXP009ZBKb+73pdKhOT5C+sOCPTrLLxmKoZfx/upwjyRQJVK5QxpWmzVUWY5qy/01i9v9CB49xj+0sI8P96rG17oQR/UE1GaYDQq7CW3x8lQViBxKC0ikxVHqmfbRp7RdQHHBFH3UbsI6Su5ohIHBG4UA9H14+wCjb/qNcyqOQLNlEwX0V8k6JaO24dy2bp/46mg2DlY8nz/WVAcMUGXRZWL6bkK49UKEsediwu46KxfywY5aP/X56cNVSNUEI/CM1TZxuAzw8hZXa237l7P+KP2+tlsHisWpC4LkeKzSIFVSPoSZqALrAHjysHtbirktMiHX3nH2w56zcA0mtHTLWTeU9PyRxKp+UndXzjmlklCRJT5ytgdFJAm/nDgLwe5JdBn1wwMekZEf9yuOGA+I3S7V2e1Pmwhm9LuWjBFAcCabpRLrIgcM16lv1bix8dRVC9l/pnWcME0jJCL6WWAZYoSSDb9TeeduuLH9a14RrHR5x5fK7LIhLVUy7w87lts1yzFkvgwV96z0CndZEluIaqLqiSqtZwcLoqPN3NsAAeSTIFN7hZ2HKAUgUWvtkGV4DXJmTD9MsBGbdM9oPkO7hOE4kD+vL3HFLEIOr5wQPUzLPc3fLZkPJr66RnQN7CuJWddqQaEDf0vdCEMPLA8fKNlUmxl6nJemRYVGFO2YaAWHAMOPqZQ+CKe104YUyB0rTd8MVmBcPsN/jX3P3LfiwSUcnWOOCSV1X5RTMFMnFs+nrD91DTn+inrq3DEuGL3S8nm73l/kN68SA3H4YrhNSz07MUJLrmRzlmvZXoUplaQKiN9DyPMJMXsLpOsSB+iBXOo1/hBmga+cGExX7k4P69Dbg+3J+JINzvwaLRclkrpXkWomgthb5G7/kBd44T+LuquziC7e0sBfFMhJnUyGjmBlsf/G/LCNS+ZjTWf1XIj8XiqhLJmmE/T56I34xPQgnAQlCm3N1+uid2+RQ9Ml/m0Vc0JqFkBM3Cr+7iVEuQIi+/LWDiKzDX1F+XEkp+LpRDzUOjWcFBc2Cmhbg2IkKqd3eytEiQ0QPJZHYtFe5kACz2v1seeU84fGpW+BM11QBn2pLy0tEGoSmnRzuE8Nnk209tXGs2DUhkP13iP07nV1Swdkk9cAR54SbbV0pC+ddPEvznwnigCpV9GRX4Z2N0Sxk7GC6tFRmVYZ4VRtAMEqyL1w9Qs1Y8l7QdJ31F/8jrpo4t6Ad+djOlGRa/ykunLWlu5aoM3wKMRrmbZP/qnzRxaZyDyeOgxlwaU2wActcyKlXvYV7Hjs9I//wCx9Wk8FZvEfcc4lvti3NlF3m3L53CdJCLBpOLc1hZM8Nwn0HI5mFLx5qK6IiXb1LvuK96u7/roJdoqT8HVT2YCCY0vejpFmX8IKj7POD+hYQmj+csMe5Lpu4sjfWwNW/9gB6CGHxWT3lC7D1jUUHgbH3c/FvalIzhj3FdTjAWF3//DZ69g==) implemented the final regexp using a bottom-up approach: start from terminating rules, generate regexps for their parents, then generate regexps for their parents... 
For part 2 instead, I modified the regexp creator so that, for rule 8 it would output: `(regexp42)+`, while for rule 11 it would output: `((regexp42){1}(regexp31){1}|(regexp42){2}(regexp31){2}|...|(regexp42){10}(regexp31){10})`.
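That bounded expansion for rule 11 can be sketched like so -- a hypothetical helper (not the code from my solution), assuming `regexp42` and `regexp31` hold the already-generated regexps for rules 42 and 31:

```lisp
;; Hypothetical sketch of the bounded expansion for the recursive rule 11:
;; emit (regexp42){n}(regexp31){n} for n from 1 up to some safe maximum.
(defun rule-11-regexp (regexp42 regexp31 &optional (max-repeat 10))
  (format nil "(~{~a~^|~})"
          (loop for n from 1 upto max-repeat
                collect (format nil "(~a){~d}(~a){~d}"
                                regexp42 n regexp31 n))))
```

E.g. `(rule-11-regexp "A" "B" 2)` yields `((A){1}(B){1}|(A){2}(B){2})`.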
PPS. Clearly I like the recursive solution better!
PPPS. It would be interesting to solve this implementing a [Context-free grammar](https://en.wikipedia.org/wiki/Context-free_grammar) parser.

2020-12-18 (permalink)

Advent of Code: [2020/18](https://adventofcode.com/2020/day/18)

Solution:

- Parse each expression into an [S-expression](https://en.wikipedia.org/wiki/S-expression)
- Feed each to EVAL
- Sum the results

I implemented the parsing logic by combining together three different parsers:

- PARSE-OPERATOR, responsible for converting the next available character into a symbol (i.e. `+` or `*`)
- PARSE-OPERAND, responsible for parsing a single digit number, or an _expression_
- PARSE-EXPRESSION, responsible for ... the rest

Clearly the bulk of the work is getting done inside PARSE-EXPRESSION:

- First we parse an operand
- Then an operator
- If the operator needs to be evaluated immediately, then parse another operand and create an S-expression (i.e. `'(rator rand1 rand2)`)
- Otherwise, the rest of the expression needs to be evaluated first, so call PARSE-EXPRESSION recursively, and finally create the S-expression (i.e. `'(rator rand1 expr1)`)
- Repeat
(defparameter *operators-with-precedence* nil)

(defun parse-value (ch) (parse-integer (string ch)))

(defun parse-operand (string &aux (ch (char string 0)))
  (cond ((digit-char-p ch) (list (parse-value ch) (subseq string 1)))
        ((char= ch #.(char "(" 0))
         (destructuring-bind (e rest) (parse-expression (subseq string 1))
           (list e (subseq rest 1)))) ; skip closing parenthesis
        (t (error "Unexpected text: ~s" string))))

(defun parse-operator (string &aux (ch (char string 0)))
  (list (find-symbol (string ch)) (subseq string 1)))

(defun parse-expression (string)
  (loop with (expr rest) = (parse-operand string)
        until (or (zerop (length rest)) (char= (char rest 0) #.(char ")" 0)))
        for (rator rator-rest) = (parse-operator rest) do
        (if (find rator *operators-with-precedence*)
          (destructuring-bind (rand rand-rest)
              (parse-operand rator-rest)
            (setf expr (list rator expr rand) rest rand-rest))
          (destructuring-bind (expr1 expr1-rest)
              (parse-expression rator-rest)
            (setf expr (list rator expr expr1) rest expr1-rest)))
        finally (return (list expr rest))))

(defun parse (string)
  (first (parse-expression (remove #\Space string))))

(define-solution (2020 18) (data)
  (values
    (let ((*operators-with-precedence* '(+ *)))
      (reduce #'+ (mapcar #'parse data) :key 'eval))
    (let ((*operators-with-precedence* '(+)))
      (reduce #'+ (mapcar #'parse data) :key #'eval))))

(define-test (2020 18) (50956598240016 535809575344339))
PS. Apparently this is the exact same problem the [Shunting-yard algorithm](https://en.wikipedia.org/wiki/Shunting-yard_algorithm) was designed to solve.
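For reference, here is a rough sketch of that algorithm applied to part 1's grammar (single-digit operands, `+` and `*` with equal precedence, parentheses) -- not the parser used above, just an illustration of the shunting-yard approach:

```lisp
;; Operands go on OUTPUT, operators on RATORS; because every operator has
;; the same precedence, we reduce pending operators before pushing a new
;; one, which gives strict left-to-right evaluation.
(defun shunting-yard-eval (string)
  (let (output rators)
    (flet ((reduce-top ()
             (let ((rator (pop rators))
                   (rand2 (pop output))
                   (rand1 (pop output)))
               (push (funcall rator rand1 rand2) output))))
      (loop for ch across (remove #\Space string) do
            (cond ((digit-char-p ch) (push (digit-char-p ch) output))
                  ((char= ch #\() (push :open rators))
                  ((char= ch #\))
                   (loop until (eq (first rators) :open) do (reduce-top))
                   (pop rators))
                  (t ; + or *: same precedence, so reduce before pushing
                   (loop while (and rators (not (eq (first rators) :open)))
                         do (reduce-top))
                   (push (if (char= ch #\+) #'+ #'*) rators))))
      (loop while rators do (reduce-top))
      (first output))))
```

With this, `(shunting-yard-eval "1 + 2 * 3")` evaluates left to right and returns `9`, matching part 1's semantics.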

I am finally finding myself with some time to clean up some of my recent Advent of Code solutions, so let's have a look at [2020/09](https://matteolandi.net/plan.html#day-2020-12-09) again.

For part 2, given a list of numbers, we were asked to find a sub-sequence such that the sum of its elements would be equal to the solution for part 1.  This is what I came up with back then.

In words:
- Part 2
- BFS + prefix sums -- I encoded my state as `(psum . seq)` (e.g. `(15 2 3 10)`)
- For each number in the input, try to add it to each of the current states
- If the sum is greater than the target value, discard it
- if the sum is smaller than the target value, add it to the next set of states to evaluate
- If the sum is equal to the target value, there you have your subsequence
In code:
(defun subseq-that-adds-up-to (numbers target &optional frontier)
  (loop with n = (first numbers)
        for (psum . seq) in (cons (cons 0 nil) frontier)
        for psum-next = (+ n psum) for seq-next = (cons n seq)
        if (= psum-next target) return seq-next
        else collect (cons psum-next seq-next) into frontier-next
        finally (return (subseq-that-adds-up-to (rest numbers)
                                                target
                                                frontier-next))))
Well, it turns out a way simpler solution exists, and it goes like this: given a subsequence (defined in terms of its `start` index, `end` index, and the sum of its elements):

- When `sum` is less than `target`, move the `end` index ahead (and update `sum`)
- When `sum` is greater than `target`, move the `start` index ahead (and update `sum`)
- When `sum` is equal to `target`, then we found our subsequence!

Take care of initialization details, and you should end up with something like this:
(defun subseq-that-adds-up-to (numbers target
                                       &aux (numbers (coerce numbers 'vector)))
  (loop
    with start = 0 with end = 0 with sum = (aref numbers start)
    if (= sum target) return (coerce (subseq numbers start (1+ end)) 'list)
    else if (> sum target) do (setf sum (- sum (aref numbers start))
                                    start (1+ start))
    else do (setf end (1+ end) sum (+ sum (aref numbers end)))))

2020-12-17 (permalink)

Advent of Code: [2020/17](https://adventofcode.com/2020/day/17)

Solution:

- Classic [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) (even the rules were the same); the only difference is that the game evolves in 3 dimensions for part 1 -- cells are cubes! -- and in 4 dimensions for part 2!
- I modeled the game as a set of alive cells, and at each iteration I...
- Get the full list of cells adjacent to the active ones, without discarding duplicates...
- For each unique cell, count the number of its occurrences (the idea is, if `(x y z)` is mentioned twice, it means two of its adjacent cells are active)
- Use this information (i.e. cell position, the number of its active neighbors) and the _previous_ cell state, to mark the cell as dead or alive
- PS: Kudos to [Christophe Grand](http://clj-me.cgrand.net/2011/08/19/conways-game-of-life/) for coming up with this nice and concise way of implementing Conway's Game of Life (in Clojure)
- PPS: I was afraid of using LISTs as SETs, so I went on and created a new type, HSET
(defun parse-coords (data)
  (let* ((width (length (first data)))
         (height (length data)))
    (loop for y upto height for row in data nconc
          (loop for x upto width for c across row
                when (eql c #\#) collect (list x y 0)))))

(defun make-hset (values &key (test 'eql))
  (let ((h (make-hash-table :test test)))
    (dolist (v values)
      (setf (gethash v h) t))
    h))

(defun hset-contains-p (v hset)
  (multiple-value-bind (unused exists-p)
      (gethash v hset)
    (declare (ignore unused))
    exists-p))

(defun hset-values (hset)
  (loop for v being the hash-keys of hset collect v))

(defun hset-size (hset) (hash-table-count hset))

(defparameter *neighbors-deltas* nil)

(defun neighbors-deltas (dimensions)
  (labels ((recur (n)
             (cond ((= n 0) (list nil))
                   (t (loop for d from -1 upto 1 nconc
                            (loop for rest in (recur (1- n))
                                  collect (cons d rest)))))))
    (remove-if (lambda (x) (apply #'= 0 x)) (recur dimensions))))

(defun neighbors (pos)
  (loop for delta in *neighbors-deltas* collect (mapcar #'+ pos delta)))

(defun frequencies (list &key (test 'eql))
  (let ((f (make-hash-table :test test)))
    (dolist (each list)
      (incf (gethash each f 0)))
    (loop for k being the hash-keys of f collect (cons k (gethash k f)))))

(defun next (state)
  (make-hset
    (loop
      for (pos . n) in (frequencies (mapcan #'neighbors (hset-values state)) :test 'equal)
      when (or (= n 3) (and (= n 2) (hset-contains-p pos state)))
      collect pos)
    :test 'equal))

(defun play (coords &aux (dimensions (length (first coords))))
  (let ((*neighbors-deltas* (neighbors-deltas dimensions))
        (game (make-hset coords :test 'equal)))
    (dotimes (n 6 (hset-size game))
      (setf game (next game)))))

(define-solution (2020 17) (coords parse-coords)
  (values
    (play coords)
    (play (mapcar #'(lambda (x) (append x (list 0))) coords))))

2020-12-16 (permalink)

Advent of Code: [2020/16](https://adventofcode.com/2020/day/16)

Solution:

- Parse rules into LISTs having the name of the rule as first element, and two _ranges_ as second and third element (a range is just a cons of two numbers: the min and max values of the range)
- Parse tickets into ARRAY of numbers
- Part 1
- Create an ARRAY of 1000 elements (all my tickets numbers were smaller than 1000)
- Iterate all the rule ranges and mark each possible value of those ranges as valid
- Iterate all the ticket numbers, and sum the ones which are not valid (i.e. the numbers which are not in the array above)
- Part 2
- For each _position_ inside a ticket, pre-calculate the list of _compatible_ fields (e.g. the first position is compatible with the "class" rule, the second is compatible with "row" and "seat" rules)
- Find the mapping using backtracking...
- What fields are compatible with the first position in the ticket? Pick one...
- What fields are compatible with the second position in the ticket, given the mapping we fixed for the first position? Pick one...
- If you reach a dead end, back-track...
- If you manage to place them all, you found your mapping
(defun parse-ticket-rule (string)
  (cl-ppcre:register-groups-bind (name (#'parse-integer from1 to1 from2 to2))
      ("(.+): (\\d+)-(\\d+) or (\\d+)-(\\d+)" string)
    (list name (cons from1 to1) (cons from2 to2))))

(defun parse-ticket-rules (data) (mapcar #'parse-ticket-rule data))

(defun parse-ticket (string)
  (map 'vector #'parse-integer (split-sequence:split-sequence #\, string)))

(defun parse-your-ticket (data) (parse-ticket (second data))) ; skip header

(defun parse-nearby-tickets (data)
  (mapcar #'parse-ticket (rest data))) ; skip header

(defun parse-document (data)
  (let (groups current)
    (dolist (string (append data '("")))
      (if (string= string "")
        (setf groups (cons (reverse current) groups) current nil)
        (setf current (cons string current))))
    (list
      (parse-ticket-rules (third groups))
      (parse-your-ticket (second groups))
      (parse-nearby-tickets (first groups)))))

(defun rules (doc) (first doc))
(defun your-ticket (doc) (second doc))
(defun nearby-tickets (doc) (third doc))

(defun ticket-valid-p (ticket valid-numbers)
  (every (lambda (n) (aref valid-numbers n)) ticket))

(defun ticket-errors (ticket valid-numbers)
  (loop for n across ticket unless (aref valid-numbers n) sum n))

(defun validate-tickets (doc)
  (let ((valid-numbers (make-array 1000 :initial-element nil)))
    (loop for (name . ranges) in (rules doc) do
          (loop for (min . max) in ranges do
                (loop for i from min upto max
                      do (setf (aref valid-numbers i) name))))
    (loop for ticket in (nearby-tickets doc)
          sum (ticket-errors ticket valid-numbers) into total-error-rate
          when (ticket-valid-p ticket valid-numbers) collect ticket into valid-tickets
          finally (return (values valid-tickets
                            total-error-rate)))))

(defun calculate-compatible-fields (rules tickets)
  (let ((result (make-array (length rules) :initial-element nil)))
    (dotimes (pos (length rules) result)
      (loop for (name (from1 . to1) (from2 . to2)) in rules
            for valid = (loop for ticket in tickets for n = (aref ticket pos)
                              always (or (<= from1 n to1) (<= from2 n to2)))
            when valid do (push name (aref result pos))))))

(defun find-field-mapping (compatibles &aux (n (length compatibles)))
  (labels ((recur (mapping &aux (pos (length mapping)))
             (cond
               ((= pos n) (return-from find-field-mapping (reverse mapping)))
               (t (loop for name in (set-difference
                                      (aref compatibles pos)
                                      mapping
                                      :test 'string=) do
                        (recur (cons name mapping)))))))
    (recur nil)))

(defun part2 (doc tickets)
  (let* ((compatibles (calculate-compatible-fields (rules doc) tickets))
         (mapping (find-field-mapping compatibles)))
    (loop for name in mapping for part across (your-ticket doc)
          when (search "departure" name) collect part into numbers
          finally (return (reduce #'* numbers)))))

(define-solution (2020 16) (doc parse-document)
  (multiple-value-bind (valid-tickets error-rate)
      (validate-tickets doc)
    (values error-rate (part2 doc valid-tickets))))
All good...except it takes 30 seconds to generate the answer for part 2:
[SBCL] AOC/2020/16> (time (solution-run))
Evaluation took:
  29.244 seconds of real time
  28.687495 seconds of total run time (28.453248 user, 0.234247 system)
  [ Run times consist of 0.149 seconds GC time, and 28.539 seconds non-GC time. ]
  98.10% CPU
  67,259,903,251 processor cycles
  5 page faults
  863,202,720 bytes consed

27802
279139880759
Surely if CL implemented SETs with trees instead of flat LISTs, things would run a bit faster; the real problem, however, is that we build the mapping in order, from the first position to the last, while instead we should pick the next position so as to minimize the branching factor (i.e. if there is a position compatible with a single field only, we should try that one immediately).
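That "most constrained first" heuristic boils down to something like this hypothetical helper (assuming `compatibles` is a list of `(position . fields)` pairs, a shape different from the code above):

```lisp
;; Pick the position with the fewest remaining compatible fields, so the
;; backtracking search branches as little as possible.
(defun most-constrained (compatibles)
  (first (sort (copy-list compatibles) #'<
               :key (lambda (entry) (length (cdr entry))))))
```

Branching on the result of MOST-CONSTRAINED instead of on the first position is the same trick SORT-FOODS / SORT-RULES pull off in the solutions above.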

2020-12-15 (permalink)

Advent of Code: [2020/15](https://adventofcode.com/2020/day/15)

Solution:

- two HASH-TABLEs
- one to keep track of the last time a given number was spoken
- one to keep track of the second-to-last time a given number was spoken
- for each spoken number:
- if we did not see this before (i.e. the number does not exist in `seen-at-2`), then the next spoken number is `0`
- otherwise, calculate the number's _age_ (i.e. `turn-1 - turn-2`), and let that be the new spoken number
- lastly, _rotate_ from one HASH-TABLE to the other: `seen-at-1` into `seen-at-2`, and `turn` (i.e. the current turn) into `seen-at-1`
(defun parse-numbers (data)
  (mapcar #'parse-integer (split-sequence:split-sequence #\, (first data))))

(defun play (numbers turn-max &aux
                     (seen-at-1 (make-hash-table))
                     (seen-at-2 (make-hash-table)))
  (loop for n in numbers for turn from 1 do (setf (gethash n seen-at-1) turn))
  (loop with last = (first (reverse numbers))
        for turn from (1+ (length numbers)) to turn-max
        for turn-1 = (gethash last seen-at-1)
        for turn-2 = (gethash last seen-at-2)
        if (not turn-2) do (setf last 0) else do (setf last (- turn-1 turn-2))
        do (setf (gethash last seen-at-2) (gethash last seen-at-1)
                 (gethash last seen-at-1) turn)
        finally (return last)))

(define-solution (2020 15) (numbers parse-numbers)
  (values (play numbers 2020) (play numbers 30000000)))
That's it -- not the most performant solution, but not that astronomically slow either!
[SBCL] AOC/2020/15> (time (solution-run))
Evaluation took:
  7.049 seconds of real time
  7.023387 seconds of total run time (6.591918 user, 0.431469 system)
  [ Run times consist of 0.466 seconds GC time, and 6.558 seconds non-GC time. ]
  99.63% CPU
  16,212,370,931 processor cycles
  1 page fault
  914,358,672 bytes consed

1238
3745954

2020-12-14 (permalink)

Advent of Code: [2020/14](https://adventofcode.com/2020/day/14)

Solution:

- Parse each "mask = ..." input line into a pair of bit-masks: one for all the "0"s, one for all the "1"s
- "mem[...] = ..." lines instead are parsed into pairs containing the location of the memory to update, and the value to write
- Part 1: mask the value before writing it into memory
- To write 0s, we LOGNOT the _zeros_ mask first, and then LOGAND the value to write with this
- To write 1s instead, we simply LOGIOR the result of the previous step with the _ones_ mask
- Part 2: we generate the list of _masked_ addresses, and then update those memory locations. How? A little bit of dynamic programming
- `dp[i]` represents all the _masked addresses_ considering only the first `i` bits of our mask (the least significant ones)
- Base case: `dp[0] = [0]`; knowing `dp[i]`, `dp[i + 1]` can be calculated as follows
- For each element of `dp[i]` -- let's call it `number`
- If the `i`-th bit of the mask is 0, then copy the `i`-th bit of the address into `number`, and append the result to `dp[i + 1]`
- If the `i`-th bit of the mask is 1, then set the `i`-th bit of `number` to 1, and append the result to `dp[i + 1]`
- Otherwise, append two new values to `dp[i + 1]`: the first, `number` with its `i`-th bit set to 0; the second, with its `i`-th bit set to 1
- Note: each step uses the result of the previous step, so we don't need a LIST/ARRAY to store all the intermediary states
(defun parse-mask (string)
  (cl-ppcre:register-groups-bind (mask)
      ("mask = ([10X]+)" string)
    (let ((zeros 0) (ones 0))
      (loop for c across (reverse mask) for index from 0
            for byte = (byte 1 index)
            when (char= c #\0) do (setf zeros (dpb 1 byte zeros))
            when (char= c #\1) do (setf ones (dpb 1 byte ones)))
      (list :mask (cons zeros ones)))))

(defun zeros (mask) (car mask))
(defun ones (mask) (cdr mask))

(defun parse-mem (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer address value))
      ("mem\\[(\\d+)\\] = (\\d+)" string)
    (list :mem (cons address value))))

(defun address (mem) (car mem))
(defun value (mem) (cdr mem))

(defun parse-line (string)
  (or (parse-mask string) (parse-mem string)))

(defun parse-instructions (data)
  (mapcar #'parse-line data))

(defun value-mask (value mask)
  (logior (logand value (lognot (zeros mask)))
          (ones mask)))

(defun part1 (instructions)
  (loop with mask with mem = (make-hash-table)
        for (type instr) in instructions
        if (eql type :mask) do (setf mask instr)
        else do (setf (gethash (address instr) mem)
                      (value-mask (value instr) mask))
        finally (return (reduce #'+ (hash-table-values mem)))))

(defun masked-addresses (addr mask &aux (dp (list 0)))
  (dotimes (index 36 dp)
    (loop with prev = dp initially (setf dp nil) for number in prev
          for byte = (byte 1 index) do
          (cond ((logbitp index (zeros mask))
                 (push (dpb (ldb byte addr) byte number) dp))
                ((logbitp index (ones mask))
                 (push (dpb 1 byte number) dp))
                (t (push (dpb 0 byte number) dp)
                   (push (dpb 1 byte number) dp))))))

(defun part2 (instructions)
  (loop with mask with mem = (make-hash-table)
        for (type instr) in instructions
        if (eql type :mask) do (setf mask instr)
        else do (loop for addr in (masked-addresses (address instr) mask)
                      do (setf (gethash addr mem) (value instr)))
        finally (return (reduce #'+ (hash-table-values mem)))))

(define-solution (2020 14) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

2020-12-13 (permalink)

Advent of Code: [2020/13](https://adventofcode.com/2020/day/13)

I knew the `x`s in the input were going to be used for part 2, so I decided to _manually_ parse the input and try to solve this directly from the REPL.

If we take the modulus of the input timestamp and one of the bus periods, we obtain how long before the timestamp that specific bus was last seen at the bus station; to know when the bus will be back again, we need to subtract this number from the bus period.  Do this for all the buses, find the minimum, and there you have your solution for part 1:
(- 19 (mod 1001798 19))
=>
15

(- 41 (mod 1001798 41))
=>
37

(- 859 (mod 1001798 859))
=>
655

(- 23 (mod 1001798 23))
=>
13

(- 13 (mod 1001798 13))
=>
8

(- 17 (mod 1001798 17))
=>
12

(- 29 (mod 1001798 29))
=>
7

(- 373 (mod 1001798 373))
=>
80

(- 37 (mod 1001798 37))
=>
14

(* 7 29)
=>
203
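All those REPL calculations can be folded into a tiny helper; here's a hypothetical PART1 (name and signature made up, not part of the original session) doing just that:
(defun part1 (timestamp periods &aux best)
  ;; For each bus, the wait time is the period minus the time elapsed
  ;; since the bus last left the station; the answer is the period
  ;; (i.e. the bus id) with the minimum wait, times that wait
  (dolist (p periods (* (car best) (cdr best)))
    (let ((wait (- p (mod timestamp p))))
      (when (or (not best) (< wait (cdr best)))
        (setf best (cons p wait))))))
With the values above, `(part1 1001798 '(19 41 859 23 13 17 29 373 37))` evaluates to `203`, matching the manual calculation.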
For part 2, as expected, things got really complicated... I immediately tried a _naive_ brute-force solution, but that did not take me anywhere (well, it might have, eventually):
(loop for ts from 100000000000000
      when (and (zerop (mod (+ ts 0) 19))) return ts)
=>
100000000000003

(loop for ts from 100000000000003 by 19
      when (zerop (mod (floor ts 19) 1000000)) do (pr ts)
      when (and (zerop (mod (+ ts 0) 19))
                (zerop (mod (+ ts 9) 41))
                (zerop (mod (+ ts 19) 859))
                (zerop (mod (+ ts 27) 23))
                (zerop (mod (+ ts 32) 13))
                (zerop (mod (+ ts 36) 17))
                (zerop (mod (+ ts 48) 29))
                (zerop (mod (+ ts 50) 373))
                (zerop (mod (+ ts 87) 37))) return ts)
=>
; Zzz
Tried to probe for solutions using the _slowest_ bus (i.e. the one with the largest period) instead, but that too was not converging fast enough:
(loop for ts from 100000000000000
      when (and (zerop (mod (+ ts 19) 859))) return ts)
=>
100000000000850

(loop for ts from 100000000000850 by 859
      when (zerop (mod (floor ts 859) 1000000)) do (pr ts)
      when (and (zerop (mod (+ ts 0) 19))
                (zerop (mod (+ ts 9) 41))
                (zerop (mod (+ ts 19) 859))
                (zerop (mod (+ ts 27) 23))
                (zerop (mod (+ ts 32) 13))
                (zerop (mod (+ ts 36) 17))
                (zerop (mod (+ ts 48) 29))
                (zerop (mod (+ ts 50) 373))
                (zerop (mod (+ ts 87) 37))) return ts)
=>
; Zzz
Then I tried with pen and paper, and after a bit I realized that all I was trying to do was solve the following system of modular equations:
x + 0  = 0 (mod 19)
x + 9  = 0 (mod 41)
x + 19 = 0 (mod 859)
x + 27 = 0 (mod 23)
x + 32 = 0 (mod 13)
x + 36 = 0 (mod 17)
x + 48 = 0 (mod 29)
x + 50 = 0 (mod 373)
x + 87 = 0 (mod 37)
Searching on DuckDuckGo for "system of modular equations" brought me to this YouTube [video](https://www.youtube.com/watch?v=-hUldZiDUj8) with a very interesting title; I stepped forward a little bit as I had no idea what the guy was talking about, and eventually at [1:38](https://youtu.be/-hUldZiDUj8?t=98) I saw what kind of equation system this theorem is helping you solve, and then I knew that "Chinese Remainder Theorem" was going to help me solve part 2.

I immediately looked for an online solver, but the first one I found did not support more than 8 equations, and guess what, our problem had 9!  Luckily, the [next one](https://comnuan.com/cmnn02/cmnn0200a/cmnn0200a.php) did not have such a limitation; however, I had to _massage_ my system a little bit before I was able to use it.

First I had to move "time offsets" to the right side of the equation:
x =   0 (mod 19)
x =  -9 (mod 41)
x = -19 (mod 859)
x = -27 (mod 23)
x = -32 (mod 13)
x = -36 (mod 17)
x = -48 (mod 29)
x = -50 (mod 373)
x = -87 (mod 37)
Next I had to make all the _negative_ values _positive_, by adding the bus period to each until the result was positive:
x =   0 (mod 19)
x =  32 (mod 41)
x = 840 (mod 859)
x =  19 (mod 23)
x =   7 (mod 13)
x =  15 (mod 17)
x =  10 (mod 29)
x = 323 (mod 373)
x =  24 (mod 37)
Lastly, convert the above into the solver's _expected_ input:
1 0 19
1 32 41
1 840 859
1 19 23
1 7 13
1 15 17
1 10 29
1 323 373
1 24 37
Click "Calculate", and you should be presented with the following message:
solution = 905694340256752 + 1361317053288127*n, where n is an integer
Well, the first number right there, `905694340256752`, would be the solution to your part 2 problem!
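
Here is a sketch of what a sieve-based CRT solver could look like (hypothetical code, assuming pairwise-coprime moduli -- true here, since the bus periods are all prime):
(defun crt (residues moduli &aux (x 0) (step 1))
  ;; Successive sieving: satisfy one congruence at a time, bumping X
  ;; by the product of the moduli already satisfied, so that previously
  ;; satisfied congruences keep holding
  (loop for r in residues for m in moduli
        do (loop until (= (mod x m) (mod r m)) do (incf x step))
           (setf step (* step m))
        finally (return x)))
Fed with the system above, i.e. `(crt '(0 32 840 19 7 15 10 323 24) '(19 41 859 23 13 17 29 373 37))`, it should return the same `905694340256752` the online solver spat out.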

PS. I want to come back to this, and _implement_ a proper solution, but not today!

2020-12-12 (permalink)

Advent of Code: [2020/12](https://adventofcode.com/2020/day/12)

Solution:

- Use complex numbers to represent the position and the bearing of the ship
- Turning left by 90 degrees means multiplying your bearing by _i_ (i.e. `#c(0 1)`)
- Turning right by 90 degrees means multiplying your bearing by _-i_ (i.e. `#c(0 -1)`)
- Part 1: movement instructions (e.g. `N`, `S`, `W`, `E`) apply to the ship itself (it was a bit confusing at first that the ship could [strafe](https://en.wikipedia.org/wiki/Strafing_(video_games)), but whatever)
- Part 2: movement instructions apply to the bearing of the ship
(defun parse-instruction (string)
  (cons (char string 0) (parse-integer string :start 1)))

(defun parse-instructions (data)
  (mapcar #'parse-instruction data))

(defun part1 (instructions)
  (loop with pos = 0 with dir = #c(1 0) for (ch . n) in instructions do
        (ecase ch
          (#\N (incf pos (* n #c(0 1))))
          (#\S (incf pos (* n #c(0 -1))))
          (#\E (incf pos (* n #c(1 0))))
          (#\W (incf pos (* n #c(-1 0))))
          (#\L (mulf dir (expt #c(0 1) (/ n 90))))
          (#\R (mulf dir (expt #c(0 -1) (/ n 90))))
          (#\F (incf pos (* n dir))))
        finally (return (manhattan-distance pos 0))))

(defun part2 (instructions)
  (loop with pos = 0 with dir = #c(10 1) for (ch . n) in instructions do
        (ecase ch
          (#\N (incf dir (* n #c(0 1))))
          (#\S (incf dir (* n #c(0 -1))))
          (#\E (incf dir (* n #c(1 0))))
          (#\W (incf dir (* n #c(-1 0))))
          (#\L (mulf dir (expt #c(0 1) (/ n 90))))
          (#\R (mulf dir (expt #c(0 -1) (/ n 90))))
          (#\F (incf pos (* n dir))))
        finally (return (manhattan-distance pos 0))))

(define-solution (2020 12) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

2020-12-11 (permalink)

Advent of Code: [2020/11](https://adventofcode.com/2020/day/11)

Solution:

- Parse the seating layout into a HASH-TABLE (I thought about using a 1D or 2D ARRAY, but figured the HASH-TABLE would make things a bit easier as I would not have to worry about reading out of bounds positions)
- Seat positions are COMPLEX numbers
- Each step I copy the old state, update it based on the problem rules, return the new state
- If the new and the old state are EQUALP, then we found our end state
- Part 1: for each position, simply count the adjacent ones which are occupied
- Part 2: recursively check in the neighboring direction until we find a `#`, `L`, or get out of bounds
(defun parse-layout (data)
  (let* ((layout (make-hash-table))
         (row 0)
         pos)
    (dolist (string data layout)
      (dotimes (col (length string))
        (setf pos (complex col row)
              (gethash pos layout) (char string col)))
      (incf row))))

(defparameter *neighbors-deltas* '(#C(-1 -1) #C(0 -1) #C(1 -1) #C(-1 0) #C(1 0) #C(-1 1) #C(0 1) #C(1 1)))

(defun neighbors-occupied (seat layout)
  (count #\# *neighbors-deltas*
         :key (lambda (d) (gethash (+ seat d) layout #\.))))

(defun in-sight-occupied (seat layout)
  (labels ((recur (seat d &aux (state (gethash seat layout)))
             (cond ((not state) #\.)
                   ((char= #\. state) (recur (+ seat d) d))
                   (t state))))
    (count #\# *neighbors-deltas*
           :key (lambda (d) (recur (+ seat d) d)))))

(defun next (layout neighbors threshold &aux (result (make-hash-table)))
  (loop for seat being the hash-keys of layout
        for state = (gethash seat layout)
        for occupied-count = (funcall neighbors seat layout)
        do (setf (gethash seat result) state)
        when (and (char= state #\L) (zerop occupied-count)) do (setf (gethash seat result) #\#)
        when (and (char= state #\#) (>= occupied-count threshold)) do (setf (gethash seat result) #\L)
        finally (return result)))

(defun simulate (layout neighbors threshold)
  (loop for prev = layout then next
        for next = (next prev neighbors threshold)
        until (equalp next prev)
        finally (return (count #\# (hash-table-values prev)))))
This works, but it's not that _efficient_ (see the timings below), so whenever I get a chance I would like to: a) cache recursive calls inside IN-SIGHT-OCCUPIED, and b) see whether [changing how the problem is modeled](http://clj-me.cgrand.net/2011/08/19/conways-game-of-life/) might help.
[SBCL] AOC/2020/11> (time (solution-run))
Evaluation took:
  2.040 seconds of real time
  1.965362 seconds of total run time (1.891473 user, 0.073889 system)
  [ Run times consist of 0.105 seconds GC time, and 1.861 seconds non-GC time. ]
  96.32% CPU
  4,691,074,014 processor cycles
  724,157,584 bytes consed

2424
2208

2020-12-10 (permalink)

Advent of Code: [2020/10](https://adventofcode.com/2020/day/10)

Solution:

- Part 1
- Sort connectors
- Calculate deltas between adjacents connectors
- Part 2
- Top-down dynamic programming (i.e. recursion + memoization)
- Start from joltage of 0, and call it `p`
- Look at the next adapter `n`: if `n` and `p` differs by less than 3 jolts, then you can...
- Take it, and count all the possible ways you can combine the remaining adapters, knowing that `p` is now `n`
- Or skip it, and count all the possible ways you can combine the remaining adapters, knowing that `p` has not changed
- Stop when there are no more adapters to play with
(defun part1 (adapters)
  (loop for p = 0 then c for c in adapters for delta = (- c p)
        count (= delta 1) into ones
        count (= delta 3) into threes
        finally (return (* ones (1+ threes)))))


(defun part2 (adapters)
  (let ((memo (make-hash-table :test 'equal)))
    (labels ((recur (p rest &aux (n (first rest)) (key (list p n)))
               (cond ((or (null rest) (> (- n p) 3)) 0) ; no luck
                     ((null (rest rest)) 1) ; found it
                     ((gethash key memo) (gethash key memo))
                     (t (setf (gethash key memo)
                              (+ (recur p (rest rest))
                                 (recur n (rest rest))))))))
      (recur 0 adapters))))

(define-solution (2020 10) (numbers parse-integers)
  (let* ((sorted (sort numbers #'<)))
    (values (part1 sorted) (part2 sorted))))
Alternatively, one could still use dynamic programming but go with a bottom-up approach:

- `dp[n]` represents the solution to our problem, considering only the adapters up until `n`
- Base case, the charging outlet: `dp[0] = 1`
- Transformation function: knowing `dp[0]...dp[n]`, the next value `dp[n + 1]` can be calculated simply by summing together the solutions for the 3 previous adapters, as those are the ones in range (i.e. `dp[n]`, `dp[n - 1]`, and `dp[n - 2]`)
(defun part2 (numbers &aux (dp (make-hash-table)))
  (loop initially (setf (gethash 0 dp) 1)
        for n in numbers do
        (setf (gethash n dp) (+ (gethash (- n 1) dp 0)
                                (gethash (- n 2) dp 0)
                                (gethash (- n 3) dp 0)))
        finally (return (gethash n dp))))
I think I like this last one better!

2020-12-09 (permalink)

Advent of Code: [2020/09](https://adventofcode.com/2020/day/09)

Solution:

- Part 1
- Implement a sliding-window HASH-SET of 25 elements
- For each number after the 25th, check if it's valid, and if not, remove the oldest element from the set, and add the current one
- Checking if a number `n` is valid is simple: for each element `x` in the sliding set, if `n - x` exists in the set as well, then `n` is valid
- Part 2
- BFS + prefix sums -- I encoded my state as `(psum . seq)` (e.g. `(15 2 3 10)`)
- For each number in the input, try to add it to each of the current states
- If the sum is greater than the target value, discard it
- if the sum is smaller than the target value, add it to the next set of states to evaluate
- If the sum is equal to the target value, there you have your subsequence
(defun make-set (numbers)
  (let ((set (make-hash-table)))
    (dolist (x numbers set)
      (setf (gethash x set) t))))

(defun sadd (validator number) (setf (gethash number validator) t))
(defun srem (validator number) (remhash number validator))

(defun validp (validator number)
  (loop for n being the hash-keys of validator
        thereis (gethash (- number n) validator)))

(defun find-first-invalid (numbers)
  (loop with s = (make-set (subseq numbers 0 25))
        for oldest in numbers for newest in (nthcdr 25 numbers)
        if (not (validp s newest)) return newest
        else do (srem s oldest) (sadd s newest)))

(defun subseq-that-adds-up-to (numbers target &optional frontier)
  (loop with n = (first numbers)
        for (psum . seq) in (cons (cons 0 nil) frontier)
        for psum-next = (+ n psum) for seq-next = (cons n seq)
        if (= psum-next target) return seq-next
        else collect (cons psum-next seq-next) into frontier-next
        finally (return (subseq-that-adds-up-to (rest numbers)
                                                target
                                                frontier-next))))

(define-solution (2020 9) (numbers parse-integers)
  (let* ((invalid (find-first-invalid numbers))
         (subseq (subseq-that-adds-up-to numbers invalid)))
    (values
      invalid
      (+ (apply 'min subseq) (apply 'max subseq)))))

2020-12-08 (permalink)

Advent of Code: [2020/08](https://adventofcode.com/2020/day/08)

Solution:

- Instructions are parsed into a plist, e.g. `(:jmp -1)`
- The whole program is an array of instructions -- an array, not a list, so I can quickly jump around
- RUN-PROGRAM returns 2 values: the last value of the accumulator, and whether the program exited normally or not
- Part 1: run the program and return the accumulator value
- Part 2: brute force -- toggle instruction, run, on clear exit return the value, otherwise toggle the instruction again and try with the next one
(defun parse-instruction (string)
  (destructuring-bind (name arg)
      (split-sequence:split-sequence #\Space string)
    (cons (make-keyword (string-upcase name)) (parse-integer arg))))

(defun parse-program (data)
  (map 'vector #'parse-instruction data))

(defun run-program (program)
  (loop with ip = 0 with acc = 0
        for (name . arg) = (aref program ip) do
        (incf ip (if (eql name :jmp) arg 1))
        when (eql name :acc) do (incf acc arg)
        when (member ip seen) return (values acc)
        when (= ip (length program)) return (values acc t)
        collect ip into seen))

(defun toggle-instruction (program i)
  (let ((name (car (aref program i))))
    (unless (eql name :acc)
      (setf (car (aref program i))
            (if (eql name :jmp) :nop :jmp)))))

(define-solution (2020 8) (program parse-program)
  (values
    (run-program program)
    (loop for i below (length program)
          when (toggle-instruction program i) do
          (multiple-value-bind (acc clear-exit-p)
              (run-program program)
            (when clear-exit-p (return acc)))
          (toggle-instruction program i))))

Regarding my 2020/07 solution:

- Part 1: For each bag type, recursively unfold its content and see if we can find a `:shiny-gold` bag in it

It turns out a more efficient solution is to:

- Invert the _bag-x-contains-these-many-bags mappings_ (and get a _bag-x-is-contained-inside-these-other-bags_ mapping)
- Start from `:shiny-gold` and see what's the number of _distinct_ bags you can get to
(defun invert (table)
  (loop with inverted = nil
        for (from mapping) in table do
        (loop for (to . _) in mapping
              for existing = (assoc to inverted) do
              (if existing
                (push from (second existing))
                (push (list to (list from)) inverted)))
        finally (return inverted)))

(defun collect-containing-bags (curr rules)
  (remove-duplicates
    (loop for type in (second (assoc curr rules))
          append (cons type (collect-containing-bags type rules)))))
With this, the solution to part 1 simply becomes: `(length (collect-containing-bags :shiny-gold (invert rules)))`

Advent of Code: [2018/17](https://adventofcode.com/2018/day/17)

Solution:

- Simulation (i.e. let the water spill and see what happens)
- Start from the position of the spring of water and (recursively) try to spill water down, to the left, or to the right
- Base case 1: the position we are at has already been visited
- Base case 2: the position we are at is _below_ the lowest point of the input map
- Mark the current position as _dripping water_ (i.e. `|`)
- Recurse down, and on the way up check whether any dripping water at the level below can be marked as _still_ (i.e. `~`) -- this is achieved by going left/right until we reach a block (`#`) or still water
- If the position we just came _up_ from (after we recursed _down_) turns out to be a block or still water, then we recurse to the left and to the right, effectively filling up the reservoirs
(defun parse-clay-vein (string)
  (cl-ppcre:register-groups-bind (x-or-y (#'parse-integer n m o))
      ("(x|y)=(\\d+).*=(\\d+)..(\\d+)" string)
    (if (string= x-or-y "x")
        (loop with col = n for row from m upto o collect (complex col row))
        (loop with row = n for col from m upto o collect (complex col row)))))

(defun parse-map (data)
  (let ((map (make-hash-table)))
    (dolist (vein (mapcar #'parse-clay-vein data) map)
      (dolist (pos vein)
        (setf (gethash pos map) #\#)))))

(defun row-min-max (map)
  (loop for pos being the hash-keys of map
        minimize (imagpart pos) into min
        maximize (imagpart pos) into max
        finally (return (cons min max))))

(defun left (pos) (+ #c(-1 0) pos))
(defun right (pos) (+ #c(1 0) pos))
(defun down (pos) (+ #c(0 1) pos))

(defun should-be-still-water-p (map pos)
  (labels ((recur (next pos &aux (ch (gethash pos map)))
             (when ch
               (or (find ch "#~")
                   (and (char= ch #\|) (recur next (funcall next pos)))))))
    (and (find (gethash (down pos) map) "#~")
         (recur #'left pos)
         (recur #'right pos))))

(defun mark-row-as-still-water (map pos)
  (loop for cur = pos then (left cur) for ch = (gethash cur map)
        until (find ch "#~") do (setf (gethash cur map) #\~))
  (loop for cur = (right pos) then (right cur) for ch = (gethash cur map)
        until (find ch "#~") do (setf (gethash cur map) #\~)))

(defun spill-water (map)
  (destructuring-bind (row-min . row-max) (row-min-max map)
    (labels ((dfs (pos)
               (cond ((gethash pos map))
                     ((> (imagpart pos) row-max))
                     (t (setf (gethash pos map) #\|)
                        (unless (gethash (down pos) map)
                          (dfs (down pos))
                          (when (should-be-still-water-p map (down pos))
                            (mark-row-as-still-water map (down pos))))
                        (when (find (gethash (down pos) map) "#~")
                          (dfs (left pos))
                          (dfs (right pos)))))))
      (dfs (complex 500 row-min)))))

(define-solution (2018 17) (map parse-map)
  (spill-water map)
  (loop for ch being the hash-values of map
        count (char= ch #\|) into dripping
        count (char= ch #\~) into still
        finally (return (values (+ dripping still) still))))

2020-12-07 (permalink)

Advent of Code: [2020/07](https://adventofcode.com/2020/day/07)

Solution:

- Parse the input into a list of _rules_, where each rule looks like: `(:muted-white ((:posh-chartreuse . 1) (:dim-silver . 1) (:posh-bronze . 4) (:striped-black . 1)))`
- Part 1: For each bag type, recursively unfold its content and see if we can find a `:shiny-gold` bag in it
- Part 2: Starting from the `shiny-gold` bag, recursively unfold its content and count how many bags are in it
(defun bag-type (string)
  (make-keyword (string-upcase (substitute #\- #\Space string))))

(defun parse-bag-content (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer number)
                                  (#'bag-type type)
                                  rest)
      ("(\\d+) ([a-z ]+) bags?(.*)" string)
    (cons (cons type number) (parse-bag-content rest))))

(defun parse-rule (string)
  (cl-ppcre:register-groups-bind ((#'bag-type type) rest)
      ("([a-z ]+) bags contain (.*)" string)
    (list type (parse-bag-content rest))))

(defun parse-rules (data)
  (mapcar #'parse-rule data))

(defun contains-shiny-gold-p (rules curr)
  (or (eq curr :shiny-gold)
      (let ((bag-rule (assoc curr rules)))
        (some (partial-1 'contains-shiny-gold-p rules (first _))
              (second bag-rule)))))

(defun count-bags-inside (curr rules)
  (loop for (type . n) in (second (assoc curr rules))
        sum (* (1+ (count-bags-inside type rules)) n)))

(define-solution (2020 7) (rules parse-rules)
  (values
    (count-if (partial-1 #'contains-shiny-gold-p rules)
              (remove :shiny-gold rules :key #'first)
              :key #'first)
    (count-bags-inside :shiny-gold rules)))

2020-12-06 (permalink)

Advent of Code: [2020/06](https://adventofcode.com/2020/day/06)

Solution:

- Parse groups of answers using logic similar to 2020/04's -- a group of answers is a list of lists of characters
- Part 1: reduce the answers of each group using UNION to get all the distinct answers
- Part 2: reduce the answers of each group using INTERSECTION to get all the unanimous answers
(defun parse-groups (data)
  (let (groups current)
    (dolist (string (append data '("")) groups)
      (if (string= string "")
        (setf groups (cons current groups) current nil)
        (setf current (cons (coerce string 'list) current))))))

(define-solution (2020 6) (groups parse-groups)
  (loop for answers in groups
        sum (length (reduce #'union answers)) into part1
        sum (length (reduce #'intersection answers)) into part2
        finally (return (values part1 part2))))

2020-12-05 (permalink)

Advent of Code: [2020/05](https://adventofcode.com/2020/day/05)

Solution:

- Intuition 1: boarding passes are just binary numbers (e.g. 'F' -> '0', 'B' -> '1')
- Intuition 2: seat ids are just the binary numbers mentioned above -- _seat-row times 8 plus seat-col_
- Convert boarding passes into seat IDs
- Part 1: take the max of these IDs
- Part 2: sort the IDs, iterate through them, in pairs, and find the _missing_ value
(defun parse-seat (string)
  (parse-integer
    (map 'string
         #'(lambda (ch)
            (ecase ch
              ((#\F #\L) #\0)
              ((#\B #\R) #\1)))
         string)
    :radix 2))

(defun parse-seats (data)
  (mapcar #'parse-seat data))

(define-solution (2020 5) (data)
  (loop with seats = (sort (parse-seats data) #'<) with part2
        for curr in seats for next in (rest seats)
        when (= (- next curr) 2) do (setf part2 (1+ curr))
        finally (return (values curr part2))))
PS. Of course I did not notice intuitions 1 and 2 until after I got my 2 stars and started reading other solutions on Reddit.  For part 1 I originally used binary search to figure out where each boarding pass would take me; for part 2 instead, I marked as seen all the seats that I encountered, and then iterated through them again looking for the missing one (i.e. first find a row with at least one used seat, then look for one that also has a free seat).

2020-12-04 (permalink)

Advent of Code: [2020/04](https://adventofcode.com/2020/day/04)

Solution:

- Parse the passports into a plist of fields, skip "cid" ones completely
- Part 1
- Count the passports that have 7 fields (i.e. 14 elements)
- Part 2
- Implement _validators_ for each field type
- Count the number of passports from part1, that also pass these new checks
(defun parse-line (string)
  (loop for part in (split-sequence:split-sequence #\Space string)
        for (name value) = (split-sequence:split-sequence #\: part)
        unless (string= name "cid")
        append (list (make-keyword (string-upcase name)) value)))

(defun parse-passports (data)
  (let (passports current)
    (dolist (string (append data '("")) passports)
      (if (string= string "")
        (setf passports (cons current passports) current nil)
        (setf current (append current (parse-line string)))))))

(defun has-all-required-fields-p (passport)
  (= (length passport) 14)) ; it's a plist, and we skipped the "cid" field

(defun in-range-p (string left right)
  (<= left (parse-integer string) right))

(defun valid-height-p (string)
  (let ((number (subseq string 0 (- (length string) 2)))
        (unit (subseq string (- (length string) 2))))
    (cond ((string= unit "cm") (in-range-p number 150 193))
          ((string= unit "in") (in-range-p number 59 76)))))

(defun valid-hex-color-p (string)
  (and (char= (char string 0) #\#)
       (parse-integer (subseq string 1) :radix 16)))

(defun valid-color-name-p (string)
  (member string '("amb" "blu" "brn" "gry" "grn" "hzl" "oth")
          :test #'string=))

(defun valid-passportid-p (string)
  (and (= (length string) 9) (parse-integer string)))

(defun field-valid-p (name value)
  (handler-case (case name
                  (:byr (in-range-p value 1920 2002))
                  (:iyr (in-range-p value 2010 2020))
                  (:eyr (in-range-p value 2020 2030))
                  (:hgt (valid-height-p value))
                  (:hcl (valid-hex-color-p value))
                  (:ecl (valid-color-name-p value))
                  (:pid (valid-passportid-p value)))
    (error () nil)))

(defun all-fields-valid-p (passport)
  (loop for (name value) on passport by #'cddr
        always (field-valid-p name value)))

(define-problem (2020 04) (data)
  (let ((passports (remove-if-not #'has-all-required-fields-p
                                  (parse-passports data))))
    (values
      (length passports)
      (count-if #'all-fields-valid-p passports))))

How to programmatically create keywords in Common Lisp?

All I am trying to do is re-implement what the _evaluator_ would do when presented with an expression like `:foo`; the thing is, you need to take the _reader_ into account as well, and in particular remember that by default it STRING-UPCASEs all the symbols it reads.

Say you had the following function, responsible for programmatically creating a keyword symbol:
(defun make-keyword (name)
  (intern (string name) :keyword))
Also say that you wanted to use the generated keyword like this (`name` contains the keyword generated with MAKE-KEYWORD above):
(defun field-valid-p (name value)
  (handler-case (case name
                  (:byr (in-range-p value 1920 2002))
                  (:iyr (in-range-p value 2010 2020))
                  (:eyr (in-range-p value 2020 2030))
                  (:hgt (valid-height-p value))
                  (:hcl (valid-hex-color-p value))
                  (:ecl (valid-color-name-p value))
                  (:pid (valid-passportid-p value)))
    (error () nil)))
If you did not STRING-UPCASE your input (e.g. `(make-keyword "byr")`), the _evaluator_ would end up comparing symbols with different case (e.g. `:|byr|` and `:BYR`), and because of that none of the CASE clauses would match.

To make things work, one would have to either:

- Simulate what the _reader_ would do (i.e. `(make-keyword (string-upcase "byr"))`)
- Or change the CASE clauses to use bars (e.g. `:|byr|`, `:|iyr|`)

Good stuff!
[SBCL] AOC/2020/04> (defun make-keyword (name)
                      (intern (string name) :keyword))
MAKE-KEYWORD

[SBCL] AOC/2020/04> (make-keyword "foo")
:|foo|
:EXTERNAL

[SBCL] AOC/2020/04> (make-keyword "FOO")
:FOO
:EXTERNAL
Someone on [StackOverflow](https://stackoverflow.com/questions/211717/common-lisp-programmatic-keyword) suggested setting the value of the newly created symbol to the symbol itself, but the [standard](http://www.lispworks.com/documentation/HyperSpec/Body/11_abca.htm) seems to say that this is already done automatically:
The KEYWORD package is treated differently than other packages in that special actions are taken when a symbol is interned in it. In particular, when a symbol is interned in the KEYWORD package, it is automatically made to be an external symbol and is automatically made to be a constant variable with itself as a value.

2020-12-03 (permalink)

A few weeks [ago](https://matteolandi.net/plan.html#day-2020-10-25) I started looking for ways to synchronize my dotfiles / Workspace across multiple computers without being forced to push new changes to the underlying repositories, and I think I might have found a workflow for this.

As I did not particularly like [the way](https://help.dropbox.com/files-folders/restore-delete/ignored-files) Dropbox makes you mark which directories should be synchronized and which should not (mostly because the commands to run differ across OSes), I started looking for alternatives and stumbled upon `unison`, the file synchronizer ([user manual](https://www.cis.upenn.edu/~bcpierce/unison/download/releases/stable/unison-manual.html)):

- Has clients for MacOS, Windows, and Linux... check
- Supports ways of ignoring directories... check
- Supports ways of ignoring files... check
- Terminal UI... check

`unison` is mostly meant to keep two roots in sync (i.e. the dotfiles on my laptop always in sync with the dotfiles on my work laptop), but what if I wanted my dotfiles synced with a third machine too?  From the user manual:
If you need to do this, the most reliable way to set things up is to organize the machines into a “star topology,” with one machine designated as the “hub” and the rest as “spokes,” and with each spoke machine synchronizing only with the hub. The big advantage of the star topology is that it eliminates the possibility of confusing “spurious conflicts” arising from the fact that a separate archive is maintained by Unison for every pair of hosts that it synchronizes.
Great!

What if that _hub_ node was... Dropbox?! Our not-yet-pushed changes would then automatically propagate _for-free_ (so long as you have Dropbox installed, of course).  This is exactly what I tried:

- Clone your dotfiles locally -- as you normally would
- Add your laptop / workstation to your Dropbox account -- so it can sync
- Install `unison` -- it's highly recommended that you run the same version across all your clients, even though I am not 100% sure it's a requirement with this setup
- Use `watch` to periodically fire `unison` and sync up your replicas

That's it! I am still experimenting with this, but it seems to be working.

A few tips if you want to start this journey:

- Don't bother syncing .git directories -- manually clone the repositories on each client and sync the files of the working directory instead
- Try not to sync symbolic links (on MacOS at least), or Dropbox might start abusing your CPU
- If you tell `unison` to skip links (i.e. `-links false`), it will effectively skip them but still exit with a non-zero value
- `watch` seems to [error out](https://stackoverflow.com/questions/62536396/linux-cli-watch-switch-e-errexit-unexpected-exit-with-command-exit-with-a-n) when the program you told it to run generates more lines of output than the current terminal height, even though its exit code is 0
- Lastly, if you intend to start syncing multiple directories, you might want to create your own `unison` wrapper fixing some of these annoyances, and also containing your favourite settings, common `-ignore` rules -- here is mine
# unisonw
set -u

ROOT1=${1?Need a value for ROOT1}
ROOT2=${2?Need a value for ROOT2}

unison "$ROOT1" "$ROOT2" \
    -auto \
    -batch \
    -confirmbigdel \
    -confirmmerge \
    -prefer newer \
    -links false \
    -ignore 'Name .git' \
    -ignore 'Name .hg' \
    -ignore 'Name .DS_Store' \
    -ignore 'Name .ropeproject' \
    -ignore 'Name .vagrant' \
    -ignore 'Name __pycache__' \
    -ignore 'Name node_modules' \
    -ignore 'Name target' \
    -ignore 'Name .dependenciesSources' \
    -ignore 'Name venv' \
    -ignore 'Name *.db' \
    -ignore 'Name *.dmg' \
    -ignore 'Name *.lock' \
    -ignore 'Name *.log' \
    -ignore 'Name *.pyc' \
    -ignore 'Name *.vdi' \
    -ignore 'Name tags*' \
    -ignore 'Name *.egg-info' \
    "${@:3}"

test $? -le 1 # exit code: 1 means something was skipped but rest went fine
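Since `unison` exits with `1` when some items were merely skipped (which `-links false` makes routine), the closing `test $? -le 1` turns that into a success while still propagating real failures. A quick sketch of the same trick in isolation -- the `sh -c 'exit N'` calls just stand in for `unison`:

```shell
# Treat exit codes 0 and 1 as success (1 = "some items skipped"),
# anything higher as a real failure -- same trick as the
# `test $? -le 1` line at the end of the wrapper above.
tolerate_skips() {
    "$@"
    test $? -le 1
}

tolerate_skips sh -c 'exit 1' && echo "skips tolerated"
tolerate_skips sh -c 'exit 2' || echo "real errors still propagate"
```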
You can then easily "wrap this wrapper" into another script responsible for synchronizing two specific replicas:
# unisonw-me
set -e

exec >>~/unison-me.log 2>&1 # shut up, `watch`: https://stackoverflow.com/q/62536396
date

unisonw \
    "$HOME/my-env" \
    "$HOME/Dropbox/unison/my-env" \
    -ignore 'Regex .*/\.mutt/cache' \
    -ignore 'Regex .*/\.mutt/temp' \
    -ignore 'Regex .*/\.vim/spell' \
    -ignore 'Regex .*/\.vim/tmp' \
    -ignore 'Regex .*/\.vim/.netrwbook' \
    -ignore 'Regex .*/\.vim/.netrwhist' \
    "$@"
And then periodically synchronize replicas as follows:
$ watch --interval 60 --beep --errexit --exec unisonw-me
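For hosts where `watch` is unavailable (or its output-height quirk bites), a plain shell loop can drive the periodic sync instead. This is just a sketch: the `max_runs` parameter only exists so the example terminates, and in real use you would pass your wrapper (e.g. `unisonw-me`) with a 60-second interval:

```shell
# Rerun a command every $interval seconds, stopping on failure
# (mimicking watch --errexit).  max_runs is only here so this
# sketch terminates; for real use loop forever instead.
run_periodically() {
    interval=$1; max_runs=$2; shift 2
    n=0
    while [ "$n" -lt "$max_runs" ]; do
        "$@" || return 1
        n=$((n + 1))
        sleep "$interval"
    done
}

run_periodically 0 3 echo "synced"   # stand-in for: run_periodically 60 ... unisonw-me
```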
That's it for today!

Advent of Code: [2020/03](https://adventofcode.com/2020/day/03)

Solution:

- Parse the input into a 2D array
- Start from `row=0,col=0`
- Move as per the given delta, wrapping around when horizontally moving outside of the map
- While doing so (i.e. moving inside the map), count the number of trees you hit
(defun parse-map (data)
  (let ((rows (length data)) (columns (length (first data))))
    (make-array (list rows columns) :initial-contents data)))

(defun count-hits (map delta-col delta-row)
  (loop with (rows cols) = (array-dimensions map)
        for row below rows by delta-row
        for col = 0 then (mod (+ col delta-col) cols)
        count (char= (aref map row col) #\#)))

(define-problem (2020 03) (map parse-map)
  (values
    (count-hits map 3 1)
    (loop for (delta-col delta-row) in '((1 1) (3 1) (5 1) (7 1) (1 2))
          collect (count-hits map delta-col delta-row) into hits
          finally (return (reduce #'* hits)))))

2020-12-02 (permalink)

+ change my `unison` wrapper to delete broken links when syncing to Dropbox


Advent of Code: [2020/02](https://adventofcode.com/2020/day/02)

Solution:

- Parse the input into `(n m ch text)` lists
- Part 1: count the entries for which the number of occurrences of `ch` in `text` is between `n` and `m`
- Part 2: count the entries for which exactly one of `text[n]` and `text[m]` equals `ch`
(defun parse-password (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer n m) (#'parse-char ch) text)
      ("(\\d+)-(\\d+) (\\w): (\\w+)" string)
    (list n m ch text)))

(defun parse-passwords (data)
  (mapcar #'parse-password data))

(defun valid-password-part1-p (min max ch text)
  (<= min (count ch text) max))

(defun valid-password-part2-p (pos1 pos2 ch text)
  (let ((ch1 (aref text (1- pos1)))
        (ch2 (aref text (1- pos2))))
    (cond ((char= ch1 ch) (char/= ch2 ch))
          ((char= ch2 ch) (char/= ch1 ch)))))

(define-problem (2020 2) (passwords parse-passwords)
  (loop for (n m ch text) in passwords
        count (valid-password-part1-p n m ch text) into part1
        count (valid-password-part2-p n m ch text) into part2
        finally (return (values part1 part2))))

2020-12-01 (permalink)

Advent of Code: [2020/01](https://adventofcode.com/2020/day/01)

Solution:

- Part 1: iterate the list of numbers, and for each, see if in the remainder of the list there is one element such that the sum of both is 2020
- Part 2: iterate the list of numbers, and for each, see if in the remainder of the list there are two elements whose sum is "2020 minus the currently selected element"
(defun find-pair-that-adds-up-to (target integers)
  (loop for (n . rest) on integers
        for m = (find target rest :key (partial-1 #'+ n))
        when m return (list n m)))

(define-problem (2020 1) (integers parse-integers)
  (values (reduce #'* (find-pair-that-adds-up-to 2020 integers))
          (loop for (n . rest) on integers
                for (m o) = (find-pair-that-adds-up-to (- 2020 n) rest)
                when m return (* n m o))))

2020-11-30 (permalink)

Advent of Code: [2016/24](https://adventofcode.com/2016/day/24)

Solution:

- First precompute the best path between any two sites (i.e. cells with numeric values) of the map -- this is done using A*
- Next run another A* trying to get the robot to visit each site
- The state is encoded as the current position of the robot (i.e. which site it's at), plus a bitmask representing which other sites have already been visited
- Part1: no heuristic, and return the final cost
- Part2: use the cost to go back to position 0 as heuristic, and return the final cost plus the cost to go back to position 0
(defstruct blueprint
  (walls (make-hash-table))
  sites)

(defun char- (a b) (- (char-code a) (char-code b)))

(defun read-blueprint (data)
  (let ((blueprint (make-blueprint))
        (points-of-interest (make-hash-table)))
    (with-slots (walls sites) blueprint
      (loop for row from 0 for string in data do
            (loop for column from 0 for ch across string
                  for pos = (complex column row) do
                  (cond
                    ((char= ch #\#) (setf (gethash pos walls) t))
                    ((char= ch #\.) nil)
                    (t (setf (gethash pos points-of-interest) (char- ch #\0))))))
      (setf sites (make-array (hash-table-count points-of-interest)))
      (maphash #'(lambda (pos poi) (setf (aref sites poi) pos)) points-of-interest))
    blueprint))

(defun find-path (walls from to)
  (nth-value 1 (a* from :goal-state to
                   :neighbors (lambda (p)
                                (loop for n in (adjacents p)
                                      unless (gethash n walls)
                                      collect (cons n 1)))
                   :heuristic (partial-1 #'manhattan-distance to))))

(defun precompute-paths (blueprint)
  (with-slots (walls sites) blueprint
    (let* ((n (length sites))
           (paths (make-array (list n n))))
      (loop for i from 0 below n do
            (loop for j from i below n
                  for cost = (find-path walls (aref sites i) (aref sites j)) do
                  (setf (aref paths i j) cost
                        (aref paths j i) cost)))
      paths)))

(defstruct (state (:conc-name)) robot visited)

(defun neighbors (paths site-count state)
  (with-slots (robot visited) state
    (loop for j below site-count
          for cost = (aref paths robot j)
          unless (logbitp j visited)
          collect (cons (make-state :robot j
                                    :visited (logior visited (ash 1 j)))
                        cost))))

(defun visit-sites (blueprint)
  (let* ((init-state (make-state :robot 0 :visited 1))
         (site-count (length (blueprint-sites blueprint)))
         (all-sites (1- (ash 1 site-count)))
         (paths (precompute-paths blueprint)))
    (nth-value 1 (a* init-state :goalp (partial-1 #'= (visited _) all-sites)
                     :neighbors (partial-1 #'neighbors paths site-count)))))

(defun visit-sites-part2 (blueprint)
  (let* ((init-state (make-state :robot 0 :visited 1))
         (site-count (length (blueprint-sites blueprint)))
         (all-sites (1- (ash 1 site-count)))
         (paths (precompute-paths blueprint)))
    (multiple-value-bind (end-state end-state-cost)
        (a* init-state :goalp (partial-1 #'= (visited _) all-sites)
            :neighbors (partial-1 #'neighbors paths site-count)
            :heuristic #'(lambda (s)
                          (aref paths (robot s) 0)))
      (+ end-state-cost (aref paths (robot end-state) 0)))))

(define-problem (2016 24) (blueprint read-blueprint)
  (values (visit-sites blueprint)
          (visit-sites-part2 blueprint)))

Advent of Code: [2016/25](https://adventofcode.com/2016/day/25)

Solution:

- Brute-force
- Use/abuse CL condition system (the new "out" instruction simply SIGNALs the value up, so we can use HANDLER-BIND to inspect the output without unwinding the execution stack)
- For each emitted value
- ...if it's not part of the clock signal sequence, signal an error, and restart the program with the _next_ input (we use HANDLER-CASE here, as we want to unwind the stack and move on)
- ...if it's part of the clock signal sequence, and it's been like that for the last 10 emitted elements (it's a heuristic), then we found our magic input
(define-condition not-clock-signal () ())
(define-condition clock-signal-indeed () ())

(defun generates-clock-signal-p (input program)
  (let ((remaining 10)
        (expected (ncycle (list 0 1))))
    (handler-case
        (handler-bind ((assembunnycode:program-output
                         (lambda (c)
                           (if (/= (assembunnycode:output-value c) (car expected))
                             (error 'not-clock-signal)
                             (progn
                               (setf remaining (1- remaining)
                                     expected (cdr expected))
                               (when (zerop remaining)
                                 (signal 'clock-signal-indeed)))))))
          (assembunnycode:run program (list input 0 0 0)))
      (not-clock-signal () nil)
      (clock-signal-indeed () t))))

(define-problem (2016 25) (program assembunnycode:parse-program)
  (loop for n from 0
        when (generates-clock-signal-p n program) return n))

2020-11-29 (permalink)

Advent of Code: [2016/23](https://adventofcode.com/2016/day/23)

Solution:

- Started from the solution of 2016/12
- Made dynamic variables for the instruction pointer and the program being run, so I can easily access them both from the body of the new instruction, `tgl`
- Implement the new instruction, `tgl`, to use `*ip*` to figure out which program instruction to _toggle_, and then update `*program*` accordingly
- Note 1: as the example shows, `jnz` now supports reading the jump offset from a register, so its implementation needs to be updated
- Note 2: not just its implementation, but its parser too! I wasted too much time figuring out why my program was looping... and it turns out it was because I forgot to update PARSE-JNZ
- Part 1: set register `a` to `7`, and run the program
- Part 2: set register `a` to `12`, and run the program... and after only a couple of **minutes** it should spit out the correct answer
(defvar *regs* nil)
(defvar *ip* nil)
(defvar *program* nil)

(defun reg-name-index (reg)
  (- (char-code reg) (char-code #\a)))

(defun reg (name)
  (aref *regs* (reg-name-index name)))

(defun (setf reg) (value name)
  (setf (aref *regs* (reg-name-index name)) value))

(defun reg-or-value (x)
  (if (numberp x) x (reg x)))

(defun i-cpy (x reg)
  (prog1 1 (setf (reg reg) (reg-or-value x))))

(defun i-inc (reg) (prog1 1 (incf (reg reg))))

(defun i-dec (reg) (prog1 1 (decf (reg reg))))

(defun i-jnz (x y)
  (if (zerop (reg-or-value x)) 1 (reg-or-value y)))

(defun i-tgl (x)
  (prog1 1
    (let* ((offset (+ (reg-or-value x) *ip*)))
      (when (array-in-bounds-p *program* offset)
        (destructuring-bind (fun . args) (aref *program* offset)
          (setf (car (aref *program* offset))
                (ecase (length args)
                  (1 (if (eq fun #'i-inc) #'i-dec #'i-inc))
                  (2 (if (eq fun #'i-jnz) #'i-cpy #'i-jnz)))))))))

(defun parse-value (string)
  (parse-integer string :junk-allowed t))

(defun parse-reg (string)
  (aref string 0))

(defun parse-reg-or-value (string)
  (or (parse-value string) (parse-reg string)))

(defun parse-cpy (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "cpy")
      (list #'i-cpy (parse-reg-or-value (second parts)) (parse-reg (third parts))))))

(defun parse-inc (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "inc")
      (list #'i-inc (parse-reg (second parts))))))

(defun parse-dec (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "dec")
      (list #'i-dec (parse-reg (second parts))))))

(defun parse-jnz (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "jnz")
      (list #'i-jnz (parse-reg-or-value (second parts)) (parse-reg-or-value (third parts))))))

(defun parse-tgl (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "tgl")
      (list #'i-tgl (parse-reg-or-value (second parts))))))

(defun parse-instruction (string)
  (or (parse-cpy string) (parse-inc string) (parse-dec string) (parse-jnz string)
      (parse-tgl string)))

(defun parse-program (data)
  (map 'vector #'parse-instruction data))

(defun run (program regs)
  (let ((*regs* regs)
        (*ip* 0)
        (*program* program))
    (loop while (< *ip* (length program))
          for (fun . args) = (aref program *ip*)
          do (incf *ip* (handler-case (apply fun args)
                          (type-error (c)
                            (format t "Error when evaluating ~a with ~a: ~s ~s~%" fun args c c)
                            1))))
    (reg #\a)))

(define-problem (2016 23) (data)
  (values
    (run (parse-program data) (make-array 4 :initial-contents (list 7 0 0 0)))
    (swallow (run (parse-program data) (make-array 4 :initial-contents (list 12 0 0 0))))))
Clearly there is a more clever solution to this, especially for part 2, but I did not feel like optimizing my input / implementing a new `mul` instruction (as hinted in the problem description), so I am going to leave that to my future self for now!

2020-11-28 (permalink)

Advent of Code: [2016/22](https://adventofcode.com/2016/day/22)

Solution:

- Part 1
- nothing crazy, compare each node with every other one and count the pairs which are _viable_
- Note to future self 1: read the text carefully
- Note to future self 2: read the text carefully
- Note to future self 3: read the text carefully
- Note to future self 4: don't make up any additional constraint
- Part 2
- intuition 1: to move data from disk a to c passing through b, you need to create some space on disk b first, move data from a to b, create some space on disk c, and finally move data from b to c
- intuition 2: you can only move data onto disks which are _compatible_ with whichever disk in your input happens to be unused (what my solution calls interchangeable nodes)
- A*: starting from the _target_ node, try to move its data to `0,0` -- the state object keeps track of where the target data is at, and which node is empty
- the cost of moving the target data into one of its adjacents disks is `1` plus the cost it takes to _empty that disk first_
- The cost of emptying a specific disk is implemented with another A* in which we _move_ the empty disk to the target location
- Note: when _moving_ the empty disk, make sure you don't touch the node where the target data is currently located
(defun pos (n) (nth 0 n))
(defun size (n) (nth 1 n))
(defun used (n) (nth 2 n))
(defun avail (n) (nth 3 n))

(defun parse-node (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer c r size used avail))
      ("node-x(\\d+)-y(\\d+)\\s+(\\d+)T\\s+(\\d+)T\\s+(\\d+)" string)
    (list (complex c r) size used avail)))

(defun parse-nodes (data)
  (remove nil (mapcar #'parse-node data)))

(defun viable-pair-p (n1 n2)
  (and (plusp (used n1))
       (not (eq n1 n2))
       (< (used n1) (avail n2))))

(defun viable-pairs (nodes)
  (loop for n1 in nodes
        append (loop for n2 in nodes
                     when (viable-pair-p n1 n2)
                     collect (list n1 n2))))

(defun find-target-node (nodes)
  (let (c-max max)
    (dolist (n nodes max)
      (with-complex-parts (c r) (pos n)
        (when (and (zerop r)
                   (or (null c-max)
                       (> c c-max)))
          (setf c-max c
                max n))))))

(defun find-empty-node (nodes)
  (find 0 nodes :key #'used))

(defun interchangeable-nodes (nodes)
  (loop with empty = (find-empty-node nodes)
        for n in nodes
        when (and (<= (used n) (size empty)))
        collect n))

(defstruct (state (:conc-name)) cur empty)

(defun cost-to-make-space (grid fixed from to)
  (nth-value 1 (a* from
                   :goal-state to
                   :neighbors (search-unit-cost (lambda (pos)
                                                  (loop for n in (adjacents pos)
                                                        when (and (gethash pos grid)
                                                                  (/= pos fixed))
                                                        collect n)))
                   :heuristic (partial-1 #'manhattan-distance to))))

(defun neighbors (state grid)
  (with-slots (cur empty) state
    (loop for next in (adjacents cur)
          for cost = (and (gethash next grid)
                          (cost-to-make-space grid cur empty next))
          when cost collect (cons (make-state :cur next
                                              :empty cur)
                                  (1+ cost)))))

(defun move-data (nodes)
  (let* ((interchangeables (interchangeable-nodes nodes))
         (grid (list-hash-table interchangeables #'pos))
         (init-state (make-state :cur (pos (find-target-node interchangeables))
                                 :empty (pos (find-empty-node interchangeables)))))
    (nth-value 1 (a* init-state
                     :goalp (partial-1 #'= (cur _) 0)
                     :neighbors (partial-1 #'neighbors _ grid)
                     :heuristic (partial-1 #'manhattan-distance (cur _) 0)
                     :test 'equalp))))

(define-problem (2016 22) (nodes parse-nodes)
  (values (length (viable-pairs nodes))
          (move-data nodes)))

2020-11-27 (permalink)

Advent of Code: [2016/21](https://adventofcode.com/2016/day/21)

Solution:

- Implement scramble operations (tried to be smart about it, and failed...so please don't be me, do the simplest thing possible)
- Parse the input
- Part1: scramble the input
- Part2: almost started _inverting_ each function, then noticed there were only `8!` possible inputs to try, so I went ahead and brute-forced it (i.e. generate all inputs, scramble them, and find the one that scrambles to "fbgdceah")
(defun swap-position (string x y)
  (let ((copy (copy-seq string)))
    (rotatef (aref copy x) (aref copy y))
    copy))

(defun swap-letter (string c1 c2)
  (let ((copy (copy-seq string))
        (pos1 (position c1 string))
        (pos2 (position c2 string)))
    (rotatef (aref copy pos1) (aref copy pos2))
    copy))

(defun rotate-left (string steps &aux (steps (mod steps (length string))))
  (concatenate 'string
               (subseq string steps)
               (subseq string 0 steps)))

(defun rotate-right (string steps)
  (let ((tail-size (- (length string) (mod steps (length string)))))
    (concatenate 'string
                 (subseq string tail-size)
                 (subseq string 0 tail-size))))

(defun rotate-on-position (string c)
  (let ((index (position c string)))
    (rotate-right string (if (>= index 4) (+ index 2) (1+ index)))))

(defun reverse-in-between (string start end)
  (concatenate 'string
               (subseq string 0 start)
               (reverse (subseq string start (1+ end)))
               (subseq string (1+ end))))

(defun move (string x y)
  (let ((with-x-popped (concatenate 'string
                           (subseq string 0 x)
                           (subseq string (1+ x)))))
    (concatenate 'string
                 (subseq with-x-popped 0 y)
                 (string (aref string x))
                 (subseq with-x-popped y))))

(defun parse-swap-position (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer x y))
      ("swap position (\\d+) with position (\\d+)" string)
    (list #'swap-position x y)))

(defun parse-char (string)
  (char string 0))

(defun parse-swap-letter (string)
  (cl-ppcre:register-groups-bind ((#'parse-char c1 c2))
      ("swap letter (\\w) with letter (\\w)" string)
    (list #'swap-letter c1 c2)))

(defun parse-rotate-leftright (string)
  (cl-ppcre:register-groups-bind (dir (#'parse-integer x))
      ("rotate (left|right) (\\d+) step" string)
    (list (if (string= dir "left") #'rotate-left #'rotate-right) x)))

(defun parse-rotate-on-position (string)
  (cl-ppcre:register-groups-bind ((#'parse-char c))
      ("rotate based on position of letter (\\w)" string)
    (list #'rotate-on-position c)))

(defun parse-reverse-in-between (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer start end))
      ("reverse positions (\\d+) through (\\d+)" string)
    (list #'reverse-in-between start end)))

(defun parse-move (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer x y))
      ("move position (\\d+) to position (\\d+)" string)
    (list #'move x y)))

(defun parse-scramble-op (string)
  (or (parse-swap-position string)
      (parse-swap-letter string)
      (parse-rotate-leftright string)
      (parse-rotate-on-position string)
      (parse-reverse-in-between string)
      (parse-move string)
      (error string)))

(defun parse-scramble-script (data)
  (mapcar #'parse-scramble-op data))

(defun scramble (script input)
  (loop for string = input then (apply fun string args)
        for (fun . args) in script
        finally (return string)))

(defun permutations-string (string)
  (mapcar #'(lambda (x) (coerce x 'string))
          (all-permutations (coerce string 'list))))

(define-problem (2016 21) (script parse-scramble-script)
  (values (scramble script "abcdefgh")
          (loop for input in (permutations-string "abcdefgh")
                when (string= (scramble script input) "fbgdceah")
                return input)))
I am not quite happy with how long/verbose the solution turned out to be, but whatever...

2020-11-26 (permalink)

Advent of Code: [2016/20](https://adventofcode.com/2016/day/20)

Solution:

- Merge overlapping ranges
- At first I thought about using union-find, but ultimately opted for...
- ...sort ranges (smallest start value first, then smallest end value)
- ...scan the sorted list of ranges and merge the _overlapping_ ones
- Part1: take the smallest non-overlapping range, and return `end + 1`
- Part2: iterate the non-overlapping ranges, and, starting from `4294967296` (i.e. `*ip-max*` plus one, the total number of addresses), keep subtracting the size of each range
(defparameter *ip-max* 4294967295)

(defun parse-range (string)
  (mapcar #'parse-integer (split-sequence:split-sequence #\- string)))

(defun parse-ranges (data)
  (mapcar #'parse-range data))

(defun range< (range1 range2)
  (destructuring-bind (left1 right1) range1
    (destructuring-bind (left2 right2) range2
      (or (< left1 left2)
          (and (= left1 left2)
               (<= right1 right2))))))

(defun merge-overlapping (ranges)
  (let ((sorted (sort (copy-seq ranges) #'range<)))
    (loop with (merged-start merged-end) = (first sorted)
          for (start end) in (rest sorted)
          if (> (1- start) merged-end)
            collect (list merged-start merged-end) into ranges
            and do (setf merged-start start merged-end end)
          else do (setf merged-end (max end merged-end))
          finally (return (append ranges
                                  (list (list merged-start merged-end)))))))

(define-problem (2016 20) (ranges parse-ranges)
  (loop with part1 with part2 = (1+ *ip-max*)
        for (start end) in (merge-overlapping ranges)
        for size = (1+ (- end start)) do
        (when (not part1) (setf part1 (1+ end)))
        (decf part2 size)
        finally (return (values part1 part2))))

2020-11-25 (permalink)

Advent of Code: [2016/19](https://adventofcode.com/2016/day/19)

Solution:

- I thought I could capture the _stealing_ operation with some mathematical formula, but then I tried the _naive_ approach (i.e. simulate it), and it just worked!
- Create a _circular_ list containing the index associated with each elf
- Keep a pointer to the _next_ element to be removed
- At each iteration remove such element, update the pointer, and repeat until only one elf is left
- Part 1: the _next_ element to be removed is simply the second element of the circular list
- Part 2: we start from the _middle_ of the list (i.e. which elf is sitting _in front_ of...), remove the _next_ element, and update the pointer _accordingly_
- ...if the size prior to the latest removal was odd, then we move our pointer one step forward (to skip the other elf that happened to be sitting in front of ...)
- ...otherwise, we don't update the pointer (i.e. the next element to remove is right after the current pointer)
(defun parse-num-elfes (data)
  (parse-integer (first data)))

(defun part1 (players)
  (let ((playing (ncycle (loop for i from 1 upto players collect i))))
    (loop repeat players
          for remaining = playing then (cdr remaining) do
          (setf (cdr remaining) (cddr remaining))
          finally (return (first remaining)))))

(defun part2 (players)
  (let ((playing (ncycle (loop for i from 1 upto players collect i))))
    (loop with middle = (nthcdr (1- (floor players 2)) playing)
          for size from players downto 1
          for remaining = middle then (if (evenp size) (cdr remaining) remaining) do
          (setf (cdr remaining) (cddr remaining))
          finally (return (first remaining)))))

(define-problem (2016 19) (elves parse-num-elfes)
  (values
    (part1 elves)
    (part2 elves)))
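For the record, part 1 is the classic Josephus problem with every second elf eliminated, and the closed-form formula I originally went looking for does exist: write the number of elves as `2^m + l` (with `0 <= l < 2^m`), and the winner is elf `2l + 1`. A quick sanity check in shell arithmetic:

```shell
# Josephus closed form for k=2: with players = 2^m + l (0 <= l < 2^m),
# the survivor is 2*l + 1.
josephus() {
    p=$1; m=1
    while [ $((m * 2)) -le "$p" ]; do m=$((m * 2)); done
    echo $(( 2 * (p - m) + 1 ))
}

josephus 5   # 5 elves -> elf 3, matching the example in the problem statement
```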

2020-11-24 (permalink)

Advent of Code: [2016/18](https://adventofcode.com/2016/day/18)

Solution:

- Initially thought about solving this using numbers and bit masks, but then opted for the simpler string-based approach
- Nothing crazy with the implementation, except:
- added two extra "." (i.e. gutters) to the input so I won't have to deal with the _edges_
- for the logic to decide whether a tile is safe or it's a trap -- thanks to [Karnaugh](https://en.wikipedia.org/wiki/Karnaugh_map) I was able to simplify the original set of rules to: _are left / right chars the same? then it's safe, otherwise it's not_
(defun add-gutters (string)
  (format nil ".~A." string))

(defun parse-trap-row (data &aux (string (first data)))
  (add-gutters string))

(defun its-a-trap-p (row index)
  ;; Then, a new tile is a trap only in one of the following situations:
  ;;
  ;; - Its left and center tiles are traps, but its right tile is not.
  ;; - Its center and right tiles are traps, but its left tile is not.
  ;; - Only its left tile is a trap.
  ;; - Only its right tile is a trap.
  ;;
  ;; In any other situation, the new tile is safe.
  ;;
  ;; It turns out (thanks Karnaugh) that this can be simplified to:
  ;;
  ;; - Are left/right the same? Then it's safe
  ;; - Otherwise, it's a trap
  (char/= (aref row (1- index)) (aref row (1+ index))))

(defun next (row)
  (let ((next (make-string (length row) :initial-element #\.)))
    (loop repeat (- (length row) 2)
          for index from 1 do
          (setf (aref next index) (if (its-a-trap-p row index) #\^ #\.)))
    next))

(defun generate (row times)
  (loop repeat times
        summing (count #\. row) into safe do
        (setf row (next row))
        finally (return (- safe (* times 2))))) ; offset 2 gutters per row

(define-problem (2016 18) (state parse-trap-row)
  (values
    (generate state 40)
    (generate state 400000)))
PS. Even with all these string operations, part 2 runs in around 1.5 seconds; without the Karnaugh magic it takes 8 seconds instead.

2020-11-23 (permalink)

Alright, alright, the waiting is _almost_ over: a couple of weeks ago [Eric Wastl](http://was.tl/) (creator of Advent of Code) [tweeted](https://twitter.com/ericwastl/status/1328254144545763328) that the 2020 [site](https://adventofcode.com/2020) is finally up -- don't get too excited though, for the first problem you will still have to wait until December 1st, midnight time (EST/UTC-5)!

For those of you not yet familiar with this (taken from the site's [about](https://adventofcode.com/2020/about) page):
Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other.
What do you think?

- Will you participate?  (Yes)
- What programming language would you use? ([Common Lisp](https://github.com/iamFIREcracker/adventofcode) of course)
- Editor / IDE?  ([Vim](https://github.com/iamFIREcracker/dotfiles/tree/master/.vim), the only text editor)

It's going to be fantastic...and brutal at the same time (too many smart cookies here, waking up at 6 am to get ahold of their stars as quickly as possible), but that's the beauty of it!

Speaking of waking up early, it [turns out](https://twitter.com/ericwastl/status/1329683037651673089) they got a DJ to play a live set with a big drop at the exact time the first problem will be revealed, so if you planned on waking up early, or simply happen to live on the right part of Earth, you might want to [tune in](https://t.co/9r8pd95G8v?amp=1).

_In bocca al lupo!_

Advent of Code: [2016/17](https://adventofcode.com/2016/day/17)

Solution:

- BFS
- A state is a combination of the current position, and how we got there (e.g. "RLRLDU...")
- Try to move up, down, left, right if
- ...it's not off the map
- ...the door is open
- To see if a door is unlocked, run md5 of the path so far, and extract the first 2 bytes -- nothing crazy, I simply followed instructions
- Stop when you made it to the bottom-right corner -- for Part 1 at least
- Part 2: BFS + pruning
- ...don't stop when you made it to the bottom-right corner
- ...instead check the size of the path, and then _prune_ the result
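Before the Lisp below, here is the door check in isolation: md5 the passcode plus the path so far, take the first four hex digits (one per door, in U/D/L/R order), and a door is open iff its digit is `b` through `f` (nibble value greater than 10). A Python sketch, using the puzzle's `hijkl` example:

```python
import hashlib

def door_states(passcode: str, path: str) -> list:
    # First 4 hex digits of md5(passcode + path), one per door (U, D, L, R);
    # a door is open iff its digit is in 'bcdef' (i.e. nibble value > 10).
    digest = hashlib.md5((passcode + path).encode()).hexdigest()
    return [c in "bcdef" for c in digest[:4]]

# Puzzle example: for passcode "hijkl" and no steps taken,
# up, down and left are open, right is closed.
print(door_states("hijkl", ""))  # → [True, True, True, False]
```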
(defparameter *nhood* '(#C(0 1) #C(0 -1) #C(-1 0) #C(1 0)))
(defparameter *steps* "UDLR")

(defun door-states (seed path)
  (let* ((string (format nil "~A~A" seed path))
         (bytes (md5:md5sum-string string))
         (result (make-array 4 :element-type t :fill-pointer 0)))
    (dotimes (i 2)
      (let ((byte1 (ldb (byte 4 4) (aref bytes i)))
            (byte2 (ldb (byte 4 0) (aref bytes i))))
        (vector-push (> byte1 10) result)
        (vector-push (> byte2 10) result)))
    result))

(defstruct (state (:conc-name)) pos path)

(defun neighbors (seed state)
  (with-slots (pos path) state
    (loop for delta in *nhood*
          for step across *steps*
          for openp across (door-states seed path)
          for next = (+ pos delta)
          when (and (<= 0 (realpart next) 3)
                    (>= 0 (imagpart next) -3)
                    openp)
          collect (make-state :pos next
                              :path (format nil "~A~C" path step)))))

(defun part1 (seed)
  (path (bfs (make-state :pos #c(0 0) :path "")
             :goalp #'(lambda (state) (= (pos state) #c(3 -3)))
             :neighbors (partial-1 #'neighbors seed)
             :test 'equalp)))

(defun part2 (seed)
  (let (max)
    (bfs (make-state :pos #c(0 0) :path "")
         :prunep #'(lambda (state cost)
                    (when (null state) (break))
                    (with-slots (pos) state
                      (when (= pos #c(3 -3))
                        (when (or (null max)
                                  (> cost max))
                          (setf max cost))
                        t)))
         :neighbors (partial-1 #'neighbors seed)
         :test 'equalp)
    max))

(define-problem (2016 17) (seed first)
  (values
    (part1 seed)
    (part2 seed)))

2020-11-22 (permalink)

? vim-sexp's element mappings do not work with COMPLEX


Advent of Code: [2016/14](https://adventofcode.com/2016/day/14)

Solution:

- Cache md5 hashes -- you don't want to calculate the same hash over and over again
- Implement the cache with a ring buffer -- you don't want to use a hash-table, as there is no need to keep around previously processed hashes
- Rest is implementing logic for finding triplets, and quintets (done using `cl-ppcre`)
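The triplet/quintet logic translates almost one-to-one to Python's `re`, if you want to play with it outside of Lisp (a sketch, not the actual solution):

```python
import re

def find_triplet_char(s):
    # First character that appears three times in a row, or None.
    m = re.search(r"(.)\1\1", s)
    return m.group(1) if m else None

def contains_quintet(char, s):
    # Does `char` appear five times in a row anywhere in s?
    return re.search(re.escape(char) + "{5}", s) is not None

assert find_triplet_char("abccc34") == "c"
assert find_triplet_char("abcde") is None
assert contains_quintet("e", "xxeeeeey")
assert not contains_quintet("e", "xxeeey")
```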
(defvar *salt* nil)
(defvar *key-stretching* nil)

(defun circular-cache (size &optional initial-element)
  (let ((cache (make-list size :initial-element initial-element)))
    (ncycle cache)))

(defun generate-hash (index)
  (let ((string (format nil "~A~D" *salt* index)))
    (dotimes (n (1+ *key-stretching*) string)
      (setf string (hexadecimal-string (md5:md5sum-string string))))))

(defun hash (index cache)
  (if-let ((cached (car cache)))
    cached
    (setf (car cache) (generate-hash index))))

(defun find-triplet-char (string)
  (when-let ((pos (cl-ppcre:scan "(.)\\1\\1" string)))
    (aref string pos)))

(defun contains-quintet-p (char string)
  (cl-ppcre:scan (format nil "~A{5}" char) string))

(defun keyp (index cache char)
  (loop :repeat 1000
        :for lcache = cache :then (cdr lcache)
        :for lindex = index :then (1+ lindex)
        :for hash = (hash lindex lcache)
        :thereis (contains-quintet-p char hash)))

(defun solve (salt &key (key-stretching 0))
  (let ((*salt* salt)
        (*key-stretching* key-stretching))
    (loop :for cache = (circular-cache 1001) :then (cdr cache)
          :for index = 0 :then (1+ index)
          :for hash = (hash index cache)
          :for char = (find-triplet-char hash)
          :count (and char (keyp (1+ index) (cdr cache) char)) :into keys
          :when (= keys 64) :return index
          :do (setf (car cache) nil))))

(define-problem (2016 14) (salt first)
  (values
    (solve salt)
    (solve salt :key-stretching 2016)))
Good, except when we set key-stretching to 2016 (i.e. 2017 hashings in total) as part 2 requires, this implementation takes like _forever_ to generate the _correct_ solution:
[SBCL] AOC/2016/14> (time (problem-run))
Evaluation took:
  377.863 seconds of real time
  335.047006 seconds of total run time (329.634168 user, 5.412838 system)
  [ Run times consist of 13.299 seconds GC time, and 321.749 seconds non-GC time. ]
  88.67% CPU
  869,059,756,917 processor cycles
  106,067,224,416 bytes consed

16106
22423
Instead of:

- Find triplet char
- For each of the next 1000 hashes, find one that contains a quintet with that char

What if we pre-calculated all the possible quintet characters, and searched for the triplet char among them?
(defun find-all-quintet-chars (string)
  (mapcar #'(lambda (match) (aref match 0))
          (cl-ppcre:all-matches-as-strings "(.)\\1\\1\\1\\1" string)))

(defun hash (index cache)
  (if-let ((cached (car cache)))
    cached
    (let ((hash (generate-hash index)))
      (setf (car cache) (cons hash (find-all-quintet-chars hash))))))

(defun keyp (index cache char)
  (loop :repeat 1000
        :for lcache = cache :then (cdr lcache)
        :for lindex = index :then (1+ lindex)
        :for (hash . quintets) = (hash lindex lcache)
        :thereis (member char quintets)))

(defun solve (salt &key (key-stretching 0))
  (let ((*salt* salt)
        (*key-stretching* key-stretching))
    (loop :for cache = (circular-cache 1001) :then (cdr cache)
          :for index = 0 :then (1+ index)
          :for (hash . ignored) = (hash index cache)
          :for char = (find-triplet-char hash)
          :count (and char (keyp (1+ index) (cdr cache) char)) :into keys
          :when (= keys 64) :return index
          :do (setf (car cache) nil))))
Let's run it and see how long it takes:
[SBCL] AOC/2016/14> (time (problem-run))
Evaluation took:
  281.838 seconds of real time
  276.823709 seconds of total run time (274.384517 user, 2.439192 system)
  [ Run times consist of 10.966 seconds GC time, and 265.858 seconds non-GC time. ]
  98.22% CPU
  648,208,539,419 processor cycles
  104,197,076,032 bytes consed

16106
22423
Well, that's _something_, though it's still taking a bit too much time.

I suspected my dumb `HEXADECIMAL-STRING` might have had something to do with it, so I took a look at [ironclad](https://quickref.common-lisp.net/ironclad.html)'s `BYTE-ARRAY-TO-HEX-STRING` function, and gutted the following out of it (I did not feel like either copying it over in full, or adding an additional dependency):
(defun md5-bytes-string (bytes)
  (let ((hexdigits #.(coerce "0123456789abcdef" 'simple-base-string)))
    (loop :with string = (make-string 32 :element-type 'base-char)
          :for byte :across bytes
          :for index :below 32 :by 2
          :do (setf (aref string index)
                    (aref hexdigits (ldb (byte 4 4) byte))
                    (aref string (1+ index))
                    (aref hexdigits (ldb (byte 4 0) byte)))
          :finally (return string))))
Run the program again, and:
[SBCL] AOC/2016/14> (time (problem-run))
Evaluation took:
  38.225 seconds of real time
  36.210049 seconds of total run time (35.759976 user, 0.450073 system)
  [ Run times consist of 1.091 seconds GC time, and 35.120 seconds non-GC time. ]
  94.73% CPU
  87,914,983,655 processor cycles
  18,883,617,520 bytes consed

16106
22423
Not bad, I can live with that!

PS. For the curious ones wondering how badly `HEXADECIMAL-STRING` might have been implemented: very badly (for this use case at least)
(defun hexadecimal-string (bytes)
  (format nil "~(~{~2,'0X~}~)" (coerce bytes 'list)))

Advent of Code: [2016/15](https://adventofcode.com/2016/day/15)

Solution:

- Disk #1 has 5 positions and starts at 4, meaning it will be at position 0 at time: 1, 6, 11, 16...
- It will take the capsule 1 second to get to the first disk, which means the button should be pressed at any of the instants above, minus 1 second: 0, 5, 10, 15...
- Disk #2 has 2 positions and starts at 1, meaning it will be at position 0 at time: 1, 3, 5, 7...
- The capsule will take 2 seconds to get to disk #2, so in order for the disk to be at position 0 when the capsule reaches it, the button should be pressed at time: -1, 1, 3, 5...
- For each disk, we can then create _generators_ (i.e. returning all the times the disk will be at position 0)
- We start pulling values out of these, until we find a common value
- Note: start pulling from the generator with the lowest value
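The same search can also be sketched without the generator machinery: disc `i` (1-based) is reached `t + i` seconds after pressing the button at time `t`, and must then be at position 0. A quick Python illustration of that condition (not the Lisp solution above), using the puzzle's example discs:

```python
def first_press_time(discs):
    # Each disc is (size, start_position); disc i (1-based) is reached
    # t + i seconds after pressing the button at time t, and must then
    # be at position 0: (start + t + i) % size == 0.
    t = 0
    while not all((start + t + i) % size == 0
                  for i, (size, start) in enumerate(discs, start=1)):
        t += 1
    return t

# Puzzle example: Disc #1 has 5 positions and starts at 4,
# Disc #2 has 2 positions and starts at 1; the answer is 5.
print(first_press_time([(5, 4), (2, 1)]))  # → 5
```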
(defun parse-disk (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer disk size time position))
      ("Disc #(\\d+) has (\\d+) positions; at time=(\\d+), it is at position (\\d+)." string)
    (list disk size time position)))

(defun position-generator (disk size time position)
  (let* ((at-pos-0 (- time position))
         (current (- at-pos-0 disk)))
    (labels ((next ()
               (incf current size)
               (cons current #'next)))
      (cons current #'next))))

(defun make-generators (data)
  (let* ((size (length data))
         (disks (mapcar #'parse-disk data))
         (generators (mapcar (partial-1 #'apply #'position-generator) disks)))
    (make-array size :initial-contents generators)))

(defun minmax (gens)
  (loop :with min :with min-pos :with max :with max-pos
        :for (value . ignored) :across gens
        :for i :from 0
        :when (or (not min) (< value min)) :do (setf min value min-pos i)
        :when (or (not max) (> value max)) :do (setf max value max-pos i)
        :finally (return (cons min-pos max-pos))))

(defun solve (data)
  (loop :with gens = (make-generators data)
        :for (min . max) = (minmax gens)
        :if (= min max) :return (car (aref gens min))
        :else :do (setf (aref gens min) (funcall (cdr (aref gens min))))))

(defun prepare-part2 (data)
  (let* ((new-disk-id (1+ (length data)))
         (new-disk (format nil "Disc #~D has 11 positions; at time=0, it is at position 0." new-disk-id)))
    (append data (list new-disk))))

(define-problem (2016 15) (data)
  (values
    (solve data)
    (solve (prepare-part2 data))))

Advent of Code: [2016/16](https://adventofcode.com/2016/day/16)

Solution:

- Pretty much do as instructed
- Generate random data using an expandable VECTOR...
- ...pre-load it with the input
- ...at each step append "0"
- ...then walk back to front, appending the opposite of the current element
- Calculate the checksum by _reducing_ adjacent elements, until the length is odd...
- ...do all this on the _same_ array, so we don't waste memory
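The two steps above are simple enough to prototype in a few lines of Python (just an illustration; the in-place, memory-friendly version is the Lisp below), tested against the puzzle's `10000` / length-20 example:

```python
def dragon(data, size):
    # Expand with the "dragon curve" step until we have `size` characters:
    # a = data, b = reversed(a) with 0/1 flipped, data = a + "0" + b.
    while len(data) < size:
        b = "".join("1" if c == "0" else "0" for c in reversed(data))
        data = data + "0" + b
    return data[:size]

def checksum(data):
    # Reduce adjacent pairs (equal -> "1", different -> "0")
    # until the length is odd.
    while len(data) % 2 == 0:
        data = "".join("1" if a == b else "0"
                       for a, b in zip(data[::2], data[1::2]))
    return data

# Puzzle example: seed "10000", disk length 20 -> checksum "01100".
print(checksum(dragon("10000", 20)))  # → 01100
```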
(defun generate-random-data (seed size)
  (let ((array (make-array size
                           :element-type 'base-char
                           :adjustable t
                           :fill-pointer 0)))
    (loop for c across seed do (vector-push c array))
    (loop while (< (length array) size)
          for last = (1- (fill-pointer array)) do
          (vector-push #\0 array)
          (loop for i from last downto 0
                for c = (aref array i) do
                (vector-push (if (char= #\0 c) #\1 #\0) array)))
    array))

(defun checksum-reduction (array)
  (loop for i below (length array) by 2
        for j from 0
        for c1 = (aref array i)
        for c2 = (aref array (1+ i)) do
        (setf (aref array j) (if (char= c1 c2) #\1 #\0))
        finally (setf (fill-pointer array) (1+ j)))
  array)

(defun checksum (array)
  (prog1 array
    (loop do (checksum-reduction array)
          while (evenp (length array)))))

(define-problem (2016 16) (seed first)
  (values
    (checksum (generate-random-data seed 272))
    (checksum (generate-random-data seed 35651584))))

2020-11-20 (permalink)

Advent of Code: [2016/12](https://adventofcode.com/2016/day/12)

Solution:

- Implement the 4 functions, one per each defined instruction (need to be careful to support value and register arguments)
- Parse the input into an array of functions + respective arguments
- Run the _program_ using a LOOP, `ip` starting from 0, and 4 registers...
- At each iteration, fetch the instruction at the `ip` position, and execute it (each instruction returns the offset `ip` should be updated with)
- Stop when `ip` moves past the end of the program
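The interpreter loop described above fits in a handful of lines; here is a Python sketch of the same idea (the real solution, in Lisp, follows), run against the example program from the puzzle:

```python
def run(program, regs=None):
    # regs maps register name -> value; operands are either register
    # names or integer literals.
    regs = regs or {r: 0 for r in "abcd"}
    val = lambda x: regs[x] if x in regs else int(x)
    ip = 0
    while ip < len(program):
        op, *args = program[ip].split()
        if op == "cpy":
            regs[args[1]] = val(args[0])
        elif op == "inc":
            regs[args[0]] += 1
        elif op == "dec":
            regs[args[0]] -= 1
        elif op == "jnz" and val(args[0]) != 0:
            ip += int(args[1])
            continue
        ip += 1
    return regs

# Puzzle example: leaves 42 in register a.
program = ["cpy 41 a", "inc a", "inc a", "dec a", "jnz a 2", "dec a"]
print(run(program)["a"])  # → 42
```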
(defvar *regs* nil)

(defun reg-name-index (reg)
  (- (char-code reg) (char-code #\a)))

(defun reg (name)
  (aref *regs* (reg-name-index name)))

(defun (setf reg) (value name)
  (setf (aref *regs* (reg-name-index name)) value))

(defun reg-or-value (x)
  (if (numberp x) x (reg x)))

(defun i-cpy (x reg)
  (prog1 1 (setf (reg reg) (reg-or-value x))))

(defun i-inc (reg) (prog1 1 (incf (reg reg))))

(defun i-dec (reg) (prog1 1 (decf (reg reg))))

(defun i-jnz (x offset)
  (if (zerop (reg-or-value x)) 1 offset))

(defun parse-value (string)
  (parse-integer string :junk-allowed t))

(defun parse-reg (string)
  (aref string 0))

(defun parse-reg-or-value (string)
  (or (parse-value string) (parse-reg string)))

(defun parse-cpy (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "cpy")
      (list #'i-cpy (parse-reg-or-value (second parts)) (parse-reg (third parts))))))

(defun parse-inc (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "inc")
      (list #'i-inc (parse-reg (second parts))))))

(defun parse-dec (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "dec")
      (list #'i-dec (parse-reg (second parts))))))

(defun parse-jnz (string)
  (let ((parts (split-sequence:split-sequence #\Space string)))
    (when (string= (first parts) "jnz")
      (list #'i-jnz (parse-reg-or-value (second parts)) (parse-value (third parts))))))

(defun parse-instruction (string)
  (or (parse-cpy string) (parse-inc string) (parse-dec string) (parse-jnz string)))

(defun parse-program (data)
  (map 'vector #'parse-instruction data))

(defun run (program regs)
  (let ((*regs* regs))
    (loop :with ip = 0
          :while (< ip (length program))
          :for (fun . args) = (aref program ip)
          :do (incf ip (apply fun args)))
    (reg #\a)))

(define-problem (2016 12) (program parse-program)
  (values
    (run program (make-array 4 :initial-contents (list 0 0 0 0)))
    (run program (make-array 4 :initial-contents (list 0 0 1 0)))))

Advent of Code: [2016/13](https://adventofcode.com/2016/day/13)

Solution:

- Part 1: A*
- Part 2: BFS, with prune enabled after 50 steps
- State: position we are in (implemented as COMPLEX)
- Neighbor function: try up, right, down, left, if the coordinates are not negative and they refer to an open space
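The open-space test is the heart of this one: plug `x` and `y` into the magic polynomial, add the favorite number, and the cell is open iff the popcount of the result is even. A Python sketch of just that rule, checked against the puzzle's favorite-number-10 example maze:

```python
def openp(x, y, favorite):
    # Open space iff the popcount of
    # x*x + 3*x + 2*x*y + y + y*y + favorite is even.
    n = x * x + 3 * x + 2 * x * y + y + y * y + favorite
    return bin(n).count("1") % 2 == 0

# Puzzle example: with favorite number 10, the first row of the maze
# (y = 0, x = 0..9) looks like ".#.####.##".
row = "".join("." if openp(x, 0, 10) else "#" for x in range(10))
print(row)  # → .#.####.##
```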
(defvar *favorite-number* nil)

(defun magic (x y)
  (+ (* x x)
     (* 3 x)
     (* 2 x y)
     y
     (* y y)))

(defun openp (x y)
  (loop :with number = (+ (magic x y) *favorite-number*)
        :for index :below (integer-length number)
        :count (logbitp index number) :into bits
        :finally (return (evenp bits))))

(defun neighbors (pos)
  (loop :for next :in (adjacents pos)
        :when (and (>= (realpart next) 0)
                   (>= (imagpart next) 0)
                   (openp (realpart next) (imagpart next)))
        :collect next))

(defun part1 (favorite-number end-state)
  (let ((*favorite-number* favorite-number))
    (nth-value 1 (a* #c(1 1) :goal-state end-state
                     :neighbors (search-unit-cost #'neighbors)
                     :heuristic (partial-1 #'manhattan-distance end-state)))))

(defun part2 (favorite-number)
  (let ((*favorite-number* favorite-number))
    (hash-table-count (nth-value 3 (bfs #c(1 1) :neighbors #'neighbors
                                        :prunep #'(lambda (state cost)
                                                   (declare (ignore state))
                                                   (> cost 50)))))))

(define-problem (2016 13) (data)
  (values
    (part1 (parse-integer (first data)) #c(31 39))
    (part2 (parse-integer (first data)))))

2020-11-19 (permalink)

? write a page about the keyboard mappings that I use


Changing the message `git` uses when merging branches...is not something you can easily [configure](https://stackoverflow.com/questions/3148863/how-can-i-customize-gits-merge-commit-message); your best bet is to _override_ it completely.

Place the following somewhere in your `$PATH`
#!/usr/bin/env bash

set -eu

REMOTE_BRANCH=${1?Need a value for REMOTE_BRANCH}
CURRENT_BRANCH=$(git currentbranch)

git merge --no-ff origin/$1 --edit -m "Merge branch '$REMOTE_BRANCH' into '$CURRENT_BRANCH'"
`chmod +x` it, and that's it:
$ git-merge-origin story/cx-532/neo4j-running-out-of-memory-when-taking-backup
Merge made by the 'recursive' strategy.
 system/ansible/roles/neo4j/templates/database-backup.sh.j2 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
$ git l -n 1
b99eb03303 Merge branch 'story/cx-532/neo4j-running-out-of-memory-when-taking-backups' into 'master' (by Matteo Landi) (HEAD -> master, origin/master)

Advent of Code: [2016/11](https://adventofcode.com/2016/day/11)

Solution:

- BFS
- a state contains: the floor the elevator is at, the elements located at each floor
- from a given state, all the _neighbor_ ones can be generated by:
- ...moving the elevator up **or** down
- ...picking 1 **or** 2 items from the current floor (my input contains 5 generators and 5 microchips, so this _naive_ implementation should work just fine)
- ...confirming that no microchip would fry on the floor we are leaving, as well as the one we are getting into
- stop when all the items are at the 4th floor
- Part 2 is the same as Part 1, with 4 additional items on the first floor -- and that will sloooooow things down

Easy right?!  Not really, implementing this took me way too long, so let's try to tackle this in steps.  First things first, let's define our _state_ object:

- Two properties, `elevator`, `floors`
- `elevator` is an integer
- `floors` is a LIST containing 4 numbers, bit-sets, each representing which element is currently located at that floor
- we have to deal with 10 items for part 1, and 14 for part 2, so bit-masks of 10/14 bits respectively will get the job done
- in particular, the least significant 5/7 bits will be used for microchips, while the rest for the generators
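The bit-set layout and the "does anything fry" check are easier to see with a toy sketch; here it is in Python, with hard-coded `N = 5` elements (again, just an illustration of the masks, not the Lisp solution that follows):

```python
N = 5                           # number of elements (part 1)
MICROCHIPS_MASK = (1 << N) - 1  # microchips in the low N bits,
                                # generators in the high N bits

def nothing_fries(floor):
    generators = floor >> N
    microchips = floor & MICROCHIPS_MASK
    # Safe iff there are no generators at all, or every microchip
    # on the floor is paired with its own generator.
    return generators == 0 or (microchips & generators) == microchips

# Chip 0 alone with generator 1 (but not generator 0): it fries.
assert not nothing_fries((1 << (N + 1)) | 1)
# Chip 0 with its own generator 0: safe.
assert nothing_fries((1 << N) | 1)
# Chips only, no generators: safe.
assert nothing_fries(0b00011)
```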
(defparameter *elements-mapping*
  '(("strontium" 1)
    ("plutonium" 2)
    ("thulium" 4)
    ("ruthenium" 8)
    ("curium" 16)
    ("elerium" 32)
    ("dilithium" 64)))

(defparameter *all-elements* nil)
(defparameter *all-elements-mask* nil)
(defparameter *microchips-mask* nil)

(defstruct (state :conc-name) elevator floors)

(defun floor-generators (floor)
  (ash floor (- *all-elements*)))

(defun floor-microchips (floor)
  (logand *microchips-mask* floor))

(defun parse-generators (string)
  (mapcar (lambda (each)
            (let ((name (first (split-sequence:split-sequence #\Space each))))
              (cadr (assoc name *elements-mapping* :test #'string=))))
          (cl-ppcre:all-matches-as-strings "\\w+ generator" string)))

(defun parse-microchips (string)
  (mapcar (lambda (each)
            (let ((name (first (split-sequence:split-sequence #\- each))))
              (cadr (assoc name *elements-mapping* :test #'string=))))
          (cl-ppcre:all-matches-as-strings "\\w+-compatible" string)))

(defun parse-floor (string)
  (let ((gens (loop :for each :in (parse-generators string)
                    :sum each))
        (ucs (loop :for each :in (parse-microchips string)
                   :sum each)))
    (+ (ash gens *all-elements*)
       ucs)))

(defun parse-state (data)
  (make-state :elevator 0
              :floors (make-array 4 :initial-contents (loop :for string :in data
                                                            :collect (parse-floor string)))))
Previously we said that a state is _valid_ if no microchip on any floor fries; again, that translates to:

- No generators are on the given floor, or...
- For each microchip on the floor, its generator is there too
(defun nothing-fries-p (floor)
  (let ((generators (floor-generators floor))
        (microchips (floor-microchips floor)))
    (or (zerop generators)
        (= (logand microchips generators) microchips))))
Stop condition: all the items are at the fourth floor -- i.e. the fourth floor's bit-set contains all 10 items for part 1, or all 14 items for part 2
(defun all-items-at-fourth-p (state)
  (let* ((floor (aref (floors state) 3)))
    (= floor *all-elements-mask*)))
Let's now try to generate all the neighbor states starting from a _known_ one; again, we said that:

- We can go up, or down
- We can pick one, or two items
- So long as nothing fries the floor that we are leaving
- and nothing fries at the floor we are entering into
(defun floor-without (floor index index2)
  (logandc2 floor (logior (ash 1 index)
                          (ash 1 index2))))

(defun floor-with (floor index index2)
  (logior floor (ash 1 index) (ash 1 index2)))

(defun change (floors floor-id floor floor2-id floor2)
  (let ((copy (copy-seq floors)))
    (setf (aref copy floor-id) floor
          (aref copy floor2-id) floor2)
    copy))

(defun neighbors (state)
  (with-slots (elevator floors) state
    (loop :with floor = (aref floors elevator)
          :for index :below (integer-length floor)
          :when (logbitp index floor)
          :append (loop :for index2 :from index :below (integer-length floor) ;; index2 starts from index, that's how we try to move a single item
                        :when (and (logbitp index2 floor)
                                   (nothing-fries-p (floor-without floor index index2)))
                        :append (loop :for dir :in (list -1 1)
                                      :for elevator2 = (+ elevator dir)
                                      :for floor2 = (and (<= 0 elevator2 3) (aref floors elevator2))
                                      :when (and floor2 (nothing-fries-p (floor-with floor2 index index2)))
                                      :collect (make-state :elevator elevator2
                                                           :floors (change floors
                                                                           elevator
                                                                           (floor-without floor index index2)
                                                                           elevator2
                                                                           (floor-with floor2 index index2))))))))
Let's add some final plumbing:
(defun solve (data &optional part2)
  (let* ((*all-elements* (if part2 7 5))
         (*all-elements-mask* (1- (ash 1 (* *all-elements* 2))))
         (*microchips-mask* (1- (ash 1 *all-elements*))))
    (nth-value 1 (bfs (parse-state data)
                      :test 'equalp
                      :goalp #'all-items-at-fourth-p
                      :neighbors #'neighbors))))

(defun prepare-part2 (data)
  (let ((copy (copy-seq data)))
    (setf (nth 0 copy) (format nil "~A ~{~A ~}" (nth 0 copy)
                               (list "elerium generator"
                                     "elerium-compatible microchip"
                                     "dilithium generator"
                                     "dilithium-compatible microchip")))
    copy))

(define-problem (2016 11) (data)
  (values
    (solve data)
    (solve (prepare-part2 data) t)))
It works...
[SBCL] AOC/2016/11> (time (problem-run))
Evaluation took:
  50.306 seconds of real time
  43.853553 seconds of total run time (40.663221 user, 3.190332 system)
  [ Run times consist of 6.507 seconds GC time, and 37.347 seconds non-GC time. ]
  87.17% CPU
  115,699,129,876 processor cycles
  7,316,583,024 bytes consed

37
61
...a bit slow, but it works.  Well, that does not surprise me, as NEIGHBORS is very dumb in what it does -- it blindly moves items around, without trying to immediately prune sub-optimal steps.

Anyway, I am fine with this...for now!

2020-11-16 (permalink)

[Codeflows 2020](https://codeflows.io/) qualification round: post-mortem

It was a total disaster: I did not manage to _fully_ finish any of the 6 problems, and ended up scoring 40 points only (I believe 400 points were required to make it to the second round).  I knew I sucked at programming competitions... I just did not know how **badly** I did.

Too many mistakes on my part:

- I took this too lightly: "you have more than 24h to complete this, but we don't anticipate it will take you more than 2 hours" said the email that I received, and since the town where I am living was about to pull another lock-down due to the increased number of COVID cases, I figured I could spend a few more hours outside and start working on this later...in the end, it's just a qualification round, right?!
- I had never used CodeChef before (I did not even have an account when I read the first problem's statement), and it took me a bit of time to get myself acquainted with the platform: _read-from-stdin-and-write-to-stdout_ vs _implement-this-function-that-accepts-these-arguments-and-returns-this-value_ (a la LeetCode), inability to debug failures, _funky_ error messages (see later), lack of commonly available libraries like `SPLIT-SEQUENCE`
- I spent almost an hour trying to figure out why one of my solutions was throwing a [NZEC](https://discuss.codechef.com/t/why-do-i-get-an-nzec/1775), and as I am writing this, I still have no clue whatsoever about what I did wrong: I changed my parsers to account for trailing whitespace in the input (someone suggested it might happen), I changed my input parsing logic to minimize consing, I tried forcing the garbage collector in between runs (I remember I had to do that or the Google Code Jam grader would not accept my solution)...but none of these seems to have made any difference
- Kept _switching_ problems the moment I realized the solution I was getting at was not as _straightforward_ as I initially anticipated; this is not new with me (i.e. "it cannot be that complicated, there has to be an easier way..."), and between the clock ticking and the lack of solved problems, I suspect things got even worse as I desperately looked for a low-hanging-fruit problem with which to build up some confidence
- I was not white-boarding (e.g. pen and paper, Gimp) as much as I should have: "oh, this is easy", I kept saying, and then immediately switched to the editor to hack something up...when clearly I should have spent more time _validating_ the solution, thinking about edge-cases, input constraints, etc.

Let's take a look at the problems, so you can gauge their complexity: the first one, had something to do with an array of integers, and commands that operate over it:

- Increase the values of a given range L-R, by X
- Sum values stored in odd positions of the array
- Sum values stored in even positions of the array

Easy, right? Apparently not, as the grader kept on throwing NZECs at me.
(defun read-line-or-eof ()
  (read-line nil nil :eof))

(defun read-integers (n)
  (loop :repeat n
        :collect (read)))

(defun read-heights ()
  (let* ((n (read))
         (heights (read-integers n)))
    (make-array n :initial-contents heights)))

(defun run ()
  (loop :with heights = (read-heights)
        :with q = (parse-integer (read-line-or-eof))
        :repeat q
        :for id = (read)
        :do (cond ((= id 1) (let ((l (read)) (r (read)) (x (read)))
                              (loop :for i :from (1- l) :below r
                                    :do (incf (aref heights i) x))))
                  (t (loop :for i = (- id 2) :then (+ i 2)
                           :while (< i (length heights))
                           :sum (aref heights i) :into sum
                           :finally (format t "~D~%" sum)))))) ;; out of memory exceptions?!
In retrospect, one could have implemented this way more efficiently by simply keeping track of the odd/even buckets, and keeping them up to date when tasked to increase the values in a given range; but shouldn't the grader have flagged my solution with a different error message in case it ran out of memory, or simply took too much time to complete?  I guess we will never know...
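For the record, the bucket idea would look something like this (a Python sketch of the technique only; the real submission read from stdin and was Common Lisp): keep the two answers pre-computed, and on a range update just count how many odd/even positions fall inside `[l, r]`.

```python
def solve(heights, queries):
    # Pre-compute sums of values at odd and even (1-based) positions;
    # a range update only needs to know how many odd/even positions
    # fall inside [l, r] -- O(1) instead of O(r - l).
    odd_sum = sum(heights[i] for i in range(0, len(heights), 2))   # positions 1, 3, ...
    even_sum = sum(heights[i] for i in range(1, len(heights), 2))  # positions 2, 4, ...
    out = []
    for q in queries:
        if q[0] == 1:                        # increase [l, r] by x
            _, l, r, x = q
            n_odd = (r + 1) // 2 - l // 2    # odd positions in [l, r]
            n_even = r // 2 - (l - 1) // 2   # even positions in [l, r]
            odd_sum += n_odd * x
            even_sum += n_even * x
        elif q[0] == 2:                      # sum of odd positions
            out.append(odd_sum)
        else:                                # sum of even positions
            out.append(even_sum)
    return out

# [1, 2, 3, 4]; add 10 to positions 2..3 -> [1, 12, 13, 4]
assert solve([1, 2, 3, 4], [(1, 2, 3, 10), (2,), (3,)]) == [14, 16]
```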

With the second problem, you were given an array containing 0s, 1s, 2s, and were asked to find the maximum number of partitions so that the sum of the numbers in each partition would be divisible by 3.  The realization that I had about this problem was that:

- It takes a 0 to make a partition that is divisible by 3
- It takes a 1 and a 2 to make a partition that is divisible by 3
- It takes three 1s to make a partition that is divisible by 3
- It takes three 2s to make a partition that is divisible by 3

I came up with the following recursive solution, but the grader did not seem to like it (another NZEC error):
(defun read-integers (n)
  (loop :repeat n
        :collect (read)))

(defun solve (s0 s1 s2)
  (labels ((recur (s1 s2)
             (let ((best 0))
               (when (>= s1 3)
                 (setf best (max (1+ (recur (- s1 3) s2)) best)))
               (when (and (>= s1 1) (>= s2 1))
                 (setf best (max (1+ (recur (1- s1) (1- s2))) best)))
               (when (>= s2 3)
                 (setf best (max (1+ (recur s1 (- s2 3))) best)))
               best)))
    (+ s0 (recur s1 s2))))

(defun run ()
  (loop :with n = (read)
        :repeat n
        :for (s0 s1 s2) = (read-integers 3)
        :do (format t "~D~%" (solve s0 s1 s2))))
I am not sure what the grader was complaining about, but I can only speculate that with big values of `s1` or `s2` my recursive solution would very likely exhaust the stack space.

Do we need to solve this iteratively/recursively, or can we solve this mathematically?

- Under the assumption that the fewer elements we use to create a group, the better, we should create groups with one 1 and one 2 first
- How many of these groups can we create? `min(s1, s2)`
- With the remaining 1s, `s1 - min(s1, s2)`, we can create groups using three 1s each
- With the remaining 2s, `s2 - min(s1, s2)`, we can create groups using three 2s each

This led me to this other more _efficient_ solution:
(defun solve (s0 s1 s2)
  (let ((min-s1s2 (min s1 s2)))
    (+ s0 min-s1s2 (truncate (- s1 min-s1s2) 3) (truncate (- s2 min-s1s2)))))
Unfortunately the grader did not like this either -- this time, the error message was WA (i.e. wrong answer).  I figured my approach was flawed, so I gave up and moved to the next problem.

Before we do that though, did you spot the _silly_ mistake that I had made? I forgot to pass `3` to the second call to `TRUNCATE`, and unfortunately none of my local tests caught this...
(defun solve (s0 s1 s2)
  (let ((min-s1s2 (min s1 s2)))
    (+ s0 min-s1s2 (truncate (- s1 min-s1s2) 3) (truncate (- s2 min-s1s2) 3))))
Would the grader like this solution? Hard to say...
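For what it's worth, a couple of quick sanity checks based on the grouping rules listed earlier (a group is a single 0, a 1 and a 2, three 1s, or three 2s):
(solve 0 1 1) ; => 1 -- one (1 2) pair
(solve 0 3 0) ; => 1 -- one (1 1 1) triple
(solve 1 5 2) ; => 4 -- one 0, two (1 2) pairs, one (1 1 1) triple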

The third problem felt more of a mathematical problem, than a programming one: given an array of integers `[A1, A2, ... AN]`, you had to find an array `[B1, B2, ... BN]` such that:

- Each `Bi` is smaller than a very big number
- Each `Ai` divides `Bi`
- Each `Bi` divides `Bi-1 x Bi+1` (imagine the array is circular, so `Bn+1` is `B1`)

I thought I could manually _solve_ these equations and find the relation between these different constants, but even though I managed to _simplify_ some of these, I did not know how to search for the _correct_ set of values amongst the _possible_ ones, so I gave up on this pretty quickly too.

The fourth problem was about finding two partitions of a binary string (all "0"s and "1"s) such that:

- The first one, is a palindrome (i.e. `s = rev(s)`)
- The second one, is an _antipalindrome_ (i.e. `s = rev(flip(s))`)
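In Lisp terms (these helper names are mine, not from the problem statement), the two properties could read something like:
(defun palindromep (s)
  (string= s (reverse s)))

(defun flip (s)
  "Flip each bit of the binary string S."
  (map 'string (lambda (c) (if (char= c #\0) #\1 #\0)) s))

(defun antipalindromep (s)
  (string= s (reverse (flip s))))
For example, "0110" is a palindrome, while "01" is an antipalindrome (its flip, "10", reversed, is "01" again).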

I could not immediately _see_ how to solve this, so I tagged this as _too-complicated-for-now_, and moved on...

Fifth problem: you are given the implementation of an inefficient function that, recursively, calculates the **bitwise OR** of the elements of a given array (see later); your task was to permute the input and minimize the total number of function calls.
int calculate_or (int i) {
    if (i == N) return A[i];
    if (calculate_or(i+1) == A[i]) return A[i];
    return calculate_or(i+1) | A[i];
}
The function calculates the result starting from the tail of the array and _slowly_ moving to the first element; in particular, the _extra_ recursive call is avoided if `calculate_or(i+1) == A[i]`, so if we want to minimize the number of recursive calls, we would have to arrange elements so that the ones with the wider 1s mask come first.

Anyway, since I had to return the minimum number of function calls, I figured I would start with that:
(defun count-fcalls (array &aux (n (length array)))
  (let ((suffix-or (make-array n :initial-contents (loop :for i :from (1- n) :downto 0
                                                         :for value = (aref array i)
                                                         :for each = value :then (logior each value)
                                                         :collect each))))
    (1+ (loop :for i :from 1 :below n
              :for each = (aref array (1- i))
              :for remaining-or = (aref suffix-or i)
              :for remaining-calls = (if (= each remaining-or) 1 2)
              :for acc = remaining-calls :then (* acc remaining-calls)
              :sum acc))))
Next: how could we _smartly_ shuffle the input array to minimize the number of recursive calls?  The first solution that I tried, was to sort elements by the number of raised bits:
(defun num-raised-bits (number)
  (loop :for i :below (integer-length number)
        :count (logbitp i number)))
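As an aside: the standard already provides this, for non-negative integers at least, as `LOGCOUNT`:
(logcount 13) ; => 3, since 13 is #b1101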

(defun run ()
  (loop :with n = (read)
        :repeat n
        :for array = (read-integers (read))
        :for sorted = (sort array #'> :key #'num-raised-bits)
        :do (format t "~D~%" (count-fcalls sorted))))
The grader did not like it.  What about sorting by most-significant bit first, and then by the number of raised bits?
(defun comes-first (n m)
  (let ((len-n (integer-length n))
        (len-m (integer-length m)))
    (or (> len-n len-m)
        (and (= len-n len-m)
             (let ((raised-n (num-raised-bits n))
                   (raised-m (num-raised-bits m)))
               (> raised-n raised-m))))))

(defun run ()
  (loop :with n = (read)
        :repeat n
        :for array = (read-integers (read))
        :for sorted = (sort array #'comes-first)
        :do (format t "~D~%" (count-fcalls sorted))))
I am not sure why I thought this would work; was I high or something? Anyway, of course the grader did not accept this either.  Lastly, I came up with the idea of solving this with DFS + memoization, but I could not finish this in time:

- Implement a recursive function to return the minimum number of calls required to calculate the bitwise OR (i.e. the solution to the problem), as well as bitwise OR mask
- I used a bitset to keep track of which element of the input array still had to be processed
- Base case: if a call is made with the same `remaining` bit-mask, return the cached value (memoization)
- Base case: when all the elements have been visited, return `(0 0)` -- 0 remaining function calls, 0 is the bitwise OR for the remaining items (none in this case)
- At each step, try to add one of the elements which are not part of our solution already, and see which of these additions minimize the number of recursive calls
- How? Each recursive call returns the number of remaining function calls, `fcalls`, as well as the bit-mask of all the remaining elements, `or-suffix`; if the currently tried element is equal to `or-suffix` then we would only call the recursive function once, otherwise twice; this means that adding the currently tried element will require `1 + fcalls` or `1 + 2 * fcalls` total function calls respectively
(defun solve (array)
  (let* ((n (length array))
         (all (1- (ash 1 n)))
         (memo (make-hash-table)))
    (labels ((mget (key)
               (gethash key memo))
             (mset (key value)
               (setf (gethash key memo) value))
             (dfs (remaining &aux (cached (mget remaining)))
               (cond (cached cached)
                     ((zerop remaining) (mset remaining (list 0 0)))
                     (t (let (fcalls-best or-suffix-best)
                          (loop :for i :below n
                                :for each = (aref array i)
                                :when (logbitp i remaining)
                                :do (destructuring-bind (fcalls-rest or-suffix-rest)
                                        (dfs (logxor (ash 1 i) remaining))
                                      (let* ((no-extra-call-required (= or-suffix-rest each))
                                             (fcalls-curr (1+ (* (if no-extra-call-required 1 2) fcalls-rest))))
                                        (when (or (not fcalls-best)
                                                  (< fcalls-curr fcalls-best))
                                          (setf fcalls-best fcalls-curr
                                                or-suffix-best (logior each or-suffix-rest))))))
                          (mset remaining (list fcalls-best or-suffix-best)))))))
      (first (dfs all)))))

(defun run ()
  (loop :with n = (read)
        :repeat n
        :for array = (read-integers (read))
        :do (format t "~D~%" (solve array))))
This seems to return the **right** value -- for the inputs I tried this with, at least -- however, it took more than a second to output the answer for `#(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 29)`, so I can hardly believe the grader would accept this...

Anyway, the sixth problem? It had something to do with doing some mathematical calculations on a given input array, but I was immediately put off by the double multiplication at the beginning of the problem description, so I ended up not spending much time on this -- I bet this was the easy problem I was desperately looking for...

And this was the last one...what an epic fail ;-)

So much for a _reality check_, uh?!

2020-11-14 (permalink)

I was having a hard time figuring out the **name** of one of the running applications on my Mac -- I wanted to create a [shortcut](https://github.com/iamFIREcracker/dotfiles/blob/c46cc374d2053d47a0f6664650786f3af7783fd9/.hammerspoon/init.lua#L35) for it, using Hammerspoon -- and it turns out the project's [FAQ](https://www.hammerspoon.org/faq/) page had an entry for exactly this problem:
If you are trying to find Microsoft Office applications (e.g. using hs.appfinder.appFromName("Word")), they present a different application name to the OS than they display in the system menu bar. Use "Microsoft Word", "Microsoft Excel", etc. instead.
You can get a list of the real names of all running applications in the Hammerspoon Console with the following snippet:
hs.fnutils.each(hs.application.runningApplications(), function(app) print(app:title()) end)

2020-11-13 (permalink)

Advent of Code: [2016/10](https://adventofcode.com/2016/day/10)

Solution:

- Parse input and return a hash table containing all the _bots_
- A _bot_ is a struct containing `id`, `logic` (i.e. low-value-microchip goes to bot/output, high-value-microchip goes to bot/output), `microchips` (i.e. list of microchips in the bot's hands)
- Find the first bot which is _unblocked_ (i.e. has 2 microchips)
- Move microchips to other bots/outputs
- Repeat until there are no more bots which are unblocked
(defstruct bot id microchips logic)

(defun bot-microchips-sorted (bot)
  (destructuring-bind (a b) (bot-microchips bot)
    (if (< a b) (list a b) (list b a))))

(defun parse-init-instruction (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer value bot-id))
      ("value (\\d+) goes to bot (\\d+)" string)
    (list value bot-id)))

(defun parse-logic-instruction (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer bot-id)
                                  low-to-type
                                  (#'parse-integer low-to-id)
                                  high-to-type
                                  (#'parse-integer high-to-id))
      ("bot (\\d+) gives low to (bot|output) (\\d+) and high to (bot|output) (\\d+)" string)
    (list bot-id
          (string= low-to-type "bot") low-to-id
          (string= high-to-type "bot") high-to-id)))

(defun parse-bots (data)
  (let ((bots (make-hash-table)))
    (flet ((get-or-create (bot-id)
             (or (gethash bot-id bots)
                 (setf (gethash bot-id bots) (make-bot :id bot-id)))))
      (dolist (string data bots)
        (let (match)
          (cond ((setf match (parse-init-instruction string))
                 (destructuring-bind (value bot-id) match
                   (let ((bot (get-or-create bot-id)))
                     (push value (bot-microchips bot)))))
                ((setf match (parse-logic-instruction string))
                 (let ((bot (get-or-create (first match))))
                   (setf (bot-logic bot) (rest match))))))))))

(defun find-unblocked-bot (bots)
  (loop :for each :being :the :hash-values :of bots
        :for microchips = (bot-microchips each)
        :when (= (length microchips) 2) :return each))

(define-problem (2016 10) (bots parse-bots)
  (loop :with part1 :with outputs = (make-hash-table)
        :for bot = (find-unblocked-bot bots)
        :while bot
        :for (low high) = (bot-microchips-sorted bot)
        :when (and (= high 61) (= low 17)) :do (setf part1 (bot-id bot))
        :do (destructuring-bind (low-to-bot-p low-to-id high-to-bot-p high-to-id)
                (bot-logic bot)
              (if low-to-bot-p
                (push low (bot-microchips (gethash low-to-id bots)))
                (setf (gethash low-to-id outputs) low))
              (if high-to-bot-p
                (push high (bot-microchips (gethash high-to-id bots)))
                (setf (gethash high-to-id outputs) high))
              (setf (bot-microchips bot) nil))
        :finally (return (values part1
                                 (* (gethash 0 outputs)
                                    (gethash 1 outputs)
                                    (gethash 2 outputs))))))

2020-11-12 (permalink)

+ lispindent breaks with cl-ppcre group bindings forms like `((#'parse-integer first second))`


Interesting Rework podcast about PTO policies:
Unlimited paid time off is a common perk in the tech industry, but as one company discovered, an open-ended vacation policy led to confusion and even burnout. Dan Jimenez of Chatbooks comes on Rework to talk about how they shifted from unlimited to mandatory PTO, and how they're recalibrating expectations for work, productivity, and rest during a turbulent time.
Link: [Take Some Time Off (We Mean it!)](https://share.transistor.fm/s/0000e516)

Vim/vim-sexp were acting _weirdly_ when working on a file that contained the following form:
(let* ((match-size (1+ (position #\) string)))
  ...)
It turns out the plugin was tripping over the `#\)` literal; I could have used `#\right_parenthesis` instead, but unfortunately that's not a portable solution.

So I asked this on the Lisp Discord server, and one suggestion that came in was:
you can use `(char ")" 0)` as a workaround, optionally armed with a `#.` in non-evaluated environments
That's clever, I had not thought about using the compile-time evaluation syntax for this.
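A minimal example of the workaround, outside of my actual use case:
(position #\) "(a)b")            ; => 2, but the #\) literal trips up vim-sexp
(position #.(char ")" 0) "(a)b") ; => 2, with no unbalanced-looking parenthesis in the source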

Anyway, if you are wondering: yes, that did actually fix my problem, and now Vim is not behaving weirdly anymore -- I am still on the fence whether this looks more readable or not.

2020-11-11 (permalink)

Advent of Code: [2016/08](https://adventofcode.com/2016/day/8)

Solution:

- Parse instructions
- Implement the screen as a 2d array, and implement _drawing_ functions not caring much about performance (rotate ones in particular, could be implemented without allocating an extra array for the column/row, but I was too lazy to implement that)
- Run all the drawing instructions.  At the end
- Part 1: count lit pixels
- Part 2: _print_ the screen, and _visually_ OCR it :D
(defun parse-draw-rect (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer width height))
      ("rect (\\d+)x(\\d+)" string)
    (list :draw-rect height width)))

(defun parse-rotate-row (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer i shift))
      ("rotate row y=(\\d+) by (\\d+)" string)
    (list :rotate-row i shift)))

(defun parse-rotate-column (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer j shift))
      ("rotate column x=(\\d+) by (\\d+)" string)
    (list :rotate-column j shift)))

(defun parse-single-instruction (string)
  (or (parse-draw-rect string)
      (parse-rotate-row string)
      (parse-rotate-column string)))

(defun parse-instructions (data)
  (mapcar #'parse-single-instruction data))

(defun draw-rect (screen height width)
  (dotimes (i height)
    (dotimes (j width)
      (setf (aref screen i j) #\#))))

(defun rotate-row (screen i shift &aux (n (array-dimension screen 1)))
  (let ((new-values (make-array n :element-type 'character)))
    (loop :for j :below n
          :for k = (mod (+ j shift) n)
          :do (setf (aref new-values k) (aref screen i j)))
    (loop :for v :across new-values
          :for j :from 0
          :do (setf (aref screen i j) v))))

(defun rotate-column (screen j shift &aux (n (array-dimension screen 0)))
  (let ((new-values (make-array n :element-type 'character)))
    (loop :for i :below n
          :for k = (mod (+ i shift) n)
          :do (setf (aref new-values k) (aref screen i j)))
    (loop :for v :across new-values
          :for i :from 0
          :do (setf (aref screen i j) v))))

(defun count-lit-pixels (screen)
  (loop :for i :below (array-total-size screen)
        :count (eql (row-major-aref screen i) #\#)))

(defun print-screen (screen)
  (with-output-to-string (s)
    (terpri s) ; print an additional newline, so I can better format the expected string
    (let ((height (array-dimension screen 0))
          (width (array-dimension screen 1)))
      (dotimes (i height)
        (dotimes (j width)
          (princ (aref screen i j) s))
        (terpri s)))))

(define-problem (2016 08) (instructions parse-instructions)
  (loop :with screen = (make-array (list 6 50) :initial-element #\Space)
        :for (type . args) :in instructions
        :do (case type
              (:draw-rect (apply #'draw-rect screen args))
              (:rotate-row (apply #'rotate-row screen args))
              (:rotate-column (apply #'rotate-column screen args)))
        :finally (return (values (count-lit-pixels screen)
                                 (print-screen screen)))))

Advent of Code: [2016/09](https://adventofcode.com/2016/day/9)

Solution:

- Part 1: reconstruct the uncompressed message, implementing a recursive descent parser
- when parsing a marker, one should extract the `size` of the repetition and the number of `times` the message should be repeated -- reconstruct that, and move on
- when parsing _everything else_, simply process until the next marker
(defun parse-message (string)
  (or (parse-marker string)
      (parse-rest string)))

(defun parse-marker (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer size times))
      ("^\\((\\d+)x(\\d+)\\)" string)
    (let* ((match-size (1+ (position #\) string)))
           (seed (subseq string match-size (+ match-size size))))
      (concatenate 'string
                   (format nil "~V@{~a~:*~}" times seed) ;; meh
                   (parse-message (subseq string (+ match-size size)))))))

(defun parse-rest (string)
  (cl-ppcre:register-groups-bind (raw)
      ("^([^(]+)" string)
    (concatenate 'string raw (parse-marker (subseq string (length raw))))))
- Part 2: as the problem statement suggests, a _naive_ solution would not work with this new version of the format
- The intuition is that we don't really care about the message... we only care about its length
- Recursive solution
- The size of a compressed portion (i.e. one starting with a marker) is equal to the size (i.e. recursive call) of the message after the marker that needs to be repeated, _times_ the number of times it needs to be repeated
- The size of an uncompressed portion is... well the size of that portion
(defun message-length (string)
  (labels ((compressed (string)
             (cl-ppcre:register-groups-bind ((#'parse-integer size times))
                 ("^\\((\\d+)x(\\d+)\\)" string)
               (let ((match-size (1+ (position #\) string))))
                 (+ (* (message-length (subseq string match-size (+ match-size size))) times)
                    (message-length (subseq string (+ match-size size)))))))
           (uncompressed (string)
             (cl-ppcre:register-groups-bind (raw)
                 ("^([^(]+)" string)
               (+ (length raw) (message-length (subseq string (length raw)))))))
    (or (compressed string) (uncompressed string) 0)))
- This works, and we can easily adapt this solution to work for Part 1 too
- The difference is that for part one, we don't _recurse_ when calculating the length of a compressed area
(defparameter *version* 1)

(defun message-length (string)
  (or (message-compressed-length string)
      (message-uncompressed-length string)
      0))

(defun message-compressed-length (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer size times))
      ("^\\((\\d+)x(\\d+)\\)" string)
    (let* ((match-size (1+ (position #\) string)))
           (size-uncompressed (if (= *version* 1)
                                size
                                (message-length (subseq string match-size (+ match-size size))))))
      (+ (* size-uncompressed times)
         (message-length (subseq string (+ match-size size)))))))

(defun message-uncompressed-length (string)
  (cl-ppcre:register-groups-bind (raw)
      ("^([^(]+)" string)
    (+ (length raw) (message-length (subseq string (length raw))))))

(define-problem (2016 09) (message first)
  (values (message-length message)
          (let ((*version* 2))
            (message-length message))))
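The examples from the problem statement seem to check out:
(message-length "ADVENT")             ; => 6
(message-length "(3x3)XYZ")           ; => 9
(let ((*version* 2))
  (message-length "X(8x2)(3x3)ABCY")) ; => 20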

2020-11-10 (permalink)

Advent of Code: [2016/05](https://adventofcode.com/2016/day/5)

Solution:

- Brute-force it
- Let someone else deal with [md5](https://quickref.common-lisp.net/md5.html)
- Check if a hash is _good_ (first 5 characters of the string representation are "0"), then accumulate...
- Part 1: accumulate the 6th character
- Part 2: if 6th character is below "8", write the 7th character on the _right_ spot
- PS: this solution is slow... on my laptop it takes around 40 seconds to run
(defun goodp (md5-hash)
  ;; The first 5 hex digits are all "0" iff the first two bytes are 0,
  ;; and the high nibble of the third byte is 0 too
  (and (zerop (aref md5-hash 0))
       (zerop (aref md5-hash 1))
       (zerop (ash (aref md5-hash 2) -4))))

(defun int-hex (n)
  (write-to-string n :base 16))

(defun destruct-hash (md5-hash)
  (let ((valid (goodp md5-hash))
        (part1-digit (aref (string-downcase (int-hex (aref md5-hash 2))) 0))
        (part2-offset (aref md5-hash 2))
        (part2-digit (aref (string-downcase (int-hex (ash (aref md5-hash 3) -4))) 0)))
    (list valid part1-digit part2-offset part2-digit)))

(define-problem (2016 05) (door-id first)
  (loop :with part1 = (make-array 8 :element-type 'character :adjustable t :fill-pointer 0)
        :with part2 = (make-string 8 :initial-element #\_)
        :with remaining = 8
        :for index :from 0
        :for md5-hash = (md5:md5sum-string (mkstr door-id index))
        :for (valid part1-digit part2-offset part2-digit) = (destruct-hash md5-hash)
        :when valid :do (progn
                          (when (< (fill-pointer part1) 8)
                            (vector-push part1-digit part1))
                          (when (and (< part2-offset 8)
                                     (eql (aref part2 part2-offset) #\_))
                            (setf (aref part2 part2-offset) part2-digit
                                  remaining (1- remaining))))
        :when (zerop remaining) :return (values part1 part2)))

Advent of Code: [2016/06](https://adventofcode.com/2016/day/6)

Solution:

- Parse input
- Part 1: iterate by columns, run `FREQUENCIES`, and return the most frequent char
- Part 2: use the least frequent char instead
(defun parse-recording (data)
  (make-array (list (length data) (length (first data)))
              :initial-contents data))

(defun array-column (array j)
  (loop :for i :below (array-dimension array 0)
        :collect (aref array i j) :into values
        :finally (return (coerce values 'simple-vector))))

(define-problem (2016 06) (recording parse-recording)
  (loop :with columns = (array-dimension recording 1)
        :for j :below columns
        :for column = (array-column recording j)
        :for freqs = (sort (hash-table-alist (frequencies column)) #'> :key #'cdr)
        :collect (car (first freqs)) :into part1
        :collect (caar (last freqs)) :into part2
        :finally (return (values (coerce part1 'string)
                                 (coerce part2 'string)))))

Advent of Code: [2016/07](https://adventofcode.com/2016/day/7)

Solution:

- Parse IPs into splits, where the even ones represent the hypernet sequences
- Part 1
- ... iterate each split (odd splits **might** contain an ABBA, even ones **should not**)
- ... if it **does** contain an ABBA in an odd split, and does **not** contain an ABBA in an even one, then count the entry
(defun parse-ip (string)
  (->< string
       (split-sequence:split-sequence #\[ ><)
       (mapcan (lambda (s) (split-sequence:split-sequence #\] s)) ><)))

(defun parse-ips (data)
  (mapcar #'parse-ip data))

(defun contains-abba-p (string)
  (loop :for i :from 3 :below (length string)
        :for a = (aref string (- i 3))
        :for b = (aref string (- i 2))
        :for c = (aref string (- i 1))
        :for d = (aref string (- i 0))
        :when (and (char= a d) (char= b c) (char/= a b)) :return t))

(defun supports-tls-p (ip)
  (loop :with found-one-abba
        :for part :in ip
        :for hypernetp = nil :then (not hypernetp)
        :if (and hypernetp (contains-abba-p part)) :return nil
        :else :do (unless found-one-abba
                    (setf found-one-abba (contains-abba-p part)))
        :finally (return found-one-abba)))

(define-problem (2016 07) (ips parse-ips)
  (loop :for ip :in ips
        :count (supports-tls-p ip) :into part1
        :finally (return part1)))
- Part 2
- ... it's a bit trickier, because there could be multiple ABAs inside the odd splits, but only one of them might have a matching BAB in one of the hypernet sequences
- For each ABA inside any of the odd splits, check if the corresponding BAB exists inside any of the hypernet sequences
(defun extract-abas (string)
  (loop :for i :from 2 :below (length string)
        :for a = (aref string (- i 2))
        :for b = (aref string (- i 1))
        :for c = (aref string (- i 0))
        :when (and (char= a c) (char/= a b)) :collect (mkstr a b c)))

(defun aba-bab (aba)
  (mkstr (aref aba 1) (aref aba 0) (aref aba 1)))

(defun contains-bab-p (bab hypernet)
  (some (partial-1 #'search bab _) hypernet))

(defun supports-ssl-p (ip)
  (destructuring-bind (normal hypernet)
      (loop :for split :in ip
            :for i :from 1
            :if (oddp i) :collect split :into normal
            :else :collect split :into hypernet
            :finally (return (list normal hypernet)))
    (loop :for split :in normal
          :for babs = (mapcar #'aba-bab (extract-abas split))
          :when (some (partial-1  #'contains-bab-p _ hypernet) babs) :return t)))

(define-problem (2016 07) (ips parse-ips)
  (loop :for ip :in ips
        :count (supports-tls-p ip) :into part1
        :count (supports-ssl-p ip) :into part2
        :finally (return (values part1 part2))))
- This works, but I think we can try and simplify the implementation a bit
- Let's **always** partition splits into _normal_, and _hypernet_ (not just for part 2)
- Part 1: count IPs with an ABBA inside the _normal_ part, and **not** an ABBA inside the _hypernet_ part
- Part 2: count IPs having at least one ABA part inside the _normal_ part, and a complementary BAB inside the _hypernet_ part
(defun parse-ip-parts (string)
  (->< string
       (split-sequence:split-sequence #\[ ><)
       (mapcan (lambda (s) (split-sequence:split-sequence #\] s)) ><)))

(defun parse-ip (string)
  (loop :for split :in (parse-ip-parts string)
        :for i :from 1
        :if (oddp i) :collect split :into normal
        :else :collect split :into hypernet
        :finally (return (list normal hypernet))))

(defun parse-ips (data)
  (mapcar #'parse-ip data))

(defun contains-abba-p (string)
  (loop :for i :from 3 :below (length string)
        :for a = (aref string (- i 3))
        :for b = (aref string (- i 2))
        :for c = (aref string (- i 1))
        :for d = (aref string (- i 0))
        :when (and (char= a d) (char= b c) (char/= a b)) :return t))

(defun supports-tls-p (normal hypernet)
  (and (some #'contains-abba-p normal)
       (notany #'contains-abba-p hypernet)))

(defun extract-babs (string)
  (loop :for i :from 2 :below (length string)
        :for a = (aref string (- i 2))
        :for b = (aref string (- i 1))
        :for c = (aref string (- i 0))
        :when (and (char= a c) (char/= a b)) :collect (mkstr b a b)))

(defun contains-bab-p (hypernet)
  (lambda (bab)
    (some (lambda (split) (search bab split)) hypernet)))

(defun supports-ssl-p (normal hypernet)
  (loop :for each :in normal
        :for babs = (extract-babs each)
        :when (some (contains-bab-p hypernet) babs) :return t))

(define-problem (2016 07) (ips parse-ips)
  (loop :for (normal hypernet) :in ips
        :count (supports-tls-p normal hypernet) :into part1
        :count (supports-ssl-p normal hypernet) :into part2
        :finally (return (values part1 part2))))

2020-11-09 (permalink)

Advent of Code: [2016/04](https://adventofcode.com/2016/day/4)

Solution:

- Parse room specs
- Part 1: calculate the frequency of each character in the encrypted name (dashes excluded)
- ...sort frequency tuples by most frequent first (use lexicographical order to break ties)
- ...find the entries for which the order of the most frequent characters matches the letters defined inside the checksum
- ...sum their IDs
- Part 2: [Caesar cipher](https://rosettacode.org/wiki/Caesar_cipher)
(defun read-single-room-specification (string)
  (cl-ppcre:register-groups-bind (name (#'parse-integer sector) checksum)
      ("([a-z\\-]+)-(\\d+)\\[([a-z-]+)\]" string)
    (list name sector checksum)))

(defun read-specifications (data)
  (mapcar #'read-single-room-specification data))

(defun validp (freqs checksum)
  (setf freqs (sort freqs (lambda (e1 e2)
                            (destructuring-bind ((c1 . f1) (c2 . f2))
                                (list e1 e2)
                              (or (> f1 f2)
                                  (and (= f1 f2)
                                       (char< c1 c2)))))))
  (and (<= (length checksum) (length freqs))
       (loop :for e :across checksum
             :for (c) :in freqs
             :always (eql e c))))

(defun decipher-char (c offset &aux (base (char-code #\a)))
  (->< (char-code c)
    (- >< base)
    (+ offset)
    (mod >< 26)
    (+ base)
    code-char))

(defun caesar-decipher (string offset)
  (map 'string #'(lambda (c)
                   (if (eql c #\-) #\Space (decipher-char c offset)))
       string))
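Quick check, using the example from the problem statement:
(caesar-decipher "qzmt-zixmtkozy-ivhz" 343) ; => "very encrypted name"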

(define-problem (2016 04) (specs read-specifications)
  (loop :with part2
        :for (name sector checksum) :in specs
        :for freqs = (hash-table-alist (frequencies (remove #\- name)))
        :for deciphered = (caesar-decipher name sector)
        :when (validp freqs checksum) :sum sector :into part1
        :when (and (not part2) (search "northpole" deciphered)) :do (setf part2 sector)
        :finally (return (values part1 part2))))

2020-11-08 (permalink)

Advent of Code: [2016/01](https://adventofcode.com/2016/day/1)

Solution:

- Nothing crazy: start from `#c(0 0)`, heading `#c(0 1)`, and apply each instruction one after the other
- A left rotation is implemented by multiplying `heading` by `#c(0 1)`; a right rotation is implemented by multiplying `heading` by `#c(0 -1)`
- Part 1: apply all the movements, and finally apply `(MANHATTAN-DISTANCE POS 0)`
- Part 2: apply all the movements, _one step at a time_, and when you land on an already seen position, return `(MANHATTAN-DISTANCE DUPLICATE 0)`
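The rotation trick, spelled out: with the heading encoded as a unit complex number, a left turn is just a multiplication by `#c(0 1)`, and a right turn a multiplication by `#c(0 -1)` (note that CL collapses complexes with a zero rational imaginary part into plain rationals):
(* #c(0 1) #c(0 1))  ; facing north, turn left:  => -1, i.e. #c(-1 0), west
(* #c(0 1) #c(0 -1)) ; facing north, turn right: => 1, i.e. #c(1 0), east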
(defun read-instructions (data)
  (mapcar (lambda (step)
            (let ((rotation (if (eql (aref step 0) #\L) #c(0 1) #c(0 -1)))
                  (walk-for (parse-integer (subseq step 1) :junk-allowed t)))
              (cons rotation walk-for)))
          (split-sequence:split-sequence #\Space (first data))))

(define-problem (2016 01) (instructions read-instructions)
  (loop
    :with pos = #c(0 0) :with part2 :with seen
    :for (change-dir . walk-for) :in instructions
    :for heading = (* #c(0 1) change-dir) :then (* heading change-dir)
    :do (loop
          :repeat walk-for
          :do (progn
                (incf pos heading)
                (when (and (not part2) (member pos seen))
                  (setf part2 pos))
                (push pos seen)))
    :finally (return (values (manhattan-distance pos 0)
                             (manhattan-distance part2 0)))))

Advent of Code: [2016/02](https://adventofcode.com/2016/day/2)

Solution:

- Create a 2d array representing the keypad, and start from `i = 1`, and `j = 1`
- Move the current position accordingly, unless the move would take it out of bounds
- Collect the value on the keypad at the end of each single-key instruction
- Part 1: use the standard 3x3 keypad
- Part 2: use the funky shaped keypad, still represented as a 2d array, with some cells marked as _invalid_
(defparameter *keypad-part1* (make-array '(3 3) :initial-contents '("123"
                                                                    "456"
                                                                    "789")))

(defparameter *keypad-part2* (make-array '(5 5) :initial-contents '("--1--"
                                                                    "-234-"
                                                                    "56789"
                                                                    "-ABC-"
                                                                    "--D--")))

(defun read-single-key-instructions (string)
  (map 'list (lambda (c)
               (ecase c
                 (#\U '(-1 . 0))
                 (#\R '(0 . 1))
                 (#\D '(1 . 0))
                 (#\L '(0 . -1))))
       string))

(defun read-instructions (data)
  (mapcar #'read-single-key-instructions data))

(defun find-code (instructions keypad i j)
  (loop
    :for single-key-instructions :in instructions
    :do (loop
          :for (di . dj) :in single-key-instructions
          :for ni = (+ i di) :for nj = (+ j dj)
          :when (and (array-in-bounds-p keypad ni nj)
                     (not (eql (aref keypad ni nj) #\-)))
          :do (setf i ni
                    j nj))
    :collect (aref keypad i j) :into numbers
    :finally (return (format nil "~{~a~}" numbers))))

(define-problem (2016 02) (instructions read-instructions)
  (values (find-code instructions *keypad-part1* 1 1)
          (find-code instructions *keypad-part2* 2 0)))

Advent of Code: [2016/03](https://adventofcode.com/2016/day/3)

Solution:

- Not much to do here, except the needed parsing and counting
- Parse the specs into a tuple
- Part 1: count which spec is valid, i.e. `a + b > c` and `a + c > b` and `b + c > a`
- Part 2: _rotate_ the specs first, then count which ones are valid
(defun read-single-triangle-specification (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer a b c))
      ("(\\d+)\\s+(\\d+)\\s+(\\d+)" string)
    (list a b c)))

(defun read-specifications (data)
  (mapcar #'read-single-triangle-specification data))

(defun valid-triangle-p (spec)
  (destructuring-bind (a b c) spec
    (and (> (+ a b) c)
         (> (+ a c) b)
         (> (+ b c) a))))

(defun rotate-specs (specs)
  (loop
    :for (x y z) :on specs :by #'cdddr
    :for (ax bx cx) = x
    :for (ay by cy) = y
    :for (az bz cz) = z
    :collect (list ax ay az)
    :collect (list bx by bz)
    :collect (list cx cy cz)))

(define-problem (2016 03) (specs read-specifications)
  (values (count-if #'valid-triangle-p specs)
          (count-if #'valid-triangle-p (rotate-specs specs))))

2020-11-05 (permalink)

On _small_ design differences between Android and iOS platforms (and the importance of sticking with OS specific guidelines):
In the Facebook/Android style, a down-pointing chevron is a button you tap to expand more content, and an up-pointing chevron is a button you tap to collapse it. In the iOS/Mac style, a right-pointing chevron (or triangle, depending on the OS) indicates the collapsed state, and a down-pointing chevron indicates the expanded state. The Android way, a down-pointing chevron means “will open, if you tap”; the Mac/iOS way, a down-pointing chevron means “is open, tap to close”. You can argue that the Android style isn’t wrong, per se, just different, but (a) I’ll go to the mat arguing that disclosure chevrons should show state and imply action (the Mac/iOS way), not show action and imply state (the Android/Facebook way); and (b) it’s undeniably wrong for an iOS app — they’re breaking a core platform idiom. For examples that do it the right way, look at the list of accounts at the root level of iOS Mail, or the new collapsible “Pinned” header in iOS 14 Notes.
Source: [Disclosing a Few Thoughts on Facebook’s UI Design](https://daringfireball.net/2020/11/disclosing_a_few_thoughts_on_facebooks_ui_design)

2020-11-03 (permalink)

Here is a short CL function to _clear_ the screen (handy when periodically printing a board or something):
(defun cls ()
  "Clears the screen, and moves the cursor to the top.

  Outputs:

  - \033[2J: ANSI sequence to clear the screen
  - \033[H: ANSI sequence to move the cursor to the _home_ position

  It does so using some FORMAT magic (e.g. `~:*` to move back in the format argument list)"
  (format t "~A[2J~:*~A[H" #\escape))
Reference: [StackOverflow](https://stackoverflow.com/questions/37774983/clearing-the-screen-by-printing-a-character)

2020-11-01 (permalink)

TIL: with [vim-surround](https://github.com/tpope/vim-surround), `cs([` will introduce an additional whitespace when swapping `(` with `[`, while `cs(]` will not.

Finally gave [vim-fugitive](https://github.com/tpope/vim-fugitive)'s `:Gclog` command a try, and I think I am going to stick with it.

Here is a bit of wrapping logic that I built around it:
function! DiffWithMaster() abort " {{{
    if systemlist('git rev-parse --abbrev-ref HEAD')[0] != 'master'
        return 'master..'
    endif
    " Explicitly return an empty string: a function without a return value
    " returns 0, which <C-R>= would insert as the literal character "0"
    return ''
endfunction " }}}
nnoremap <leader>gl :Gclog! <C-R>=DiffWithMaster()<CR> -12<CR>
nnoremap <leader>gL :Gclog! <C-R>=DiffWithMaster()<CR><Space>
- The function, `DiffWithMaster()`, returns "master.." when the current branch **is not** "master", or nothing otherwise -- this will come in handy when working on a branch, and wanting to list all the commits that have not made it into "master" yet
- The first mapping, `<leader>gl`, runs `:Gclog!` to show the most recent `12` commits _of the current branch_ (i.e. `:Gclog! -12` if on "master", `:Gclog! -12 master..` otherwise)
- The last mapping, `<leader>gL`, _prepares_ a command similar to the one above, except that a) it does not set a limit to the number of commits to show, and b) it waits for you to press enter to run it -- so you can customize it however you want

Finally gave up, and restored some old `*` / `#` mappings that I have -- I cannot stand the position of the cursor jumping around... I usually want to highlight things first, and then jump around using `n` or `N`.
" Smarcase for */# {{{

" When running a forward search, Vim will take you to the first **next**
" occurrence of the pattern; here instead we want to:
"
" - Go back to where we were (`<C-O>`)
" - Move to the beginning of the current word (`b`): this way, should we decide
"   to change direction and use N instead of n, we would immediately jump to
"   the previous occurrence instead of staying where we are
nnoremap * /\<<C-R>=escape(expand('<cword>'), '*\')<CR>\><CR><C-O>b
" When running a backward search, Vim will take you to the first **previous**
" occurrence of the pattern, which in this case would be the beginning of the
" word under the cursor
nnoremap # ?\<<C-R>=escape(expand('<cword>'), '*\')<CR>\><CR>

" }}}

2020-10-31 (permalink)

* finished reading [Facebook: The Inside Story](https://www.amazon.com/Facebook-Inside-Story-Steven-Levy/dp/0735213151)

? alacritty + vim: cursor shape goes back to block (instead of bar) when using 'cc' on a line that uses indentation


Finally got around to fixing some issues with my JavaScript "select-top-level-expression" mapping (previously, it was tripping up when trying to evaluate top-level array definitions).

Before:
" Custom text object to highlight top-level expressions
" Explanation:
"
" - 99[( - move to the outermost ( character: that makes it easy to
"   jump to the line where a function invocation starts (not very
"   useful if we are trying to evaluate a function, but for a toplevel
"   function call, this will come in handy)
" - 99[{ - move to the outermost { character: that makes it easy to jump
"   to the line where a function definition likely starts
" - V% - toggle linewise-visual, and jump to the 'other side' (this
"   could mean the closing brace, or parenthesis)
vnoremap <buffer> <silent>af :<C-U>silent! normal! 99[(99[{V%<CR>
onoremap <buffer> <silent>af :normal Vaf<CR>
vnoremap <buffer> <silent>if :<C-U>silent! normal! 99[{vi{<CR>
onoremap <buffer> <silent>if :normal vif<CR>
and after:
function! SelectTopLevelExpression(around) abort " {{{
    let motion = ''

    " Let's start by jumping to the end of the line: that's because
    " the cursor might be at the beginning of a function declaration (i.e.
    " with no outermost pair to jump to), so this hopefully moves the cursor to
    " the opening curly brace
    let motion .= 'g_'

    " However, the previous motion might have moved the cursor on a `;` char,
    " one that does not have a pair defined; so in that case, move the cursor
    " one char to the left (i.e. on a closing curly brace)
    let line = getline('.')
    if line[len(line) - 1] == ';'
        let motion .= 'h'
    endif

    " Jump to the outermost _pair_; this should gracefully handle the case where
    " the cursor is in the middle of:
    "
    " - array definitions (single, and nested ones)
    " - function parameters (i.e. its signature)
    " - or simply inside a nested block
    let motion .= '99[%'

    if a:around
        let motion .= 'V%'
    else
        let motion .= 'vi{'
    endif

    " Finally run the motion.  Note, `normal` instead of `normal!` needs to be
    " used, otherwise we won't be able to use user-defined mappings
    echom motion
    execute "normal " . motion
endfunction " }}}
vnoremap <buffer> <silent>af :<C-U>call SelectTopLevelExpression(1)<CR>
onoremap <buffer> <silent>af :normal Vaf<CR>
vnoremap <buffer> <silent>if :<C-U>call SelectTopLevelExpression(0)<CR>
onoremap <buffer> <silent>if :normal vif<CR>
I wish I had known about matchit operators before (i.e. `:help [%`).

2020-10-30 (permalink)

Paul Graham on [Twitter](https://twitter.com/paulg/status/1314260316134531075) quoted the below a couple of weeks ago, with a comment: "Every startup founder should read this..."
I don't think Stripe is uniformly _fast_. I think teams at Stripe are just _faster than_ most companies, blocked a bit less by peer teams, constrained a tiny bit less by internal tools, etc etc. There are particular projects which have been agonizingly long to ship; literally years after I would have hoped them done. But _across the portfolio_, with now hundreds of teams working, we just _get more done_ than we "should" be able to.
A _stupendous_ portion of that advantage is just consistently choosing to get more done. That sounds vacuous but hasn't been in my experience. I have seen truly silly improvements occasioned by someone just consistently asking in meetings "Could we do that faster? What is the minimum increment required to ship? Could _that_ be done faster?" It's the Charge More of management strategy; the upside is so dramatic, the cost so low, and the hit rate so high that you should just invoke it ritualistically.
I think I tend to agree with him on this: ultimately, this _ritual_ is meant to keep you on track, focused on what really matters.  However, I am just afraid that, if abused, it would result in _unnecessary_ pressure on both the product and the development departments.

2020-10-27 (permalink)

Yesterday, while talking about [CLOS](https://en.wikipedia.org/wiki/Common_Lisp_Object_System), someone on the Lisp Discord server mentioned a pretty cool **practical** example that I really think might help people understand the concept: mixins.

From the [Mixin](https://en.wikipedia.org/wiki/Mixin) Wikipedia page:
In object-oriented programming languages, a mixin (or mix-in)[1][2][3][4] is a class that contains methods for use by other classes without having to be the parent class of those other classes. How those other classes gain access to the mixin's methods depends on the language. Mixins are sometimes described as being "included" rather than "inherited".
Now jump to the Examples section of that Wikipedia page, and under Common Lisp you should find the following.

`object-width` is a generic function with one argument that uses the `+` method combination. This combination determines that all applicable methods for a generic function will be called and the results will be added.
(defgeneric object-width (object)
  (:method-combination +))
`button` is a class with one slot for the button text.
(defclass button ()
  ((text :initform "click me")))
There is a method for objects of class button that computes the width based on the length of the button text. `+` is the method qualifier for the method combination of the same name.
(defmethod object-width + ((object button))
   (* 10 (length (slot-value object 'text))))
A `border-mixin` class. The naming is just a convention. There are no superclasses, and no slots.
(defclass border-mixin () ())
There is a method computing the width of the border. Here it is just 4.
(defmethod object-width + ((object border-mixin))
  4)
`bordered-button` is a class inheriting from both `border-mixin` and button.
(defclass bordered-button (border-mixin button) ())
We can now compute the width of a button. Calling `object-width` computes 80. The result is the result of the single applicable method: the method `object-width` for the class `button`.
(object-width (make-instance 'button))
=>
80
We can also compute the width of a `bordered-button`. Calling `object-width` computes 84. The result is the sum of the results of the two applicable methods: the method `object-width` for the class `button` and the method `object-width` for the class `border-mixin`.
(object-width (make-instance 'bordered-button))
=>
84
Pretty neat.

2020-10-26 (permalink)

TIL: `git rebase` supports _auto-squashing_ of "squash! ..." and "fixup! ..." commits, and _automatic-stashing_ of local changes.

Add the following to your .gitconfig, and that should be it:
[rebase]
    autoSquash = true
    autoStash = true
I can finally use [vim-fugitive](https://github.com/tpope/vim-fugitive)'s fixup/squash mappings without `git` getting in the way because my local repository is _dirty_.

Additional reading:

- [automatically stash save/pop changes on git rebase?](https://stackoverflow.com/a/27117335)
- [Auto-squashing Git Commits](https://thoughtbot.com/blog/autosquashing-git-commits)

2020-10-25 (permalink)

+ book: The Common Lisp Condition System: https://www.amazon.com/Common-Lisp-Condition-System-Mechanisms/dp/148426133X

? book: Lisp From Nothing: http://t3x.org/lfn/index.html


I have been thinking about moving my dotfiles directory into Dropbox, lately, but I am still not sure whether that's a good idea or not.

Let's have a look first at the problem I am trying to solve.  Too frequently I find myself playing with some config file tweaks on my laptop, then ssh into my work virtual machine expecting those config files to include the changes I was testing out a few minutes earlier; except I never pushed those changes to my Git repository to begin with, so there is no way for the guest to pick them up automatically.  Every time this happens, I usually either commit the changes on my host, push, go back to the guest and pull, or I simply re-apply the changes manually; clearly, none of these solutions is ideal, so what if there was a better way to deal with this?  What if I moved my dotfiles repository inside Dropbox, and only pushed changes to GitHub when actually ready to do so (i.e. when I am happy with the config changes I am playing with)?

As appealing as this solution might be, the biggest problem I see is making sure it all works across different OSes (i.e. MacOS on my personal laptop, Windows 10 on my work laptop, and Linux/Ubuntu inside VirtualBox).  Now, my config files already have OS-specific switches, but what about _supporting_ libraries/binaries?  For example, I depend on `js-langserver` as the LSP server when editing JavaScript files; currently I simply `npm install` it, but I don't want its node_modules directory to end up uploaded to Dropbox, as that would be a waste of space/bandwidth and, most importantly, it would very likely not work (e.g. running `js-langserver` from MacOS won't work if I previously ran `npm install` from Linux, or even worse, from Windows/Cygwin).

It seems this solution _could_ work if there was a programmatic way for me to tell Dropbox which files to sync, and which files to ignore.  Well, lucky me, there is indeed a way to [tell Dropbox that](https://help.dropbox.com/files-folders/restore-delete/ignored-files), so I guess the next step would be to figure out the list of files/directories to exclude from the sync:

- All the node_modules directories
- All the venv directories -- that's the name I usually use for `virtualenv` directories
- Vim's temp/backup files maybe?
- Something else?

2020-10-23 (permalink)

? project idea: create a REPL for cURL. I have been using vim-rest-console, but that requires additional setup, probably too much, especially if I wanted to fire some requests straight from my .plan file; what if a curl-repl existed? I mean, I could pop that open into a terminal buffer, and copy requests directly into it. I guess I would have to define a syntax to specify headers, requests, bodies, and all that stuff.. but it would be interesting.

2020-10-21 (permalink)

At work, over the weekend we had to deal with an outage of our PROD environment, and here is a summary of what had happened:

- A few weeks prior to the outage, our IT department accidentally disabled one of our distribution lists (unfortunately, the one we were using to receive notifications about the state of our servers)
- At about the same time, the disk usage patterns of our database server had increased -- interestingly, disk usage was increasing faster than we were able to clean up, with our periodic jobs
- Disk usage gets past the warning threshold, triggering some notifications that we never see because the distribution list got accidentally disabled
- A few more days go by, and we get past the critical threshold; more notifications that unfortunately we never get to see
- Eventually, over the weekend, the server runs out of disk space and pretty much all the services hosted on the server stopped working (in particular, for Neo4j, one of the hosted databases wound up with a corrupted transaction log)

Now, after a bit of fire-fighting we were able to:

- recover the corrupted transaction log and unlock that database
- re-enable the distribution list used for notifications
- lower the notification thresholds even more (more breathing space, should things ever escalate again)

Still, we were wondering if there wasn't something else Neo4j could have done to prevent itself from filling up the disk and painting itself into a corner.  What if Neo4j implemented a mechanism that put the database instance in read-only mode when disk usage went above a pre-configured threshold?  Elasticsearch implements [something similar](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#disk-based-shard-allocation), and I think it would have come in really handy if it had been there for us to use.

I appreciate that it should be the responsibility of whoever manages the server to make sure it never runs out of space; but bad things happen, and if a service, Neo4j in this case, were able to monitor the situation and defensively shut itself down / enter read-only mode, I believe that would have avoided a lot of pain down the line.

Anyway, brought this up with the Neo4j folks, so let's see what they think about this enhancement.

2020-10-20 (permalink)

I have been poking around with `winston`'s API lately, mostly to look into some unexpected behaviors where the `File` logger would stop writing in case of uncaught exceptions / unhandled rejections, and here is what I found out.

First things first, `winston` ships with some _transport_ settings that can be used to instruct the logger to deal with (or not) uncaught exceptions, and unhandled rejections:

- `handleExceptions`: tells winston to catch and log uncaught exceptions (e.g. `process.on('uncaughtException', log)`)
- `handleRejections`: tells winston to catch and log unhandled rejections (e.g. `process.on('unhandledRejection', log)`)
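
For reference, the wiring might look something like this (a sketch only: the filename, the single `File` transport, and the log levels are illustrative, not our actual setup):

```javascript
// Sketch: a logger whose File transport handles uncaught exceptions and
// unhandled rejections on its own, replacing hand-rolled process.on hooks.
const winston = require('winston');

const logger = winston.createLogger({
  transports: [
    new winston.transports.File({
      filename: 'app.log',      // illustrative path
      handleExceptions: true,   // log 'uncaughtException' events
      handleRejections: true,   // log 'unhandledRejection' events
    }),
  ],
  // exitOnError defaults to true: winston exits the process after logging
  // an uncaught exception; set it to false to keep the process alive.
  exitOnError: true,
});
```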

Previously, unaware that such API existed, we used to manually deal with this as follows:
process.on('uncaughtException', function(error) {
    // TODO: handle uncaught exceptions
    console.log('uncaughtException', error);
});

process.on('unhandledRejection', function(reason, promise) {
    // TODO: handle error here, all "swallowed" errors get here
    console.log('unhandledRejection', { reason, promise }, reason.stack);
});
But why keep on doing this, if a library you are already linking and using can deal with it on your behalf?  Exactly, there is no reason, so we immediately got rid of this custom logic.

Now, another interesting setting worth mentioning is `exitOnError`; when set to `true`, winston will fire a `process.exit(1)` and cause the whole application to halt (the idea being... something _unexpected_ happened, so better to exit immediately than continue in an _unknown_ state).  We have been setting this to `false`, but I wonder if we shouldn't stick with the default value instead -- in the end, should something bad happen and cause our app to crash, the process manager used to keep the app running would immediately restart it.  Also, we still have to deal with the fact that the `File` transport seems to shut itself down in case of uncaught exceptions, and whether this is a bug or expected behavior, `exitOnError: true` seems to be the only workaround available (at least until winston's maintainers get back to us on [this](https://github.com/winstonjs/winston/issues/1855)).

Last point, which unfortunately we haven't gotten around to implementing yet: don't bother with on-disk logging, just log everything to standard output, and let the process manager (`docker` in our case) deal with the rest.

2020-10-15 (permalink)

TIL: Express will automatically handle exceptions thrown inside route handlers and middleware:
Errors that occur in synchronous code inside route handlers and middleware require no extra work. If synchronous code throws an error, then Express will catch and process it. For example:
I am not sure why this surprised me; in the end the exception is thrown up the stack, so that's the **right** thing to do.  Somehow I was under the impression that `throw`-ing from within a middleware would result in an _uncaught_ exception being triggered; instead I was wrong: Express will take care of it, and convert it into a 500 response.
Starting with Express 5, route handlers and middleware that return a Promise will call next(value) automatically when they reject or throw an error.
Cool -- again, if you think about it, it definitely makes sense.
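To convince myself, here is a toy dispatcher mimicking the documented behavior (this is not Express internals, just a sketch):

```javascript
// Toy model of the documented behavior: a synchronous throw inside a
// handler is caught by the framework and converted into a 500 response.
function dispatch(handler, req) {
  const res = { status: 200, body: 'ok' };
  try {
    handler(req, res);
  } catch (err) {
    res.status = 500;        // what Express does on a synchronous throw
    res.body = err.message;
  }
  return res;
}

const failing = dispatch(() => { throw new Error('boom'); }, {});
console.log(failing.status); // 500
```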

Source: http://expressjs.com/en/guide/error-handling.html

2020-10-09 (permalink)

TIL: you can force a case-sensitive search in Vim by adding `\C` anywhere in the search pattern (`:help /\C`):
							*/\c* */\C*
When "\c" appears anywhere in the pattern, the whole pattern is handled like
'ignorecase' is on.  The actual value of 'ignorecase' and 'smartcase' is
ignored.  "\C" does the opposite: Force matching case for the whole pattern.
{only Vim supports \c and \C}

2020-10-05 (permalink)

* finished reading [Eloquent JavaScript, 3rd Edition: A Modern Introduction to Programming](https://www.amazon.com/Eloquent-JavaScript-3rd-Introduction-Programming/dp/1593279507)


Few other small things read inside 'Eloquent JavaScript' which I think are worth a note:

The first was about Node.js, the `fs` module in particular, and the fact that not only does it ship with the classical callback-based asynchronous interface and a synchronous version of it, but that since `Promise`s got hot a while back, the core team decided to ship a new promise-based API, with names identical to the callback-based one, all bundled inside a `promises` namespace.  What's interesting about it?  I don't know... it made me think about how difficult it becomes to keep up with new user needs (e.g. `Promise`s are cool, everyone loves them, so we should add support for them) while at the same time supporting an existing product / API: they could have changed the existing API to be promise-based, but that would have introduced a big regression, so they ended up packaging the new API into a new _namespace_ (of course the new package is less _discoverable_, but it seems like a good trade-off to me).

HTTP Request Method: [MKCOL](http://webconcepts.info/concepts/http-method/MKCOL) -- never heard of it, which I think suggests it might not be a widely used HTTP method, but I would definitely add this to my TIL list.

Swapping bindings in JavaScript can nicely be done using [destructuring assignment](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment): `[b, a] = [a, b]`
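For example:

```javascript
// Swap two bindings in place via array destructuring; no temporary needed.
let a = 1;
let b = 2;
[b, a] = [a, b];
console.log(a, b); // 2 1
```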

2020-09-30 (permalink)

I was reading [Eloquent JavaScript](https://www.amazon.com/gp/product/1593279507/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1593279507&linkCode=as2&tag=marijhaver-20&linkId=d8642c0457954f03e27c02b0034d0d60) the other night, and learned an interesting and scary thing about regular expressions:
JavaScript RegExp objects are stateful when they have the global or sticky flags set (e.g. /foo/g or /foo/y). They store a lastIndex from the previous match. Using this internally, exec() can be used to iterate over multiple matches in a string of text (with capture groups), as opposed to getting just the matching strings with String.prototype.match().
This happens with both `.exec()` and `.test()`, meaning that further calls to `.test()` will resume searching from `lastIndex` (which will continue to increase each time `test()` returns `true`).
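A quick sketch of that behavior:

```javascript
// With the `g` flag set, test() resumes from lastIndex on each call,
// and resets lastIndex back to 0 once it fails to find a match.
const re = /foo/g;
const text = 'foo bar foo';

const first = re.test(text);   // true  (match at 0, lastIndex becomes 3)
const second = re.test(text);  // true  (match at 8, lastIndex becomes 11)
const third = re.test(text);   // false (no more matches, lastIndex resets to 0)
```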

Links:

- [RegExp.prototype.exec()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp/exec)
- [RegExp.prototype.test()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp/test): scroll to the "Using test() on a regex with the "global" flag" paragraph

2020-09-28 (permalink)

TIL: in Vim you can `set number` and `set relativenumber` at the same time, and have the current line number displayed instead of `0`.

With `set relativenumber`:
   3 set ttyfast
   2 set ruler
   1 set backspace=indent,eol,start
   0 set relativenumber
   1 set number
   2 set laststatus=2
   3 set history=1000
With `set number` and `set relativenumber`:
   3 set ttyfast
   2 set ruler
   1 set backspace=indent,eol,start
20   set relativenumber
   1 set number
   2 set laststatus=2
   3 set history=1000

2020-09-25 (permalink)

JSON Web Tokens Suck (https://www.youtube.com/watch?v=JdGOb7AxUo0)

Stateful JWT
 
- All the user information (username, first and last name, email) is stored in the JWT
- Server receives token, validates it, reads it, and uses the information to reply back to the user
 
Stateless JWT
 
- We only store a Session-ID inside the JWT
- Server receives token, validates it, looks up the session (DB, cache, whatever), then uses the information to reply back to the user
 
Session cookies
 
- Cryptographically signed session identifier stored in a cookie.  Data is stored server-side, and retrieved from a database
- Pretty much similar to Stateless JWT
 
What's the difference?
 
- Encoding
- One uses JSON as data format, the other just sends the session ID as a plain string
 
Cookies:
 
- HTTP headers
- Key/value storage
- Limited in size (4kb)
- Server uses `Set-Cookie` header to set cookies on the client (e.g. `session=signed(12345)`)
- Flag `HttpOnly`: prevents user code (JavaScript) from accessing cookies
- Flag `SameSite=strict`: helps prevent cross-site request forgery
- Flag `Secure`: the cookie is only sent over SSL/HTTPS connections
 
Local storage:
 
- Browser based API to implement key/value store
 
Myths about JWT:
 
- JWTs are easier to use (than Session cookies): JWTs actually require additional libraries, while session cookies are nowadays supported by all the web servers/frameworks around
- JWTs are more flexible: actually, anything you can define in a JWT, you can also define as a session cookie (the payload would even be more succinct, resulting in less bandwidth usage)
- JWTs are more secure: again, both cryptographically signed, both can be encrypted, except JWT specification is more complicated, and different libraries support/implement it in different ways.  Session cookies on the other hand, have been battle tested for years, and no vulnerabilities have been found so far, so....
- JWTs don't require "cookie consent" -- false... the law requires you to put the notification on your application if you are tracking user behavior
- JWTs prevent CSRF (because JWTs are usually stored in local storage) -- well, you can still store JWTs in cookies.  If you are using local storage though, you are opening yourself up to cross-site scripting (XSS)
- JWTs are better for cross domain (e.g. authentication against www., docs., apps.domain.com)
- JWTs work better on mobile... NO!  Web authentication is handled by Web browsers, and web browsers work the same whether you are running them from Desktop, or Mobile
- JWTs are more efficient... NO!  Same payload, same cryptographic algorithm... under these circumstances the JWT payload can be 3x larger
- JWTs are easy to revoke -- false, you have to change your signature key.  You can check the validity of a token... but then if you need to check against a database/cache, why are you using JWTs to begin with?!
- JWTs are easier to "Scale" -- not really.  It's true, if you implement stateful JWT you don't have to interrogate the database, but then you are opening up to other security attacks; however, session management can nowadays be horizontally scaled quite easily, so the argument is weak.
- JWTs are secure by design: actually, they were never meant for web authentication; there is always a tradeoff between security and speed, and unless you verify on the server that tokens are still valid (again defeating the purpose of using JWTs), you won't be able to prevent malicious users from doing damage
 
What are JWTs good for?
 
- 1-time redirect: pay for a book on books., receive a link to download the book on download., the link containing a JWT which the server then validates internally (the JWT is short-lived in this case)
- Resetting passwords

2020-09-24 (permalink)

* Submitted a [PR](https://github.com/tbodt/js-langserver/pull/9) for `js-langserver` to switch to the latest version of Tern.js (it simplifies the integration), and add support for `defs` and `loadEagerly`

* Submitted a [PR](https://github.com/sbdchd/neoformat/pull/323) for `neoformat` to try and fix a problem with folds getting destroyed/closed/opened every time `neoformat` is run -- it turns out it was due to the fact that the plugin was always resetting `&filetype`, even though it did not change

2020-09-23 (permalink)

TIL: in Vim, you can disable/enable spell-check based on syntax (`:help spell-syntax`)!
SYNTAX HIGHLIGHTING					*spell-syntax*

Files that use syntax highlighting can specify where spell checking should be
done:

1.  everywhere			   default
2.  in specific items		   use "contains=@Spell"
3.  everywhere but specific items  use "contains=@NoSpell"

For the second method adding the @NoSpell cluster will disable spell checking
again.  This can be used, for example, to add @Spell to the comments of a
program, and add @NoSpell for items that shouldn't be checked.
Also see |:syn-spell| for text that is not in a syntax item.

2020-09-22 (permalink)

Worse is better: http://dreamsongs.com/WorseIsBetter.html
In The Rise of Worse is Better, Gabriel claimed that "Worse-is-Better" is a model of software design and implementation which has the following characteristics (in approximately descending order of importance):
Simplicity: The design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
Correctness: The design should be correct in all observable aspects, but it is slightly better to be simple than correct.
Consistency: The design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either complexity or inconsistency in the implementation.
Completeness: The design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

2020-09-20 (permalink)

Martin Fowler's essay about [Collection Pipeline](https://www.martinfowler.com/articles/collection-pipeline/) never gets old; should probably come back and re-read it more frequently.

2020-09-16 (permalink)

Lately, I have been exploring the codebase of the product I am involved with at work, looking for repetitive usage patterns.

One in particular that caught my attention had something to do with [feature toggles](https://www.martinfowler.com/articles/feature-toggles.html):

_Is a given feature X, toggled on for the logged in user? If so, show this area of the screen, otherwise, hide it or show something else instead_

This is how we have been handling this (we use Angular):

- Get the `FeatureToggleStore` injected in the controller of the current screen
- Store somewhere on the controller if the given feature is on/off for the logged in user
- Use that flag inside the template to show/hide (read: `*ngIf`) certain areas of the screen

If you think about it, what we really want is a special [structural directive](https://angular.io/guide/structural-directives), similar to `*ngIf`, that instead of evaluating the input expression and expecting it to be either truthy or falsy, would check that feature against our store, and then manipulate the DOM accordingly.

Easy right?! Let's take a stab at this:
import { Directive, Input, TemplateRef, ViewContainerRef, EmbeddedViewRef } from '@angular/core';
import { FeatureToggleStore } from '@upgraded/services';

@Directive({
  selector: '[ifFeature]',
})
export class IfFeature {
  private featureToggled = false;
  private thenTemplateRef: TemplateRef<any> = null;

  constructor(
      templateRef: TemplateRef<any>,
      private viewContainer: ViewContainerRef,
      private featureToggleStore: FeatureToggleStore
  ) {
    this.thenTemplateRef = templateRef;
  }

  @Input()
  set ifFeature(featureName: string) {
    this.featureToggled = this.isFeatureToggled(featureName);
    this.updateView();
  }

  private updateView() {
    if (!this.featureToggled) {
      this.viewContainer.clear();
    } else {
      this.viewContainer.createEmbeddedView(this.thenTemplateRef);
    }
  }

  private isFeatureToggled(featureName: string) {
    return this.featureToggleStore.hasFeature(featureName);
  }
}
Expected usage:
<div *ifFeature="'beta-feature'">
    Hi beta user!
</div>
This looks good.  However, the implementation has some problems:

- What if the name of the feature is dynamically updated, and the user has access to the new one too? This solution will cause the inner part of the DOM (the part guarded by the feature) to be re-rendered even though nothing has changed (we keep calling `this.viewContainer.createEmbeddedView()`)
- What if we wanted to support an `else` branch, like `*ngIf` does?

Well, for this, we will have to stand on the shoulders of [giants](https://github.com/angular/angular/blob/master/packages/common/src/directives/ng_if.ts), and we might end up with something like:
import { Directive, Input, TemplateRef, ViewContainerRef, EmbeddedViewRef } from '@angular/core';
import { FeatureToggleStore } from '@upgraded/services';

@Directive({
  selector: '[ifFeature]',
})
export class IfFeature {
  private featureToggled = false;
  private thenTemplateRef: TemplateRef<any> = null;
  private thenViewRef: EmbeddedViewRef<any> = null;
  private elseTemplateRef: TemplateRef<any> = null;
  private elseViewRef: EmbeddedViewRef<any> = null;

  constructor(
      templateRef: TemplateRef<any>,
      private viewContainer: ViewContainerRef,
      private featureToggleStore: FeatureToggleStore
  ) {
    this.thenTemplateRef = templateRef;
  }

  @Input()
  set ifFeature(featureName: string) {
    this.featureToggled = this.isFeatureToggled(featureName);
    this.updateView();
  }

  @Input()
  set ifFeatureThen(templateRef: TemplateRef<any>) {
    this.thenTemplateRef = templateRef;
    this.thenViewRef = null;
    this.updateView();
  }

  @Input()
  set ifFeatureElse(templateRef: TemplateRef<any>) {
    this.elseTemplateRef = templateRef;
    this.elseViewRef = null;
    this.updateView();
  }

  private updateView() {
    if (this.featureToggled) {
      if (!this.thenViewRef) {
        this.viewContainer.clear();
        this.elseViewRef = null;
        if (this.thenTemplateRef) {
          this.thenViewRef = this.viewContainer.createEmbeddedView(this.thenTemplateRef);
        }
      }
    } else {
      if (!this.elseViewRef) {
        this.viewContainer.clear();
        this.thenViewRef = null;
        if (this.elseTemplateRef) {
          this.elseViewRef = this.viewContainer.createEmbeddedView(this.elseTemplateRef);
        }
      }
    }
  }

  private isFeatureToggled(featureName: string) {
    return this.featureToggleStore.hasFeature(featureName);
  }
}
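For completeness, this is how the directive above could be consumed with an `else` branch, mirroring `*ngIf`'s microsyntax (the microsyntax key `else` desugars to the `ifFeatureElse` input; `noAccess` and the feature name are made-up, illustrative names):

```html
<div *ifFeature="'beta-feature'; else noAccess">
  Hi beta user!
</div>

<ng-template #noAccess>
  This feature is not available to you (yet!).
</ng-template>
```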
Additional reading: https://juristr.com/blog/2018/02/angular-permission-directive/

2020-09-08 (permalink)

Small JavaScript utility function to create N-d arrays (_create_, not _initialize_):
function createArray(dim, ...dims) {
  if (!dims.length) {
    return new Array(dim);
  } else {
    return Array.from(Array(dim), () => createArray(...dims));
  }
}
- Accepts the size of the outer layer, and the sizes of the inner ones (if any)
- Recursive solution
- ...when on the last layer, simply return a new Array, with the expected size
- ...otherwise, create a new array with the expected size, and initialize its elements with a recursive call (i.e. each element would now be initialized with a new (N-1)-d array)
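A quick sanity check of the above (copying the function here so the snippet is self-contained; the dimensions are arbitrary):

```javascript
function createArray(dim, ...dims) {
  if (!dims.length) {
    return new Array(dim);
  } else {
    return Array.from(Array(dim), () => createArray(...dims));
  }
}

// A 3x4x5 array: 3 slots, each holding 4 arrays of size 5
const grid = createArray(3, 4, 5);
console.log(grid.length);       // 3
console.log(grid[0].length);    // 4
console.log(grid[0][1].length); // 5
// Innermost slots are only *created*, not initialized:
console.log(grid[0][1][2]);     // undefined
```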

Few pointers on writing **good** commit messages:

- https://chris.beams.io/posts/git-commit/
- https://wiki.gnome.org/Git/CommitMessages
- https://www.conventionalcommits.org/en/v1.0.0/

When in doubt, check what the specific project commit guidelines say (if any).

2020-09-05 (permalink)

Great presentation, from Maciej Ceglowski, on how bloated the Web has become -- it's from 2015, but if anything, things have gotten worse since: https://www.youtube.com/watch?v=iYpl0QVCr6U

The annoying thing about HTML emails that use CSS magic to turn DIVs into links/buttons is that when you try to view them with Mutt/w3m, action links do not render correctly; sometimes this is great (reduced risk of phishing attacks), but what if I actually wanted to click that link?

Two options:

1. Pop Gmail open, and read the email there
2. Configure Mutt/mailcap to use an _external_ browser when explicitly told to

I have been using the former for quite some time, but with the increased influx of HTML email, I have grown tired of it.  Especially because the latter turned out to be pretty simple to implement:
# ~/.mutt/mailcap

text/html; open '%s' & sleep 3; nametemplate=%s.html; needsterminal
text/html; w3m -cols 90 -dump -T text/html '%s'; copiousoutput
...
The first entry is used when _sending an attachment to mailcap_ (pop an email open, press `v` to open the attachment menu, select a text/html entry, and press `m`); the second is automatically used by Mutt to display HTML emails.

PS. The `& sleep 3` is there to make sure Mutt does not accidentally delete the attachment before the _external_ browser has finished processing it.
PPS. Want to use HTML emails? Use semantic web, please... seriously.

2020-08-25 (permalink)

* First stab at _converting_ plan files into HTML (previously a plan entry was mostly wrapped inside a `<pre>` block, and that's it): I am now able to format lines differently, based on their type (i.e. completed, fixed, ideas, discarded, and verbatim), but I am not quite happy with the parsing logic (better to use SMUG or something, but still, it got the job done). You can check out the result on: https://matteolandi.net/plan.html

? plan-convert: review parser logic, and maybe use SMUG

? plan-convert: add ability to format inline italic, strong, code, and links

? plan-convert: generic text (i.e. generic notes) is currently getting wrapped inside a <pre> element, or otherwise lists would render on a single line; it all works fine in the browser (where we can style things with CSS), but with `w3m` these long lines do not wrap (as usually happens with paragraphs) and I am forced to scroll horizontally

2020-08-24 (permalink)

TIL: Elasticsearch will enter read-only mode when running out of disk-space

Elasticsearch started acting out today, and logs were littered with the following error messages:
error: [search] Error when updating documents:
{"body":{"took":0,"errors":true,"items":[{"index":{"_index":"...","_type":"_doc","_id":"48416067","status":403,"error":{"type":"cluster_block_exception","reason":"blocked
by: [FORBIDDEN/12/index read-only / allow delete (api)];"}}}
Our database box did run out of space earlier today (well, actually, disk usage got above a specific threshold), so that would explain Elasticsearch's behavior.  Anyway, freeing up space and bouncing the service is not going to help; you will have to manually unlock the cluster and restore write access:
$ curl -XPUT -H "Content-Type: application/json" http://localhost:9201/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
After this, things should finally go back to normal.
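To make this less likely to recur, the thresholds themselves can be tuned: Elasticsearch exposes them as `cluster.routing.allocation.disk.watermark.*` cluster settings (PUT them to `/_cluster/settings`, same endpoint style as the unlock call above).  The percentages below are arbitrary examples, not recommendations:

```json
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
```

It's the `flood_stage` watermark that flips indices into the read-only-allow-delete state in the first place.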

Sources:

- https://selleo.com/til/posts/esrgfyxjee-how-to-fix-elasticsearch-forbidden12index-read-only
- https://discuss.elastic.co/t/forbidden-12-index-read-only-allow-delete-api/110282

2020-08-23 (permalink)

? djula: how to use `if`/`ifequal` tags to equate to a _symbol_? For example, say I fed the template with: `:item (list :type :verbatim :content "foo")`, how can I check against type? None of the following seem to work: "verbatim", "VERBATIM", :verbatim. Currently I am forced to pass `:verbatim :verbatim` to the template, and then `{% ifequal item.type verbatim %}` which is a bit...meh

2020-08-22 (permalink)

* finished reading [Patterns of Software: Tales from the Software Community](https://www.amazon.com/Patterns-Software-Tales-Community/dp/0195121236)

2020-08-19 (permalink)

TIL: [standard](https://www.npmjs.com/package/standard)

Electron - By Javan Makhmali, 2016: https://www.youtube.com/watch?v=nwvourv3nSQ

Basecamp 3, released on Web, iOS, Android, and **desktop** (via Electron)

- The prototype was ready in 1 day
- 6 weeks to get it ready to ship
- You can finally CMD/CTRL-TAB and switch to the app (no longer just a tab within the browser)
- Having your brand on the dock... that's powerful :)
- Inject some CSS to add icons/items to the Window title bar (not the menu bar)
- History menu, to navigate to previous pages
- Remember your last place, and window size

Quick tour how Electron works:

- Main process: C++
- Opening a tab, spawns a new process (**Renderer** process)
- With Electron, Node.js is simply injected into each Renderer, and into the Main process, and they can communicate together, and access the OS
- Electron is... a set of APIs that you can use to build applications, and interact with the OS

Get started:
$ mkdir hello
$ npm init -y
$ npm install electron-prebuilt --save-dev
$ $EDITOR package.json
# add new "start" script to run: "electron ."
$ $EDITOR index.js
# and add
const { app, BrowserWindow } = require('electron')
app.on('ready', () => {
  let mainWindow = new BrowserWindow()
  mainWindow.loadURL(`file://${__dirname}/index.html`)
})
$ $EDITOR index.html
# add some content
$ npm start
Communicate back to the main process from the renderer:
document.addEventListener('click', (event) => {
  require('electron').remote.app.dock.setBadge(`Clicked! ${event.pageX}`)
})
Misc:

- You can inject new items in the menubar (big JavaScript object to define items: item -> click -> userland callback)
- You can create toast notifications
- You don't have to worry about browser compatibility anymore, and you can use all the experimental features that you want (that are supported by Chromium)
- Mutate the browser like normal extensions would do (i.e. high)
- Intercept every request / proxy it / delete cross-origin headers and render stuff in an iframe even if forbidden
- You can inject a blob of css into fetched page (at Basecamp, they just add a new css-class, and keep all the styles remote)
- Persistence: SQLite, LocalStorage, IndexedDB
- You can register system wide key commands (activated when the app is running)
- You can register the app as a protocol handler (e.g. basecamp://...)

Ship:

- Facilities to bundle/ship Electron applications
- Sign your application with your name
- APIs for automatic updates (Mac apps are always full-updates, while on Windows, only diffs are sent)

Brave:

- Browser built with Electron + React
- That's how they inspect traffic, and filter out ads
- Open a window, and load a website... simple

2020-08-18 (permalink)

? book: Living with Complexity


The JavaScript That Makes Basecamp Tick, By Javan Makhmali, 2014: https://www.youtube.com/watch?v=mWo3oEwFFzM

Basecamp (2):

- 40 app servers
- 100 total servers
- 1/2 PB file storage
- 1 TB database
- 1 TB caching
- 70 million requests per day
- 50-60ms Rails response times

The Asset Pipeline:

- Tool for compiling, concatenating, and compress asset files

  - desktop.js.coffee.erb -> erb first, then coffee compiler, then javascript...

- Bunch of _require_ comments that bring in assets from somewhere else, and add them to the pipeline
- All of this will later be replaced by webpacker
- You can also configure an `asset_host`, so the generated `<script src.../>` line, will load the asset from there (you will have to manually upload the file on the CDN)

Code structure:

- Lots of small, single-purpose files (see below), mostly to give behavior to elements/groups of elements
- Data attributes: JavaScript only, not inside CSS (inside CSS, they use class names and IDs)
// behaviors/element/expandable.js.coffee
selector = "[data-behavior~=expandable]"
expandSelector = "#{selector} [data-behavior~=expand_on_click]"

$(document).on "click", expandSelector, (event) ->
  $(this).closest(selector).addClass("expanded")
  event.preventDefault()

// somewhere else, in the html
<div data-behavior="expandable">
  <div class="expanded_content">...</div>
  <a href="#" data-behavior="expand_on_click">Expand</a>
</div>
- Most of the event handlers are bound to the `document` element: arguably the slowest thing you can do... but it does not really matter -- the end result is pretty good.  No need to add bindings when elements are added to the DOM; all events eventually bubble up to `document`, which then dispatches based on the selector.
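The dispatch-at-the-root idea can be sketched even without a DOM; here the element objects and `matches` are stand-ins for real nodes and `Element.matches(selector)`, just to show why late-added elements still get handled:

```javascript
// Minimal, DOM-free sketch of delegated event handling.
// An "element" is { behaviors: [...], parent }, and matches() plays
// the role of Element.matches(selector).
function matches(el, behavior) {
  return el.behaviors.includes(behavior);
}

// One handler registered at the root; dispatch walks up from the
// event target, like bubbling, and fires on the first match.
function dispatch(target, behavior, handler) {
  for (let el = target; el; el = el.parent) {
    if (matches(el, behavior)) return handler(el);
  }
  return null;
}

// Elements can be created *after* registration and still be handled:
const root = { behaviors: [], parent: null };
const expandable = { behaviors: ['expandable'], parent: root };
const link = { behaviors: ['expand_on_click'], parent: expandable };

const hit = dispatch(link, 'expandable', (el) => el);
console.log(hit === expandable); // true
```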

Stacker:

- Page stacking UI
+--------------------------
| Basecamp
| +------------------------
| | Project XXX
| | +----------------------
| | | Todolist YYY
- Refreshing the page, will actually change the layout a bit -- no more stacks, just the one you requested (you can still go back though)
- Pages are cached, so going back will first show you the cached page; the page is still getting fetched, but it will replace the current and cached one, only if-changed (304 status)
- Goodbye: DOMContentLoaded.  Hello: page:beforechange, page:change, page:update
- Custom way of registering global events, which lets them document what each custom event is for -- that makes for easier debugging (they can enter debug mode, and see how long each handler took to complete)
bcx.on "page:beforechange", "Set bcx.currentBucket", ->
  if matches = location.pathname.match /\/...

bcx.on "page:update", "Install autoresize behavior on visible elements", ->
  $("[data-behavior~=autoresize]:visible").autoResize()

bcx.on "page:change", "Enable placeholders in IE", ->
  $("[placeholder]").placeholder() if $.browser.msie
- Inspired by [pjax](https://pjax.herokuapp.com/)
- Lead to [turbolinks](https://github.com/turbolinks/turbolinks)

Assets reload:

- People are not refreshing their pages very often, so they won't always get the latest version of the JavaScript files

  - Every Rails request sets an `X-Asset-Paths` header, where all the assets and their digests are listed
  - On the client, whenever an Ajax request is completed (i.e. `ajaxSuccess`), we would compare these digests with the ones currently loaded, and if different, we would trigger a full page reload
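The digest check itself boils down to a pure comparison; a rough sketch (the header name and map shape come from the notes above, the function name is made up):

```javascript
// Compare the digests the server just advertised (via X-Asset-Paths)
// with the ones loaded at page start; any difference means our
// JavaScript/CSS is stale and a full reload is warranted.
function assetsChanged(loaded, advertised) {
  return Object.keys(advertised).some(
    (path) => loaded[path] !== advertised[path]
  );
}

const loaded = { 'desktop.js': 'abc123', 'desktop.css': 'def456' };
console.log(assetsChanged(loaded, { 'desktop.js': 'abc123', 'desktop.css': 'def456' })); // false
console.log(assetsChanged(loaded, { 'desktop.js': 'ffff00', 'desktop.css': 'def456' })); // true
// In the real flow, `true` would trigger something like location.reload()
```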

Cache all the things:

- Russian doll caching

  - Each individual element is cached: todo, todolist, project
  - When you change a todo, you will touch the todolist, and the project
  - All these cache entries are invalidated
  - Rebuilding the project page is cheap though, as most of the content is already cached

- There is stuff you cannot cache (e.g. timestamps, edit/delete links -- usually available to the author, for 15 minutes)

  - You cache the complete object (as if an Admin, with all the permissions possible, was looking at it)
  - You use JavaScript to tailor the page to the logged-in user
  - `<time data-local="time-ago" datetime="${UTC value}">` that will read the UTC value, and render a localized version of it **after** the page has been rendered
  - `data-creator-id=${ID}` and `data-visible-to="admin creator"` data-attributes in the template, then inside JavaScript
$("[data-visible-to]").each ->
  visibleTo = $(this).attr("data-visible-to").split /\s/
  creatorId = parseInt($(this).closest("[data-creator-id]").data("creator-id"))

  return if _.include(visibleTo, "admin") and bcx.currentPerson.isAdmin()
  return if _.include(visibleTo, "creator") and bcx.currentPerson.id is creatorId

  $(this).remove()
  - of course the same checks ought to be done on the server as well :)
def ensure_permission
  head :forbidden unless current_person.can_change?(@comment)
end
Backbone.js:

- Basecamp is not a backbone app, but it relies heavily on it on certain areas... like the Calendar!
- When there are a lot of interactions, it's better to start doing things on the client

Eco, templating language on the client:

- Coffeescript, ERB, so people are already familiar with it

"Live" page updates:

How to keep 2 pages in sync:

- Polling: each person with an active session, is polling the polling server (Rack app), every 3 seconds
- When I create a comment, Basecamp sends back to me some JavaScript to a) append the new comment to the thread (e.g. `$("section#comments_for_...").append("...")`), b) highlight it (e.g. `$("article#...").highlight();`)
- Redis queue where page updates pushed, and then delivered to users when they are polling; due to the nature of the updates (they go by section/id), if you are not **that** page, nothing will happen
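That "if you are not on **that** page, nothing happens" property falls out naturally when updates are keyed by section/id; a DOM-free sketch (all the names here are invented for illustration):

```javascript
// Updates carry the id of the section they target; a client applies
// an update only if that section exists on the current page.
function applyUpdate(page, update) {
  const section = page.sections[update.sectionId];
  if (!section) return false; // not on that page: no-op
  section.comments.push(update.comment);
  return true;
}

const page = { sections: { comments_for_42: { comments: [] } } };

console.log(applyUpdate(page, { sectionId: 'comments_for_42', comment: 'hi!' }));  // true
console.log(page.sections.comments_for_42.comments.length);                        // 1
console.log(applyUpdate(page, { sectionId: 'comments_for_99', comment: 'nope' })); // false
```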

2020-08-16 (permalink)

* finished reading [Common Lisp Recipes (CLR)](http://weitz.de/cl-recipes/)

2020-08-11 (permalink)

+ read Structure and Interpretation of Computer Programs (SICP)

2020-08-10 (permalink)

TIL: DevOps folks have [design patterns too](https://medium.com/@karthi.net/8-container-design-patterns-that-you-should-know-970a1d6a594c), like the _ambassador_ pattern, described [here](https://www.ctl.io/developers/blog/post/deploying-multi-server-docker-apps-with-ambassadors/) and [here](https://test-dockerrr.readthedocs.io/en/latest/admin/ambassador_pattern_linking/)

2020-08-09 (permalink)

Today I decided to spend some time to scratch one of the itches I have been having lately, when working with CL (Vim + Vlime): closing all the splits except for the current one, the Lisp REPL (terminal buffer), and Vlime's REPL.

When I work on a CL project, I usually arrange my splits as follows:
+------------------+------------------+
| file.lisp        | CL-USER>         |
|                  |                  |
|                  |                  |
|                  |                  |
|                  +------------------+
|                  | VLIME REPL       |
|                  |                  |
|                  |                  |
|                  |                  |
+------------------+------------------+
Well, that's when I start working... a few minutes/hours later, it might as well look similar to the below:
+------------+------------+-----------+
| file.lisp  | file2.lisp | CL-USER>  |
|            |            |           |
|            |            |           |
|            |            |           |
+------------+------------+-----------+
| file1.lisp | file3.lisp | VLIMEREPL |
|            |            |           |
|            |            |           |
|            |            |           |
+------------+------------+-----------+
So, how can I easily and quickly get from this messy state back to where I started: a single file, the terminal REPL, and Vlime's REPL?  `:only` would unfortunately close all the open splits except for the current one, which would force me to create new splits for the terminal and Vlime REPLs.

I ended up creating the following VimL function:
function! LispCurrentWindowPlusVlimeOnes() abort " {{{
    let focused_win = win_getid()

    for tab in gettabinfo()
        for win in tab.windows
            if win != focused_win
                let bufname = bufname(winbufnr(win))

                if bufname =~? 'vlime'
                    continue
                else
                    let winnr = win_id2win(win)
                    execute winnr . 'close'
                endif
            endif
        endfor
    endfor
endfunction " }}}
Which can be summarized as follows:

- Get a hold of the focused split's Window-ID
- Iterate all the splits of the current tab

  - If the current split is **not** the focused one
  - and the loaded buffer does not contain 'vlime' (that will match both my terminal REPL, and Vlime's one)
  - then we should close the split

And that's it!  Wire those mappings in (I like to override `:only` ones) and you should be ready to roll:
nnoremap <C-W>o :call LispCurrentWindowPlusVlimeOnes()<cr>
nnoremap <C-W>O :call LispCurrentWindowPlusVlimeOnes()<cr>
nnoremap <C-W><C-O> :call LispCurrentWindowPlusVlimeOnes()<cr>

2020-08-03 (permalink)

Had some trouble updating my brew formulae today:
$ brew cask upgrade java
==> Upgrading 1 outdated package:
Error: Cask 'java' definition is invalid: Token '{:v1=>"java"}' in header line does not match the file name.
It seems like my java cask is [unbelievably old](https://github.com/Homebrew/homebrew-cask/issues/58475):
$ rm -rf "$(brew --prefix)"/Caskroom/java
$ brew cask install java
Updating Homebrew...
==> Auto-updated Homebrew!
Updated Homebrew from 36c10edf2 to 865e3871b.
No changes to formulae.

==> Downloading https://download.java.net/java/GA/jdk14.0.2/205943a0976c4ed48cb16f1043c5c647/12/GPL/openj
Already downloaded: /Users/matteolandi/Library/Caches/Homebrew/downloads/4a16d10e8fdff3dd085a21e2ddd1d0ff91dba234c8c5f6588bff9752c845bebb--openjdk-14.0.2_osx-x64_bin.tar.gz
==> Verifying SHA-256 checksum for Cask 'java'.
==> Installing Cask java
==> Moving Generic Artifact 'jdk-14.0.2.jdk' to '/Library/Java/JavaVirtualMachines/openjdk-14.0.2.jdk'.
🍺  java was successfully installed!

2020-07-28 (permalink)

Oh boy, time-zones are a bitch!

Spent the day looking at the different DB-related libraries I am using in my web app (MITO, CL-DBI, and CL-MYSQL), trying to chase down a problem with DATE fields: even though these fields are stored with the **right** value inside the database, once _inflated_ they end up with an unexpected time-zone offset (in MITO's terminology, _inflating_ is the act of transforming data after it's read from the database, so it's easier to work with inside the controller).

Before we dive into the details, let's make it clear I want my web app to store DATE/DATETIME/TIMESTAMP in UTC (everyone should, really)... and here I thought the following SETF statement would get the job done:
(ql:quickload :local-time)
=>
(:LOCAL-TIME)

(setf local-time:*default-timezone* local-time:+utc-zone+)
=>
#<LOCAL-TIME::TIMEZONE UTC>
It all starts with the _model_ trying to save a date object; it reads it as a string from the request, converts it to a `local-time:timestamp`, and passes it down:
(defvar date "2020-07-28")
=>
DATE

(local-time:parse-timestring date)
=>
@2020-07-28T00:00:00.000000Z
MITO will then process the value twice; it will try to _deflate_ it first inside DEFLATE-FOR-COL-TYPE (_deflate_ is the opposite of _inflate_, i.e. make a memory value ready to be processed by the database access layer) -- this step will turn out to be a noop for us, since we are using `local-time:timestamp` already:
(defgeneric deflate-for-col-type (col-type value)
  (:method ((col-type (eql :date)) value)
    (etypecase value
      (local-time:timestamp
        value)
      (string value)
      (null nil)))
Then it will apply some MySQL-specific conversions inside CONVERT-FOR-DRIVER-TYPE:
(defgeneric convert-for-driver-type (driver-type col-type value)
  (:method (driver-type (col-type (eql :date)) (value local-time:timestamp))
    (local-time:format-timestring nil value
                                  :format *db-date-format*))
This basically converts the `local-time:timestamp` back into a YYYY-MM-DD string:
(defvar *db-date-format*
  '((:year 4) #\- (:month 2) #\- (:day 2)))
=>
*DB-DATE-FORMAT*

(local-time:format-timestring nil **
                              :format *db-date-format*)
=>
"2020-07-28"
And this is how the data is actually getting stored inside the database.  Let's have a look now at what happens when we try to read the data back.

I am using a MySQL database, and CL-DBI internally uses CL-MYSQL to actually talk to the database; CL-MYSQL in particular tries to convert all DATE, DATETIME, and TIMESTAMP objects into CL [universal time](http://www.lispworks.com/documentation/lw51/CLHS/Body/26_glo_u.htm#universal_time), and it does it with the following function:
(defun string-to-date (string &optional len)
  (declare (optimize (speed 3) (safety 3))
          (type (or null simple-string) string)
          (type (or null fixnum) len))
  (when (and string (> (or len (length string)) 9))
    (let ((y (parse-integer string :start 0 :end 4))
    (m (parse-integer string :start 5 :end 7))
    (d (parse-integer string :start 8 :end 10)))
      (unless (or (zerop y)
                  (zerop m)
                  (zerop d))
        (encode-universal-time 0 0 0 d m y)))))
=>
STRING-TO-DATE
Let's apply this function to the value stored inside the database and see what it returns:
(string-to-date **)
=>
3804876000
This is what MITO receives, before _inflating_ it; so let's _inflate_ this and see if we get back to where we started:
(local-time:universal-to-timestamp *)
=>
@2020-07-27T22:00:00.000000Z
What?!  That's what I thought too.

The first thing I tried was to change STRING-TO-DATE, and force a UTC offset when invoking ENCODE-UNIVERSAL-TIME:
(defun string-to-date (string &optional len)
  (declare (optimize (speed 3) (safety 3))
          (type (or null simple-string) string)
          (type (or null fixnum) len))
  (when (and string (> (or len (length string)) 9))
    (let ((y (parse-integer string :start 0 :end 4))
    (m (parse-integer string :start 5 :end 7))
    (d (parse-integer string :start 8 :end 10)))
      (unless (or (zerop y)
                  (zerop m)
                  (zerop d))
        (encode-universal-time 0 0 0 d m y 0)))))
=>
STRING-TO-DATE

(string-to-date "2020-07-28")
=>
3804883200

(local-time:universal-to-timestamp *)
=>
@2020-07-28T00:00:00.000000Z
This is better!  What happened? Basically, SETF-ing `local-time:*default-timezone*` changes the behavior of the LOCAL-TIME library, but does not change how built-in date-related functions operate; which means that if the runtime thinks it's running in CEST, then CEST is the time-zone it will apply unless instructed not to.

Alright, it kind of makes sense, but how can we fix this?  Clearly we can't change CL-MYSQL and hard-code the UTC time-zone in there, as everyone should be free to use the library under whichever time-zone they like.  Enter the `TZ=UTC` environment variable.
$ TZ=UTC sbcl

(ql:quickload :local-time)
=>
(:LOCAL-TIME)

(setf local-time:*default-timezone* local-time:+utc-zone+)
=>
#<LOCAL-TIME::TIMEZONE UTC>

(defvar date "2020-07-28")
=>
DATE

(local-time:parse-timestring date)
=>
@2020-07-28T00:00:00.000000Z

(defvar *db-date-format*
  '((:year 4) #\- (:month 2) #\- (:day 2)))
=>
*DB-DATE-FORMAT*

(local-time:format-timestring nil **
                              :format *db-date-format*)
=>
"2020-07-28"

(defun string-to-date (string &optional len)
  (declare (optimize (speed 3) (safety 3))
          (type (or null simple-string) string)
          (type (or null fixnum) len))
  (when (and string (> (or len (length string)) 9))
    (let ((y (parse-integer string :start 0 :end 4))
    (m (parse-integer string :start 5 :end 7))
    (d (parse-integer string :start 8 :end 10)))
      (unless (or (zerop y)
                  (zerop m)
                  (zerop d))
        (encode-universal-time 0 0 0 d m y)))))
=>
STRING-TO-DATE

(string-to-date **)
=>
3804883200

(local-time:universal-to-timestamp *)
=>
@2020-07-28T00:00:00.000000Z
_Lovely time-zones..._
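As an aside, the same local-vs-UTC trap exists pretty much everywhere; here is the JavaScript flavor of it, where values without an explicit zone may be interpreted in local time, shifting instants just like ENCODE-UNIVERSAL-TIME did above:

```javascript
// Per the ECMAScript spec, a date-only ISO string is parsed as UTC...
const utc = new Date('2020-07-28');
console.log(utc.toISOString()); // 2020-07-28T00:00:00.000Z

// ...but the same components fed to the constructor are interpreted
// in the *local* time-zone -- the JavaScript analogue of calling
// ENCODE-UNIVERSAL-TIME without an explicit time-zone argument.
const local = new Date(2020, 6, 28); // months are 0-based
const offsetMinutes = local.getTimezoneOffset();
// Unless the host runs in UTC, `local` and `utc` denote different instants,
// and the gap is exactly the host's UTC offset for that date:
console.log((local.getTime() - utc.getTime()) / 60000 === offsetMinutes); // true
```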

2020-07-18 (permalink)

? lispindent: backquote templates are not properly indented

This is the output:
(defmacro with-tansaction (conn &body body)
  `(let ((*connection* ,conn))
    (cl-dbi:with-transaction *connection*
      ,@body)))
While I would expect:
(defmacro with-tansaction (conn &body body)
  `(let ((*connection* ,conn))
     (cl-dbi:with-transaction *connection*
       ,@body)))
Note how `(cl-dbi:with-transaction ...)` and its body should be indented by an additional space

2020-07-17 (permalink)

* finished reading [Lisp Hackers](https://leanpub.com/lisphackers)

* finished reading [Full Stack Lisp](https://leanpub.com/fullstacklisp)

2020-07-15 (permalink)

TIL: `cal` can be instructed to display not just the current month, but also surrounding ones, with `-$N` (if `$N` is 3, `cal` will show the current month, the previous, and the next one):
$ cal -3
     June 2020             July 2020            August 2020
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
    1  2  3  4  5  6            1  2  3  4                     1
 7  8  9 10 11 12 13   5  6  7  8  9 10 11   2  3  4  5  6  7  8
14 15 16 17 18 19 20  12 13 14 15 16 17 18   9 10 11 12 13 14 15
21 22 23 24 25 26 27  19 20 21 22 23 24 25  16 17 18 19 20 21 22
28 29 30              26 27 28 29 30 31     23 24 25 26 27 28 29
                                            30 31
Source: https://drewdevault.com/2020/07/14/March-2nd-1943.html

2020-07-14 (permalink)

* finished reading [Lisp for the Web](https://leanpub.com/lispweb)

* finished reading [Lisp Web Tales](https://leanpub.com/lispwebtales)


RailsConf 2016 - Turbolinks 5: I Can’t Believe It’s Not Native! by Sam Stephenson: https://www.youtube.com/watch?v=SWEts0rlezA

Basecamp3:

- 18 months
- 200+ screens
- 5 platforms (desktop web, mobile web, android native, ios native, and email)
- 6 rails devs, 2 android, 2 ios
- built 3 open source frameworks: Trix (editor), Action Cable, and Turbolinks 5

Turbolinks for iOS (or Android):

- Intercepts link-click events, and creates a new view on the stack
- There is a single WebView only, and that's shared by all the windows on the stack (smaller memory footprint)
- You can customize the user agent, and on the backend for example, hide certain areas which would not look good on the APP
- Native (kind-of) app in 10 minutes
- You can go fully native on specific screens -- hybrid approach
- You can create native menus, by simply extracting metadata from the received page

2020-07-12 (permalink)

TIL: `git` fixups / autosquashes: https://fle.github.io/git-tip-keep-your-branch-clean-with-fixup-and-autosquash.html

* finished reading [Sails.js in Action 1st Edition](https://www.amazon.com/Sails-js-Action-Mike-McNeil/dp/1617292613)

2020-07-06 (permalink)

Rails Conf 2013 Patterns of Basecamp's Application Architecture by David Heinemeier Hansson: https://www.youtube.com/watch?v=yhseQP52yIY&list=PL01Ur3GaFSxxOQWCxypbFBqb2s11Yy_N0&index=8

HTML: document based programming, or delivery mechanism?  Google maps vs Blogs (and to some extent, SPAs vs server-side rendered applications, or hybrid applications)

To make things fast:

1. Key-based cache expiration (or generational caching)

  - Model name + id / "new" + last-updated-at (if available)

2. Russian Doll nested caching

  - Comment is inside a post, a post is inside a project
  - When you change a comment, the system automatically touches the parent, then the parent touches its parent... (where "touches" really means update the last-modified-date timestamp)
  - The MD5 of a partial is included in the cache key, which means every time you update a partial, the new view will be rendered (and cached again); if the partial is not updated, the old, cached value is still safe to use
  - Templates are inspected, and caches are invalidated based on `render` calls
  - What if you want to show local time?  You add the UTC value to the template, add a `data-time-ago` attribute, and inject some JavaScript on the client to convert the UTC value into local time
  - What if you have admin links (Edit, Delete, Copy, Move) which are only visible to a subset of users?  We cannot access the logged-in user when rendering/caching the view, so a similar solution to the problem above: annotate the link with `data-visible-to`, and then some JavaScript will take care of removing the links which are not visible

3. Turbolinks process persistence

  With Turbolinks, your app could be as fast as the backend is able to serve responses: 50/150 ms?  The experience is great; higher than that, and you are not going to notice much of a difference from a complete page-refresh.

4. Polling for updates
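To make the key-based expiration idea concrete, here is a back-of-the-envelope sketch in shell (model name, id, timestamp, and template are all made up): the key embeds the record's last-updated timestamp and a digest of the template, so touching either one naturally expires the cached fragment.

```shell
# Generational cache key: model/id, last-updated-at, digest of the template
model=post
id=15
updated_at=20200706101530
template='<article><%= post.body %></article>'
template_digest=$(printf '%s' "$template" | md5sum | cut -d' ' -f1)

cache_key="views/${model}/${id}-${updated_at}-${template_digest}"
echo "$cache_key"
```

Update the record (new `updated_at`) or edit the partial (new digest) and you get a brand new key; the stale entry is never read again and simply ages out of the cache.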

2020-07-04 (permalink)

Bit of vim g* niceties:

g8: show the bytes (in hex) of the UTF-8 character under the cursor
g<: see the output of the last command
gF: go to file, and line number (./path/file.js:23) -- handy if you have a stack-trace in a buffer
gv: re-select the last selection
gi: go back to the last insert location
gq: wrap text (e.g. break lines at the configured 'textwidth')

2020-07-03 (permalink)

A little bit off-topic from the usual stuff I write about, but I just want to drop here some links to videos about DJ techniques (don't ask why I got into these):

* Strobing/Strobe Technique: https://www.youtube.com/watch?v=2TokF34LUtI

* Another video about Strobing/Chasing: https://www.youtube.com/watch?v=vXaq5gTHsME

* Back-spinning: https://www.youtube.com/watch?v=WTx34IsDIJU

* Finally the original DJ battle video that got me into this rabbit hole: https://www.youtube.com/watch?v=XymUZKLTjW4


TIL: Ansible won't re-create a container when you remove one of its ENV variables -- and apparently, that's by design!

Ansible would not recreate the container when ENV variables are removed, because the condition of the remaining ENV variables still being set would still hold true.

  https://github.com/ansible/ansible/pull/31340

The work-around is to inject a dummy ENV variable, and keep updating its value every time we want to force ansible to recreate the container (for whatever reason).
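For instance, a hypothetical playbook task (names and options made up) where bumping `FORCE_RECREATE` is enough to get the container recreated:

```yaml
# Bump FORCE_RECREATE whenever a removed ENV variable should take effect
- name: Run myapp
  docker_container:
    name: myapp
    image: myapp:latest
    env:
      SOME_VAR: "value"
      FORCE_RECREATE: "2"   # was "1"; incremented to force a recreate
```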

2020-07-01 (permalink)

? cb.vim: can it work like a standard register?

+ cb.vim: vim-fugitive: ["x]y<C-G> copies the current git-object, but how can I do the same with ,y and send it to cb?

2020-06-30 (permalink)

? ap: parsing errors are not logged to stdout ... and there is no non-zero exit status either

? book: the medici effect

? book: the innovator's dilemma

2020-06-20 (permalink)

What?!
$ curl wttr.in
Weather report: Pisa, Italy

    \  /       Partly cloudy
  _ /"".-.     20 °C
    \_(   ).   ← 6 km/h
    /(___(__)  10 km
               0.0 mm
                                                       ┌─────────────┐
┌──────────────────────────────┬───────────────────────┤  Sat 20 Jun ├───────────────────────┬──────────────────────────────┐
│            Morning           │             Noon      └──────┬──────┘     Evening           │             Night            │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│    \  /       Partly cloudy  │    \  /       Partly cloudy  │    \  /       Partly cloudy  │    \  /       Partly cloudy  │
│  _ /"".-.     22 °C          │  _ /"".-.     24..25 °C      │  _ /"".-.     22..25 °C      │  _ /"".-.     20 °C          │
│    \_(   ).   ↑ 6-8 km/h     │    \_(   ).   → 10-12 km/h   │    \_(   ).   → 12-15 km/h   │    \_(   ).   → 9-14 km/h    │
│    /(___(__)  10 km          │    /(___(__)  10 km          │    /(___(__)  10 km          │    /(___(__)  10 km          │
│               0.0 mm | 0%    │               0.0 mm | 0%    │               0.0 mm | 0%    │               0.0 mm | 0%    │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
                                                       ┌─────────────┐
┌──────────────────────────────┬───────────────────────┤  Sun 21 Jun ├───────────────────────┬──────────────────────────────┐
│            Morning           │             Noon      └──────┬──────┘     Evening           │             Night            │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│    \  /       Partly cloudy  │    \  /       Partly cloudy  │    \  /       Partly cloudy  │    \  /       Partly cloudy  │
│  _ /"".-.     23..24 °C      │  _ /"".-.     25..26 °C      │  _ /"".-.     23..25 °C      │  _ /"".-.     22 °C          │
│    \_(   ).   ↑ 4-5 km/h     │    \_(   ).   → 9-10 km/h    │    \_(   ).   → 11-14 km/h   │    \_(   ).   ↘ 6-10 km/h    │
│    /(___(__)  10 km          │    /(___(__)  10 km          │    /(___(__)  10 km          │    /(___(__)  10 km          │
│               0.0 mm | 0%    │               0.0 mm | 0%    │               0.0 mm | 0%    │               0.0 mm | 0%    │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
                                                       ┌─────────────┐
┌──────────────────────────────┬───────────────────────┤  Mon 22 Jun ├───────────────────────┬──────────────────────────────┐
│            Morning           │             Noon      └──────┬──────┘     Evening           │             Night            │
├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤
│    \  /       Partly cloudy  │    \  /       Partly cloudy  │    \  /       Partly cloudy  │    \  /       Partly cloudy  │
│  _ /"".-.     25..27 °C      │  _ /"".-.     27..28 °C      │  _ /"".-.     24..26 °C      │  _ /"".-.     22 °C          │
│    \_(   ).   ↑ 4-5 km/h     │    \_(   ).   → 8-10 km/h    │    \_(   ).   → 10-14 km/h   │    \_(   ).   ↘ 6-11 km/h    │
│    /(___(__)  10 km          │    /(___(__)  10 km          │    /(___(__)  10 km          │    /(___(__)  10 km          │
│               0.0 mm | 0%    │               0.0 mm | 0%    │               0.0 mm | 0%    │               0.0 mm | 0%    │
└──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘
This is pretty dope!

2020-06-11 (permalink)

TIL: of all the cron implementations out there [0], only the Jenkins scheduler supports the H character to _evenly_ distribute jobs across a defined time range [1]

[0] https://en.wikipedia.org/wiki/Cron
[1] https://stackoverflow.com/a/47302727
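For the record, a couple of Jenkins "Build periodically" schedules using H (the hash is derived from the job name, so each job gets its own stable slot):

```
# Once a day, at a per-job minute and hour, instead of everybody's midnight
H H * * *
# Every 15 minutes, with a per-job offset within each window
H/15 * * * *
```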

2020-06-07 (permalink)

* Changed my `lispindent` fork to read `.lispwords` from the current directory, then from its parent, then from its parent's parent... until the process gets past the user's home directory.

* scmindent: it leaves trailing empty spaces when keywords are surrounded with empty lines

* scmindent: skip commented out lines

* Created Vim plugin, vim-lispindent, to: 1. automatically `set equalprg` for lisp and scheme files, 2. parse `.lispwords` and 3. set Vim's `lispwords` accordingly

? vim-lispindent: download `lispindent` if not found on $PATH

~ vim-lispindent: support rules with LIN = 0 (we want to remove these from `lispwords`)

? vim-lispindent: README + doc/

? cg: pass the last executed command (e.g '$!') and use that to _influence_ the order of suggested commands to execute

? scmindent: cases inside `HANDLER-CASE` are not _properly_ indented

2020-06-01 (permalink)

Been finally giving [vim-sexp](https://github.com/guns/vim-sexp) a try, and I gotta be honest, I am quite liking it.

I have also been reading about _structural code editing_, and landed on this [SO page](https://stackoverflow.com/a/30178499) about _barfing_ and _slurping_:
slurping and barfing are the essential operations/concepts to use one of the modern structural code editors. after getting used to them I'm completely incapable of editing code without these. Of the ~20 people that sit with me writing clojure all day all of them use these all the time. so saying that they are "helpful for lisp coders" is a very tactful and polite understatement.

slurp: to include the item to one side of the expression surrounding the point into the expression

barf: to exclude either the left most or right most item in the expression surrounding the point from the expression
That's an alright definition, but when would you actually use it?  Another [answer](https://stackoverflow.com/a/30190612) to the same question tries to shed some light on this:
so a session might be:

    (some-fn1 (count some-map))
    (some-fn2 (count some-map))

aha, a let could come in here to refactor the (count some-map):

    (let [c (count some-map)]|)
    (some-fn1 c)
    (some-fn2 c)

But the let isn't wrapping the 2 calls, so we want to pull in (slurp) the next 2 forms inside the let s-exp, so now at the cursor position, slurp twice which will give after first:

    (let [c (count some-map)]
      (some-fn1 c)|)
    (some-fn2 c)

and then on second:

    (let [c (count some-map)]
      (some-fn1 c)
      (some-fn2 c)|)
It all starts to make sense.  Let's now have a look at Tpope's [vim-sexp-mappings](https://github.com/tpope/vim-sexp-mappings-for-regular-people), and in particular what he thought some sane mappings should be for slurpage and barfage:
" Slurping/Barfing
"
" angle bracket indicates the direction, and the parenthesis
" indicates which end to operate on.
nmap <silent><buffer>   <(   <Plug>(sexp_capture_prev_element)
nmap <silent><buffer>   >)   <Plug>(sexp_capture_next_element)
nmap <silent><buffer>   >(   <Plug>(sexp_emit_head_element)
nmap <silent><buffer>   <)   <Plug>(sexp_emit_tail_element)
So going back to the example above, where the cursor was on the outermost closing paren:
    (let [c (count some-map)]|)
    (some-fn1 c)
    (some-fn2 c)
To slurp the next expression in, you would end up pushing the closing paren the cursor is on to the right (past the next element), and the chosen mapping, `>)`, really tries to capture that...

...again it all starts to make sense...

* Changed `lispindent` to support reading ./.lispwords: https://github.com/ds26gte/scmindent/issues/7

* Changed `lispindent` to properly format FLET, MACROLET, and LABELS forms: https://github.com/ds26gte/scmindent/issues/8

2020-05-31 (permalink)

? add a Vim operator for neoformat: this requires the current formatter to support reformatting of partial areas, in which case, one could surround the area to re-format with markers (e.g. comments, that the formatter is likely to ignore), and then use these markers to extract the formatted area of interest and stick it back into the buffer

2020-05-29 (permalink)

~ small utility to create RSS feed URL given another URL (i.e. give it a YouTube URL, and it will generate a feed for it; give it a GitHub URL, and it will generate a feed for it)

2020-05-25 (permalink)

On indenting lisp files with Vim.

First things first, Vim has Lisp mode (`:h lisp`), and that alone should give you good enough indentation for lisp files.
au FileType lisp setlocal lisp
However, Vim's default way of indenting s-expressions is not always perfect, so if you happen to use [Vlime](https://github.com/vlime/vlime) (and if you don't, I highly recommend that you do), then you can switch to Vlime's `indentexpr` by simply disabling lisp mode (the plugin will do the rest):
" Weirdly, you have to set nolisp for (the correctly working) indentexpr to work.
" If lisp (the vim option) is set, then it overrules indentexpr.
" Since an indentexpr is set by vlime (and working!) it would be great to add nolisp to the default configuration, I guess.
" https://github.com/l04m33/vlime/issues/26#issuecomment-343761050
au FileType lisp setlocal nolisp
Lastly, you can go back to using lisp mode, but tell Vim to invoke an external program, like [scmindent](https://github.com/ds26gte/scmindent), when indenting s-expressions (make sure `lispindent` is in your `$PATH`):
au FileType lisp setlocal lisp
au FileType lisp setlocal equalprg=lispindent
You might prefer this last approach, especially when collaborating with others, as it does not rely on a specific combination of editor / plugin, but on an external tool that others can easily plug into their own workflow.

A few final words about Vim's `lispwords` (`:h lispwords`), and in particular a question that has bugged me for quite some time: do you still need to set them up when using Vlime's `indentexpr`, or an external program, to indent your lisp files?  Yes, especially if you care about Vim properly indenting s-expressions as you type them: `indentexpr` is only used in conjunction with the `=` operator (i.e. when manually re-indenting blocks of code), while `lispwords` is used when typing in new code.
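For example, to teach Vim that a hypothetical `define-route` macro should be indented like a body form as you type (the `.lispwords` counterpart of the same rule):

```
" Append to Vim's 'lispwords', used for insert-time indentation
au FileType lisp setlocal lispwords+=define-route
```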

2020-05-24 (permalink)

Random quote from a podcast I recently listened to:
A lot of people say, just do a support model... and I think everybody in the business room looks at this and laughs... because you don't trade time for money... only poor people do that... rich people, they trade products for money.

2020-05-23 (permalink)

Simplified my poor man's yankstack implementation to use `localtime()` instead of timers.

The idea is pretty simple; every time you invoke it:

- Take a timestamp
- Check that timestamp against another one (a buffer variable) representing when the function was last executed
- If the two timestamps are more than 5 seconds apart, it means the user wants to start from register 0; otherwise 1, 2, 3...
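The gist of it, sketched in shell rather than VimL (names made up; the real thing uses `localtime()` and a buffer-local variable for `last`):

```shell
# Advance the register index only when invoked in quick succession
last=0
index=0
bump() {
  now=$(date +%s)
  if [ $((now - last)) -gt 5 ]; then
    index=0               # long pause: start over from register 0
  else
    index=$((index + 1))  # called again within 5 seconds: next register
  fi
  last=$now
}

bump; echo "$index"   # first call in a while: 0
bump; echo "$index"   # immediately after: 1
```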

Change SendToTerminal (STT), to strip out command prompts (if configured)

Sometimes you just want to _send_ a line like the following one, without worrying about stripping the `$ ` (or `> `) bit out of it.
$ fortune
So I changed STT to automatically strip it out for me!
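A sketch of the stripping itself, assuming prompts that look like `$ ` or `> ` (function name made up):

```shell
# Remove a leading shell prompt before sending the line to the terminal
strip_prompt() {
  printf '%s\n' "$1" | sed -E 's/^[[:space:]]*[$>][[:space:]]+//'
}

strip_prompt '$ fortune'   # prints: fortune
strip_prompt '> ls -la'    # prints: ls -la
```

Lines without a prompt pass through untouched.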

2020-05-22 (permalink)

? book: tools for thought

2020-05-21 (permalink)

Fucking fixed bash completion for `g`, my `git` alias.
# For God knows what reason, on Linux, `git` completion functions would not be
# loaded until you manually trigger them:
#
#   > git <tab>
#
# So here we manually source the file, if available
if [ -f /usr/share/bash-completion/completions/git ]; then
  source /usr/share/bash-completion/completions/git
fi
function g() { git "$@"; }
__git_complete g __git_main

2020-05-19 (permalink)

* Finally added a new page on my website, to talk about .plan files. Check it out: https://matteolandi.net/plan.html

2020-05-16 (permalink)

Alright, I think I figured out what I had to do in order to get my CL programs, on Windows, to properly output stuff to stdout.  It turns out you have to explicitly tell deploy [1], the system I use to deploy standalone CL applications, that you want your Windows application to be a _console_ application, or otherwise deploy will go with a GUI application (and GUI applications don't usually get a standard output to write to).

To do so, we will have to set the feature flag `:deploy-console`, before building the application:
(pushnew :deploy-console *features*)
(let ((deploy:*status-output* t))
  (asdf:make :cg :force t))
And that's it, your '.exe' program will finally print stuff to stdout, irrespective of the terminal you run it from:
mlandi at pisa299 in /home/mlandi
$ cg --help
Guess commands to run from stdin, and print them to stdout.

Available options:
  -h, --help               print the help text and exit
  -v, --version            print the version and exit
  -d, --debug              parse the RC file and exit
[1] https://github.com/Shinmera/deploy

Run into another problem with Travis-CI the other day; when deploying artifacts to GitHub, it would error out with the following message:
/home/travis/.travis/functions: eval: line 109: unexpected EOF while looking for matching ``'
/home/travis/.travis/functions: eval: line 110: syntax error: unexpected end of file
After a bit of digging I finally found this `dpl` issue [1], where the OP says that there were some problems with his password/token; that made me think about the token I had used, and whether I got it right or not (I generate tokens with `cb | travis encrypt`, so what if I accidentally copied something else into the OS clipboard, overriding GitHub's token?).

The new token turned out to be different from what I had inside my .travis.yml file, so yeah... yet another PEBCAK [2] (even though the guys at Travis-CI could have logged a better error message!)

[1] https://github.com/travis-ci/dpl/issues/634
[2] https://en.wikipedia.org/wiki/User_error#Acronyms_and_other_names

2020-05-12 (permalink)

Did a little bit of debugging as to why **all** my Common Lisp Windows executables were not printing things out to stdout as expected, and it seems to have something to do with redirection...
mlandi at pisa299 in /home/mlandi
$ cg --help

mlandi at pisa299 in /home/mlandi
$ cg --help | cat
Guess commands to run from stdin, and print them to stdout.

Available options:
  -h, --help               print the help text and exit
  -v, --version            print the version and exit
  -d, --debug              parse the RC file and exit
I am not sure what's going on here... but at least I narrowed down the problem a bit.  I will try something else during the following days, but I might wind up asking directly on the cygwin mailing list.

2020-05-05 (permalink)

Been playing with `ap` lately, trying to find a _good-enough_ heuristic, but it's really hard.  I even tried implementing a depth-first-search algorithm, and even though with pruning it converges to a good-enough solution pretty quickly, it won't ever reach the optimum (the search space is simply too vast).  The solution?  Add a command line switch, `--good-enough`, to tell `ap` to settle for a sub-optimal solution, and to do it ASAP.

To recap:

- Optimal solution: A* without heuristic (might take forever, or even blow up)
- Good enough solution -- best between

  - A* with very naive heuristic (one that can easily overestimate the remaining cost)
  - Bounded DFS -- i.e. one that will stop after having evaluated 50000 states

* finished reading [Shape Up: Stop Running in Circles and Ship Work that Matters](https://basecamp.com/shapeup)

2020-04-24 (permalink)

Previously I had the following in my `.gitconfig`:
[alias]
    b = branch -v
Now, and after a bit of fiddling around, I have:
[alias]
    b = "!f() { git branch --color --verbose | tr -d '\\r'; }; f"
Why?  Because GitLab merge commits can contain `^M` characters, and those can fuck up the output of `git branch --verbose`.

The solution?  Manually strip the carriage returns, and forcibly enable colours -- `git` knows its output is being piped, so it would turn colors off by default.

2020-04-21 (permalink)

This `docker` trick will never get old:
$ docker run --rm -i --name bash-rm -v $(pwd):/pwd-host bash rm ...

2020-04-19 (permalink)

Finally spent some time trying to figure out a solution for `ap` claiming activities too _early_ (i.e. before all their dependencies were satisfied).

I wound up keeping track of when each activity is completed; this way, when worker `w`, who has been busy for `t` time, tries to claim activity `a`, the algo checks and confirms that all of `a`'s dependencies have been completed before `t` (and if not, it skips the activity and moves to the next one).

It's a bit inefficient at the moment (well, the whole codebase leaves a lot to be desired), but before cleaning that up I first need to figure out a good heuristic; and if I cannot find one, I might as well change how search states are represented, or abandon the project altogether.

Let's see.

2020-04-18 (permalink)

Wanted to try and set up some _automatic check-ins_ for the team; as people work remotely, I would like to experiment with reducing the number of scheduled meetings.  But before implementing that and, say, killing every daily standup and having people write their daily status updates, I thought we should first give people a place where they could talk about anything, and stay up to date with each other's lives (if they want).  So rather than start asking people what they will be working on during the coming week, I thought we should start with something lighter, something social, something like: how was your weekend?

The task is quite easy: every Monday morning at 8am, have Jenkins fire an email (later published on Teams) to ask people: what have you guys done over the weekend?  People into sharing would then reply to the thread, or simply read what others have been up to.  Nothing crazy, nothing creepy, just a bit of sharing to make the people we work with a bit more interesting.

All right, let's talk implementation:
every Monday morning at 8am
We have people working from Jakarta, Noida, Hyderabad, Pisa, Scotland, and New York.  So what does that 8am really mean here?  Well, ideally we would want people to process this as soon as they get in (i.e. skim through inbox / Teams to see if there is anything critical, then spend a few minutes _sharing_), so the notification needs to be there when the Jakarta people get in.
have Jenkins
Jenkins lets you easily schedule when jobs need to run (Build Triggers > Build periodically > Schedule), and it supports custom timezones too:
TZ=Asia/Jakarta
# Every Monday morning, at 8am
# MINUTE HOUR DOM MONTH DOW
0 8 * * 1
fire an email ...
Nothing could have been easier: on a Jenkins job configuration page, Build > Add build step > Execute shell, then copy the following into the 'Command' field (replace `$TO_EMAIL` with the email address associated with the specific MSFT Teams channel):
#!/bin/bash -ex

/bin/mutt -s 'How was your weekend?' $TO_EMAIL <<EOF
Reply to this thread and let the others know what you have been up to.
EOF
And that's it, really!

2020-04-16 (permalink)

Source: https://twitter.com/_tessr/status/626076327133577216
how to remember tar flags 🎏

tar -xzf 👈 eXtract Ze Files!
tar -czf 👈 Compress Ze Files!
This never gets old!

2020-04-13 (permalink)

Flipped the order of notes in my .plan file...again!

The main reason why I switched from _most-recent-last_ to _most-recent-first_ last August was that I thought that would make it easier for a casual reader to access the most recent entry; however, that made it really difficult for someone to dump the whole file and read it in chronological order.

Anyway, now that I can create an RSS feed off my .plan file (see: https://matteolandi.net/plan.xml), there is no more need for adding new notes at the top of my .plan file, so I decided to flip the order of my notes back to _most-recent-last_.

This is what I used last August, to reverse the order of all the notes:
" Select all the end of lines *not* preceding a new day note
/\v\n(# [0-9]{4})@!
" Change all the end of lines with a § -- unlikely to use character
:%s//§
" Sort all the collapsed notes (! is for reverse :help sort)
:'<'>sort!
" Change § back to \r
:%s/§/\r
It still does not properly support the scenario where a `§` character is being used in a note, but I checked my .plan file before running it, and it's going to be all right.

* finished reading [It Doesn't Have to Be Crazy at Work](https://www.amazon.com/Doesnt-Have-Be-Crazy-Work/dp/0062874780/)

2020-04-12 (permalink)

* finished reading [The 4-Hour Work Week: Escape the 9-5, Live Anywhere and Join the New Rich](https://www.amazon.com/4-Hour-Workweek-Expanded-Updated-Cutting-Edge-ebook/dp/B002WE46UW)

2020-04-11 (permalink)

Ode to [checklists](https://medium.com/javascript-scene/managing-remote-teams-use-checklists-301aae93e7a5):

Good checklists are:

- Short enough to memorize
- Only include the keypoints

For example (quite relevant these days):

- Use soap
- Wet both hands completely
- Rub the soap until it forms a thick lather covering both hands completely
- Wash hands for at least 20 seconds [not included in this study, but we know now it takes at least 20 seconds to break down viral bugs including Coronavirus so I’m putting it here for posterity]
- Completely rinse the soap off

Other examples, closer to the software development industry

Before asking peers to review your pull request, make sure that:

- [ ] Successfully tested on Jenkins
- [ ] Product team has signed off the implemented workflow
- [ ] Bonus: Screenshots/screencast demo included

Before approving a pull request and merging that into master, make sure that:

- [ ] Code is readable
- [ ] Code is tested
- [ ] The features are documented
- [ ] Files are located and named correctly
- [ ] Error states are properly handled
- [ ] Successfully tested on Jenkins

Before pushing a new version out, make sure that:

- [ ] Look into failing tests (if any), and confirm if intermittent
- [ ] Announce on whichever social media you are using, the main features / bugfixes / changes being released

After a successful release, make sure that:

- [ ] Do some smoke-test to confirm it's all good
- [ ] Run any post-release action (e.g. database migration, if not automatic, support tickets updated)
- [ ] Announce on social media that the release has completed successfully
- [ ] Archive completed stories

When adding a new Defect to the backlog, make sure it includes:

- [ ] Description
- [ ] Steps to reproduce
- [ ] Expected behavior
- [ ] Actual behavior
- [ ] Bonus: Screenshot/video demonstrating the bug
- [ ] Bonus: any additional pre-analysis

I believe we can go on forever, but this seems promising.

2020-04-08 (permalink)

Is it possible to implement Basecamp's automatic check-ins with Teams?  I think it is, with some duct tape of course.

- Each Teams channel has an email address associated with it; you send an email to it, and a few seconds later that message would pop up on teams
- A Jenkins job can be configured to run periodically (e.g. every day @ 9am)
- You can can easily send email from the command line as follows: `echo 'body' | mutt -s 'Subject' your.address@foo.com`

So...

1. Gather your Microsoft Teams' channel email
2. Set up your script:
#!/bin/bash -ex

/bin/mutt -s 'Automatic check-in: What will you be working on this week?' $TO_EMAIL <<EOF
Reply to this thread and let the others know what your plan for the week is.

- Things change, so don't worry if you wind up working on something completely different
- Few bullet points, a poem, a mockup, anything that might help you get the message across
- That's it
EOF
3. Tell Jenkins to run this every Monday morning, at 9am

And that's it!  Everyone, every Monday morning will have a new notification in the specified channel, and could inform the others about their plans for the week.

? tmux / macos: when piping a pane into a program, trailing whitespaces are added to some of the lines (who is generating them? is it the command that generated the output, is it tmux itself?)

? ap: implement person-activity preferences

+ ap: current _dependencies_ implementation is flawed... at any given state, people might have worked for different days (and completed activities at different times), so in order to be able to understand if an activity is unblocked, we need to understand **when** all its dependencies have been completed

A1 -> A2 (depends on)
P1: A1
P2: A2

P2 cannot start working on A2 until A1 is finished, so the best is T(A1) + T(A2), while I suspect the algo is going to spit out max(T(A1), T(A2))
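A toy check of that claim, with made-up durations:

```shell
# Two activities, A1 -> A2 (A2 depends on A1), one worker each
t_a1=3
t_a2=4
# Correct schedule: P2 has to wait for A1, so the makespan is the sum
correct=$((t_a1 + t_a2))
# The flawed algo would overlap them, and report the max instead
flawed=$(( t_a1 > t_a2 ? t_a1 : t_a2 ))
echo "correct=$correct flawed=$flawed"
```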

2020-04-07 (permalink)

Another interesting video from the Basecamp guys, on how they use Basecamp...to run Basecamp.  Check it out: https://m.signalvnoise.com/a-live-tour-of-how-basecamp-uses-basecamp-to-run-basecamp/

Home:

- 3 sections: HQ, teams, projects
  - HQ i.e. company space
  - Teams i.e. teams, or departments (how people are organized)
  - Projects i.e. how work is organized
- each of these sections has **tools**
- each person has access to HQ, and only those teams and projects they have been invited to

Tools:

- Message board
  - project/team/company announcements: so when a new joiner is added, they can easily browse past announcements

- Todos
  - Lists (or scopes, or chunks of work)
  - Each list has items
  - An item can:
    - be assigned to someone
    - notify a list of people when done
    - have a due date assigned
    - and store additional notes
  - Hill-chart
    - simple way to understand _where things stand_ at a given time (not based on number of tasks completed/remaining, as tasks are not linear)
    - two sides: left, figuring things out (lots of unknowns); right, making it happen (just needs time to implement, no blockers in sight)
    - each dot is a scope of work, and people can drag it around the hill curve
    - scopes are dragged manually, not systematically... we want to exercise people's gut feelings
    - you can roll back a project and see how things moved on the hillchart over time
- Docs & Files

  - Dropbox, Google Drive ...
  - Can create folders
  - Can _color_ document -- should you need some clues

- Campfire

  - realtime chat, built-in

- Schedule

  - project/team/company shared calendar

- Automatic Check-ins

  - scheduled questions sent to everybody in the company
  - Click on someone's name, and it will pull all someone's answers to a specific automatic check-in (it works similarly for other features too)


General:

- Everything is _commentable_
- Almost everything has a deep-link associated with it, so you can share it with anyone, and if they have the right permissions they will be able to see it
- You can leave **boosts** (like reactions, or 16-character-long comments) to messages or to comments
- From each team, or each project, if you scroll down the Tools section, you get access to the **whole** history of the team/project.  Organized stuff at the top (your tools), scroll through history down the bottom
- The same **history** concepts exists for the whole **company**.  There is a "Latest activity" section where you can see what's happening, in realtime, across all the teams, across all the projects
- There is a "My Schedule" section, that lists all the events that you are involved with; same with the Schedule tool, you can get a link and display this information inside your personal Google Calendar, or whatever
- You can open someone's profile, and access all that is on their plate (i.e. all the to-dos currently assigned to them)
- You can open someone's profile, and access all of their activities... not just to-dos, but comments

Basecamp for Basecamp:

- Discuss multiple things in a **team**... but very specific ones inside **projects**
- If what you want to do can be summarized in one to-do item, or a message... then it goes into a team; if not, then it goes into a project
- Pretty much all the **teams** have everybody on it -- very open company.  Only a few teams have a few people only -- mostly for executives to discuss about strategy.
- Tons of projects, and they usually last for 1 or 2 weeks (once completed, you can archive them).  For example: every new release of the iOS Basecamp app has a new project, with to-dos, hill-chart and everything
- They have a project called OUT, with only the Schedule tool turned on, where people tell when they will **not** be working.  The nice thing about the Schedule tool, is that you can import that into Google Calendar, Outlook, iCal.
- Idea pitches are completely managed asynchronously on Basecamp: big writeup on the message board (usually the _deep_ thinking is done by one person), with enough details for programmers to be able to come up with high level estimates, but that's it (too many details, at this stage, are almost a waste of time, especially if the idea does not turn out to be a good one).
- How to manage contractors? 1. create a new company (otherwise they would have access to your HQ) 2. invite them to projects.

HQ:

- Announcements

  - Everyone has to see this stuff

- All Talk

  - Chat-room, social stuff, everyone has access to it

- Dates

  - Company coming/past meetups

- Reference Materials

  - Policies
  - Logos for products
  - National Holidays
  - 5x12s: every month, 5 random people, 1h to talk about stuff which is not work -- all of this is transcribed

- Fan mail

  - Forward mail into Basecamp, so they show up there

- Scheduled check-ins:

  - Every Monday morning: what did you do this weekend?
  - Every month, what are you reading? -- helps build a culture, especially for remote teams

Example project: Hey (a new email platform they are launching):

- 2 developers, but everyone invited to the project
- Tools: Message board, To-dos, Campfire
- All starts with a _shaping-doc_ (read more at: https://basecamp.com/shapeup)

  - high level what the project is about
  - edge cases maybe
  - few low-fidelity mockups
  - the team creates the tasks

- To-dos

  - Mixed to-dos for the programmer and the designer
  - Different scopes (list of to-dos), like 'sign-up flow'

Another example project: 'What Works [2020]' project (one project per year):

- Message board

  - Heartbeats: every team lead, every 6 weeks (they run 6-week dev cycles), writes about what they have done over the last cycle
  - Kickoffs: every team lead, every 6 weeks, write about what they will be working on
  - Just summaries for the rest of the company to read, and stay up to date
  - One heartbeat / kickoff per team pretty much

- Automatic checkins

  - Every Monday, 9am, what will you be working on this week?  Rough estimate of what we think we will be working on
  - Every day, at 4:30pm, what did you do today?  Not just a list of to-dos... as most of the time, what you are working on does not have a to-do attached.  Own words, 15 minutes tops
  - Managers can stop asking people about updates, but read them instead

- Schedule

  - When 6 weeks cycle start, end

- To-dos

2020-04-05 (permalink)

Put together a little script to generate the RSS feed URL given a YouTube page.

YouTube does not include such URL in the HTML page, but it's easy to guess it (thanks to [1]):
1. Go to the YouTube channel you want to track
2. View the page’s source code
3. Look for the following text: channel-external-id
4. Get the value for that element (it’ll look something like UCBcRF18a7Qf58cCRy5xuWwQ)
5. Replace that value into this URL: https://www.youtube.com/feeds/videos.xml?channel_id=...
Here you go:
#!/usr/bin/env bash

set -euo pipefail

URL=${1?YouTube URL required}

channel_id=$(curl "$URL" \
  | grep --only-matching --extended-regexp 'data-channel-external-id="[^"]*"' \
  | head -n1 \
  | cut -d\" -f2)

echo "https://www.youtube.com/feeds/videos.xml?channel_id=$channel_id"
[1] https://danielmiessler.com/blog/rss-feed-youtube-channel/

Very interesting (and quite lengthy) Q&A session on remote working, held from the Basecamp guys: https://m.signalvnoise.com/live-qa-on-remote-working-working-from-home-and-running-a-business-remotely/

2020-04-04 (permalink)

A new version of Alacritty has been released (0.4.2), so time to do the _dance_:

1. Update the main app:
brew cask install --force alacritty
2. Do the dance
cd /Applications
for suffix in fullscreen main scratch w3m; do
  target=Alacritty-$suffix
  echo "Alacritty.app -> $target.app"
  rm -rf $target.app
  cp -rf Alacritty.app $target.app
  echo "Patching $target.app/Contents/Info.plist"
  sed -i "s/Alacritty/$target/" $target.app/Contents/Info.plist
  echo "Done"
  cat $target.app/Contents/Info.plist | grep VersionString -A1
done
cd -
That's it!

Been working on a little tool called `ap`: the activity planner.

You feed it a list of **activities** (ID, effort, and dependencies), and a list of **people** (ID, allocation, and whether they are already working on a specific activity):
activity act-01 5
activity act-02 3
activity act-03 5
activity act-04 3
activity act-05 3
activity act-06 3
activity act-07 3
activity act-08 5
activity act-09 3
activity act-10 5
activity act-11 5
activity act-12 5 act-03
activity act-13 10
person user-01 80 act-01
person user-02 20 act-02
person user-03 60 act-03
person user-04 80 act-04
person user-05 60 act-05
person user-06 20 act-06
person user-07 80
And it will output what it thinks the _best_ execution **plan** is:
act-04 0 3.75 user-04
act-05 0 5.0 user-05
act-01 0 6.25 user-01
act-03 0 8.333333 user-03
act-13 0 12.5 user-07
act-06 0 15.0 user-06
act-02 0 15.0 user-02
act-10 3.75 10.0 user-04
act-08 5.0 13.333333 user-05
act-07 6.25 10.0 user-01
act-09 8.333333 13.333333 user-03
act-11 10.0 16.25 user-04
act-12 10.0 16.25 user-01
Where each line tells:

- The ID of the activity (column 1)
- When _someone_ will start working on it (column 2)
- When the work will be done (column 3)
- Who this _someone_ actually is (column 4)

This also tells us that all the planned activities will be completed after `16.25` days of work.
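The numbers in the plan follow from a simple relation: a worker at `allocation`% needs `effort / (allocation / 100)` days to complete an activity.  A quick sketch (hypothetical helper, not `ap`'s actual code):

```python
def finish_time(effort: float, allocation: float, start: float = 0.0) -> float:
    """When a worker at `allocation`% completes `effort` days of work."""
    return start + effort / (allocation / 100.0)

# Matches the plan above: user-01 (80%) completes act-01 (effort 5) at 6.25,
# and user-03 (60%) completes act-03 (effort 5) at ~8.333333
assert abs(finish_time(5, 80) - 6.25) < 1e-9
assert abs(finish_time(5, 60) - 8.333333) < 1e-5
```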

Now, actual days of work can be converted into calendar days, weekends can be excluded...
act-04 2020-03-30 2020-04-02 user-04
act-05 2020-03-30 2020-04-06 user-05
act-01 2020-03-30 2020-04-07 user-01
act-03 2020-03-30 2020-04-09 user-03
act-13 2020-03-30 2020-04-15 user-07
act-06 2020-03-30 2020-04-20 user-06
act-02 2020-03-30 2020-04-20 user-02
act-10 2020-04-02 2020-04-13 user-04
act-08 2020-04-06 2020-04-16 user-05
act-07 2020-04-07 2020-04-13 user-01
act-09 2020-04-09 2020-04-16 user-03
act-11 2020-04-13 2020-04-21 user-04
act-12 2020-04-13 2020-04-21 user-01
...and with this we can then plot a Gantt chart and I don't know, maybe make some decisions about it.
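For the record, here is one conversion rule that reproduces the dates above (assuming work starts on Monday 2020-03-30; the finish date is the working day on which the last fraction of work gets done, so e.g. 3.75 working days from Monday lands on Thursday) -- a sketch, not `ap`'s actual code:

```python
from datetime import date, timedelta

def finish_date(start: date, work_days: float) -> date:
    """The calendar day on which `work_days` days of work, started on
    `start`, get completed -- skipping Saturdays and Sundays."""
    d = start
    full_days = int(work_days)  # whole working days elapsed before the finish day
    while full_days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday..Friday
            full_days -= 1
    return d

# act-04 (3.75 days) -> Thursday; act-13 (12.5 days) -> the Wednesday two weeks out
assert finish_date(date(2020, 3, 30), 3.75) == date(2020, 4, 2)
assert finish_date(date(2020, 3, 30), 12.5) == date(2020, 4, 15)
```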

Finding the optimal execution plan turned out to be more complicated than anticipated; in fact, the moment I remove activity pre-allocations from the example above, `ap` blows up and crashes (apparently the search space becomes too large).  _There must be a problem with your implementation_, you might argue, and I would not disagree; the reason why I am putting this into writing is so I can clarify the problem in my mind (never heard of rubber-ducking before?!).

Anyway, `ap` internally uses A*[1] to find the best execution plan; for the sake of readability, we can say that each state object is just a list of `workers`, where each worker is defined as two fields:

- `been-working-on`: the list of activities completed _so far_
- `free-next`: when the worker will next be available to pick up something else

With this, we can easily project our `target-date` (i.e. when all the in-progress activities will be completed) by simply running the `max` of all the workers' `free-next` values.

But what's the **cost** to move from one state to another?  Well that depends on:

1. The activity itself (i.e. its `effort`)
2. Who is to claim that activity (i.e. someone with a _high_ `allocation` value will complete the activity in less time)
3. When all the activities claimed so far will be completed

For example, let's imagine:

- the current `target-date` (before the worker claims the new activity) is '13'
- the current worker is busy until day '10'
- it will take them '5' more days to complete the newly claimed activity

In this case, the _new_ target date (again, the max of all the workers' `free-next` values) will be '15', so the cost of moving into this new state is `15 - 13 = 2`; if on the other hand `target-date` had been '17', then the cost to get into this new state would have been '0' (i.e. the _projected_ target date would not have changed).
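The cost of claiming an activity is just the (non-negative) change in the projected target date.  A minimal sketch, using hypothetical structures rather than `ap`'s actual internals:

```python
def claim_cost(free_next, worker, effort, allocation):
    """Cost of `worker` claiming an activity: how much the projected
    target date (max of all workers' free-next values) moves forward."""
    old_target = max(free_next.values())
    new_free_next = dict(free_next)
    new_free_next[worker] = free_next[worker] + effort / (allocation / 100.0)
    return max(new_free_next.values()) - old_target

# The example above: target-date 13, worker busy until day 10,
# new activity takes 5 more days -> new target 15, cost 2
assert claim_cost({"w1": 10, "w2": 13}, "w1", 5, 100) == 2
# Had the target-date been 17 instead, the cost would have been 0
assert claim_cost({"w1": 10, "w2": 17}, "w1", 5, 100) == 0
```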

So far so good... I hope.

How can we now _speed up_ things a bit, and suggest to A* which states to evaluate first, so that `ap` does not blow up when the search space increases?  That's where the heuristic comes into play -- as a reminder, in order for A* to find the optimal solution, the heuristic function has to be _admissible_ [2], i.e. it should never _overestimate_ the cost required to reach the target state.

Let's look at a couple of examples:

1. Sum of all the remaining activities' `effort`.  This assumes that all the remaining activities are completed one after the other (and not in parallel), so it's easy to understand how this would end up overestimating the remaining cost.  This heuristic is clearly non-admissible.
2. Sum of all the remaining activities' `effort`, over the total number of `workers`.  This is a bit better than the previous one, as at least it tries to account for work happening in parallel; however, not **all** of it can actually be done in parallel, so this too is doomed to overestimate under certain circumstances.

Please note that using a non-admissible heuristic does not necessarily imply A* will stop _working_; it will probably find a solution, sometimes even the optimal one, but it really depends on the problem, your input, and how lucky you are.  For example, let's see what happens when we try and use the first heuristic:
(defun heuristic (activities state)
  (let* ((effort-remaining (effort-remaining activities state)))
    effort-remaining))
With this, `ap` would still return '16.25' as the closest completion date, but when I tried it without taking pre-allocations into account, it returned '22.5' (clearly a success, given that the planner is not blowing up anymore, but the solution found is _sub-optimal_).

Is there an **admissible** heuristic that we can use with this problem?  Have I modelled the problem poorly, and painted myself into a corner?
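For what it's worth, here is one candidate I would try (my own sketch, not something `ap` currently does): find the earliest time `T` at which the team's combined capacity could possibly absorb the remaining effort, and return how far that pushes the current projected target date.  Because it ignores task atomicity and dependencies, it can only underestimate the true remaining cost, which should make it admissible:

```python
def remaining_cost_lower_bound(free_next, rates, remaining_effort):
    """Admissible-heuristic sketch.  `free_next[i]` is when worker i becomes
    available, `rates[i]` their daily capacity (allocation / 100, assumed > 0).
    Bisect for the earliest time T at which the capacity accumulated by all
    workers, from the moment each becomes free, covers the remaining effort."""
    def capacity(t):
        return sum(r * max(0.0, t - f) for f, r in zip(free_next, rates))

    lo = min(free_next)
    hi = max(free_next) + remaining_effort / min(rates)
    for _ in range(100):  # bisect T to high precision
        mid = (lo + hi) / 2
        if capacity(mid) >= remaining_effort:
            hi = mid
        else:
            lo = mid
    # How much further than the current projected target date that is
    return max(0.0, hi - max(free_next))

# Two idle workers at rate 1.0 need at least 5 days for 10 days of effort
assert abs(remaining_cost_lower_bound([0.0, 0.0], [1.0, 1.0], 10.0) - 5.0) < 1e-6
# A tiny activity an idle worker can absorb does not move the target date
assert remaining_cost_lower_bound([0.0, 10.0], [1.0, 1.0], 1.0) < 1e-6
```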

[1] https://en.wikipedia.org/wiki/A*_search_algorithm
[2] https://en.wikipedia.org/wiki/Admissible_heuristic

2020-03-27 (permalink)

Fucking figured out why my `PS1` was breaking input with Alacritty: https://github.com/alacritty/alacritty/issues/3512

It turns out I was using non-printing characters in the prompt... without escaping them (i.e. wrapping them in `\[` and `\]`)!  Hence `bash` was not able to properly calculate the length of the prompt, hence `readline` blew up!

You can read more about escaping non-printing characters, here:
https://superuser.com/questions/301353/escape-non-printing-characters-in-a-function-for-a-bash-prompt

2020-03-26 (permalink)

My reg-cycle plugin is not quite working as expected... sometimes you yank some text... then yank something else, but when you try to paste what you yanked the first time (`p` followed by `gp`), other stuff gets pasted instead (like text previously deleted).

Few things to look at:

- `:help redo-register`
- https://vim.fandom.com/wiki/Remembering_previous_deletes/yanks

? fix reg-cycle

2020-03-24 (permalink)

Today I learned you can quickly set up SSH proxies by adding the following to your ~/.ssh/config file:
Host server.example.com
  ProxyCommand ssh -q proxy.example.org nc %h %p
With this, when we connect to the server via `git push`, `git pull` or other commands, git first SSHes to server.example.com.  Since the ssh client checks the config file, the rule above makes it set up a proxy by SSHing to proxy.example.org and relaying the connection to %h (server.example.com) on port %p (22 by default for SSH) via `nc` (you need `nc` installed on the proxy).  This way, the git connection is forwarded to the git server.

And why would this be relevant?  Think about this scenario:

1. Main laptop: decent CPU / memory, connected to the home WIFI, and company VPN
2. Dev laptop: good CPU / memory, connected to the home WIFI, but not to the company VPN (because you know, you don't want to connect and re-enter auth tokens... **twice**)

Well, with this proxy trick setup, you would still be able to access configured internal services (e.g. git) from the dev laptop without it being connected to the company VPN.  Quite handy!

https://www.systutorials.com/git-through-ssh-tunnel-as-proxy/

Something is messing up my bash prompt, when running it inside Alacritty:

1. Open a new instance
$ open -n /Applications/Alacritty.app --args --config-file /dev/null
The new instance is opened, and the configured prompt is printed:
matteolandi at hairstyle.local in /Users/matteolandi
$ █
2. Press `up-arrow` to access the last executed command:
matteolandi at hairstyle.local in /Users/matteolandi
$ open -n /Applications/Alacritty.app --args --config-file /dev/null█
3. Press `ctrl+a` to get to the beginning of the line

Expected behavior: the cursor moves indeed at the beginning of the line
matteolandi at hairstyle.local in /Users/matteolandi
$ █pen -n /Applications/Alacritty.app --args --config-file /dev/null
Actual behavior: the cursor stops somewhere in the middle of the line
matteolandi at hairstyle.local in /Users/matteolandi
$ open -n█/Applications/Alacritty.app --args --config-file /dev/null
Note 1: if I now press `ctrl+e` (to get to the end of the line), the cursor gets past the current end of the line:
matteolandi at hairstyle.local in /Users/matteolandi
$ open -n /Applications/Alacritty.app --args --config-file /dev/null       █
Note 2: if I hit `ctrl+c` (to reset the prompt), press `up-arrow` again, and start hammering `left-arrow`, it eventually gets to the beginning of the line:
matteolandi at hairstyle.local in /Users/matteolandi
$ open -n /Applications/Alacritty.app --args --config-file /dev/null       ^C

matteolandi at hairstyle.local in /Users/matteolandi
130 > █pen -n /Applications/Alacritty.app --args --config-file /dev/null
And if I now press `ctrl+e` (again, to get to the end of the line), the cursor does actually move to the end of the line:
matteolandi at hairstyle.local in /Users/matteolandi
130 > open -n /Applications/Alacritty.app --args --config-file /dev/null█
I opened a new [ticket](https://github.com/alacritty/alacritty/issues/3512) on the project's GitHub page -- hopefully someone will get back to me with some insights.

2020-03-23 (permalink)

* finished reading [Node.js 8 the Right Way: Practical, Server-Side JavaScript That Scales](https://www.amazon.com/Node-js-Right-Way-Server-Side-JavaScript/dp/168050195X)

2020-03-22 (permalink)

Re-set up SSH on my W10 laptop:

1. Re-install: `cygrunsrv`, and `openssh`
2. Open a cygwin terminal, **as-administrator**, and run:
$ ssh-host-config -y
$ vim /etc/sshd_config
# make sure 'PubkeyAuthentication yes' is there, and not commented out
$ cygrunsrv --start cygsshd
3. Add a Firewall rule to let in all incoming connections on TCP: 22

Note: if you do this on a company's laptop, `sshd` will try and contact the company's AD when authenticating connections, so make sure your laptop is connected to the Intranet.

Links:
- http://www.noah.org/ssh/cygwin-sshd.html
- https://gist.github.com/roxlu/5038729

2020-03-21 (permalink)

I have got a bit serious about trying to use `w3m` for web browsing (yeah, I know, a bit ironic coming from me, given that my team is working on a SPA unable to load shit without Javascript enabled).

First things first... mappings!
keymap  r      RELOAD
keymap  H      PREV
keymap  L      NEXT
keymap  C-U    PREV_PAGE
keymap  C-D    NEXT_PAGE

keymap gg BEGIN
keymap w NEXT_WORD
keymap b PREV_WORD
keymap zz CENTER_V

keymap o COMMAND "GOTO /cgi-bin/b1.cgi; NEXT_LINK"
keymap t COMMAND "TAB_GOTO /cgi-bin/b1.cgi; NEXT_LINK"
keymap ( NEXT_TAB
keymap ) PREV_TAB
keymap gt TAB_LINK
keymap d CLOSE_TAB
keymap x CLOSE_TAB

keymap :h HELP
keymap :o OPTIONS
keymap :q QUIT

keymap gs SOURCE
keymap V EXTERN

keymap p  GOTO      /cgi-bin/cb.cgi
keymap P  TAB_GOTO  /cgi-bin/cb.cgi
keymap yy EXTERN "url=%s && printf %s "$url" | cb && printf %s "$url" &"
I am going to change these quite a lot over the following weeks, but they should be enough to get you started using it.  A few gems:

- You won't be able to read all the pages you visit inside `w3m`... so when you bump into one, just press `V` (or `M`) to open the current page in a configured external browser
- There is no support for a default search engine (a bit annoying), but given that you can customize the hell out of `w3m`, I was able to work around that with the following: `COMMAND "GOTO /cgi-bin/b1.cgi; NEXT_LINK"`.  This renders the content of a cgi script that I created, and moves the cursor to the next link (so I can then press `<enter>`, type anything I want to search for, press `<enter>` to quit editing the text field, press `<tab>` to focus the Submit button, and press `<enter>` again to submit... easy uh?).  Anyway, this is the content of my cgi script (it's a simple wrapper for [bunny1](https://github.com/ccheever/bunny1)):
#!/usr/bin/env bash

set -euo pipefail

cat <<EOF
Content-type: text/html

<html>
<head>
<title>bunny1 wrapper</title>
</head>
<body>
<form action="http://localhost:9084/">
<input type='text' name='___' value=''>
<input type='submit' value='try me'>
</form>
</body>
</html>
EOF
- Another interesting thing you can do by using a custom cgi script, is opening within `w3m` the URL saved in the OS clipboard.  How? 1) read the content of the clipboard (assuming it's a URL), 2) tell `w3m` to go to that URL (yes, your cgi script can send commands to `w3m`):
#!/usr/bin/env bash

set -euo pipefail

echo "w3m-control: GOTO $(cb --force-paste)"
- This last tip will teach you how to _copy_ the current page URL into the OS clipboard.  We are going to use the `EXTERN` command for this (i.e. invoke an _external_ command and pass the current URL as the first argument) and in particular we will: 1) set `url` to the input URL, 2) print `$url` (without the trailing new-line) and 3) pipe that into [cb](https://matteolandi.net/cb.html) to finally copy it into the OS clipboard.
keymap yy EXTERN "url=%s && printf %s "$url" | cb && printf %s "$url" &"
Few interesting articles I found when setting this all up:

- Bunch of `w3m` related tricks: https://ntmlabs.com/w3m-tricks/
- A vim like firefox like configuration for w3m:
  https://github.com/felipesaa/A-vim-like-firefox-like-configuration-for-w3m

Finally decided to give [Alacritty](https://github.com/alacritty/alacritty) a go, and damn... now I wish I had tried it earlier: scrolling in `vim` inside a `tmux` session is finally fast again!  It supports `truecolors` too, so what more could I ask for?

Well, one thing maybe I would ask for: [support multiple windows](https://github.com/alacritty/alacritty/issues/607).  We can _multi-instantiate_, and that's fine, but those instances would not be recognized as being part of the same group (i.e. multiple entries in the dock, no support for `cmd + backtick`)... and that would make it really hard for people to set up keystrokes to focus a specific instance of a group (currently with `Terminal.app`, I have a mapping to focus the group, and then automatically inject: `cmd + opt + 1` to select the first instance of the group, `cmd + opt + 2` for the second...).

There is a `--title` option, but Keyboard Maestro would not let me select an application given its title (also, I do want the title of the window to be dynamic, based on the current process...).  There is a `--class` option, but it seems to work on Linux only... so what else?

Well, after a bit of fiddling I finally found a solution:
$ cp -r /Applications/Alacritty.app /Applications/Alacritty-scratch.app
$ vim /Applications/Alacritty-scratch.app/Contents/Info.plist
# And change CFBundleName CFBundleDisplayName accordingly
It's ugly, I know, it's a bit like installing the same application twice, with a different name, but who cares?  I mean, scrolling is damn fast now <3

PS. remember to re-sync these '/Applications' directories the next time a new release of `alacritty` is published.

2020-03-17 (permalink)

_Rant against software development in 2020: start_

We added `prettier` to our repo because you know, it feels good not to have to worry about indentation and style; so we linted all of our files (yes, it was a massive changeset), and then carried on.  Except... changesets pushed by different team members looked different from one another, like `prettier` settings were not properly parsed... and yet all of them had their IDEs properly configured.

Well, it turns out 'prettier-vscode', `prettier` plugin for VisualStudio Code, is not picking up '.prettierignore' unless it's in the project working directory: https://github.com/prettier/prettier-vscode/issues/375.  And guess what? Since we have a monorepo, we have different '.prettierignore' files, one per sub-module...so that's why changesets were looking different.

Long story short:

- `prettier` does not support that behavior: https://github.com/prettier/prettier/issues/4081
- 'prettier-vscode' does not want to support that, unless `prettier` support that: https://github.com/prettier/prettier-vscode/issues/375

So yeah, users, go fuck yourselves!

Well, of course there are [workarounds](https://code.visualstudio.com/docs/editor/multi-root-workspaces), but still, `git` works that way, `npm` works that way...why not implement a similar behavior?  I am pretty sure there are reasons, but still...

_Rant against software development in 2020: start_

Few notes on how to implement multitenancy with Elasticsearch:

[shared-index](https://www.elastic.co/guide/en/elasticsearch/guide/current/shared-index.html), i.e. a single index, shared by all the tenants, in which all the documents have a tenant identifier
PUT /forums
{
  "settings": {
    "number_of_shards": 10
  },
  "mappings": {
    "post": {
      "properties": {
        "forum_id": {
          "type":  "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}

PUT /forums/post/1
{
  "forum_id": "baking",
  "title":    "Easy recipe for ginger nuts",
  ...
}
This approach works, but all the documents for a specific tenant will be scattered across all the shards of the index.  Enters [document routing](https://www.elastic.co/guide/en/elasticsearch/guide/current/routing-value.html):
PUT /forums/post/1?routing=baking
{
  "forum_id": "baking",
  "title":    "Easy recipe for ginger nuts",
  ...
}
By specifying 'baking' as the routing value, we are basically telling Elasticsearch to always use a single shard...  which one? `shard = hash(routing) % number_of_primary_shards` (not that we care, as long as it's always the same).
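A toy illustration of that formula (Elasticsearch actually uses murmur3, not md5; the point is only that a fixed routing value always maps to the same shard):

```python
import hashlib

def shard_for(routing: str, number_of_primary_shards: int = 10) -> int:
    """Toy stand-in for `shard = hash(routing) % number_of_primary_shards`."""
    digest = int(hashlib.md5(routing.encode("utf-8")).hexdigest(), 16)
    return digest % number_of_primary_shards

# Every document routed with 'baking' lands on the same primary shard
assert shard_for("baking") == shard_for("baking")
assert 0 <= shard_for("baking") < 10
```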

Oh, and when searching data (with routing), don't forget to filter by the tenant identifier, or you will accidentally return other tenants' data:
GET /forums/post/_search?routing=baking
{
  "query": {
    "bool": {
      "must": {
        "match": {
          "title": "ginger nuts"
        }
      },
      "filter": {
        "term": {
          "forum_id": "baking"
        }
      }
    }
  }
}
But...can we do better, and have Elasticsearch do the filtering for us?  Yes, [aliases](https://www.elastic.co/guide/en/elasticsearch/guide/current/faking-it.html) to the rescue!

When you associate an alias with an index, you can also specify a filter and routing values:
PUT /forums/_alias/baking
{
  "routing": "baking",
  "filter": {
    "term": {
      "forum_id": "baking"
    }
  }
}
Now we can use the 'baking' index when indexing, and forget about _routing_:
PUT /baking/post/1
{
  "forum_id": "baking",
  "title":    "Easy recipe for ginger nuts",
  ...
}
Similarly, when searching, we can omit the _routing_ parameter, and the filters too!
GET /baking/post/_search
{
  "query": {
    "match": {
      "title": "ginger nuts"
    }
  }
}
Next step: see if it's possible to implement ACL so that one user can connect to Elasticsearch and only see their aliases: neither the aliases used by other tenants, nor the big, fat, shared index.

2020-03-16 (permalink)

? ctags and javascript files: https://www.reddit.com/r/vim/comments/fj9tsz/do_lsps_make_tag_generating_tools_obsolete/fkn1tsk/

2020-03-15 (permalink)

Asking for help, on [reddit](https://www.reddit.com/r/node/comments/fj1zru/best_practices_on_deploying_multitenant/?), on best-practices on how to implement multitenancy Node.js applications:
Hello Everyone,

I am trying to add [multitenancy](https://en.wikipedia.org/wiki/Multitenancy) support to an existing Node.js/Express.js application, and while the Web is full of articles on how to properly segregate data (e.g. additional 'tenant_id' column, different databases), I am currently struggling to find best-practices on how to properly implement it... let me explain.

Irrespective of your chosen data segregation model, there comes a point throughout the life-cycle of the user request where you will have to: 1) figure out what the current tenant is, 2) use that information to fetch the _right_ data (e.g. run custom _by-tenant-id_ queries, or by selecting the proper database connection).  I am particularly interested in hearing how others have implemented this, especially given that point 1 and 2 might be a few asynchronous function calls away.

There are 3 possible options I could think of, to solve this problem:

1. Use libraries like [continuation-local-storage](https://github.com/othiym23/node-continuation-local-storage), or its successor [cls-hooked](https://github.com/Jeff-Lewis/cls-hooked)

  - these libraries rely on experimental APIs and require monkey-patching of other asynchronous libraries to make sure the context is indeed properly propagated
  - when users ask about [maturity](https://github.com/Jeff-Lewis/cls-hooked/issues/22), their authors suggest to heavily test internally and make sure the context is never lost
  - other users on [reddit](https://www.reddit.com/r/node/comments/7g30kv/expressjs_logging_info_with_global_unique_request/) confirm they had a lot of issues with cls, and because of that only used it for logging purposes

2. Pass a context object around, **everywhere** (a la Golang's [context](https://golang.org/pkg/context/))

  - it's a **lot** of work, especially for an already existing application (and I am not [alone](https://www.reddit.com/r/node/comments/8qi3dv/express_forward_custom_headers_received_from/) in thinking this)

3. Defer the initialization of the used services until the request is received, and inject this context object as a constructor argument.

  - this seems to require less changes as compared to the previous option, but again, it forces you to defer the initialization of **all** your services until you know what the request tenant is

Now, all this journey has led me to believe that maybe this is not the _right_ way to implement multitenancy inside Node.js applications; maybe, instead of having a single Node.js application being able to serve requests coming from different tenants, we should just _multi-instantiate_ this application and have each of these instances serve one specific tenant only.

This is one realization that I had when thinking about all this:

- Service multitenancy (i.e. a single service, being able to serve requests coming from different tenants), is not required, but could save you some money later on, as that would let you utilize hardware resources to full extent
- Service multi-instantiation (i.e. different services, each one specialized to serve a specific tenant), is not required, but it's **inevitable**: there are only so many requests you will be able to serve with a single multitenant service

So probably I shouldn't worry much about how to add multitenancy support to an existing Node.js application, but rather _multi-instantiate_ it and put a broker in front of it (e.g. Nginx).

Let me know your thoughts.

Interesting article about [Zone.js](https://medium.com/ngconf/deep-dive-into-zone-js-part-1-execution-context-92166bbb957): the dummy implementation is ridiculously simple!
var _currentZoneFrame = {zone: root, parent: null};
class Zone {
  static get current() {
    return _currentZoneFrame.zone;
  }


  run(callback, applyThis, applyArgs) {
    _currentZoneFrame = {parent: _currentZoneFrame, zone: this};
    try {
      return callback.apply(applyThis, applyArgs);
    } finally {
      _currentZoneFrame = _currentZoneFrame.parent;
    }
  }
}
This is how you are expected to use it:
var zoneA = Zone.current.fork({name: 'zoneA'});

function callback() {
  // _currentZoneFrame: {zone: zoneA, parent: rootZoneFrame}
  console.log('callback is invoked in context', Zone.current.name);
}

// _currentZoneFrame = rootZoneFrame = {zone: root, parent: null}
zoneA.run(function() {
  // _currentZoneFrame: {zone: zoneA, parent: rootZoneFrame}
  console.log('I am in context', Zone.current.name);
  setTimeout(zoneA.wrap(callback), 100);
});
Too bad, the [TC39 proposal](https://github.com/domenic/zones) has been withdrawn!

* finished reading [Graph Algorithms: Practical Examples in Apache Spark and Neo4j](https://www.amazon.com/Graph-Algorithms-Practical-Examples-Apache/dp/1492047686)

2020-03-09 (permalink)

MacOS Bundle IDs:
function mac-os-bundle-id() {
  lsappinfo info -only bundleid "$*" | cut -d '"' -f4
}
Note: this only works if the application you are querying the bundle ID for is running.

https://superuser.com/questions/346369/getting-the-bundle-identifier-of-an-os-x-application-in-a-shell-script

2020-03-08 (permalink)

English writing lessons:

- Use verb + adverb, not the opposite
Good: I have worked closely with...
Bad: I have closely worked with...
- Use 'be experienced with' instead of 'have experience with'
- Don't use 'seem'... be assertive: either one is, or is not!
- Use 'further' instead of 'improve'

TIL: you can pass '60m' to `docker logs --since ...` to get the logs of the last hour... I knew how to set dates/times with `--since`, and had always been doing the math in my head to figure out the correct cut-off date-time value; but being able to `--since 60m` is just... pure gold!

2020-03-07 (permalink)

Today I finally got around to toy a bit with MacOS Karabiner-Elements, and customize my configuration even further so that:

- `'` behaves exactly as `<caps>` (i.e. `<ctrl>`-modifier when held down, `<esc>` when tapped)
- `[` behaves exactly as `<tab>` (i.e. _movement_-modifier when held down, `<open-bracket>` when tapped)

What's the deal with these _movement_-modifiers?  Just to give you an idea:

- `<tab> + h` -> `<left>`
- `<tab> + j` -> `<down>`
- `<tab> + k` -> `<up>`
- `<tab> + l` -> `<right>`
- `<tab> + p` -> _paste from clipboard_
- `<open-bracket> + b` -> _one word backward_
- `<open-bracket> + w` -> _one word forward_

For the record, these are the relevant blocks of my karabiner.json file that enabled this:
{
    "description": "Tab as modifier",
    "manipulators": [
        {
            "description": "tab as modifier",
            "from": {
                "key_code": "tab"
            },
            "to": [
                {
                    "set_variable": {
                        "name": "tab_modifier",
                        "value": 1
                    }
                }
            ],
            "to_after_key_up": [
                {
                    "set_variable": {
                        "name": "tab_modifier",
                        "value": 0
                    }
                }
            ],
            "to_if_alone": [
                {
                    "key_code": "tab"
                }
            ],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "tab_modifier",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "description": "tab + h as left_arrow",
            "from": {
                "key_code": "h"
            },
            "to": [
                {
                    "key_code": "left_arrow"
                }
            ],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "tab_modifier",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "description": "tab + j as down_arrow",
            "from": {
                "key_code": "j"
            },
            "to": [
                {
                    "key_code": "down_arrow"
                }
            ],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "tab_modifier",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "description": "tab + k as up_arrow",
            "from": {
                "key_code": "k"
            },
            "to": [
                {
                    "key_code": "up_arrow"
                }
            ],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "tab_modifier",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "description": "tab + l as right_arrow",
            "from": {
                "key_code": "l"
            },
            "to": [
                {
                    "key_code": "right_arrow"
                }
            ],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "tab_modifier",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "description": "tab + p as command + v (paste)",
            "from": {
                "key_code": "p"
            },
            "to": [
                {
                    "key_code": "v",
                    "modifiers": [
                        "left_command"
                    ]
                }
            ],
            "type": "basic"
        }
    ]
}
{
    "description": "open_bracket as modifier",
    "manipulators": [
        {
            "description": "open_bracket as modifier",
            "from": {
                "key_code": "open_bracket"
            },
            "to": [
                {
                    "set_variable": {
                        "name": "open_bracket",
                        "value": 1
                    }
                }
            ],
            "to_after_key_up": [
                {
                    "set_variable": {
                        "name": "open_bracket",
                        "value": 0
                    }
                }
            ],
            "to_if_alone": [
                {
                    "key_code": "open_bracket"
                }
            ],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "open_bracket",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "description": "open_bracket + w as left_alt + right_arrow (one word forward)",
            "from": {
                "key_code": "w"
            },
            "to": [
                {
                    "key_code": "right_arrow",
                    "modifiers": [
                        "left_alt"
                    ]
                }
            ],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "open_bracket",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "description": "open_bracket + b as left_alt + left_arrow (one word backward)",
            "from": {
                "key_code": "b"
            },
            "to": [
                {
                    "key_code": "left_arrow",
                    "modifiers": [
                        "left_alt"
                    ]
                }
            ],
            "type": "basic"
        }
    ]
}

? implement these very same mappings on Windows (AutoHotKey + SharpKeys)

2020-03-06 (permalink)

TIL: when you have an unresponsive SSH connection, you can kill it with `<enter>~.`

Also, you can type `<enter>~?` and get the list of supported escape codes:
remote> cat # then type <enter>~?
Supported escape sequences:
~.   - terminate connection (and any multiplexed sessions)
~B   - send a BREAK to the remote system
~C   - open a command line
~R   - request rekey
~V/v - decrease/increase verbosity (LogLevel)
~^Z  - suspend ssh
~#   - list forwarded connections
~&   - background ssh (when waiting for connections to terminate)
~?   - this message
~~   - send the escape character by typing it twice
https://superuser.com/questions/985437/ssh-escape-key-only-works-when-connection-is-stuck

* finished reading [Composing Software](https://leanpub.com/composingsoftware)

? I realized my kindle highlights for books I haven't purchased through amazon are not getting displayed here: https://read.amazon.com/ Need to figure out a way to extract them

2020-03-05 (permalink)

Convert markdown files to PDF on MacOS:
$ brew install pandoc
$ brew install homebrew/cask/basictex
$ ln -s /Library/TeX/Root/bin/x86_64-darwin/pdflatex /usr/local/bin/pdflatex
$ pandoc multi-tenancy.md -o multi-tenancy.pdf

2020-03-03 (permalink)

When we estimate, it is worth considering worst-case scenarios as well as best-case scenarios in order to provide a more accurate estimate. A formula I often use in estimation templates is the following (derived from a book I read a long time ago):
[Best + (Normal * 2) + Worst] / 4
- Keep a risk register alongside the estimates
- Review the estimates and the risk register at the end of the sprint

Estimating work is never easy. No two tasks are the same, and no two people are the same. Even if they were, the environment (code base) would most certainly not be the same. Getting to a precise estimation of work is almost impossible without doing the work itself.

Source: https://medium.com/swlh/planning-and-estimation-11e59ac1956b
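As a quick sanity check of the formula above, here is a minimal sketch in Python (the function name is mine):

```python
def weighted_estimate(best, normal, worst):
    """[Best + (Normal * 2) + Worst] / 4 -- weight the normal case
    double, and average in the two extremes."""
    return (best + normal * 2 + worst) / 4

# e.g. best 2 days, normal 4 days, worst 10 days -> (2 + 8 + 10) / 4 = 5 days
```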

2020-02-16 (permalink)

+ R-type notation for documenting Javascript functions

? Auto-curry recursive magic spell (from composing software)

2020-02-15 (permalink)

All right, I finally got around to getting `sic(1)` up and running on my laptop and I am quite...OK with the result.  Why sic and not irssi or weechat, you might ask; I don't know, I guess I just got bored...

Anyway, [sic](https://tools.suckless.org/sic/) stands for simple irc client, and it really lives up to its name:

- lets you connect to a server (plain connection, no SSL)
- supports `PASS` authentication
- outputs a single stream of messages (yes, all the messages received on the different channels you joined are just...commingled)

That's it, really.

Suckless tools might seem...lacking at first -- well, I guess they really are -- but there is a reason for that: they usually are highly composable, which means that most of the time you can combine them with other tools (suckless tools, or Unix tools in general) and, say, implement a feature that was originally missing.

For example, `sic(1)` only supports plaintext connections; not a problem: create a secure TCP connection using `socat(1)` and have `sic(1)` connect to the local relay instead:
socat tcp-listen:6697 openssl-connect:irc.freenode.net:6697
sic -h 127.0.0.1 -p 6697 -n your-nickname
Or, `sic(1)` does not support colors, but everyone would agree that a certain level of color does not hurt; easy, pipe `sic(1)`'s output into `sed(1)`, et voilà:
sic -h 127.0.0.1 -p 6697 -n your-nickname | \
  sed "s/your-nickname/${RED}your-nickname${DEFAULT}/"
Or, you want to store every message ever received or sent? Easy, that's what `tee(1)` is for:
sic -h 127.0.0.1 -p 6697 -n your-nickname | \
  tee -a ~/irc/server.log | \
  ...
Or you can add typing history using `rlwrap(1)`, or on-connect auto-commands (e.g. join #vim) by using a combination of shell pipes, `echo(1)` and `cat(1)`.

Anyway, I guess you get the idea...and if anything, I hope I made you a bit curious about this whole _compose-all-the-things_ philosophy.

These are the scripts I ended up creating -- feel free to poke around and tell me if anything interesting comes to mind:

- [sicw](https://github.com/iamFIREcracker/dotfiles/blob/master/bin/sicw)
- [sicw-rlwrap](https://github.com/iamFIREcracker/dotfiles/blob/master/bin/sicw-rlwrap)

Say you have an interactive process like a REPL or something; how can you send some data/text to it before entering interactive mode and waiting for the user to type anything?  For example: how can you pass some _auto-commands_ like `:j #suckless` and `:j #lisp` to `sic(1)`, so that you don't have to manually type them every time you connect to the server? Note, you still want to be able to manually send messages to the server after the auto-commands have been sent.

Well, I found the solution somewhere on StackOverflow (unfortunately I cannot find the link to the page anymore):
#!/usr/bin/env bash

echo -en "$@"
cat -
What this `input` script does is:

- `echo(1)` its input argument
$ echo -en 'foo\nbar\n'
foo
bar
- Then start reading from stdin (i.e. enter interactive mode)

Clever, isn't it?
$ input ':j #suckless\n' ':j #vim\n' | sic
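For illustration only, the same seed-then-forward trick can be sketched in Python (the function and parameter names are mine, not part of any of the tools above):

```python
import sys

def seed_then_forward(seed_args, stdin=sys.stdin, out=sys.stdout):
    """Rough analogue of the two-line shell helper (`echo -en "$@"; cat -`):
    write the seed commands first, then forward stdin verbatim so the
    session stays interactive."""
    for chunk in seed_args:   # the auto-commands, e.g. ':j #suckless\n'
        out.write(chunk)
    out.flush()
    for line in stdin:        # then hand control over to the user
        out.write(line)
        out.flush()
```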

* cb: added support for xsel(1) -- not that I needed it, but since I stumbled upon it when looking at other people dotfiles repos, I figured I should just steal it

+ add SelectAndSendToTerminal("^Vap") to my .vimrc -- currently there is `<Plug>SendSelectionToTerminal`, but it forces me to use nmap/xmap, instead of nnoremap/xnoremap, and I have to manually save/restore the state of the window (annoying)

? add kr to the family of 2-letters-named tools, to run `keyring` remotely -- will have to be careful to make sure people on the same network can not snoop my passwords ;-)

2020-02-14 (permalink)

The Old Way Using Decorate-Sort-Undecorate
This idiom is called Decorate-Sort-Undecorate after its three steps:

- First, the initial list is decorated with new values that control the sort order.
- Second, the decorated list is sorted.
- Finally, the decorations are removed, creating a list that contains only the initial values in the new order.
From: https://docs.python.org/3/howto/sorting.html
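A minimal sketch of the three steps (the dataset and sort key are made up):

```python
# a hypothetical dataset: sort case-insensitively
data = ['banana', 'Apple', 'cherry']

# 1. decorate: pair each value with the key that controls the order
#    (the index keeps the sort stable and avoids comparing the values)
decorated = [(s.lower(), i, s) for i, s in enumerate(data)]
# 2. sort the decorated list
decorated.sort()
# 3. undecorate: drop the keys, keeping only the values, now reordered
result = [s for _key, _index, s in decorated]
# result == ['Apple', 'banana', 'cherry']
```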

Spent a good 30 mins trying to find this reference... somehow I remembered it was called Wrap-Sort-Unwrap and Google wasn't returning anything meaningful.  After the fact, I was able to find this in the Python Cookbook Recipes by searching for pack/unpack.

2020-02-13 (permalink)

+ remap '/" to F13, and then use it as a hyper when hold-down (e.g. +S as CTRL)

+ remap '[" to F14, and then use it as a hyper when hold-down (e.g. +w to move forward one word, +b to move backward one word)

2020-02-08 (permalink)

* finished reading [Remote](https://www.amazon.com/Remote-Office-Not-Required/dp/0804137501)

? bunny1 in common lisp

2020-02-06 (permalink)

I got tired of the colors of my prompt (probably because the default color palettes used by most terminal applications are kind of terrible and I don't want to bother finding a new colorscheme), so here is what I am experimenting with:

- username: bold (previously pink)
- hostname: cyan / pink (previously yellow / blue)
- path: underline (previously green)

It's definitely less colorful than it used to be, and I am pretty sure I will tweak it again over the following days, but let's see if it works.

+ https://github.com/ds26gte/scmindent

2020-02-03 (permalink)

Vim was segfaulting when opening lisp files, and after bisecting my ft_commonlisp `augroup` I found the culprit:
au syntax lisp RainbowParenthesesLoadRound
I dug deeper and found that the line in the plugin that Vim was tripping over was:
for each in reverse(range(1, s:max_depth))
It did not take me long to get to this Vim issue: https://github.com/vim/vim/issues/5541, and replacing `reverse(range(...))` with a simple `range()` with a negative 'stride' solved the problem for me:
diff --git a/autoload/rainbow_parentheses.vim b/autoload/rainbow_parentheses.vim
index b4963af..455ad22 100644
--- a/autoload/rainbow_parentheses.vim
+++ b/autoload/rainbow_parentheses.vim
@@ -106,7 +106,7 @@ func! rainbow_parentheses#load(...)
                let b:loaded = [0,0,0,0]
        endif
        let b:loaded[a:1] = s:loadtgl && b:loaded[a:1] ? 0 : 1
-       for each in reverse(range(1, s:max_depth))
+       for each in range(s:max_depth, 1, -1)
                let region = 'level'. each .(b:loaded[a:1] ? '' : 'none')
                let grp = b:loaded[a:1] ? 'level'.each.'c' : 'Normal'
                let cmd = 'sy region %s matchgroup=%s start=/%s/ end=/%s/ contains=TOP,%s,NoInParens fold'
Opened a PR for this: https://github.com/eapache/rainbow_parentheses.vim/pull/12
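The same equivalence is easy to check in other languages; in Python, for instance, a negative-stride range matches reversed(range(...)):

```python
max_depth = 16

# count down from max_depth to 1, inclusive, via a negative stride
countdown = list(range(max_depth, 0, -1))
assert countdown == list(reversed(range(1, max_depth + 1)))
assert countdown[0] == max_depth and countdown[-1] == 1
```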

2020-02-02 (permalink)

Finally got around to enhancing pmdb.lisp (Poor Man's debugger) with a few macros to facilitate profiling of slow functions -- not really debugging per se, I will give you that, but since I did not have a Poor Man's profiler package yet and I wanted to get this done quickly, I figured I could shove it inside pmdb.lisp and start using it (I can always refactor it later).

Anyway, these are the macros that I added:

- DEFUN/PROFILED: a DEFUN wrapper to create functions that, when executed, would keep track of their execution times
- PROFILE: a macro (similar to TIME) to evaluate a given expression and output information about profiled functions (e.g. % of CPU time, execution time, number of calls)

Say you TIMEd an expression and decided you wanted to try to understand where the bottlenecks are; the idea is to replace the DEFUNs of all the potentially bottleneck-y functions with DEFUN/PROFILED, and then replace TIME with PROFILE.

For example, say you started from something like:
(defun fun-1 (&aux (ret 0))
  (dotimes (n 10000)
    (incf ret))
  ret)

(defun fun-2 (&aux (ret 0))
  (dotimes (n 2000000)
    (incf ret))
  ret)

(defun fun-3 (&aux (ret 0))
  (dotimes (n 200000000)
    (incf ret))
  ret)

(defun main () (fun-1) (fun-2) (fun-3))
To time the execution time of MAIN, you would probably do the following:
(time (main))
; Evaluation took:
;   0.396 seconds of real time
;   0.394496 seconds of total run time (0.392971 user, 0.001525 system)
;   99.49% CPU
;   910,697,157 processor cycles
;   0 bytes consed
;
; 200000000
Perfect!  But which function kept the CPU busy the most?  Replace all the DEFUNs with DEFUN/PROFILED:
(defun/profiled fun-1 (&aux (ret 0))
  (dotimes (n 10000)
    (incf ret))
  ret)

(defun/profiled fun-2 (&aux (ret 0))
  (dotimes (n 2000000)
    (incf ret))
  ret)

(defun/profiled fun-3 (&aux (ret 0))
  (dotimes (n 200000000)
    (incf ret))
  ret)
Replace TIME with PROFILE, et voilà: you now have a better idea of where your bottleneck might be:
(profile (main))
; Profiled functions:
;  98.00% FUN-3: 449 ticks (0.449s) over 1 calls for 449 (0.449s) per.
;   2.00% FUN-2: 8 ticks (0.008s) over 1 calls for 8 (0.008s) per.
;   0.00% FUN-1: 0 ticks (0.0s) over 1 calls for 0 (0.0s) per.
;
; 200000000
It also works with nested function calls:
(defun/profiled fun-4 (&aux (ret 0))
  (dotimes (n 200000000)
    (incf ret))
  ret)

(defun/profiled fun-5 (&aux (ret 0))
  (dotimes (n 200000000)
    (incf ret))
  (fun-4)
  ret)

(defun main () (fun-1) (fun-2) (fun-3) (fun-4) (fun-5))

(profile (main))
; Profiled functions:
;  39.00% FUN-5: 880 ticks (0.88s) over 1 calls for 880 (0.88s) per.
;  21.00% FUN-4: 464 ticks (0.464s) over 1 calls for 464 (0.464s) per.
;  20.00% FUN-5/FUN-4: 451 ticks (0.451s) over 1 calls for 451 (0.451s) per.
;  19.00% FUN-3: 430 ticks (0.43s) over 1 calls for 430 (0.43s) per.
;   0.00% FUN-2: 9 ticks (0.009s) over 1 calls for 9 (0.009s) per.
;   0.00% FUN-1: 0 ticks (0.0s) over 1 calls for 0 (0.0s) per.
;
; 200000000
Percentages start to lose meaning here (FUN-5/FUN-4's execution time is also included in FUN-5's), but still I am quite happy with the overall result.

PS. If you read PCL, you should notice quite a few similarities with the WITH-TIMING macro explained at the end of the book.

I am going to leave it here (another e-learning platform): https://www.edx.org/

2020-01-31 (permalink)

* finished reading [The Mythical Man-Month](https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959)

2020-01-26 (permalink)

Few Lisp book recommendations (in this order):

* Common Lisp: A Gentle Introduction to Symbolic Computation

* Practical Common Lisp

? Paradigms of Artificial Intelligence Programming (PAIP)

+ Common Lisp Recipes (CLR): http://weitz.de/cl-recipes/

+ Patterns of software Book by Richard P. Gabriel

? On Lisp (Paul Graham): http://www.paulgraham.com/onlisp.html

? Let Over Lambda: https://letoverlambda.com/


* finished reading [Practical Common Lisp](https://www.amazon.com/gp/product/1590592395?ie=UTF8&tag=gigamonkeys-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1590592395)

2020-01-24 (permalink)

Got presented with the following info messages this morning, while booting up my dev box:
[default] GuestAdditions seems to be installed (6.0.14) correctly, but not running.
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.2.8
VBoxService inside the vm claims: 6.0.14
Going on, assuming VBoxService is correct...
Got different reports about installed GuestAdditions version:
Virtualbox on your host claims:   5.2.8
VBoxService inside the vm claims: 6.0.14
Going on, assuming VBoxService is correct...
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules.  This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Kernel headers not found for target kernel
4.15.0-72-generic. Please install them and execute
/sbin/rcvboxadd setup
VirtualBox Guest Additions: Running kernel modules will not be replaced until
the system is restarted
All right then, let's install them and see what happens:
$ sudo apt-get install linux-headers-$(uname -r)
...
$ sudo /sbin/rcvboxadd setup
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules.  This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Building the modules for kernel 4.15.0-72-generic.
update-initramfs: Generating /boot/initrd.img-4.15.0-72-generic
VirtualBox Guest Additions: Running kernel modules will not be replaced until
the system is restarted
Reload the virtual box and....
 $ vagrant reload
 ...
[default] GuestAdditions 6.0.14 running --- OK.
==> default: Checking for guest additions in VM...
...

2020-01-19 (permalink)

All right, I feel dumb...

For almost one year I have been dreaming of a `WITH-SLOTS` macro that would help with structures and not only classes, and today I stumbled upon a piece of Common Lisp code where the author was using `WITH-SLOTS` with a structure.

"That has to be a homemade/library macro, right?" I thought.  Well, it turns out `WITH-SLOTS` works with slots, and structures use slots too, so I have been waiting all this time...for nothing.

Let's look at the bright side of this, though: now I can finally use it!
(defstruct foo
  bar
  baz)
=>
FOO

(let ((foo (make-foo :bar 123 :baz 'asd)))
  (with-slots ((bar1 bar) (baz1 baz)) foo
    (format T "~a ~a~&" bar1 baz1)))
=>
123 ASD
NIL
Reference: http://clhs.lisp.se/Body/m_w_slts.htm

? implement topaz's paste, in common lisp: https://github.com/topaz/paste

2020-01-18 (permalink)

EVAL-WHEN is there to tell the file compiler whether it should execute code at compile-time (which it usually does not do for function definitions) and whether it should arrange the compiled code in the compiled file to be executed at load time. This only works for top-level forms.

Common Lisp runs the file compiler (remember we are talking about compiling files, not executing in a REPL) in a full Lisp environment and can run arbitrary code at compile time (for example as part of the development environment's tools, to generate code, to optimize code, etc.). If the file compiler wants to run code, then the definitions need to be known to the file compiler.

Also remember, that during macro expansion the code of the macro gets executed to generate the expanded code. All the functions and macros that the macro itself calls to compute the code, need to be available at compile time. What does not need to be available at compile time, is the code the macro form expands to.

This is sometimes a source of confusion, but it can be learned and then using it isn't too hard. But the confusing part here is that the file compiler itself is programmable and can run Lisp code at compile time. Thus we need to understand the concept that code might be running at different situations: in a REPL, at load time, at compile time, during macro expansion, at runtime, etc.

Also remember that when you compile a file, you need to load the file then, if the compiler needs to call parts of it later. If a function is just compiled, the file compiler will not store the code in the compile-time environment and also not after finishing the compilation of the file. If you need the code to be executed, then you need to load the compiled code -> or use EVAL-WHEN -> see below.

https://stackoverflow.com/a/45683744

All right, replaced a bunch of `(if (< a b) -1 (if (> a b) 1 0))` expressions with `(signum (- a b))`, but I wonder: doesn't a mathematical/CS function exist to refer to this?  It seems too common a pattern not to have an agreed name -- https://www.reddit.com/r/Common_Lisp/comments/eqf061/is_the_following_a_commonknown_mathematicalcs/?

Its name is: three-way comparison, a.k.a. spaceship operator
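In Python, which has no built-in SIGNUM, the same spaceship-style result is commonly obtained without branches (a well-known idiom, not from the thread above):

```python
def spaceship(a, b):
    """Three-way comparison, like (signum (- a b)): -1, 0, or 1."""
    # booleans are 0/1 in Python, so subtracting them branches nowhere
    return (a > b) - (a < b)

# spaceship(3, 5) == -1, spaceship(5, 5) == 0, spaceship(7, 5) == 1
```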

Figured out how to customize Clozure-CL REPL prompt... it couldn't be easier, could it?!
;; What sorcery is this?  In summary:
;;
;; - Use the first argument as conditional (I believe it represents the number
;;   of pending stacktraces or something)
;; - If none, call PACKAGE-PROMPT -- we need to re-add one argument to the stack
;;   or invoking user defined functions would fail
;; - If there are pending exceptions to process, print the lot followed by `]`
;;
;; FORMAT directives 101
;;
;; ~[...~] Conditional expression. This is a set of control strings, called
;;   clauses, one of which is chosen and used. The clauses are separated by ~;
;;   and the construct is terminated by ~].  Also, ~:; can be used to mark a
;;   default clause
;; ~/name/ calls the user defined function, NAME
;; ~:* ignores backwards; that is, it backs up in the list of arguments so
;;   that the argument last processed will be processed again. ~n:* backs up
;;   n arguments.
;;
;; Source: https://lists.clozure.com/pipermail/openmcl-devel/2015-January/010862.html
(setf ccl::*listener-prompt-format* "~[~:*~/package-prompt/~:;~:*~d]~]")

~ aoc: abcl: hangs on 2018/22

~ aoc: allegro: attempts to call RECURISVELY, which is undefined....

~ aoc: ccl: hangs on 2018/22

~ aoc: clisp: =: NIL is not a number (last 1AM test appears to be 2019/18)

~ aoc: cmucl: Structure for accessor AOC/2019/24::TILES is not a AOC/2019/24::ERIS: NIL

~ aoc: ecl: hangs on 2019/20

2020-01-17 (permalink)

Finally got around to refactoring 2019/15, and fixing the +1 issue

The issue was pretty dumb... when reading 2 from the robot I ended up setting oxygen to the current position, and not to the position after the movement!

Thanks to some Reddit folks, I ended up simplifying the solution by implementing a simple DFS to explore the map (no more 5000 ticks of random walking)

It feels good!

2020-01-16 (permalink)

Couple of TILs (not all today's really, but whatever):

- `:serial T` inside ASDF systems makes sure that components are compiled in order; otherwise, in case of inter-file dependencies, one is forced to use `:depends-on`, which is annoying
- you need to run `(ql:update-dist "...")` to download an updated version of a dist, or `(ql:update-all-dists)` to download them all

Fixed xml-emitter#6, added tests support to the project

Fixing #6 was pretty easy; in fact, double-quoting WITH-RSS's optional
arguments was all we had to do.  Adding unit test support to the module,
on the other hand, wound up being not as easy as I initially thought it
would be, but still, I am quite happy with the final result.

I split the test support into 3 different sub-activities:

1) Add a package (and a system) for the tests: `:xml-emitter/tests`
2) Add a Makefile to run the tests

  This turned out to be more annoying than expected:

  - The interactive debugger needs to be disabled when tests fail, or the
  CI job would hang forever
  - The spawned REPL needs to be shut down when tests pass, or again,
  it would cause the CI job to hang forever
  - Certain implementations did not exit with non-0 code when tests
  failed

  And all this needed to be done outside of the testing system (hence in
  the Makefile) because I still wanted to leave things interactive when
  people tried to run the tests with `(asdf:test-system ...)`

3) Add a .travis.yml file to test the library with multiple
   implementations

The result:

- Makefile
.PHONY: test test-sbcl test-ros

lisps := $(shell find .  -type f \( -iname \*.asd -o -iname \*.lisp \))

cl-print-version-args := --eval '\
	(progn \
		(print (lisp-implementation-version)) \
		(terpri))'

cl-test-args := --eval '\
	(progn \
		(ql:quickload :xml-emitter/tests :verbose T) \
		(let ((exit-code 0)) \
			(handler-case (asdf:test-system :xml-emitter) \
				(error (c) \
					(format T "~&~A~%" c) \
					(setf exit-code 1))) \
			(uiop:quit exit-code)))'

all: test

# Tests -----------------------------------------------------------------------

test: test-sbcl

test-sbcl: $(lisps)
	sbcl --noinform $(cl-test-args)

test-ros: $(lisps)
	ros run $(cl-print-version-args) $(cl-test-args)
- .travis.yml
language: generic

env:
  global:
    - PATH=~/.roswell/bin:$PATH
    - ROSWELL_INSTALL_DIR=$HOME/.roswell
    - ROSWELL_VERSION=19.12.13.103
    - ROSWELL_URL="https://raw.githubusercontent.com/roswell/roswell/v$ROSWELL_VERSION/scripts/install-for-ci.sh"

  matrix:
    - LISP=abcl
    - LISP=allegro
    - LISP=ccl
    - LISP=clisp
    - LISP=cmucl
    - LISP=ecl
    - LISP=sbcl

install:
  - curl -L $ROSWELL_URL | sh

script:
  - make test-ros
This is it!

2020-01-15 (permalink)

Today I installed `roswell` again, because I wanted to test `xml-emitter` with abcl, but somehow the runtime seemed corrupted:
$ brew install roswell
... boring output
$ ros install sbcl
sbcl-bin/1.4.0 does not exist.stop.
$ ros help
$ sbcl-bin/1.4.0 does not exist.stop.
$ Making core for Roswell...
$ sbcl-bin/1.4.0 does not exist.stop.
$ sbcl-bin/1.4.0 does not exist.stop.
Googling for the error I stumbled upon this GH issue: https://github.com/roswell/roswell/issues/216, and after nuking ~/.roswell, `ros setup` successfully installed the latest version of sbcl.

I was erroneously trying to use `cl-travis` instead of plain `roswell`.  Why erroneously?  `cl-travis` uses `CIM`: https://github.com/sionescu/CIM, which turns out to have been deprecated in [2017](https://keens.github.io/blog/2017/01/29/deprecating_cim/).  What's the replacement?  `ros` and similar -- so better to use `ros` directly instead.

The command `ros` however won't be available unless you tweak a couple of ENV vars:
global:
  - PATH=~/.roswell/bin:$PATH
  - ROSWELL_INSTALL_DIR=$HOME/.roswell

2020-01-11 (permalink)

* Fix DIGITS compilation error: `integer` is not a variable #64

2020-01-05 (permalink)

Imported yet another trick from SJL, this time to automatically load all the .lisp files contained in a directory -- this way I don't have to update aoc.asd every time I add a new day.

First you define a new class, and then implement the generic method COMPONENT-CHILDREN:
(defclass auto-module (module) ())

(defmethod component-children ((self auto-module))
  (mapcar (lambda (p) (make-instance 'cl-source-file :type "lisp"
                        :pathname p
                        :name (pathname-name p)
                        :parent (component-parent self)))
          (directory-files (component-pathname self)
                           (make-pathname :directory nil :name *wild* :type "lisp"))))
Next, you use `:auto-module` in your components list as follows (2nd line):
(:file "intcode")
(:auto-module "2017")
(:module "2018" :serial t
 :components ((:file "day01")

TIL: VALUES in Common Lisp is SETF-able!
(loop :with remainder
      :do (setf (values n remainder) (truncate n base))
      :collect remainder
      :until (zerop n))
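The same destructure-the-two-values trick maps to Python's divmod; a minimal sketch of the digits loop (the function name is mine):

```python
def digits(n, base=10):
    """Digits of a non-negative n in the given base, least significant
    first -- the Python twin of the loop above, via divmod."""
    out = []
    while True:
        # divmod returns (quotient, remainder) as one pair, so both
        # variables are updated in a single assignment
        n, remainder = divmod(n, base)
        out.append(remainder)
        if n == 0:
            return out

# digits(123) == [3, 2, 1]
```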

* vim: add mapping to (in-package) the current file in the repl

+ quickutil: pull request to fix a problem with DIGITS: use `n` instead of `integer` when calling :until

~ quickutil: pull request to change DIGITS to optionally return digits in the reverse order (this way I can ditch DIGITS-REVERSE)

2020-01-04 (permalink)

* add ADJACENTS to aoc.utils

* change BFS to use QUEUE instead of poor's HEAPQ

2020-01-03 (permalink)

* add quickutils to aoc

~ https://itnext.io/a-wild-way-to-check-if-a-number-is-prime-using-a-regular-expression-4edfb725f895

2020-01-02 (permalink)

? http://play.elevatorsaga.com/

? PARTIAL-1: evaluate only once, and not every-time the lambda function is called

2019-12-31 (permalink)

Migrated all the mercurial repositories from Bitbucket to Github:
cd ~/tmp
git clone https://github.com/frej/fast-export.git
hg clone ssh://hg@bitbucket.org/iamFIREcracker/logfilter
git init logfilter.git
../fast-export/hg-fast-export.sh -r ../logfilter
git remote add origin git@github.com:iamFIREcracker/logfilter.git
git push origin --all
git push origin --tags
# repeat
https://git-scm.com/book/en/v2/Git-and-Other-Systems-Migrating-to-Git

* migrated logfilter to github -- thanks bitbucket

* migrated strappo-api to github -- thanks bitbucket

* migrated weblib to github -- thanks bitbucket

* migrated strappo-analytics to github -- thanks bitbucket

* migrated strappon to github -- thanks bitbucket

* migrated getstrappo to github -- thanks bitbucket

* migrated deploy-strappo to github -- thanks bitbucket

* migrated osaic to github -- thanks bitbucket

* migrated expensio to github -- thanks bitbucket

* migrated medicinadellosport to github -- thanks bitbucket

* migrated webpy-facebook-login to github -- thanks bitbucket

* aoc: 2019/18/2 -- this is fucking it!

+ aoc: 2019/18/2: 400s to complete part 2... I blame it on HEAPQ

+ aoc: 2019/18/2: don't manually update the input to generate the 4 sub-vaults

+ aoc: add DIGITS function to utils, and use it inside 2019/04 and 2019/16

+ aoc: 2019/17: use PRINT-HASH-MAP-GRID

+ aoc: 2019/20: figure out a way to programmatically figure out if a portal is on the outside or on in the inside

~ aoc: add goap-p to search algo, and review dijkstra and bfs..

+ aoc: add FLATTEN to utils

+ common lisp use quickutils!

? update logfilter repo references to the new github location

? update osaic repo references to the new github location

2019-12-30 (permalink)

* aoc: 2019/18/1

+ aoc: 2019/18/1: 20s to complete part 1...

2019-12-27 (permalink)

* aoc: 2019/22/2

* aoc: 2019/25

? aoc: 2019/22/2: explain all that math stuff, and move some of it inside utils.lisp

~ aoc: 2019/25 automate it?!

2019-12-26 (permalink)

* aoc: 2019/22/1

? aoc: 2019/22 -- explain the math behind it, the modeling

2019-12-25 (permalink)

* aoc: 2019/24/2

~ aoc: 2019/24: better logic for recursive neighbors

~ aoc: 2019/24: how to efficiently add levels to the state? currently we end up with way more states, and that takes processing time

2019-12-24 (permalink)

* aoc: 2019/24/1

2019-12-23 (permalink)

* aoc: 2019/23

2019-12-22 (permalink)

* aoc: 2019/21

2019-12-21 (permalink)

* aoc: 2019/20

2019-12-20 (permalink)

* aoc: 2019/14/2

+ aoc: 2019/14/2 better heuristic (binary search between cargo-limit and 0)

+ create a function to return sortable-difference between two numbers: a < b ? -1 : a > b ? 1 : 0, and use it for 2019/14/2, phases, robots
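
For what it's worth, that sortable-difference is just a three-way comparison (the signum of the difference); a quick Python sketch of the idea (the name `compare` is mine, not something from the repo):

```python
def compare(a, b):
    """Three-way comparison: -1 if a < b, 1 if a > b, 0 if equal.

    Equivalent to the a < b ? -1 : a > b ? 1 : 0 expression above;
    in Python, booleans subtract cleanly to -1/0/1.
    """
    return (a > b) - (a < b)
```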

2019-12-19 (permalink)

* aoc: 2019/19

~ aoc: proper solution for 2019/19/2 (figure out how to calculate the direction of the upper border of the beam -- for now I manually calculated it by looking at the output)

2019-12-18 (permalink)

* aoc: 2019/17

* aoc: 2019/16/2

~ find a proper solution for 2019/17/2 -- right now I got the solution by manually looking at the map, and figuring out program inputs. Also, there is no proper solution at the moment :)

? explain the realization about 2019/16/2

2019-12-16 (permalink)

* aoc: 2019/16/1

+ define CIRCULAR, and use it for 2019/16, and 2018/01: https://stackoverflow.com/questions/16678371/circular-list-in-common-lisp

2019-12-15 (permalink)

You cannot really schedule `sudo` commands with crontab, you better use root's crontab instead...

So, to fix datetime drifting on Virtualbox:
# crontab -l
28 * * * * service ntp restart

AOC 2019/12: That was freaking painful -- for me at least.

Part 1 was easy: parse input, apply gravity, apply velocity, repeat, calculate energy.

Part 2? Not quite!  At first I tried to look at the output of some of the variables to figure out if there were any patterns, and noticed:

- the *barycenter* of the system (i.e. the sum of all the xs, all the ys, and all the zs) was constant
- the overall velocity of the system (i.e. the sum of all the vxs, all the vys, and all the vzs) was 0

But unfortunately none of the above turned out to be useful/usable.

Something else that I played a lot with was looking at each moon separately: I would still simulate the whole universe of course, but inspect one moon at a time, again hoping to find patterns to exploit.  I was hoping that each moon had a different cycle, and that the overall system cycle could be calculated as the LCM of each moon's cycle, but each moon was influencing the others, so my logic was flawed.

Anyway, I put this on hold, tried again, and eventually gave up and looked for some help on Reddit; this is what brought me to the solution [0]:
Note that the different dimensions are independent of each other. X's don't depend on y's and z's.
The idea that the overall cycle would be calculated as the LCM of other *independent* systems cycles was right; unfortunately I failed to understand what these *independent* systems were.

[0] https://www.reddit.com/r/adventofcode/comments/e9jxh2/help_2019_day_12_part_2_what_am_i_not_seeing/
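
For posterity, the hint translates into: simulate each axis independently, find each axis's cycle length, and combine them with LCM.  Here is a minimal Python sketch of the idea (my solution is in Common Lisp; the names and structure here are illustrative only):

```python
from itertools import count
from math import gcd

def axis_cycle(positions):
    """Steps for one axis of the simulation to return to its initial state.

    positions: the coordinates of each moon along a single axis.
    Velocities start at 0; gravity pulls each pair of moons one step
    closer along the axis.
    """
    pos, vel = list(positions), [0] * len(positions)
    initial = (tuple(pos), tuple(vel))
    for step in count(1):
        for i in range(len(pos)):
            # +1 for every moon ahead of i, -1 for every moon behind it
            vel[i] += sum((p > pos[i]) - (p < pos[i]) for p in pos)
        for i in range(len(pos)):
            pos[i] += vel[i]
        if (tuple(pos), tuple(vel)) == initial:
            return step

def lcm(a, b):
    return a * b // gcd(a, b)
```

Run against the first example from the puzzle (x, y, and z coordinates fed in separately), the LCM of the three per-axis cycles comes out to 2772, the expected answer.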

* aoc: 2019/15

* aoc: 2019/12/02

+ aoc: 2019/15: figure out a proper way to explore the map (now it goes on for 50000 steps)

+ aoc: 2019/15: why my BFS costs are off by 1?

+ aoc: print-hash-table-as-grid

2019-12-14 (permalink)

* aoc: 2019/14/1

? refactor partial-1/partial-2 to accept a form, instead of a FN and ARGS

? also, why does the following form not compile? (partial-2 #'* -1 (- _1 _2))

It spits the following error:
The value
  -1
is not of type
  (OR (VECTOR CHARACTER) (VECTOR NIL) BASE-STRING SYMBOL
      CHARACTER)

2019-12-13 (permalink)

My virtualbox date and time is getting out of sync when the laptop is suspended.  A temporary fix was to 1) install ntp, 2) restart the service every hour:
$ crontab -l
28 * * * * sudo service ntp restart
But there has to be a better way to fix this / prevent this from happening

* aoc: 2019/13 -- intcode problem, with some AI (well ... a simple `(- ball-x bar-x)` seems to do just fine)

2019-12-12 (permalink)

* aoc: 2019/10/2 -- phases are hard

* aoc: 2019/11 -- phases are hard

* aoc: 2019/12/1

~ aoc: 2019/10 needs some serious refactoring around phases, and vector math

+ aoc: 2019/11/2 -- figure out a way to test for the output string

2019-12-11 (permalink)

* aoc: 2019/09 -- "parameters in relative mode can be read from or written to." GODDAMMIT!

* aoc: 2019/10/a -- my direction-reduce function was able to reduce (5, 0) into (1, 0), (5, 5) into (1, 1), (2, 4) into (1, 2), but **not** (4 6) into (2 3) -,-

2019-12-10 (permalink)

Wasted the night trying to get a solution for AOC 2019/09, but somehow it ain't working.

The logic is simple:
for each asteroid, as 'curr'
  initialize 'remaining' as **all** asteroids
  for each remaining asteroid, as 'other'
    calculate the line of sight between 'curr' and 'other'
    remove from 'remaining' any asteroid hidden **behind** 'other'
  maximize 'remaining'
So the biggest challenge here would be to calculate the sight direction vector between 'curr' and 'other'.  A few things to take into account though:

- when dir is (0 0): return (0 0)
- when dir-x is 0: return vertical unit vector (preserve direction)
- when dir-y is 0: return horizontal unit vector (preserve direction)
- when dir-y divides dir-x: reduce it (so we can return (2 1) instead of (4 2))
- when dir-x divides dir-y: reduce it (so we can return (1 2) instead of (2 4))
- otherwise return dir, as-is

All good, except it doesn't work -- and I have no idea why...
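
For the record, all the special cases above (and the failing (4 6) one) collapse into a single rule: divide both components by their GCD.  A Python sketch of that normalization (the actual code is Common Lisp; this is just the idea):

```python
from math import gcd

def sight_direction(dx, dy):
    """Reduce a direction vector (dx, dy) to its smallest integer form.

    Dividing by gcd(|dx|, |dy|) handles every case at once:
    (5, 0) -> (1, 0), (2, 4) -> (1, 2), and crucially (4, 6) -> (2, 3).
    gcd(0, n) is n, so axis-aligned vectors reduce to unit vectors too.
    """
    if dx == 0 and dy == 0:
        return (0, 0)
    g = gcd(abs(dx), abs(dy))
    return (dx // g, dy // g)
```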

2019-12-09 (permalink)

* aoc: 2019/07

+ aoc: 2019/05 -- use intcode as per 2019/07

+ aoc: 2019/08 -- add test for output part2

2019-12-08 (permalink)

* aoc: 2019/08

+ aoc: 2019/08/2 -- figure out a way to test for the output string

2019-12-07 (permalink)

* aoc: 2019/07 -- part2 missing still, fucking opcodes

2019-12-06 (permalink)

* aoc: 2019/06

2019-12-05 (permalink)

* aoc: 2019/05 -- fucking opcodes :-O

2019-12-04 (permalink)

I am trying to look into the compilation errors thrown when loading my :AOC project, but even though I shuffled functions/macros around to make sure functions are defined before they are used, it still errors out ...
The function AOC::ARGS-REPLACE-PLACEHOLDER-OR-APPEND is undefined.
It is defined earlier in the file but is not available at compile-time.
I am afraid I need to re-study how Lisp code is parsed/compiled, and whether there is anything we can do about it.  I am on the go right now, so this will have to wait until tomorrow.

Why on earth would someone use rlwrap with `--pass-sigint-as-sigterm`?  I mean, all my *-rlwrap scripts had it, but it was kind of dumb if you think about it...

Most of the time rlwrap is used with REPLs, and REPLs usually catch SIGINT to interrupt running operations (e.g. to break out of an infinite loop), so why would someone kill the REPL instead of letting the REPL deal with it?

I am sure there are legit use cases for this, it's just that I cannot think of any.

Damn, my Lisp is really rusty!  Today I re-learned about:

- :THEREIS and :ALWAYS keywords on LOOP

A few days ago I threw PR/PRL inside .sbclrc hoping I could handily use them from any REPL I had open.  Well, I was right and wrong at the same time: I could indeed use PR/PRL in the REPL, but only while inside the :CL-USER package.

So I moved those macros inside a new file, made a package out of them (pmdb, poor man's debugger, lol), and loaded it from my sbclrc file; after that I was finally able to use PR/PRL from any package -- too bad I had to use the package qualifier to call them (i.e. PMDB:PR).  To fix that, I went on and added :pmdb to the :use form of the package I was working on.

Alternatively, I could have created two vim abbreviations like:
inoreabbr pr  pmdb:pr
inoreabbr prl pmdb:prl
but for some reason, abbreviations ain't working while editing Lisp files -- I bet on parinfer getting in the way..

* aoc: 2019/04

? SCASE: a switch for strings -- and maybe use it on 2019 day 03

? figure out why abbreviations are not working on Lisp files

2019-12-03 (permalink)

* aoc: 2019/03 -- needs some clean up tho

? ONCE-ONLY: https://stackoverflow.com/questions/9808928/understanding-how-to-implement-once-only-lisp-macro

2019-12-02 (permalink)

Unfortunately I spent too much time because of a wrong use of RETURN: you can indeed use RETURN (which is the same as RETURN-FROM NIL) only from inside a block named NIL, like the one implicitly established by DO or LOOP; otherwise you have to use RETURN-FROM followed by the name of the surrounding function.

Or...you can create a named block with BLOCK, and RETURN-FROM it: https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node85.html

* aoc: 2019/02

2019-12-01 (permalink)

* aoc: 2019/01

+ PR, PRL defined inside .sbclrc, is not available when running the REPL via VLIME -- https://stackoverflow.com/questions/22524213/cant-access-cl-user-symbols-in-new-package-defined-with-make-package

+ loading :aoc does not work cleanly anymore -- have to load RECURSIVELY first, then all the ARGS-* related functions needed for PARTIAL, then it will eventually work...

2019-11-26 (permalink)

? plan-rss: use #'string> when merging, and get rid of the call to REVERSE

+ reg-cycle: use localtime() instead of timers

2019-11-25 (permalink)

plan-rss

Previously, testing PARSE-OPTS was a bit of a pain as your REPL could
end up getting accidentally killed if the options passed in caused the
parsing logic to invoke OPTS:EXIT; it goes without saying that this
was very annoying.

The fix for this was simple though:

1) Create a new condition, EXIT
2) ERROR this condition (with the expected exit code) instead of
   directly invoking OPTS:EXIT
3) Invoke PARSE-OPTS inside HANDLER-CASE, and invoke OPTS:EXIT when the
   EXIT condition is signalled

This way:

- When run from the command line OPTS:EXIT will be called, and the app
will terminate
- When run from the repl, an unhandled condition will be signalled, and
the debugger will be invoked

* plan-rss: add --max-items option

2019-11-18 (permalink)

Angular, AOT, and custom decorators

Plan on using custom decorators with Angular, and AOT?  Well, don't...or at least, beware!

We recently implemented a custom decorator to add behavior to existing Angular components, mixin-style; and while everything seemed to work just fine on our local boxes -- where the app runs in JIT mode -- once we switched to prod mode -- where AOT is enabled -- all the components decorated with our custom decorator stopped working, completely, as if Angular did not recognize them as components...at all.

After some fussing around, I noticed that the moment I added a dummy `ngOnInit` to such components, all of a sudden the `ngOnInit` defined by our custom decorator would be run; but none of the other `ng*` methods enriched by our decorator would, just `ngOnInit`, the one we had also re-defined inside the Angular component.

This led me to this Angular ticket: https://github.com/angular/angular/issues/31495, where someone seemed to have a pretty good understanding of what was going on:
Appears the reason here is that Angular places `ngComponentDef` generated code before all `__decorate` calls for class. `ɵɵdefineComponent` function takes `onInit` as a reference to `Class.prototype.ngOnInit` and overridding it later has no any effect.

...

Here's a pseudo-code of what happens:

```
class Test {
  ngOnInit() {
    console.log('original')
  }
}

const hook = Test.prototype.ngOnInit;

Test.prototype.ngOnInit = () => {
  console.log(1);
}

hook(); // prints 'original'
```

In case of JIT compilation `componentDef` is created lazily as soon as `getComponentDef` gets called. It happens later.
Someone at the end even tried to suggest a possible solution, https://github.com/angular/angular/issues/31495#issuecomment-549873125, but when I noticed that the author suggested that we add `ngOnInit` to the final component, I hoped there would be a better way to work around this (especially because we would be forced to redefine `ngOnInit`, `ngOnChanges`, `ngAfterViewInit` and `ngOnDestroy`).

Further digging took me to this other Angular ticket: https://github.com/angular/angular/issues/16023, where I found a couple of not-so-promising comments, left almost at the start of the thread:
This is a general "problem" with AOT: It relies on being able to statically analyze things like lifecycle hooks, properties with @Input on them, constructor arguments, ...
As soon as there is dynamic logic that adds new methods / properties, our AOT compiler does not know about this.
Immediately followed by:
As we are moving to AOT by default in the future, this won't go away.
Sorry, but closing as infeasible.
All right, to recap:

- Typescript compiler, `tsc`, supports Decorators just fine
- Angular's compiler is a custom compiler, different from `tsc`
- Angular's compiler is not 100% feature-compatible with `tsc`
- Since Angular's compiler does not properly implement support for custom decorators, it's not possible to use custom decorators that enrich object prototypes (Angular would not notice any enriched method)

So it turns out the components annotated with our custom decorator do have to implement the `ng*` methods -- we cannot rely on the decorator adding them -- otherwise the AOT compiler will not know about them, and will not call them at the right time...Sweet!

The solution?  Create a base class implementing these `ng*` methods, and have the components annotated with our custom decorator *extend* this base class; the only problem is that we would also have to add `super()` to the component's constructor -- which could be annoying -- but it's either this or no custom decorators, so I guess this will do just fine.
```
export class ObserveInputsBase implements OnChanges, OnDestroy {
  ngOnChanges(changes: SimpleChanges): void {}
  ngOnDestroy(): void {}
}

@Component({
  ...
})
@CustomDecorator
export class TodoComponent extends ObserveInputsBase {
  constructor() {
    super();
  }
}
```

The thing is, :read! appears to be stripping ^M already, irrespective of the use of ++opts:

  :read !powershell.exe Get-Clipboard
  :read ++ff=unix !powershell.exe Get-Clipboard
  :read ++ff=dos !powershell.exe Get-Clipboard

To be honest, I'd expect the second one, where we specify ++ff=unix, to leave the trailing ^M, but somehow that's not happening.  For reference (after starting vim -u NONE):

  :verbose set ff

Returns 'fileformat=unix'

  :verbose set ffs

Returns 'fileformats=unix,dos'

So am I correct in saying that there is something "weird" going on with system()?  I also found the following at the end of system()'s help page, but somehow the behavior I experienced is not the documented one:

  To make the result more system-independent, the shell output
  is filtered to replace <CR> with <NL> for Macintosh, and
  <CR><NL> with <NL> for DOS-like systems.

I even tried to give systemlist() a go, but each entry of the array still has that trailing ^M, so it really seems like Vim cannot properly guess the fileformat from the command output.

I am really in the dark here.

2019-11-16 (permalink)

Trying to understand why copy-pasting things on Vim/Windows generates trailing ^M...

Let's create a file with `\r\n`s (that's how the clipboard appears to be populated when I copy things from Chrome):
$ echo -en "OS cliboards are hart...\r\naren't they?\r\n" > clipboard.txt

$ cat clipboard.txt
OS cliboards are hart...
aren't they?

$ cat clipboard.txt -A
OS cliboards are hart...^M$
aren't they?^M$
Let's open up vim now (`vim -u NONE`) and do some experiments:

1) read 'clipboard.txt', under the cursor
:read clipboard.txt
  The content is loaded, no trailing ^M; at the bottom I see: "clipboard.txt" [dos format] 2 lines, 40 characters

2) load 'clipboard.txt' using `read!`
:read! cat clipboard.txt
  The content is loaded, no trailing ^M

3) load the content into a variable, then paste it under the cursor
:let @@ = system('cat clipboard.txt') | exe 'normal p'
  The content is loaded, trailing ^M are added

Let's now push the file into the OS clipboard:
$ cat clipboard.txt | clip.exe
Open up vim again, and:

4) paste from the `*` register (works with `Shift+Ins` too):
:exe 'normal "*p'
  OS clipboard is read, no trailing ^M

5) load the content of the clipboard using `read!`
:read! powershell.exe Get-Clipboard
  The content is loaded, no trailing ^M

6) Read the content of the clipboard into a variable, then paste it under the cursor
:let @@ = system('powershell.exe Get-Clipboard') | exe 'normal p'
  OS clipboard is read, trailing ^M are added

So what's the difference between *reading* a file, and calling another program that reads it and outputs its content?

? add Makefile to my-env/dotfiles, so I can simply run make to rebuild what changed

2019-11-12 (permalink)

More typescript niceties

Earlier I thought it wasn't possible to enforce (at compile-time) that a decorator was applied to a class implementing a specific interface, and because of that I wound up implementing run-time checks (see the notes in some previous .plan entry).  Well, I was wrong!

First you define an interface for the constructor:
export interface UrlDescriptorConstructor {
    new (...args: any[]): UrlDescriptor;
}
(for reference, here is what `UrlDescriptor` looks like)
export interface UrlDescriptor {
  generatePathname(): string;
  generateParams(): any;
}
Next you change the signature of your decorator to expect the target class to be a constructor for that type:
export function SearchHrefDirective(target: UrlDescriptorConstructor) {
  ...

2019-11-11 (permalink)

? cb.vim is adding ^M when pasting on Windows -- :.!cb --force-paste doesn't though...

2019-11-10 (permalink)

Typescript

interfaces are not available at runtime: they are getting used at compilation time for type-checking, but then thrown away.

So, if we want to implement run-time checking and throw an error if the decorator is being applied to a non-compatible object, we are forced to:

- Define a dummy class, implementing the interface
- Create an instance of such class
- Store its `Object.keys()`

https://stackoverflow.com/a/43910186/348524

Downgrading an Angular attribute directive is simply **not** supported: https://github.com/angular/angular/issues/16695

People suggest that you can wrap `downgradeComponent()` calls and change `component.restrict` from 'E' to 'A' or 'EA', but I could not make it work -- and to be honest I am kind of happy it did not, as otherwise I would probably have left the hack there, and it would have broken again on the next upgrade of the framework.

So what to do about it?

- Migrate the old component in need of the new directive (duh)
- Maintain two directives: one for Angular, and one for Angular.js

Fucking figured out how, with AutoHotkey, to activate a window and cycle through all the similar windows, with the same hotkey.

At first I thought I would implement it with something like:

- if the focused window does not match the "activation criteria", then activate one that matches
- otherwise, activate the next matching window

Well, it turns out AutoHotkey already supports all this, via [window groups](https://www.autohotkey.com/docs/commands/GroupAdd.htm); the idea is simple: you first define a group of windows (all with a given title, class, or exe), and then you call `GroupActivate` instead of `WinActivate`.

For example:

- Define the ChromeGroup
GroupAdd, ChromeGroup, ahk_exe chrome.exe
- Activate the group / cycle its windows using `Hyper+k`:
F15 & k::GroupActivate, ChromeGroup, R
(`R`, is for activating the most recent window of the group: https://www.autohotkey.com/docs/commands/GroupActivate.htm)

? cg: br regexp is not working under Windows

2019-11-08 (permalink)

* plan-rss: preserve empty lines -- we are using ~& instead of ~% (use ~^ to avoid the last newline)

* plan-rss: v0.0.6

2019-11-07 (permalink)

Not really sure what happened, but now CL travis-ci builds have started working again...

Anyway, I am going to add the .sh script to every CL repo I am building via travis-ci, as using a single script hosted on gist/github turned out not to be a good idea; I mean, when the script works, fine, but when it doesn't and I have to update it, I am forced to update all the repos too, as each of them would still be pointing to the old version of the script.

Also, having the script checked into the repo sounds like the right thing to do, especially if one wants reproducible builds.
#!/usr/bin/env bash

ROSWELL_VERSION=19.09.12.102
ROSWELL_URL="https://github.com/roswell/roswell/releases/download/v$ROSWELL_VERSION/roswell_${ROSWELL_VERSION}_amd64.zip"

OS_WIN=$(uname -s | grep -e MSYS_NT)
if [ -n "$OS_WIN" ]; then
  ROSWELL_IN_PATH=$(echo $PATH | grep -F /tmp/roswell)
  if [ -z "$ROSWELL_IN_PATH" ] ; then
    echo "/tmp/roswell not found \$PATH"
    exit 1
  fi

  echo "Downloading Roswell (v$ROSWELL_VERSION) from: $ROSWELL_URL"
  curl -L "$ROSWELL_URL" \
      --output /tmp/roswell.zip
  unzip -n /tmp/roswell.zip -d /tmp/
fi

# Run roswell's CI script, and since it will find `ros` already available
# in $PATH, it would not try to build it but instead will install the specified
# CL implementation + quicklisp
curl -L "https://raw.githubusercontent.com/roswell/roswell/v$ROSWELL_VERSION/scripts/install-for-ci.sh" | sh
All good!

I also remembered I had asked roswell maintainers about adding Windows support to install-for-ci.sh -- https://github.com/roswell/roswell/issues/374 -- and it appears they did it!

Switching from custom .sh (wrapping the official one), to simply using the official one was pretty easy:

1) Define the following ENV variables
env:
  - PATH=~/.roswell/bin:$PATH
  - ROSWELL_INSTALL_DIR=$HOME/.roswell
2) Call roswell's ci script:
install:
  - curl -L https://raw.githubusercontent.com/roswell/roswell/release/scripts/install-for-ci.sh | sh
  ...
And that's it!  Bonus: you might want to define `LISP` too, and make it point to a specific version -- again, reproducible builds are the right thing to do!

* added roswell-for-ci.sh to plan-rss and cg

* plan-rss: use MD5 of plan entries as content of `<guid>` tags

* plan-rss: v0.0.5

+ cg: parse git push output to quickly get a reference of the last commit ID -> ' f5455f40f1..6443447668 story/US2709-Workstreams-Make-workstreams-visible-only-to-their-members -> story/US'

2019-11-06 (permalink)

* plan-rss: v0.0.4

+ travis-ci builds are failing on Windows -- ros is erroring out

2019-11-05 (permalink)

* plan-rss: `--disable-pre-tag-wrapping`

* plan-rss: remove `-m`, `-s` options (use longer versions instead)

2019-11-04 (permalink)

* Use plan-rss (CL version) to generate https://matteolandi.net/plan.xml

2019-11-03 (permalink)

Mac OS builds on Travis-CI were not working lately, and after some quick Googling around I stumbled upon this: https://github.com/travis-ci/travis-ci/issues/10482

The following indeed seems to fix the problem:
before_install:
  # 2019-10-30: Homebrew/brew.rb:23:in `require_relative': ...
  # ... unexpected keyword_rescue, expecting keyword_end
  # https://discourse.brew.sh/t/missing-file-homebrew-dead/5657
  - if [ "$TRAVIS_OS_NAME" = osx ]; then
      set -x &&
      brew update-reset &&
      if ! brew install gsl; then
        cd "$(brew --repo)" && git add . && git fetch &&
        git reset --hard origin/master;
      fi;
      set +x;
    fi
But why the hell should I install `gsl`?  Let's try something else (`brew update-reset` only):
before_install:
  # 2019-10-30: Homebrew/brew.rb:23:in `require_relative': ...
  # ... unexpected keyword_rescue, expecting keyword_end
  # https://discourse.brew.sh/t/missing-file-homebrew-dead/5657
  - if [ "$TRAVIS_OS_NAME" = osx ]; then
      set -x &&
      brew update-reset &&
      set +x;
    fi
This one works too, so I guess I am going to stick with this for now!

Setting up Travis-CI deployment keys can be as easy as running `travis setup releases`, or way messier (that's the price you have to pay if you don't want to share your username/password with `travis`).

1) Add a new Oauth token: https://github.com/settings/tokens, and copy it -- only the `public_repo` scope seems to be required to upload assets
2) Run `cb | travis encrypt iamFIREcracker/plan-rss` (replace `iamFIREcracker/plan-rss` with your public repository)
3) Copy the secured token inside your .travis.yml file
deploy:
  provider: releases
  api_key:
    secure: XXX_SECURE_XXX
  skip_cleanup: true
  file: bin/$DEST
  on:
    repo: iamFIREcracker/plan-rss
    tags: true
Also, your build will deploy only when *all* the conditions specified in the `on:` section are met.  Possible options:

- `repo`: in the form `owner_name/repo_name`.
- `branch`: name of the branch (or `all_branches: true`)
- `tags` -- to have tag-based deploys
- other..

Useful links:

- https://docs.travis-ci.com/user/deployment/releases/
- https://stackoverflow.com/questions/25302518/travis-ci-setup-releases-with-github-token
- https://github.com/travis-ci/travis-ci/issues/2982
- https://github.com/travis-ci/docs-travis-ci-com/pull/1383/files

* plan-rss: v0.0.1

* plan-rss: fix Windows build issue

* plan-rss: remove trailing new-line from <generator>

* plan-rss: v0.0.2

* plan-rss: --version not working for dev version (i.e. not taking into account commits since last tag)

* plan-rss: travis ci binaries crashing (fucking had forgotten to check in some changes...)

* plan-rss: v0.0.3

+ plan-rss: ERROR instead of OPTS:EXIT so one can test opts parsing without accidentally closing the REPL

2019-11-02 (permalink)

Update on my journey to a Common Lisp version of plan-rss

Figured out a way to wrap "description" values inside CDATA/pre elements, but I don't think I am 100% satisfied with the solution (which is why I haven't submitted a MR yet).

So basically I added a new keyword argument to RSS-ITEM, `descriptionWrapIn`, a 2-element LIST which the function uses to wrap the content of "description" (the first element before, and the second after).  Here is what I came up with:
(xml-emitter:rss-item date
                      :link *link*
                      :description (plan-day-content day)
                      :descriptionWrapIn '("<![CDATA[<pre>" "</pre>]]>"))
Don't get me wrong, it works, but I just don't think it's a good way of implementing it.

Another approach would be to convert RSS-ITEM into a macro and let it accept a `BODY` that users can specify to further customize the content of the item.  Something along the lines of:
(xml-emitter:with-rss-item (date :link *link*)
  (xml-emitter::with-simple-tag ("description")
    (xml-emitter::xml-as-is "<![CDATA[<pre>")
    (xml-emitter::xml-out (plan-day-content day))
    (xml-emitter::xml-as-is "</pre>]]>")))
More verbose, but more flexible too; in fact, yesterday's MRs to add `isPermaLink` to `<guid>`, and to support Atom's link with rel=self would not be required anymore if RSS-ITEM and RSS-CHANNEL-HEADER were macros.

Channel with Atom link:
(xml-emitter::with-rss-channel-header (*title* *link* :description (read-channel-description)
                                                      :generator *generator*
                                                      :image *image*)
  (xml-emitter::empty-tag "atom:link" `(("href" ,*atom-link-self*)
                                        ("rel" "self")
                                        ("type" "application/rss+xml"))))
Item with guid (isPermaLink=false) and description wrapped inside CDATA/pre blocks:
  :do (xml-emitter:with-rss-item (date :link *link*
                                       :pubDate "XXX")
        (xml-emitter::simple-tag "guid" (format NIL "~a#~a" *link* date)
                                 '(("isPermaLink" "false")))
        (xml-emitter::with-simple-tag ("description")
          (xml-emitter::xml-as-is "<![CDATA[<pre>")
          (xml-emitter::xml-out (plan-day-content day))
          (xml-emitter::xml-as-is "</pre>]]>")))

TIL: you can pretty print a Common Lisp condition with: `(format t "~a" cond)`

CL: unix-opts

Required options seem to conflict with typical use of `--help` or `--version`: https://github.com/libre-man/unix-opts/issues/10

Here is a possible workaround (adapted from another one found in that thread):
(defun parse-opts (&optional (argv (opts:argv)))
  (multiple-value-bind (options)
      (handler-case
          (handler-bind ((opts:missing-required-option (lambda (condition)
                                                         (if (or (member "-h" argv :test #'equal)
                                                                 (member "--help" argv :test #'equal)
                                                                 (member "-v" argv :test #'equal)
                                                                 (member "--version" argv :test #'equal))
                                                           (invoke-restart 'opts:skip-option)
                                                           (progn
                                                             (format t "~a~%" condition)
                                                             (opts:exit 1))))))
              (opts:get-opts argv))
        (opts:unknown-option (condition)
          (format t "~a~%" condition)
          (opts:exit 1))
        (opts:missing-arg (condition)
          (format t "~a~%" condition)
          (opts:exit 1)))
    (if (getf options :help)
      ...
Important bits:

- We do want to invoke the `OPTS:SKIP-OPTION` restart when any of the special options is set -- this way the library will use `NIL` and move on
- HANDLER-BIND -- and not HANDLER-CASE -- needs to be used to handle `OPTS:MISSING-REQUIRED-OPTION`, especially since we want to call INVOKE-RESTART (with HANDLER-CASE, the stack has already been unwound by the time the handler runs, so the restart established inside the function is gone)

* xml-emitter: RSS macros (https://github.com/VitoVan/xml-emitter/pull/5) (also cancelled the other MRs, as this one would now give more control to the user)

* plan-rss: add pubDates (rfc2822)

2019-11-01 (permalink)

* xml-emitter: Add support for guid isPermaLink=false (https://github.com/VitoVan/xml-emitter/pull/3)

* xml-emitter: Add support for atom:link with rel="self" (https://github.com/VitoVan/xml-emitter/pull/4)

2019-10-30 (permalink)

Finally A/I came back online, and I was able to create a request for a mailing list (to use with the other college friends).  Anyway, the request has been created, so hopefully over the next few days we will hear back from them...stay tuned!

2019-10-27 (permalink)

Few bits I found around, about generating html emails, starting from markdown, with Mutt.

Mutt's editor first:
set editor = "nvim -c 'set ft=mdmail' -c 'normal! }' -c 'redraw'"
Then a macro, activated with `H` or whatever, to pipe the body of the message to `pandoc` to generate the html version of the message, and attach it to the email:
macro compose H "| pandoc -r markdown -w html -o ~/.mutt/temp/neomutt-alternative.html<enter><attach-file>~/.mutt/temp/neomutt-alternative.html<enter><tag-entry><previous-entry><tag-entry><group-alternatives>"
And the '.vim/syntax/mdmail.vim' syntax file:
if exists("b:current_syntax")
  finish
endif
let s:cpo_save = &cpo
set cpo&vim

" Start with the mail syntax as a base.
runtime! syntax/mail.vim
unlet b:current_syntax

" Headers are up to the first blank line.  Everything after that up to my
" signature is the body, which we'll highlight as Markdown.
syn include @markdownBody syntax/markdown.vim
syn region markdownMailBody start="^$" end="\(^-- $\)\@=" contains=@markdownBody

" The signature starts at the magic line and ends at the end of the file.
syn region mailSignature start="^-- $" end="\%$"
hi def link mailSignature Comment

let b:current_syntax = "mdmail"
let &cpo = s:cpo_save
unlet s:cpo_save

Niceties to cat / edit the content of an executable (a `which`-able program); the first one will open the specified command/executable in your configured editor:
function ew() { $EDITOR $(which "$1"); }
complete -c ew -w which
While this other one will simply `cat` its content:
function cw() { cat $(which "$1"); }
complete -c cw -w which

Found this interesting thread started by Aaron Swartz, from 2008, on HTTP sessions:

  https://groups.google.com/forum/#!topic/webpy/KHCDcwLpDfA

I was initially confused by it -- and somewhat I still am -- as I could not really understand where Aaron was going with this, but I guess his biggest complaint was that sessions require servers to store information locally, on a per-user basis, breaking the *stateless* idea of the web.  Cookies are good (and session cookies are good too, as far as I can tell), but Aaron did not seem to like the use of cookies for session management; for that, one should use Digest access authentication (https://en.wikipedia.org/wiki/Digest_access_authentication).

I think cookies are good for session management, especially because they prevent the username/password from flying around -- even if hashed -- between the clients and the server; better to generate a temporary session ID and have clients pass that to the server.
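
To make this concrete, here is a minimal sketch of session-ID based authentication; the helper names (`login`, `authenticate`) are made up, and a plain dict stands in for whatever session store the server actually uses:

```python
# Instead of sending username/password on every request, the server hands
# out a random session ID and later looks the user up from it.  A plain
# dict stands in for the session store here.
import secrets

sessions = {}  # session_id -> username

def login(username):
    session_id = secrets.token_hex(16)  # unguessable, temporary
    sessions[session_id] = username
    return session_id  # sent back to the client, e.g. as a cookie

def authenticate(session_id):
    return sessions.get(session_id)  # None if unknown/expired

sid = login('aaron')
assert authenticate(sid) == 'aaron'
assert authenticate('bogus') is None
```

The username/password only travels once, at login; after that, only the (revocable) session ID does.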

Does this break the Web's statelessness?  Yes, of course, but is that really a problem?  I mean, if the session is stored on disk, then yes, it's a problem, because the client will have to hit the same server to restore its session (or at least it would be uselessly complicated to manage); but if one instead used a different session storage system, like Redis, then client requests could be handled by any available server in the pool, which would be able to restore the specific client session just fine.

Anyway, the sad thing is, we are never going to get to know what Aaron had in mind when he started that thread.

Changed my household booklet spreadsheet to:

- Handle cash withdrawals by entering two expenses: one negative on my bank account, and one positive on the cash bucket
- Changed the payment type sheet to now read the starting balance by summing up all the "positive" expenses

Sooner or later this will have to become an app...

2019-10-20 (permalink)

Disabled dark-theme on matteolandi.net

2019-10-17 (permalink)

Burpees!

Rounds: 6
Time on: 0:30
Time off: 1:00

Still no jumping.

2019-10-10 (permalink)

I was reading this hackernews post (https://news.ycombinator.com/item?id=21195913) about adding support for a dark-theme, and even though the overall activity took me less than one minute to complete (well, my website is very *simple*, so I am sure adding dark-theme support to a more complex website / web app would usually take longer), one comment made me wonder: "What if someone was interested in *muting* the browser colors -- the container's, but not the content's?  Should we add some JS to let people toggle between light and dark mode?"

A few examples:

- Google, does not seem to support a dark theme
- Wikipedia, does not seem to support a dark theme
- Facebook, does not seem to support this either

Again, https://matteolandi.net is nothing compared to Google and Wikipedia, so honestly... I am not that surprised they haven't added support for dark-mode yet; but still, is it possible they haven't done it because, as things stand, it's not yet a "problem" for them?  Somewhere on the Web I read that one should tweak CSS only to solve a specific user problem, so maybe adding a dark-theme is not going to solve any of their problems, who knows...

Anyway, if you have dark-mode enabled at OS/browser level, going to https://matteolandi.net you should be presented with a dark version of the website -- and here is what I added to my style.css to make this happen:
@media (prefers-color-scheme: dark) {
  body {
    background-color: #444;
    color: #e4e4e4;
  }
  a {
    color: #e39777;
  }
  img {
    filter: grayscale(30%);
  }
}

* added dark-theme to matteolandi.net

2019-10-09 (permalink)

Started reading The Mythical Man-Month (TMMM)

Finally got around to creating a SPID ID, so I can -- hopefully -- easily access all my health records and anything else PA related, without having to use obsolete smart card readers or remember crazy PINs.

You have to request a SPID ID from one of the identity providers out there, and they might ask you for proof of identity; luckily for me, Hello Bank is partnering with Poste Italiane (an identity provider), so I could easily request an ID without much hassle: the bank transferred all the required documents over, and in 15 mins I had my SPID created.

I then installed PosteID on my mobile, signed in with my SPID ID, created a new code (and there was silly me thinking the SPID ID would be the last username/password to remember), and that's it: every time I try to access PA websites and choose SPID as the login workflow, I receive a notification on my mobile asking me to confirm or deny the access -- living in the future!

A few handy PA links:

- Fascicolo Sanitario Elettronico (FSE):  http://fascicolosanitario.regione.toscana.it/
- CUP online: https://prenota.sanita.toscana.it/

I have decided to list all the applications I have on my mobile, so I can 1) monitor which ones I have been constantly using vs the ones I throw away after a bit, and 2) easily re-set up my mobile in case Google refuses to re-install my previously installed apps.

Here we go:

- Any.do (notes)
- Authenticator (2FA codes)
- Booking.com
- British Airways
- Calendar (Google)
- Chrome
- Dropbox
- E-mail (non G-mail accounts)
- Feedly
- File Manager+
- Fineco
- Firefox Nightly
- Fogli (Google spreadsheets)
- Gmail
- Hello bank!
- Hoplite (roguelike game)
- Instagram
- Maps
- Materialistic (hacker news)
- My3
- Netflix
- Odyssey (endless runner snowboarding game)
- Orario Treni (train timetables)
- PosteID (SPID identity)
- Reddit
- Ryanair
- SAP concur (tracking expenses when traveling)
- Satispay
- Settle Up
- Shazam (the future!)
- Signal
- Snapseed
- Spotify
- Stremio
- Tasker
- Teams (Microsoft)
- TV Time
- Twitter
- Waze
- WhatsApp
- Youtube

Damn, I was not expecting this to be that long...

A few "interesting" links:

- koa.js, from the creators of express.js: https://koajs.com/
- Apex ping, easy and sexy monitoring for websites, applications, and APIs: https://apex.sh/ping/#pricing

nlp.js

- Train the NLP manager, then save it on disk so you don't have to train it again: https://github.com/axa-group/nlp.js/blob/master/docs/nlp-manager.md
- NER Manager: https://github.com/axa-group/nlp.js/blob/master/docs/ner-manager.md

My Node.js application kept failing to start up from my new VirtualBox box, blowing up with an ENOSPC error; bumping the inotify watch limit fixed it:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

? setting up a vpn on my DO instance (for mobile, and laptop)

2019-10-08 (permalink)

* finished reading [Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit](https://www.amazon.com/Natural-Language-Processing-Python-Analyzing/dp/0596516495)

+ add dark mode support to matteolandi.net, just for fun -- Dark mode in CSS with `prefers-color-scheme` - https://news.ycombinator.com/item?id=21195913

2019-10-07 (permalink)

Reading about Natural Language Processing (NLP), and Named Entity Recognition (NER) in particular, to be able to populate search engine filters based on users' free text input:
mkdir ~/tmp/ner
cd ~/tmp/ner
virtualenv -p python3 venv
. venv/bin/activate
pip install nltk
pip install numpy
https://towardsdatascience.com/simple-nlp-search-in-your-application-step-by-step-guide-in-scala-22ca1ce3e475
https://towardsdatascience.com/natural-language-processing-with-spacy-in-node-js-87214d5547
https://medium.com/code-brew-com/natural-language-processing-using-python-or-nodejs-a76c91dcd68c

I was reading this book about NLP, and there was this exercise about text tokenization, which the author decided to solve using simulated annealing.  Now, I have always found simulated annealing fascinating, and because of that I decided to dump the exercise (and solution) here for future reference.

Let's first define a function, `segment()`, that given some text (i.e. a string, like 'Hello world!') and another string representing where such text should be broken into chunks, splits the text into those chunks:
def segment(text, segs):
  # text = 'helloworld'
  # segs = '000010000'
  # => ['hello', 'world']
  words = []
  last = 0
  for i in range(len(segs)):
    if segs[i] == '1':
      words.append(text[last:i + 1])
      last = i + 1
  words.append(text[last:])
  return words
`evaluate()` instead is a function that scores the _quality_ of a *segmentation*: the score is the number of words plus the size of the unique-word lexicon, and lower is better.
def evaluate(text, segs):
  words = segment(text, segs)
  text_size = len(words)
  lexicon_size = len(''.join(list(set(words))))
  return text_size + lexicon_size
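
A quick sanity check of the two helpers above (both copied here verbatim so the snippet runs standalone):

```python
# segment() splits the text wherever segs has a '1'; evaluate() scores a
# segmentation as word count plus unique-lexicon size (lower is better).
def segment(text, segs):
    words = []
    last = 0
    for i in range(len(segs)):
        if segs[i] == '1':
            words.append(text[last:i + 1])
            last = i + 1
    words.append(text[last:])
    return words

def evaluate(text, segs):
    words = segment(text, segs)
    text_size = len(words)
    lexicon_size = len(''.join(list(set(words))))
    return text_size + lexicon_size

assert segment('helloworld', '000010000') == ['hello', 'world']
# 2 words in the text, plus the 10 characters of the unique lexicon
assert evaluate('helloworld', '000010000') == 12
```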
Now the simulated annealing: basically we start with a *random* segmentation, and start randomly flipping *bits* on/off, evaluating how each new segmentation compares with the best one found so far; repeat this multiple times, always reducing the number of *bits* flipped per guess (i.e. the temperature, decreasing over time):
from random import randint

def flip(segs, pos):
  return segs[:pos] + str(1 - int(segs[pos])) + segs[pos + 1:]

def flip_n(segs, n):
  for i in range(n):
    segs = flip(segs, randint(0, len(segs) - 1))
  return segs

def anneal(text, segs, iterations, cooling_rate):
  temperature = float(len(segs))
  while temperature > 0.5:
    best_segs, best = segs, evaluate(text, segs)
    for i in range(iterations):
      guess = flip_n(segs, int(round(temperature)))
      score = evaluate(text, guess)
      if score < best:
        best, best_segs = score, guess
    score, segs = best, best_segs
    temperature = temperature / cooling_rate
    print(evaluate(text, segs), segment(text, segs))
  return segs
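
And here is the search run end to end; the functions above are repeated so the snippet is self-contained, lightly adapted (the `print` inside the loop is dropped to keep the output quiet), with a fixed seed so the run is reproducible:

```python
from random import randint, seed

def segment(text, segs):
    words = []
    last = 0
    for i in range(len(segs)):
        if segs[i] == '1':
            words.append(text[last:i + 1])
            last = i + 1
    words.append(text[last:])
    return words

def evaluate(text, segs):
    words = segment(text, segs)
    text_size = len(words)
    lexicon_size = len(''.join(list(set(words))))
    return text_size + lexicon_size

def flip(segs, pos):
    return segs[:pos] + str(1 - int(segs[pos])) + segs[pos + 1:]

def flip_n(segs, n):
    for i in range(n):
        segs = flip(segs, randint(0, len(segs) - 1))
    return segs

def anneal(text, segs, iterations, cooling_rate):
    temperature = float(len(segs))
    while temperature > 0.5:
        best_segs, best = segs, evaluate(text, segs)
        for i in range(iterations):
            guess = flip_n(segs, int(round(temperature)))
            score = evaluate(text, guess)
            if score < best:
                best, best_segs = score, guess
        score, segs = best, best_segs
        temperature = temperature / cooling_rate
    return segs

seed(0)  # fix the RNG so the run is reproducible
text = 'doyouseethekittyseethedoggy'
initial = '0' * (len(text) - 1)
best = anneal(text, initial, 200, 1.2)
# whatever segmentation the search ends on, the chunks re-join to the text,
# and its score is never worse than the unsegmented starting point
assert ''.join(segment(text, best)) == text
assert evaluate(text, best) <= evaluate(text, initial)
```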

2019-10-04 (permalink)

* finished reading john carmak's collection of .plan files

I am taking some time off from work next week, and I plan to do some research on Natural Language Processing (NLP); in particular I would like to experiment with Named Entity Recognition (NER) in the context of search engines (populate search filters based on users' free text input).

Also, I will be in London for the next couple of days to meet some friends, so don't expect much done (other than reading, I guess).

2019-10-03 (permalink)

Time for another burpees training session.  Last time it took me almost 3 days to fully *recover*, so I figured I should try something lighter this time:

Rounds: 6
Time on: 0:30
Time off: 1:00

6 rounds instead of 8, and no jumping from the beginning (last time I had to stop in the middle of the second round).  Let's see what my body thinks about this tomorrow or the day after.

2019-10-02 (permalink)

Was playing with my dev box earlier, trying to test persistent drives -- this way, once a new LTS is released I can destroy the box, recreate it, and pick up from where I left off -- and accidentally wiped out my ConnectION workspace; I did not realize `vagrant destroy` would also delete attached drives...

SO has a solution for this: https://stackoverflow.com/a/45146027/348524

- Mark the storage drive as hot-pluggable
- Register a pre-destroy hook, and inside of it detach the drive -- so it won't be automatically deleted

Unfortunately that does not seem to work as expected: syntax errors, triggers being invoked multiple times.  I guess I will be creating a custom `./vagrantw` wrapper to take care of all this

Getting a reference to `Window` in Angular, with AOT enabled, is super painful.  `Window` is defined as an interface, and because of that you cannot simply rely on Angular DI; instead you have to... sacrifice a few animals to the God of forsaken programmers!

First you define a named provider:
// file: ./window-ref.service.ts

import { InjectionToken } from '@angular/core';

export const WindowRef = new InjectionToken('WindowRef');

export function windowRefFactory() {
  return window;
}

export const WindowRefProvider = { provide: WindowRef, useFactory: windowRefFactory }
Next you register the provider into your main module:
// file: ./app.module.ts

import { WindowRefProvider } from './utils/window-ref.service';

@NgModule({
  providers: [
    WindowRefProvider,
    ...
Finally you update your dependent services / components as follows:
 import { Injectable, Inject } from '@angular/core';
 import { WindowRef } from '@connection-ui/utils/window-ref.service';

 @Injectable()
 export class HelloWorldService {
   $window: Window;

   constructor(@Inject(WindowRef) $window) {
     this.$window = $window as Window;
   }
Yeah, you need to use the named injector without any type (i.e. `any`), and then inside the constructor you cast it to `Window`; it's ugly, but that's the only solution I could come up with.

Played with mocking classes in Typescript, and stumbled upon something neat:
type PublicInterfaceOf<Class> = {
    [Member in keyof Class]: Class[Member];
}
class MockHelloWorld implements PublicInterfaceOf<HelloWorld> {
  hello(name) {
    return "Mocked!";
  }
}
https://www.michaelbromley.co.uk/blog/mocking-classes-with-typescript/

? create vagrantw to properly deal with persistent drives, and prevent they are removed when `vagrant destroy`-ing

2019-09-30 (permalink)

* enhanced vitality.vim to support Terminal.app escapes -- for changing the cursor shape

I have been thinking about this lately, and I don't think relying on env variables to figure out which terminal Vim or other programs are being run in is The Right Thing.  I mean, it works pretty decently for local sessions, but the moment you ssh into a remote box and create a Tmux session for some long-running activity, you're screwed.

For example, say that I am at work (Windows, Mintty), and I log into my remote workstation.  When you do so -- ssh into a remote box -- based on your settings, some/all of your local env variables will be made available in your remote session; this could seem sexy at first, because you might have Vim-specific settings based on the running terminal emulator, but what if you create a Tmux session (which saves env variables), start some work, and decide to carry it on from home, where you have a different terminal?  You won't be destroying your tmux session, but since it was created earlier, with a different terminal emulator, some of the already-configured variables might not apply to the *new* client.

I believe the Right Thing to do here is to use escape sequences, when available, to query the terminal emulator's capabilities -- which is what the Mintty maintainer is suggesting people do: https://github.com/mintty/mintty/issues/867

Anyway, I guess for the time being I will kill myself and set/unset env variables based on the client I am connecting from, and based on the tmux environment. <3

I am going to dump here the steps to reprovision my Linux box:
cd /some/dir
curl -O https://raw.githubusercontent.com/iamFIREcracker/my-env/master/Vagrantfile
vagrant up
vagrant ssh
cd /data
mkdir Workspace
ln -s $(pwd)/Workspace ~/Workspace
ln -s $(pwd)/Workspace ~/workspace
git clone --recurse-submodules git@github.com:iamFIREcracker/my-env.git
ln -s $(pwd)/my-env ~/my-env
cd my-env
git clone --recurse-submodules git@github.com:iamFIREcracker/dotfiles.git
bash install.sh --force --bootstrap --os-linux

2019-09-28 (permalink)

I have been trying to create a private mailing list on https://inventati.org for the last couple of days, but they seem to have put their 'Request a service' page in maintenance mode, and by the look of their tech blog it seems like they are undergoing a big infrastructure update.  Anyway, I shot them an email to understand if there is something we can do about it (e.g. sign up, and register a service).

Follows another dump of notes re multitenancy...

Benefits of multitenancy:

- Segregation of data across the clients
- Manage and customize the app according to the client's needs (e.g. separate branding, separate feature flags)
- Maximize efficiency while reducing the cost needed for centralized updates and maintenance

Ways of implementing this:

- Logical separation of data
- Physical separation of data

https://blog.lftechnology.com/designing-a-secure-and-scalable-multi-tenant-application-on-node-js-15ae13dda778

2019-09-27 (permalink)

* fixed commentstring for autohotkey files

TIL: Vim will try and syn-color a maximum of `synmaxcol` characters.  -- `:help 'synmaxcol'`

To separate various tenants' data in Redis, we have – like SQL – two options. The first is to maintain independent Redis server instances, and switch out the connection per request. Of course, this has the disadvantage of reducing or eliminating reuse of the connection pool, as the Rails server must open a new set of connections to the Redis server on each web request.

Instead, we recommend using a simple "namespace" concept to identify each key with its tenant.

What's a Redis namespace?  When working with Redis, it's customary to namespace keys by simply prepending their name with the namespace, i.e. namespace:key_name. We often do this manually, and write classes to encapsulate this process, so that other parts of our applications needn't be aware of it.
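
A minimal sketch of that encapsulation (`NamespacedStore` is a made-up class name, and a plain dict stands in for the Redis connection):

```python
# Prepend the tenant's namespace to every key, so callers never deal with
# prefixes themselves and tenants cannot collide in the shared store.
class NamespacedStore:
    def __init__(self, store, namespace):
        self.store = store
        self.namespace = namespace

    def _key(self, name):
        return f'{self.namespace}:{name}'  # i.e. namespace:key_name

    def set(self, name, value):
        self.store[self._key(name)] = value

    def get(self, name):
        return self.store.get(self._key(name))

backend = {}  # the shared "Redis"
acme = NamespacedStore(backend, 'acme')
globex = NamespacedStore(backend, 'globex')
acme.set('user_count', 10)
globex.set('user_count', 99)
assert acme.get('user_count') == 10   # each tenant sees only its own value
assert 'acme:user_count' in backend   # the actual stored key is prefixed
```

With a real Redis client, `set`/`get` would simply delegate to the connection with the prefixed key.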

https://quickleft.com/blog/multi-tenant-applications-redis/

2019-09-26 (permalink)

Started doing burpees -- used this timer web-app to set the pace: https://fitlb.com/tabata-timer

Rounds: 8
Time on: 0:30
Time off: 1:00

For those unfamiliar with what a burpee is:

1) Begin in a standing position.
2) Move into a squat position with your hands on the ground. (count 1)
3) Kick your feet back into a plank position, while keeping your arms extended. (count 2)
4) Immediately return your feet into squat position. (count 3)
5) Stand up from the squat position (count 4)

Well, I initially tried to jump instead of standing up at step 5), but I quickly realized I would not have made it out alive that way, so I opted for something lighter.

Anyway, it was tough, and sometimes I even stopped before the timer, but I guess I shouldn't beat myself up too much about it: in the end, it's the first session.  Let's see if tomorrow I can get out of bed.

2019-09-23 (permalink)

+ Change autohotkey comment string -- `/* .. */` is currently being used

In a multi-tenant app, each request that comes in can be for a separate tenant.  We need two things from the outset:

- a way to determine what tenant a request is for
- and a way to process the request in the context of that tenant.

Single login leading to tenants
def current_tenant
  current_user.tenant
end
Don't change your public API as follows:
/tenant/:tenant_id/model_name/:model_id.
Users will never go off wandering to another tenant, and since a user is linked to one tenant only, all the magic can happen behind the scenes without the user knowing anything about it.

Another approach is to implement a custom piece of middleware that would take care of figuring out the tenant, and change configs (e.g. DB user) accordingly:
module Rack
  class MultiTenant
    def initialize(app)
      @app = app
    end

    def call(env)
      request = Rack::Request.new(env)
      # CHOOSE ONE:
      domain = request.host               # switch on domain
      subdomain = request.subdomain       # switch on subdomain

      @tenant = TENANT_STORE.fetch(domain)

      # Do some configuration switching stuff here ...

      @app.call(env)
    end
  end
end
And the list of tenants could be configured as:

- in memory hash (not dynamic, will require a bounce of the server)
- database (dynamic, but you would still want to cache some data, or it would slow down every request)

https://quickleft.com/blog/multi-tenant-applications-detecting-the-tenant/

How to provide the *right* data for each user, based on their tenant?

- have a single database, and use associations to control the siloing.  Pros: single database (less maintenance); Cons: forget that one node-tenant relation, and all of a sudden users are able to see data from other tenants
- multiple independent copies of the application's database.  Pros: once the connection is switched, siloing is complete and guaranteed; Cons: configuration changes at run-time, possibly invalidating some of the db connection pool techniques implemented by drivers

Also, the database is not the only "feature" you might want to swap out based on the user tenant; for example, if you were using Redis, you might want to try and implement (and then switch) [Redis namespace](https://stackoverflow.com/questions/8614858/namespaces-in-redis)
var fooRedis = new Redis({ keyPrefix: 'foo:' });
fooRedis.set('bar', 'baz');  // Actually sends SET foo:bar baz
https://quickleft.com/blog/multi-tenant-applications-separating-sql-databases/

2019-09-22 (permalink)

* finished reading [The History of the Future: Oculus, Facebook, and the Revolution That Swept Virtual Reality](https://www.amazon.com/History-Future-Facebook-Revolution-Virtual-ebook/dp/B01NATTA7F)

* cleaned/validated matteolandi.net pages -- broken elements inside <head> in particular

* cg: better rm-rmr support for mac and linux outputs: it's funny how the output of the same command can differ across OSes... I mean, what's the point, really?!

* aadbook: added py3 support! yay

~ fzf's install script seems to delete bin/fzf-tmux on Windows/Cygwin only -- on Mac, I ran it, and fzf-tmux was still there

Spent some time trying to get offlineimap to work with Python3, but no luck:

- https://github.com/OfflineIMAP/offlineimap/issues/354
- https://github.com/OfflineIMAP/offlineimap/issues/373

This is going to be annoying, especially when python2 sunsets in January next year...

Started reading John Carmack .plan archive: https://github.com/ESWAT/john-carmack-plan-archive

I have also started learning more about multi-tenant applications, so I am going to start dropping notes here.

A multi-tenant application needs to be carefully designed in order to avoid major problems.  It:

- needs to detect the client (or tenant) that any individual web request is for;
- must separate persistent data for each tenant and its users;
- cannot allow cached data (e.g. views) to leak between tenants;
- requires separate background tasks for each tenant;
- must identify to which tenant each line of log output belongs.

https://quickleft.com/blog/introduction-to-designing-multi-tenant-web-applications/

What’s the definition? (of a multi-tenant application)

We don’t think there is a clear, concise definition.  In broad strokes, y'know it when y'see it.  You might say that the application is multi-tenant if the architects ever had to decide between deploying a number of separate (not pooled) instances of the app, or deploying a single instance.

Or you might say it has to do with data segregation: the idea that there are some datasets that simply shouldn't touch, shouldn't interact, and shouldn't be viewable together in any combination.  This seems to be the “definition” that most closely fits the examples we’ve come up with, and the spirit of the term (as we understand it).

Set up different domains per client:

- client1.domain.com
- client2.domain.com
- ...

https://quickleft.com/blog/what-is-a-multi-tenant-application/

2019-09-18 (permalink)

+ cg rm -> rmdir

rm '.tmux-plugins/tmux-fpp/'
rm: cannot remove '.tmux-plugins/tmux-fpp/': Is a directory

~ git co bin/fzf-tmux # XXX why on earth would the install script delete fzf-tmux

2019-09-09 (permalink)

Started looking into rewriting plan-rss in CL.  I found xml-emitter, but I am afraid I will have to submit a couple of PRs for it to be able to generate valid RSS2.0

+ xml-emitter: missing support for <guid> isPermaLink

+ xml-emitter: missing CDATA support

+ xml-emitter: Missing atom:link with rel="self"

2019-09-08 (permalink)

* finished reading [Hackers & Painters: Big Ideas from the Computer Age](https://www.amazon.com/Hackers-Painters-Big-Ideas-Computer-ebook/dp/B0026OR2NQ)

+ aadbook does not work with python3

? offlineimap (the version I am manually building) does not work with python3

~ cg: add an option to use a single guesser only -- that would help me replace urlview completely

TIL: you can stop Vim from incrementing numbers in octal notation (i.e. on C-A) with the following:
:set nrformats-=octal

2019-09-07 (permalink)

* Figured out why my .plan was not getting updated anymore -- it turns out the two cron jobs I had created (one for .plan, the other for plan.xml) were sharing the same name, so the second one (for plan.xml) would override the first one

* Changed some of my ansible scripts to run specific commands (i.e. init of avatar or init download of .plan file) only if the target file does not exist already -- if it does, the cron job would take care of updating it in due time: https://raymii.org/s/tutorials/Ansible_-_Only_if_a_file_exists_or_does_not_exist.html

* Copying more entries from the old journal into this file

~ I am still having second thoughts on whether updating old .plan entries should force an update on the relevant plan.xml ones (i.e. republish them with new content).

2019-09-06 (permalink)

* Copying more entries from the old journal into this file

~ .plan does not automatically sync on my phone, unless I open Dropbox first

2019-09-05 (permalink)

* enhance install.sh to install the various rubygems I am using

* enhance install.sh to abort when running bogus version of rubygems

* plan-rss: better item pubdate management: 1) new item: use date of the plan entry, 2) content differs: use $NOW, 3) re-use last pubdate

+ .plan (and plan.xml) did not update overnight.. again!

+ plan-rss: publish last 10 recently updated entries

2019-09-04 (permalink)

* Started caching gravatar locally, on my box; I hoped this could fix the problem with my RSS channel image not being rendered, but it did not work -- still no image on Feedly, nor in the other RSS client I am using on Mac OS

* Fixed my rubygems cygwin env

When trying to install any gem, I would be presented with the following error:
$ gem install tmuxinator
ERROR:  While executing gem ... (NameError)
    uninitialized constant Gem::RDoc
After some Googling around, I landed on this GitHub [page](https://github.com/rubygems/rubygems/issues/2483) where they suggest manually editing the 'rubygems/rdoc.rb' file, moving the RDoc logic inside the begin/rescue block -- where it should be:
vim /usr/share/rubygems/rubygems/rdoc.rb
# move the last line inside begin/rescue block
Everything started working again after that

+ Add step to install.sh to guard against this rubygems/rdoc weirdness

+ plan-rss: plan.xml did not update overnight -- manually running update-plan-xml did the trick though

I cannot get any RSS client I found to parse the image that I added to my plan.xml file, but I checked the specification, and it should be correct:

- Someone on SO pointing at the `<image>` tag in the specification: https://stackoverflow.com/questions/12524435/supply-an-image-thumbnail-with-my-rss-feed
- Codinghorror's feed: https://feeds.feedburner.com/codinghorror -- it seems to be using `<image>`, but somehow his image is popping up, mine not
- Another feed (taken from NetNewsWire) where image is being used: https://cate.blog/feed/
- Some guy on SO suggesting that we use favicon.ico: https://stackoverflow.com/questions/23006691/adding-favicon-to-rss-feed
- GitHub's repo feed: https://github.com/iamFIREcracker/cg/commits/master.atom -- it's ATOM, and when adding this to NetNewsWire, it seems to be using the favicon.ico

* Opened a ticket to NetNewsWire to see if RSS2.0 channel's image is supported or not -- https://github.com/brentsimmons/NetNewsWire/issues/968

2019-09-03 (permalink)

~ plan-rss: remove old pub-date logic

Regarding the image not showing on Feedly: could it be that we are not actually pointing to a proper image resource, but rather to an endpoint that will generate it?!

~ tmuxinator on windows: https://github.com/tmuxinator/tmuxinator/issues/490

2019-09-02 (permalink)

* Backed up all the .env.properties into Dropbox

* plan-rss: populate <generator>

* plan-rss: fucking fixed (I guess) new-line madness

* plan-rss: pubDate matching plan entry's date

- XML-escape all your content
- Wrap it into `<pre>` ... `</pre>`
- Wrap it into `<![CDATA[` .. `]]>`
- Voila'!

No need to append `<br>`s to each line -- `<pre>` will do the trick!
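
A minimal Python sketch of that recipe (`rss_description` is a made-up helper name, just to illustrate):

```python
# XML-escape the text, wrap it in <pre> to preserve newlines, then wrap
# the whole thing in CDATA so parsers leave it alone.
from xml.sax.saxutils import escape

def rss_description(text):
    return '<![CDATA[<pre>' + escape(text) + '</pre>]]>'

print(rss_description('Tom & Jerry <3'))
# <![CDATA[<pre>Tom &amp; Jerry &lt;3</pre>]]>
```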

~ plan-rss: populated <image> but feedly still does not show it

? Cygwin + Tmux weirdnesses: if I don't force tmux to set TERM=xterm-256color, everything will go black and white -- it turns out TERM is being set to screen, and not screen-256color

? Tmux weirdnesses: if I don't force tmux to set TERM=xterm-256color, tmux will set it to screen-256color which doesn't support italics well

2019-09-01 (permalink)

* Started moving my old journal entries over to my .plan.work file

Regarding CHERE_INVOKING, on Cygwin: it turns out that the cygwin default /etc/profile checks for an environment variable named CHERE_INVOKING and inhibits the change of directory if set.  So by default, when sourcing /etc/profile, cygwin would take you to your home directory, unless CHERE_INVOKING=1 is set -- https://superuser.com/a/345967

* Implemented plan-rss script -- in bash for now

And there was me thinking it would be easy to manually generate RSS:

- XML escape VS CDATA... it turns out you need both: XML escape because the output has to be valid HTML, CDATA if you want the parser to ignore non-existing HTML tags
- Channel's description has to be plain text -- item's description has to be HTML compatible

+ plan-rss: new-fucking-lines

+ plan-rss: publish latest 5 entries only

+ plan-rss: rewrite in CL

2019-08-31 (permalink)

* added Makefile and install.sh support for cb

* removed tmux,winpty from PATH -- now installing it into ~/local/bin

* fucking managed to run fzf on Cygwin!

Created an `fzf` wrapper that starts `fzf` via cmd.exe on Windows:
#!/usr/bin/env bash

EXE=$(realpath ~/.vim/pack/bundle/start/fzf/bin/fzf)
if ! hash cygpath 2>/dev/null; then
  "$EXE" "$@"
else
  stdin=$(tempfile .fzf.in)
  stdout=$(tempfile .fzf.out)
  wrapper=$(tempfile .fzf.sh)
  trap "rm -f $wrapper $stdout $stdin" EXIT

  # save stdin into a temporary file
  cat - > "$stdin"

  # create wrapper
  cat > "$wrapper" <<EOF
(cat "$stdin") | "$EXE" $@ > $stdout
EOF

  # run fzf through cmd.exe -- stolen from: https://github.com/junegunn/fzf.vim
  cmd.exe /C 'set "TERM=" & start /WAIT sh -c '"$wrapper"
  cat "$stdout"
fi
And updated `fzf-tmux` as follows -- no need to run tmux if we would be spawning a `cmd.exe` anyway
EXE=~/.vim/pack/bundle/start/fzf/bin/fzf-tmux
if [ -f ~/.vim/pack/bundle/start/fzf/bin/fzf.exe ]; then
  # While on Cygwin, fzf-tmux would normally open a tmux pane, which would open
  # a cmd.exe (yes, because on cygwin fzf has to be run through a cmd.exe).
  #
  # That's ridiculous, let's skip the tmux part completely
  EXE=fzf
fi

$EXE "$@"

* fucking figured out (well, partially) how to make tmux preserve the current path when splitting panes: it appears I have to export CHERE_INVOKING=1.  That also fixes a problem I have always had when running tmuxinator on Windows, where it would not always move to the expected directory

+ what the hell is CHERE_INVOKING, and why do I need to set it on Windows, but not on MacOS/Linux

+ backup .env.properties files

Mutt was behaving weirdly when run inside tmux, and googling around I discovered that it's not recommended to set TERM to anything other than screen/screen-256color while inside tmux: https://superuser.com/questions/844058/tmux-mutt-not-redrawing.  So I disabled the lines where I override TERM in my .tmux.conf; let's see what happens.

2019-08-30 (permalink)

* added Makefile and install.sh support for br

* added Makefile and install.sh support for cb

* removed a bunch of wrappers from ~/bin (br, cb, urlview, mutt-notmuch-py)

+ fzf ain't working on cygwin: https://github.com/junegunn/fzf/wiki/Windows#fzf-outputs-character-set-not-supported-when-term-environment-variable-is-set and https://github.com/junegunn/fzf/wiki/Cygwin

+ try to ditch the following wrappers: fzf-tmux -- find a way to install them locally (make install PREFIX=..., or the same for python crap)

+ avoid to update PATH for tmux, and winpty

2019-08-28 (permalink)

* Created [cb.vim](https://github.com/iamFIREcracker/cb.vim) -- still a WIP, but it's working, with my set up at least

2019-08-27 (permalink)

* Added new reading.html page to matteolandi.net, listing all the books I remember having read

* Changed ansible scripts to re-use letsencrypt certificates for the docker registry -- so I don't have to re-generate self signed certificates every year

2019-08-25 (permalink)

* Suppressed CR/LF warnings raised when working on Unix Git repos from Windows -- https://stackoverflow.com/questions/5834014/lf-will-be-replaced-by-crlf-in-git-what-is-that-and-is-it-important

git config core.autocrlf input

* Figured out a way to prevent `git up` from updating the dotfiles submodule while on the parent repo, my-env: add `update = none` to the sub-module entry inside .gitmodules

[submodule "dotfiles"]
        path = dotfiles
        url = git@github.com:iamFIREcracker/dotfiles.git
        ignore = dirty
        update = none
Why would I want that, you might ask?  Well, I usually keep the dotfiles submodule up to date already (e.g. `cd dotfiles; git up`), so when I decide to sync my-env, I don't want `git up` to accidentally check out an older version of dotfiles.
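For the record, the same entry can also be written without opening an editor -- `git config -f` operates on an arbitrary file, .gitmodules included:

```shell
# same effect as editing .gitmodules by hand (run from the repo root)
git config -f .gitmodules submodule.dotfiles.update none
```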

* Created tmux binding to spawn a command/application in a split pane -- similar to how `!' on mutt lets you spawn an external command

* finished reading [Black Box Thinking: Why Most People Never Learn from Their Mistakes--But Some Do](https://www.amazon.com/Black-Box-Thinking-People-Mistakes-But/dp/1591848229)

2019-08-24 (permalink)

* Cleaned up configs for mutt and friends (offlineimap, msmtp)

* Restored Ctrl+Enter behavior on Terminal.app

Change some terminal mappings

- Ctrl-Enter key from ✠ to ◊
- Shift+Enter key from ✝ to Ø

Why?  Because I recently switched from iTerm2 to Terminal.app, and
I could not find a way to create custom shortcuts (e.g. to generate the
maltese cross on Ctrl+Enter).

I then tried Karabiner-Elements, but it doesn't seem to support
Unicode characters, so the only solution I found was:

- Figure out a new char to use (the lozenge), whose Option+key shortcut
was known (Option+Shift+v)
- Tell Karabiner-Elements to generate that sequence when Ctrl+Enter is
pressed
- Change all the other settings to use the lozenge instead of the
maltese cross

Too bad I could not use a more meaningful character, like the carriage
return one.

PS. https://pqrs.org/osx/karabiner/json.html#root-data-structure
Now that these key mappings have been restored too, Terminal.app is pretty much usable; I do not see any performance improvements (when scrolling through Tmux + Vim panes), and true color support is not there, but that's OK, I will survive I guess

2019-08-22 (permalink)

Still fighting with non-blocking vim channels, and how to close stdin making sure that all the pending write operations are flushed first

2019-08-21 (permalink)

* Updated async.vim MR to wait for transmit buffer to be empty before closing stdin

* Asked on vim_dev about a weird ch_sendraw behavior -- not sure if it's expected behavior, or a bug

Hello everyone,

ch_sendraw seems to fail when sending large chunks of data to a job started in non-blocking mode.  Is this expected?

Steps to reproduce:

- Copy the following into a file, and source it from vim

let args = ['cat']
let opts = {
      \ 'noblock': 1
      \}

let job = job_start(args, opts)
let data = repeat('X', 65537)

call ch_sendraw(job, data)
call ch_close_in(job)

Expected behavior:

- Nothing much

Actual behavior:

- E630: channel_write_input(): write while not connected

If, on the other hand, I initialize data with 65536 bytes instead of 65537, then no error is thrown.

In case you needed it, here is the output of `vim --version`

$ vim --version
VIM - Vi IMproved 8.1 (2018 May 18, compiled Aug 16 2019 02:18:16)
macOS version
Included patches: 1-1850
Compiled by Homebrew
Huge version without GUI.  Features included (+) or not (-):
+acl               -farsi             -mouse_sysmouse    -tag_any_white
+arabic            +file_in_path      +mouse_urxvt       -tcl
+autocmd           +find_in_path      +mouse_xterm       +termguicolors
+autochdir         +float             +multi_byte        +terminal
-autoservername    +folding           +multi_lang        +terminfo
-balloon_eval      -footer            -mzscheme          +termresponse
+balloon_eval_term +fork()            +netbeans_intg     +textobjects
-browse            +gettext           +num64             +textprop
++builtin_terms    -hangul_input      +packages          +timers
+byte_offset       +iconv             +path_extra        +title
+channel           +insert_expand     +perl              -toolbar
+cindent           +job               +persistent_undo   +user_commands
-clientserver      +jumplist          +postscript        +vartabs
+clipboard         +keymap            +printer           +vertsplit
+cmdline_compl     +lambda            +profile           +virtualedit
+cmdline_hist      +langmap           -python            +visual
+cmdline_info      +libcall           +python3           +visualextra
+comments          +linebreak         +quickfix          +viminfo
+conceal           +lispindent        +reltime           +vreplace
+cryptv            +listcmds          +rightleft         +wildignore
+cscope            +localmap          +ruby              +wildmenu
+cursorbind        +lua               +scrollbind        +windows
+cursorshape       +menu              +signs             +writebackup
+dialog_con        +mksession         +smartindent       -X11
+diff              +modify_fname      -sound             -xfontset
+digraphs          +mouse             +spell             -xim
-dnd               -mouseshape        +startuptime       -xpm
-ebcdic            +mouse_dec         +statusline        -xsmp
+emacs_tags        -mouse_gpm         -sun_workshop      -xterm_clipboard
+eval              -mouse_jsbterm     +syntax            -xterm_save
+ex_extra          +mouse_netterm     +tag_binary
+extra_search      +mouse_sgr         -tag_old_static
  system vimrc file: "$VIM/vimrc"
    user vimrc file: "$HOME/.vimrc"
2nd user vimrc file: "~/.vim/vimrc"
      user exrc file: "$HOME/.exrc"
      defaults file: "$VIMRUNTIME/defaults.vim"
  fall-back for $VIM: "/usr/local/share/vim"
Compilation: clang -c -I. -Iproto -DHAVE_CONFIG_H   -DMACOS_X -DMACOS_X_DARWIN  -g -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
Linking: clang   -L. -fstack-protector-strong -L/usr/local/lib -L/usr/local/opt/libyaml/lib -L/usr/local/opt/openssl/lib -L/usr/local/opt/readline/lib  -L/usr/local/lib -o vim        -lncurses -liconv -lintl -framework AppKit  -L/usr/local/opt/lua/lib -llua5.3 -mmacosx-version-min=10.14 -fstack-protector-strong -L/usr/local/lib  -L/usr/local/Cellar/perl/5.30.0/lib/perl5/5.30.0/darwin-thread-multi-2level/CORE -lperl -lm -lutil -lc  -L/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/config-3.7m-darwin -lpython3.7m -framework CoreFoundation  -lruby.2.6

Thanks,
Matteo

* Created new async.vim MR to workaround ch_sendraw erroring out when data is bigger than 65536 bytes -- in this case, we would fall back to manual data chunking

2019-08-14 (permalink)

* Flipped the order of my plan notes!

" Select all the end of lines *not* preceding a new day note
/\v\n(# [0-9]{4})@!
" Change all the end of lines with a § -- unlikely to use character
:%s//§
" Sort all the collapsed notes (! is for reverse :help sort)
:'<'>sort!
" Change § back to \r
:%s/§/\r

2019-08-12 (permalink)

~ Use gx to open the URL under the cursor

2019-08-10 (permalink)

* Changed my clipboard vim wrappers to copy asynchronously, using async.vim; it was not working initially because the background job never exited (it kept listening on stdin), but by hacking the library (basically adding support for sending data and closing stdin) I eventually managed to make it work.

? cb.vim -- https://vim.fandom.com/wiki/Using_the_Windows_clipboard_in_Cygwin_Vim

2019-08-03 (permalink)

I decided to give Terminal.app a try (instead of iTerm2) and one thing I immediately noticed was the lack of True Color support -- Vim would start in almost B&W, and it took me a good 15 minutes to figure out what was wrong

2019-07-20 (permalink)

* finished reading [The Peter Principle: Why Things Always Go Wrong](https://www.amazon.it/Peter-Principle-Things-Always-Wrong/dp/0062092065)

2019-07-07 (permalink)

* opened a ticket to `goobook` to see why `goobook add` is not working anymore: https://gitlab.com/goobook/goobook/issues/82

* fixed a bug with `cg` preventing all the guessers from being run -- it got recently introduced when we kept on processing guessers even though a command got already guessed

* enhanced the personal budget Google Sheets to include expense source (cash, debit, credit...)

2019-06-19 (permalink)

* cleaning up my vimrc -- from https://www.reddit.com/r/vim/wiki/vimrctips

`nomodeline` -- it's dangerous
No more `nocompatible` -- it's useless
Don't use shortnames
/\v<\w\w\=   " should match ts=
/\v<\w\w\w\= " should match sts=
Allow your functions to `abort` upon encountering an error -- this uses a zero-width negative look-behind assertion... https://stackoverflow.com/a/10328015
/\v\s+function.*(abort.*)@<!$  " should match any function definition without trailing `abort`

2019-06-17 (permalink)

* cg: added few guessers (git-st -> git-add/reset/rm, rm -> rmr)

* cg: redirect stderr as well 2>&1

* cg: keep on processing guessers, even after a match -- this makes it possible for users to define multiple suggestions for the same input line

* cg: add support for guessers which return multiple suggestions

? tmux-externalpipe: spawn a new pane for the selection

2019-06-05 (permalink)

+ ?? .file -> rm....

+ rm "is directory" -> rm -rf...

2019-06-01 (permalink)

* published new repo / matteolandi.net page for: [tmux-externalpipe](https://github.com/iamFIREcracker/tmux-externalpipe) -- with it I could deprecate tmux-urlview / tmux-fpp, and hopefully simplify `cg` too

2019-05-21 (permalink)

* cg: stop `fzf` from sorting `cg`'s output

* released cg 0.2.0

2019-05-20 (permalink)

* cg: fix bug with downloader -- it would blow up in case ./bin was missing

* don't output duplicate suggestions

* add page for `cg` on matteolandi.net

2019-05-19 (permalink)

* implemented some cg guessers

* fixed a bug with the cg build script where, when bumping a version, `git rev-list` would blow up because it was unable to find the base version

* released cg 0.1.0

2019-05-18 (permalink)

* got `cg` to build automatically (for Linux, Mac OS) using travis-ci

* calculate version at build time, and use `git rev-list %s..HEAD --count` to append a build number to the base version

Windows builds are kind of working too; list of caveats:

- set `language: shell` or it will hang
- created a custom roswell install script to fetch binaries somewhere, add them to $PATH, and then invoke the official roswell CI script -- by then, ros is already available, so it will set it up and not compile it: https://gist.github.com/iamFIREcracker/3564db5a24e296e087da49513d6cd00c

2019-05-16 (permalink)

* added command line opts to cg: -h/--help, and -v/--version

* added a `download` script to `cg`, to download the latest binary available (Linux, Mac OS, Windows)

+ cg: git branch --set-upstream-to=origin/<branch> opts

2019-05-15 (permalink)

I managed to get QUICKLISP automatically installed as part of the build process, however, I am not 100% sure that's the way to go; some users on Reddit pointed out that it might not be ideal for a build script to automatically install dependencies under someone's HOME.

Next experiments:

+ Try to get `cg` to build using travis-ci

+ Add a `./download` script to fetch the latest release

2019-05-13 (permalink)

All right, I think I figured out a way to avoid asking users to symlink/clone my repo inside ~/quicklisp/local-projects; putting the following at the top of my build.lisp file, right above (ql:quickload :cg), should do the trick:
;; By adding the current directory to ql:*local-project-directories*, we can
;; QL:QUICKLOAD this without asking users to symlink this repo inside
;; ~/quicklisp/local-projects, or clone it right there in the first place.
(push #P"." ql:*local-project-directories*)
Also, about getting "deploy" automatically installed: I guess I can throw in there a (ql:quickload :deploy) and get the job done, right?

Still need to figure out how easy it would be to get QUICKLISP automatically installed; well, maybe I found a way: https://stackoverflow.com/questions/40903944/how-to-do-dynamic-load-load-in-common-lisp

2019-05-12 (permalink)

* cg: documentation

2019-05-10 (permalink)

* cg: documentation and refactoring

I feel like `cg` is ready for people to give it a try, so maybe I will post it on Reddit... maybe..

2019-05-09 (permalink)

* cg: fzf script to enable multi-select

2019-05-08 (permalink)

* change `cg`'s Makefile not to specify `sbcl` full path -- let the SHELL figure it out, as the path might be different based on the OS

2019-05-06 (permalink)

* more `cg` guessers: 'git branch -D', pruning '[gone]' branches, `sudo apt install`

? cg: nvm use --delete-prefix v8.16.0

2019-05-05 (permalink)

* finally got around to implement [cg](https://github.com/iamFIREcracker/cg), a command line utility (written in Common Lisp) that tries to extract commands to run from its input -- e.g. you pipe `git log -n ..` into it, and it will recommend you to `git show` a specific commit. It parses definitions from a RC file (err, a lisp file), and ships with FZF and TMUX plugins

2019-05-04 (permalink)

Participated in Google Code Jam Round1-C but did not make it to the second one:

- 1st problem: misread the description and took a pass on it -- even though, in the end, it turns out I should have started with that
- 2nd problem: the interactive one; since I have never done an interactive problem, I took a pass on it too
- 3rd problem: it did not seem too complicated at first (especially if you were aiming for completing the first part), but somehow I bumped into recursion errors and could not submit a working solution on time

Lesson learned:

- Google Code Jam is tough, and you better prepare for it... in advance
- You gotta read carefully man!

2019-05-01 (permalink)

* bumped aadbook dependencies -- GitHub sent me an alert that one of them contained a vulnerability

* set up keyring to play well with `twine` -- https://pypi.org/project/twine/

* finished my urlview vim wrapper -- https://github.com/iamFIREcracker/dotfiles/commit/722022590003dc2e78a4a15b5f7c09e60400d451

2019-04-25 (permalink)

Fucking figured out why [bracketed paste mode](https://cirw.in/blog/bracketed-paste) had stopped working sometime during the last 3/4 months... finally!  Long story short, readline 8.0 was released this January, and with it shipped a fix for https://lists.gnu.org/archive/html/bug-bash/2018-01/msg00097.html, a fix which I think broke something somewhere else.  Anyway, if your bash setup includes a custom prompt command - set via PROMPT_COMMAND - that outputs some text with newline characters in it, when bracketed paste mode is on and you try to run a pasted command, chances are you would end up running not just the command you pasted but also one or more empty commands as well (depending on the number of newline chars that are generated inside the custom prompt command).

   matteolandi at hairstyle.local in /Users/matteolandi
   $ bind 'set enable-bracketed-paste on'

   matteolandi at hairstyle.local in /Users/matteolandi
   $ pwd | pbcopy

   matteolandi at hairstyle.local in /Users/matteolandi
   $ echo "CMD+V + Enter"
   CMD+V + Enter

   matteolandi at hairstyle.local in /Users/matteolandi
   $ /Users/matteolandi
   -bash: /Users/matteolandi: Is a directory

   matteolandi at hairstyle.local in /Users/matteolandi

   matteolandi at hairstyle.local in /Users/matteolandi
   126 >

Luckily for me I was able to fix my bashrc setup and move my custom prompt command `echo` calls into `$PS1` (see: https://github.com/iamFIREcracker/dotfiles/commit/4f683bf10bac71ecd06b28f8d0db76591f53f273#diff-85cf9b51417c7cfb8766aa6b56f7edb9R745), but I wonder if this wasn't a regression actually -- to be honest I don't know if it's expected to generate output from custom prompt commands or not.

* added initial support for mintty terminal to vitality.vim

2019-04-21 (permalink)

Use `hexdump(1)` to view text as binary data (use `-C` to display original strings alongside their binary representation).

And why would I care so much about viewing text as binary data?  Because I am trying to understand bracketed paste mode, and figure out if the extra newline when pasting text into the terminal is intentional or a bug (readline? iTerm2? Terminal.app?)
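For instance, the `-C` flag prints the hex bytes side by side with the ASCII:

```shell
# each byte of the input, in hex, with the printable characters on the right
printf 'hello\n' | hexdump -C
# 00000000  68 65 6c 6c 6f 0a ...  |hello.|
```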

2019-04-16 (permalink)

* fucking fixed iterm2/tmux/term-mode weirdness -- vitality.vim lacked support for [terminal-mode](https://github.com/sjl/vitality.vim/pull/42)

? vitality for mintty -- mintality

+ read lispwords from project file

2019-04-14 (permalink)

Had some *fun* with [sic](https://tools.suckless.org/sic/), and implemented a wrapper to:

- create a SSL proxy using `socat` -- https://unix.stackexchange.com/questions/491846/using-socat-to-make-a-secure-tcp-connection-to-an-irc-server/491940#491940
- keep track of input history using `rlwrap`
- store IRC logs using `tee`
- highlight my name with `gsed`

2019-04-13 (permalink)

* finally got gcj-qualifications-3-criptopangrams accepted by the grader -- it was possibly blowing up because of memory issues before, which I fixed by reading the ciphertext one piece at a time (i.e. multiple `(read)`) instead of read-line + split + parse

If a codejam solution is RE-ing out, but cannot find any apparent flaw in the algorithm, try to invoke the GC at the end of each case -- `(sb-ext:gc)`

2019-04-06 (permalink)

* implemented Clojure's threading macro, in Common Lisp -- https://github.com/iamFIREcracker/adventofcode/blob/099925161207767e785647fb3747f0b5ad6c46f2/utils.lisp#L262-L297

* refactored 2018 day 11 to use Summed-area table!

2019-04-04 (permalink)

* implemented Summed-area Table data-structure, in Common Lisp -- https://github.com/iamFIREcracker/adventofcode/commit/92519ded8e7e544acda1b12e0aaed942488b6834
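Not the repo's actual code, but the gist of a summed-area table fits in a few lines (an untested sketch; GRID is assumed to be a 2D array of numbers):

```lisp
(defun make-sat (grid)
  "Summed-area table: (AREF SAT I J) = sum of GRID over rows 0..I, cols 0..J."
  (let* ((rows (array-dimension grid 0))
         (cols (array-dimension grid 1))
         (sat (make-array (list rows cols) :initial-element 0)))
    (dotimes (i rows sat)
      (dotimes (j cols)
        ;; inclusion-exclusion: add the row/column prefixes, subtract the overlap
        (setf (aref sat i j)
              (+ (aref grid i j)
                 (if (plusp i) (aref sat (1- i) j) 0)
                 (if (plusp j) (aref sat i (1- j)) 0)
                 (if (and (plusp i) (plusp j))
                     (- (aref sat (1- i) (1- j)))
                     0)))))))

(defun region-sum (sat top left bottom right)
  "Sum of the rectangle (TOP,LEFT)-(BOTTOM,RIGHT), inclusive, in four lookups."
  (+ (aref sat bottom right)
     (if (plusp top) (- (aref sat (1- top) right)) 0)
     (if (plusp left) (- (aref sat bottom (1- left))) 0)
     (if (and (plusp top) (plusp left)) (aref sat (1- top) (1- left)) 0)))
```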

2019-03-24 (permalink)

* implemented DEFUN/MEMO macro to define a memoized function

Tried to see if I could somehow expose the cache used (i.e. replacing `LET` with `DEFPARAMETER`) but could not find a way to programmatically generate a name as the concatenation of the function name, and a `/memo` literal.  Anyway, I am happy with it already..
(defmacro defun/memo (name args &body body)
  (with-gensyms (memo result result-exists-p)
    `(let ((,memo (make-hash-table :test 'equalp)))
       (defun ,name ,args
         (multiple-value-bind (,result ,result-exists-p)
             (gethash ,(first args) ,memo)
           (if ,result-exists-p
             ,result
             (setf (gethash ,(first args) ,memo) (progn ,@body))))))))
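For posterity: the name generation I could not figure out can plausibly be done with INTERN at macro-expansion time -- an untested sketch (DEFUN/MEMO* is a made-up name, and the hygiene of RESULT/RESULT-EXISTS-P is ignored for brevity):

```lisp
(defmacro defun/memo* (name args &body body)
  ;; build the symbol NAME/MEMO in the current package, so the cache is
  ;; reachable (and inspectable) from outside the macro
  (let ((memo (intern (format nil "~a/MEMO" (symbol-name name)))))
    `(progn
       (defparameter ,memo (make-hash-table :test 'equalp))
       (defun ,name ,args
         (multiple-value-bind (result result-exists-p)
             (gethash ,(first args) ,memo)
           (if result-exists-p
               result
               (setf (gethash ,(first args) ,memo) (progn ,@body))))))))
```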

2019-03-23 (permalink)

* changed PARTIAL-1 to support nested substitution via SUBST

* added PARTIAL-2 (similar to PARTIAL-1, but with the returned function accepting 2 arguments instead of 1)

I am pretty excited about what I came up with. At first I got stuck when trying to use SUBST because I couldn't get nested backticks/commas to work as expected; then I realized I could do all the transformation inside the macro, and only put inside backticks the transformed argument list
(defmacro partial-1 (fn &rest args)
  (with-gensyms (more-arg)
    (let ((actual-args (subst more-arg '_ args)))
      `(lambda (,more-arg)
          (funcall (function ,fn) ,@actual-args)))))
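Usage, assuming SUBST fills the `_` placeholder inside the argument list (an untested sketch):

```lisp
;; _ marks the slot where the single argument will be substituted
(funcall (partial-1 + 1 _) 41)          ; i.e. (+ 1 41)
(mapcar (partial-1 * 2 _) (list 1 2 3)) ; i.e. double each element
```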

2019-03-21 (permalink)

* factored out A-STAR from 2018 22

? DESTRUCTURING-BIND for COMPLEX

? how to destructuring-bind an espression, ignoring specific elements? https://stackoverflow.com/questions/25463763/any-good-way-to-declare-unused-variables-in-destructuring-bind/25464689#25464689)

2019-03-20 (permalink)

+ gotta give vim-sexp a try -- and tpope's [mappings](https://github.com/tpope/vim-sexp-mappings-for-regular-people) in particular

2019-03-19 (permalink)

* refactored 2018 22

- Use COMPLEX instead of (LIST X Y)
- Use CASE / ECASE instead of ASSOC
- Add a poor man's heap-queue implementation to utils
- Review a-star implementation -- soon I will factor out a new function for it

The execution time dropped from ~40 seconds to ~5

In the process I have also added HASH-TABLE-INSERT to utils, and used that wherever I saw fit

2019-03-17 (permalink)

? change RING related methods to use DEFINE-MODIFY-MACRO -- https://blog.cneufeld.ca/2014/01/the-less-familiar-parts-of-lisp-for-beginners-define-modify-macro/, and https://stackoverflow.com/questions/31587429/define-modify-macro-with-operator-argument

2019-03-16 (permalink)

~ review vim-yankring config -- I have a custom branch to override some mappings which hopefully could go; also, it stomps on vim-surround mappings too

Extracts from https://practicaltypography.com/why-racket-why-lisp.html:

Everything is an expression. [Lisps] Most programming languages are a combination of two syntactically distinct ingredients: expressions (things that are evaluated to produce a value) and statements (things that express an action). For instance, in Python, `x = 1` is a statement, and `(x + 1)` is an expression.

Every expression is either a single value or a list. [Lisps] Single values are things like numbers and strings and hash tables. (In Lisps, they’re sometimes called atoms.) That part is no big deal.

Macros. [Racket] Also known in Racket as syntax transformations. Serious Racketeers sometimes prefer that term because a Racket macro can be far more sophisticated than the usual Common Lisp macro.

2019-03-14 (permalink)

? read about Y-combinator: https://rosettacode.org/wiki/Y_combinator#Common_Lisp -- I am pretty sure there are better resources out there

2019-03-13 (permalink)

* completed advent of code 2017 day23-3 -- and with it AoC 2017!!!

Oh boy, I am done reverse-engineering assembly code -- probably because I am bad at it.  A few lessons learned though:

- Use TAGBODY to create CL code matching your input
- When iterating over big ranges of numbers (like here, where we were working in the range 108100 - 125000), one could try to make such range smaller, hopefully making it easier to figure out what the code is actually doing

Enjoy my abomination -- it's ugly, still it helped me realize we were counting non-prime numbers
(defun solve-part-2 ()
  (let ((b 0) (c 0) (d 0) (e 0) (f 0) (g 0) (h 0))
    (tagbody
      (setf b 81
            c b)
      part-a
        (prl 'part-a)
        (setf b (* b 100)
              b (- b -100000)
              c b
              c (- c -17000))
      part-b
        (prl 'part-b b c (break))
        (setf f 1
              d 2)
      part-e
        (prl 'part-e)
        (setf e 2)
      part-d
        (prl 'part-d d e b)
        (setf g (- (* d e) b))
        (if (not (zerop g))
          (go part-c))
        ; (prl 'resetting-f (break))
        (prl 'resetting-f)
        (setf f 0)
      part-c
        (prl 'part-c d e b)
        (setf e (- e -1)
              g e
              g (- g b))
        (if (not (zerop g))
          (go part-d))
        (setf d (- d -1)
              g d
              g (- g b))
        (if (not (zerop g))
          (go part-e))
        (if (not (zerop f))
          (go part-f))
        ; (prl 'setting-h (break))
        (prl 'setting-h)
        (setf h (- h -1))
      part-f
        (prl 'part-f b c)
        (setf g b
              g (- g c))
        (if (not (zerop g))
          (go part-g))
        (return-from solve-part-2 (prl b c d e f g h))
      part-g
        (prl 'part-g)
        (setf b (- b -17))
        (go part-b))))
Also, this! from [reddit](https://www.reddit.com/r/adventofcode/comments/7lms6p/2017_day_23_solutions/drnjwq7/)
The key to understanding what this code does is starting from the end and working backwards:
>
- If the program has exited, g had a value of 0 at line 29.
- g==0 at line 29 when b==c.
- If g!=0 at line 29, b increments by 17.
- b increments no other times on the program.
- Thus, lines 25 through 31 will run 1000 times, on values of b increasing by 17, before the program finishes.
>
So, given that there is no jnz statement between lines 25 and 28 that could affect things:
>
- If f==0 at line 25, h will increment by 1.
- This can happen once and only once for any given value of b.
- f==0 if g==0 at line 15.
- g==0 at line 15 if d*e==b.
- Since both d and e increment by 1 each in a loop, this will check every possible value of d and e less than b.
- Therefore, if b has any prime factors other than itself, f will be set to 1 at line 25.
>
Looking at this, then h is the number of composite numbers between the lower limit and the upper limit, counting by 17.

2019-03-12 (permalink)

* completed advent of code 2017 day25-1 -- it was painful to parse the input, and as many did in the megathread, I should have parsed it manually, hopefully scored some points, and then implemented the input parser; but you know, I am late for the leaderboard anyway, so...

2019-03-11 (permalink)

* completed advent of code 2017 day24-1

* completed advent of code 2017 day24-2

+ `goobook-add` is not working anymore -- it ends up creating an account having name and email both set to the email address: `goobook-add Matteo Landi matteo@matteolandi.net` creates a new account with name: matteo@matteolandi.net, and address: matteo@matteolandi.net

~ in CL, what's the idiomatic way of filtering a list, keeping only the elements matching a specific value? `(remove value list :test-not 'eql)`? `(remove-if-not (lambda (x) (eql value x)) list)`?
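For the record, both spellings do the job (a sketch; note that the standard deprecates :TEST-NOT, which is a point in favor of the REMOVE-IF-NOT form):

```lisp
(remove 3 '(1 3 2 3) :test-not #'eql)             ; keeps only the 3s
(remove-if-not (lambda (x) (eql x 3)) '(1 3 2 3)) ; same, without :test-not
```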

2019-03-10 (permalink)

* finally solved all the vim-fugitive issues I was facing since the last upgrade; it turns out some plugin functions were deprecated, and other plugins (e.g. fugitive-gitlab) were still using them -- after I upgraded those plugins as well, everything started to work just fine

* completed advent of code 2017 day22-1

* completed advent of code 2017 day22-2

* completed advent of code 2017 day23-1 -- reused code from day18 and managed to quickly get a star for part1; for part2 it seems like I need to reverse engineer that code, which I don't want to do right now

TIL:

- I can use `:when .. :return ..` instead of `:do (when .. (return ..))`
- I can use `:with .. :initially ..` to create and initialize `LOOP`'s state instead of
`LET` + initialization logic *outside* of `LOOP`
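In other words (untested sketches):

```lisp
;; :when/:return instead of :do (when .. (return ..))
(loop :for x :in '(1 2 3 4)
      :when (evenp x)
        :return x) ; => 2

;; :with/:initially instead of a LET + setup code wrapping the LOOP
(loop :with seen = (make-hash-table)
      :for x :in '(0 1 0 2)
      :initially (setf (gethash 0 seen) t)
      :count (gethash x seen)) ; counts the values seeded into SEEN
```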

? implement MULF to multiply a place -- http://stevelosh.com/blog/2016/08/playing-with-syntax/

2019-03-09 (permalink)

* completed advent of code 2017 day21-1

* completed advent of code 2017 day21-2

I decided to represent the pixels grid as a string, so I had to implement logic for:

- accessing specific sub-squares of such grid -- `(+ (* i * size) j)`
- combine sub-squares together -- https://github.com/iamFIREcracker/adventofcode/commit/0e47962ac47d568573d16a08d8d8be27b9f21a12

It takes less than 4 seconds to complete the 18th iteration, so I did not bother optimizing further; I have however read something interesting on Reddit:
After 3 iterations a 3x3 block will have transformed into 9 more 3x3 blocks, the futures of which can all be calculated independently. Using this fact I just keep track of how many of each "type" of 3x3 block I have at each stage, and can thus easily calculate the number of each type of 3x3 block I'll have 3 iterations later.
So yeah, this could be optimized further but you know, I am an engineer..

2019-03-08 (permalink)

* optimize solution for 2017 day20 -- add GROUP-BY, refactored UNIQUE-ONLY to use GROUP-BY, and that's it, from O(N²) to O(N)!
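My utils' GROUP-BY may look different, but the O(n) core is just a hash-table push (an untested sketch):

```lisp
(defun group-by (key list)
  "Group elements of LIST by (FUNCALL KEY element), in O(n).
Returns a hash table of key -> items (each group in reverse order, since PUSH prepends)."
  (let ((groups (make-hash-table :test 'equal)))
    (dolist (x list groups)
      (push x (gethash (funcall key x) groups)))))
```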

? Is it guaranteed that iterating over hash-table keys would return them in insertion order? It seems to be the case for SBCL at least, but is it implementation dependent?

2019-03-07 (permalink)

* optimized 2017 day 17

We are interested in the value next to 0 after 50000000 iterations, so we don't have to keep the ring buffer up to date, we can simply simulate insertions, and keep track of all the values inserted after value-0
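In code, the trick amounts to something like this (a sketch, not my actual solution; STEP is the puzzle input): value 0 never moves from index 0, so we only need to remember the last value inserted at index 1.

```lisp
(defun value-after-zero (step iterations)
  (loop :with pos = 0 :and after-zero = nil
        :for n :from 1 :to iterations
        ;; at iteration N the (virtual) buffer holds N elements
        :do (setf pos (1+ (mod (+ pos step) n)))
        ;; index 1 is the slot right after value 0
        :when (= pos 1)
          :do (setf after-zero n)
        :finally (return after-zero)))
```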

2019-03-06 (permalink)

* completed advent of code 2017 day20-1

* completed advent of code 2017 day20-2

Really sloppy solution, mine.  The problem statement says:
Which particle will stay closest to position <0,0,0> in the long term?
So I said: why don't I let the simulation run until the particle closest to <0,0,0> stops changing (part1), or no more particles collide with one another (part2)? Well, it turned out 500 iterations were enough to converge to the right answers!

~ find a clever (and more generic) way to solve 2017 day20 -- implement a function to check when all the particles are moving away from <0,0,0>

2019-03-05 (permalink)

* completed advent of code 2017 day18-1

* completed advent of code 2017 day18-2

* completed advent of code 2017 day19-1

* completed advent of code 2017 day19-2

Day 18:

Another "here is your instruction set, create a compiler for it" kind of exercise.  Part1 was OK, it did not take me long to get it right; part2 instead got messier than expected, but overall it doesn't look that _bad_ and it runs pretty quickly too, so...

2019-03-04 (permalink)

* completed advent of code 2017 day16-1

* completed advent of code 2017 day16-2

* completed advent of code 2017 day17-1

* completed advent of code 2017 day17-2

+ Make day17-2 run faster -- it takes minutes for that 50_000_000 iterations on that ring buffer to complete

2019-03-03 (permalink)

* completed advent of code 2017 day15-1

* completed advent of code 2017 day15-2

Upgraded vim-fugitive, but since then I cannot `:Gcommit` anymore.  Even when I do it from the command line, not only do I see the following error message, but when I try to save, Vim complains with: `E13: File exists (add ! to override)` ...
Error detected while processing function <SNR>41_config[4]..<SNR>119_buffer_repo:
line    1:
E605: Exception not caught: fugitive: A third-party plugin or vimrc is calling fugitive#buffer().repo() which has been removed. Replace it with fugitive#repo()
Error detected while processing function FugitiveDetect:
line   11:
E171: Missing :endif

+ Need to figure this out, as it's super annoying

+ implement something similar to `SUMMATION`, but for `MAX` and `MIN` -- `MAXIMIZATION` and `MINIMIZATION`

Also, TIL (or better, remembered) that you can `(loop :repeat N ...)` to execute a loop `N` times

2019-03-02 (permalink)

* completed advent of code 2017 day13-1

* completed advent of code 2017 day13-2

* completed advent of code 2017 day14-1

* completed advent of code 2017 day14-2

Day13 part 2 kept me busy more than expected, but eventually figured out what I was doing wrong:
Severity == 0 isn't the same as not getting caught.
Also, at first I implemented this by simulating the delay and the packets passing through the different firewall layers; it worked, but was slow as balls (around 13 secs), so I decided to do some math to figure out collisions without having to simulate anything.  It paid off, as now it takes around 0.4 secs to find the solution.
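The math in question: a scanner with a given range revisits depth 0 every 2*(range-1) picoseconds, and the packet reaches layer DEPTH at time DELAY+DEPTH, so it gets caught exactly when that time is a multiple of the period.  An untested sketch (LAYERS is a hypothetical list of (depth . range) conses):

```lisp
(defun caught-p (delay depth range)
  ;; the scanner bounces with period 2*(RANGE-1)
  (zerop (mod (+ delay depth) (* 2 (1- range)))))

(defun smallest-safe-delay (layers)
  (loop :for delay :from 0
        :unless (loop :for (depth . range) :in layers
                        :thereis (caught-p delay depth range))
          :return delay))
```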

Day14 part 1 was all about re-using Knot Hashes from day 10, and figuring out a hexadecimal to binary conversion.  Part 2 instead, disjointsets, again!

+ implement a proper hex to binary macro or something -- `HEXADECIMAL-BINARY`
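Something like this might do (an untested sketch of that HEXADECIMAL-BINARY idea): each hex digit expands to exactly 4 bits, zero-padded, and `~v,'0b` takes the width from an argument.

```lisp
(defun hexadecimal-binary (string)
  "Binary representation of the hex STRING, one hex digit -> four bits."
  (format nil "~v,'0b" (* 4 (length string))
          (parse-integer string :radix 16)))
```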

2019-03-01 (permalink)

* completed advent of code 2017 day11-1

* completed advent of code 2017 day11-2

* completed advent of code 2017 day12-1

* completed advent of code 2017 day12-2

Day11 was all about hex grids; found this interesting [article](https://www.redblobgames.com/grids/hexagons/) about how to represent hex grids, and how to calculate the distance between tiles -- I opted for the _cube_ representation.
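With the cube representation (where x+y+z = 0 for every tile), the distance boils down to half the Manhattan distance -- a sketch, assuming tiles are (x y z) lists:

```lisp
(defun hex-distance (a b)
  ;; each hex step changes two of the three cube coordinates by 1,
  ;; hence the division by 2
  (/ (+ (abs (- (first a) (first b)))
        (abs (- (second a) (second b)))
        (abs (- (third a) (third b))))
     2))
```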

Day12 instead, disjointsets!

2019-02-27 (permalink)

* completed advent of code 2017 day10-1

* completed advent of code 2017 day10-2

Couple interesting `FORMAT` recipes:

- `~x` to print a hexadecimal (`~2,'0x` to pad it with `0`s)
- `~(...~)` to lowercase a string
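E.g. (untested):

```lisp
(format nil "~2,'0x" 10)      ; => "0A"
(format nil "~(~a~)" "HELLO") ; => "hello"
```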

2019-02-26 (permalink)

* completed advent of code 2017 day8-1

* completed advent of code 2017 day8-2

* completed advent of code 2017 day9-1

* completed advent of code 2017 day9-2

It would be nice if there was a macro similar to `WITH-SLOTS`, but for structures, so I wouldn't have to do the following:
(loop
  :with registers = (make-hash-table :test 'equal)
  :for instruction :in data ; XXX isn't there a better way to handle this?
  :for reg-inc = (reg-inc instruction)
  :for inc-delta = (inc-delta instruction)
  :for op = (comp-op instruction)
  :for reg-comp-1 = (reg-comp-1 instruction)
  :for value-comp = (value-comp instruction)

2019-02-25 (permalink)

* completed advent of code 2017 day7-1

* completed advent of code 2017 day7-2

My solution for 2017 day 7 is a little messy -- gotta find someone else's solution and compare it with mine to see what can be improved

2019-02-24 (permalink)

* completed advent of code 2017 day5-1

* completed advent of code 2017 day5-2

* completed advent of code 2017 day6-1

* completed advent of code 2017 day6-2

2019-02-23 (permalink)

* completed advent of code 2017 day4-1

* completed advent of code 2017 day4-2

In CL, you can use `:thereis` to break out of a `LOOP` when a given condition is `T`:
(loop
  :for (a . rest) :on pass-phrase
  :thereis (member a rest :test 'equal))
There are `:never` and `:always` too!

2019-02-21 (permalink)

* completed advent of code 2017 day3-1

* completed advent of code 2017 day3-2

+ is there anything similar to `WITH-SLOTS`, but for structures?

2019-02-20 (permalink)

* completed advent of code 2017 day2-1

* completed advent of code 2017 day2-2

TIL, `MULTIPLE-VALUE-LIST`:
(loop
  :for values :in data
  :for (divisor number) = (multiple-value-list (evenly-divisible-values values))
  :summing (/ number divisor))

2019-02-19 (permalink)

* completed advent of code 2017 day1-1

* completed advent of code 2017 day1-2

2019-02-16 (permalink)

fixed DNS problems after upgrading Ubuntu from 16.04 to 18.04
It took me a while to figure out why this wasn't working: I also needed to switch /etc/resolv.conf to the one provided by systemd, which (for reasons unknown to me) is not what an out-of-the-box install uses.
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
Source: https://askubuntu.com/questions/907246/how-to-disable-systemd-resolved-in-ubuntu/938703#938703

Also, recently W10 started pushing me to use a new Snip tool, so had to figure out how to change my AutoHotKey files to use that:
I figured it out! Create a shortcut as described here: https://stackoverflow.com/questions/35125286/how-to-start-windows-store-apps-in-autohotkey and then incorporate that shortcut into the following code:
+#s::
IfWinExist Snip & Sketch
{
    WinActivate
    WinWait,  Snip & Sketch,,2
    Send ^n
}
else
{
    Run "F:\Snip & Sketch.lnk"
    sleep, 500
    send, ^n
}
return
Source: https://linustechtips.com/main/topic/995171-solved-windows-10-snip-sketch-shortcut-not-working/

2019-02-11 (permalink)

[04 – Focal Length](http://www.r-photoclass.com/04-Focal-length/)

Ultra-wide angle – 14-24mm
Wide angle – 24-35mm
Normal – 40-75mm
Mild tele – 85-105mm
Medium tele – 120-300mm
Long and exotic tele – 300-800mm
This was a very interesting exercise for me. I used a kit Canon 18-55mm lens, with shots at 18/35/55mm. I then approached the hydrant and shot it at 18mm. The most noticeable impact I saw was the loss of depth you get when zooming in versus physically approaching the subject. If you look at the dirt pile just over the top left of the hydrant in the third image versus the fourth, the depth of field and the scope are significantly different from shooting the same shot physically closer without any significant zoom. Interesting exercise.
http://imgur.com/a/BosdW

2019-02-10 (permalink)

Lightroom shortcuts:

- R: grid overlay (press O to change the grid type; shift-O to rotate the grid)
- L: lights out (dim background, isolating the photo you are working on); press it again, all the controls will now be black
- F: fullscreen
- Y: before & after
- \: compare with original
- J: show where the image is clipping (red: overexposed, blue: underexposed)

Lightroom quick tips:

- Exposure > Tone Auto
- Reset button (bottom right corner) / or double click on the effect
- Grid > Angle: you can now draw a line, and the photo will be rotated based on that
- Radial filter / adjustment brush

2019-02-06 (permalink)

* refactored day9 using doubly linked list and ring structures

`ARRAY-IN-BOUNDS-P` can be used to figure out if an index (list of indexes) is a valid array (multi-dimensional array) position

You can loop over _lazily_ generated numbers using `LOOP`:
(loop :for var = initial-value-form [ :then step-form ] ...)
For example, to indefinitely loop over a _range_ of values:
(loop
  :for player = 0 :then (mod (1+ player) players)

2019-02-05 (permalink)

* implement `RECURSIVELY` macro

2019-02-03 (permalink)

+ `recursively` macro -- and use it for day08

? is it possible to do `:collecting` with a specific type

? parsing input is still annoying...

2019-01-28 (permalink)

Need to concatenate multiple images into a single PDF? [Easy](https://askubuntu.com/questions/493584/convert-images-to-pdf/557975#557975): `convert "*.{png,jpeg}" -quality 100 outfile.pdf`

2019-01-27 (permalink)

* converted my advent-of-code solutions repository to [quickproject](https://xach.livejournal.com/278047.html)

It's important to symlink your project, or its .asd file won't properly load up:
mkdir -p ~/quicklisp/local-projects/
ln -s "$(pwd)" ~/quicklisp/local-projects/
Once you do this, and you open a REPL inside a _quickproject_, a simple `(ql:quickload ...)` should work just fine

2019-01-26 (permalink)

* replaced TSlime with a custom-made mini-plugin that ships with a couple of `<Plug>` mappings: one to list all the buffers and pick the terminal you want to send data to, and another to send the current selection to the selected terminal

2019-01-24 (permalink)

Trying to make sense of lisp indentation with Vim and Vlime, and it turns out you have to `set nolisp` (or simply not `set lisp` if you were explicitly setting it): https://github.com/l04m33/vlime/issues/26

+ I still don't get why I should use `lispwords` though (I thought Vlime used a different set of rules), but maybe I just don't know what I am doing...

2019-01-20 (permalink)

* got more familiar with CL `loop`'s syntax

`loop` macro supports iterating over a list of elements and exposing not just the current element, but also the remaining ones; it's done with the `:on` keyword (which makes the variable step over the cons cells that make up the list).
(let ((data '(1 2 3 4 5 6 6)))
  (loop
    :for (a . remaining) :on data
    :for b = (find a remaining)
    :do (when b
          (return a))))
Note that you cannot simply `:for b :in remaining`, because `remaining` would be `NIL` when accessed, forcing an early exit from `loop` (this does not happen when we use [Equals-Then Iteration](http://www.gigamonkeys.com/book/loop-for-black-belts.html)).

`loop` macro supports `count` (and `counting`), so expressions like:
:summing (if (hash-table-find 3 freqs) 1 0) :into threes
can be simplified into:
:counting (hash-table-find 3 freqs) :into threes

2019-01-19 (permalink)

* configured my inputrc to pop up an editor (rlwrap-editor) with C-xC-e when playing with Node, Python, or SBCL -- which are rlwrap-wrapped

* uninstalled `roswell` and installed plain `sbcl` instead; had to install quicklisp though, to make it work with vlime, but that was [easy](https://gist.github.com/jteneycke/7947353)

1. Download quicklisp: `curl -O https://beta.quicklisp.org/quicklisp.lisp`
2. Execute the following forms from a sbcl REPL:
(load "quicklisp.lisp")
(quicklisp-quickstart:install)
3. Finally, instruct SBCL to always load quicklisp -- add the following to your '.sbclrc': `(load "~/quicklisp/setup.lisp")`

2019-01-17 (permalink)

* finished reading: [Common LISP: A Gentle Introduction to Symbolic Computation](https://www.amazon.com/Common-LISP-Introduction-Computation-Engineering/dp/0486498204)

2019-01-13 (permalink)

* completed advent of code day15-1

* completed advent of code day15-2 -- it's official, I don't know how to code a-star.  Also, I am unlucky: my solution was working with pretty much all the inputs I found online, but the site still wasn't accepting my answer.  Anyway, there was a bug in my a-star implementation, and once I fixed it, my answer finally got accepted (funny how it worked fine with all the other inputs).

* completed advent of code 2018 -- now time to look at the solutions again, and see what can be improved / factored out

* fixed my laptop -- `brewski` fucked up Vim (it would not start up); it turned out there were some issues with [python2 vs python3](https://superuser.com/questions/1115159/how-do-i-install-vim-on-osx-with-python-3-support/1381420#1381420)

2019-01-10 (permalink)

* completed advent of code day23-2

I spent a few hours trying to come up with a proper solution for this, but eventually gave up; I got so clouded by the size of the input (not just the number of nanobots, but also their coordinates and their radius sizes) that, rather than immediately searching for the correct solution, I should have just tried things in a trial and error fashion.

The solution I happened to copy from Reddit goes as follows (I am still not 100% sure it is generic, but it worked for my input so...): we sample the three dimensional space looking for a local maximum; once we have one, we zoom in around that maximum, and keep on sampling, until the area is a point.  Of course we have to take into account not only the number of nanobots covering the sampled point, but also its distance to the origin.
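A rough Python sketch of that zoom-in sampling idea (names and the `(x, y, z, r)` bot representation are illustrative; as said above, this kind of search is not guaranteed to find the global optimum):

```python
# Coarse-to-fine sampling: evaluate a grid of points, zoom in around the
# best one, halve the step, repeat until the step is 1.
def best_point(bots):
    lo = [min(b[i] for b in bots) for i in range(3)]
    hi = [max(b[i] for b in bots) for i in range(3)]
    step = 1
    while step < max(h - l for l, h in zip(lo, hi)):
        step *= 2
    while True:
        best = None
        for x in range(lo[0], hi[0] + 1, step):
            for y in range(lo[1], hi[1] + 1, step):
                for z in range(lo[2], hi[2] + 1, step):
                    covered = sum(abs(x - bx) + abs(y - by) + abs(z - bz) <= r
                                  for bx, by, bz, r in bots)
                    # prefer more bots in range, then closest to the origin
                    key = (covered, -(abs(x) + abs(y) + abs(z)))
                    if best is None or key > best[0]:
                        best = (key, (x, y, z))
        if step == 1:
            return best[1]
        cx, cy, cz = best[1]  # zoom in around the current maximum
        lo = [cx - step, cy - step, cz - step]
        hi = [cx + step, cy + step, cz + step]
        step //= 2
```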

2019-01-08 (permalink)

* completed advent of code day21-2

After spending almost an hour trying to understand what the program was doing, I gave up and peeked at Reddit to see how other people had solved it and... it turned out I got the problem all wrong!  The program loops forever, and the only way to break out of it is if (in my case) R0 is loaded with one of the values the program itself generates, say 10, 234, 55, 1, 12, and then 10 again.

Now, if you load R0 with 10, it will break out immediately; if you load it with 234 it will check against 10 and then 234 (at which point it will exit); the problem asks for the number for which the program will run the maximum number of instructions, and the answer is the latest value of the loop: 12. You load R0 with 12, and the program will run and check against 10, 234, 55, 1, and finally 12!
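In other words, the answer is the last new value before the first repeat; a Python sketch (producing the stream of R0 check values is left out):

```python
# Track values as they stream in; the value that maximizes the number of
# executed instructions is the last unseen one before any value repeats.
def last_before_repeat(stream):
    seen, last = set(), None
    for v in stream:
        if v in seen:
            return last
        seen.add(v)
        last = v
```

Fed the example sequence above, `last_before_repeat([10, 234, 55, 1, 12, 10])` returns 12.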

The program ran in 511 seconds, but I don't want to try to understand what it does and simplify it (or transpile it into C or something).  I can try to make opcodes run faster, but that's it

? Make Elf opcodes run faster -- use arrays instead of consing

2019-01-07 (permalink)

* completed advent of code day11-2 -- before, I was generating all the squares, calculating the energy of each, and then finding the max; now I am doing all of this with for loops, without allocating much stuff.  It's slow as balls, but after 19 minutes it spat out the right solution :P

+ make solution for day11-2 more efficient

2019-01-06 (permalink)

* completed advent of code day25-1 -- disjointset

* completed advent of code day7-2

? write a popn macro popping n elements from place

2019-01-05 (permalink)

* completed advent of code day24-1 -- it took me quite some time to get all the rules implemented, but it worked eventually

* completed advent of code day24-2 -- starting from a boost of 2 I started to double it until I found **a** winning boost; then run a binary search between the winning one, and its half
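That doubling-then-bisect strategy is just an exponential search over a predicate; a Python sketch, assuming (as I did) that winning is monotone in the boost -- which the puzzle does not actually guarantee (`wins` is a hypothetical stand-in for the battle simulation):

```python
# Double the boost until the fight is won, then bisect between the last
# known losing boost and the first known winning one.
def smallest_winning_boost(wins):
    hi = 2
    while not wins(hi):
        hi *= 2
    lo = hi // 2  # losing (or the starting boost)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if wins(mid):
            hi = mid
        else:
            lo = mid
    return hi
```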

2019-01-04 (permalink)

* completed advent of code day20-1 -- Dijkstra

* completed advent of code day20-2

2019-01-03 (permalink)

* completed advent of code day22-2 -- the solution (a-star) runs in 48 seconds.. common lisp doesn't ship with an implementation of heapq, so adding+sort on every step is probably killing the performance

+ make day22-2 run faster -- find a good implementation of heapq

Simple Dijkstra/path-finding Python implementations: https://www.redblobgames.com/pathfinding/a-star/implementation.html
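For reference, the heap-based Dijkstra those pages describe fits in a few lines of Python (`graph` maps a node to its `(neighbor, cost)` pairs; names are mine):

```python
import heapq

def dijkstra(graph, start):
    dist = {start: 0}
    frontier = [(0, start)]  # min-heap of (distance, node)
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float('inf')):
            continue  # stale entry, already relaxed via a shorter path
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor] = nd
                heapq.heappush(frontier, (nd, neighbor))
    return dist
```

Pushing duplicates and skipping stale pops avoids the need for a decrease-key operation, which is exactly what add+sort was emulating, at O(log n) instead of O(n log n) per step.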

2018-12-31 (permalink)

? parse string using regexpes in common lisp

2018-12-30 (permalink)

* completed advent of code day23-1 -- it took me forever to realize I was looping over the sequence I had passed to `sort` -- which is destructive -- and that some elements were gone (I should have either `copy-seq`d or looped over the sorted list)

* completed advent of code day22-1

+ memoize decorator in common lisp

2018-12-26 (permalink)

* completed advent of code day21-1

I looked for all the comparison instructions where R0 (the only register we could mutate) appeared as one of the arguments; put a breakpoint on the instruction, inspected what the expected value was, used that as R0, and that's it.

2018-12-25 (permalink)

* completed advent of code day19-2

I started analyzing all the comparison instructions and the status of the registers before and after these were executed.  Then tweaking the initial state of the registers to fast forward the program execution I managed to understand that the program was trying to calculate the sum of the divisors of a given number (where _given_ means some funky formula I have no idea of... I simply looked at the registers to see what this number was).
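What the program computes the slow way can be done in O(sqrt n) (a Python sketch; the formula producing the target number itself is the part I never worked out):

```python
# Divisors come in pairs (d, n // d), so scanning up to sqrt(n) suffices.
def divisor_sum(n):
    total = 0
    for d in range(1, int(n ** 0.5) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total
```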

2018-12-24 (permalink)

* completed advent of code day17-1 -- it took me forever to get this right.. I am dumber than I thought I was

* completed advent of code day17-2 -- once you get part 1 right..

* completed advent of code day18-1

* completed advent of code day18-2

* completed advent of code day19-1

2018-12-21 (permalink)

* completed advent of code day16-2

+ learn about lispwords -- vim

2018-12-20 (permalink)

* completed advent of code day16-1

? figure out how to parse day16-2 input as a single file, instead of 2

2018-12-16 (permalink)

* finished reading [Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy](https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815)

* completed advent of code day13-1

* completed advent of code day13-2

* completed advent of code day14-1

* completed advent of code day14-2 -- slow-ish

? make solution for day14-2 more efficient

2018-12-15 (permalink)

* completed advent of code day12-1

* completed advent of code day12-2

? understand why day12-2 first set of loops need to be skipped (> cur-gen 10000)

2018-12-14 (permalink)

* completed advent of code day10-1

* completed advent of code day10-2

* completed advent of code day11-1

2018-12-13 (permalink)

* completed advent of code day9-1

* completed advent of code day9-2 -- slow as balls, had to leave it going during the night

+ more efficient solution for day9-2

2018-12-12 (permalink)

* completed advent of code day8-1

* completed advent of code day8-2

2018-12-11 (permalink)

* completed advent of code day7-1 -- it did not seem that complex... yet it took me forever to implement the dependency-tree traversal

2018-12-10 (permalink)

* completed advent of code day6-1

* completed advent of code day6-2

2018-12-09 (permalink)

* completed advent of code day4-1

* completed advent of code day4-2

* completed advent of code day5-1 -- it takes around 2 minutes to run... we can do better

* completed advent of code day5-2 -- left running at night

+ more efficient solution for day5-1 and day5-2

2018-12-08 (permalink)

* completed advent of code day3-1

* completed advent of code day3-2

2018-12-06 (permalink)

* completed advent of code day2-1

* completed advent of code day2-2

2018-12-05 (permalink)

* completed advent of code day1-2

* cleared Spotify cache

Couple interesting articles on how to clear/configure Spotify's cache:

- https://support.spotify.com/it/using_spotify/troubleshooting/reinstallation-of-spotify/
- https://community.spotify.com/t5/Desktop-Mac/Why-the-cache-Spotify-app-on-OSX-takes-up-10G-of-space/td-p/4544523

2018-12-03 (permalink)

* completed advent of code day1-1

2018-11-25 (permalink)

* fixed editorconfig configuration for .vimrc -- https://github.com/editorconfig/editorconfig-vim/issues/114

2018-11-13 (permalink)

* finished reading [Ripensare la smart city](https://www.amazon.com/Ripensare-smart-city-Italian-Francesca-ebook/dp/B07GWZJJT1)

2018-11-11 (permalink)

* managed to get tern.js to reference `async function` definitions -- I did not realize I had `"ecmaVersion": 6` in my .tern-project file

* created a PR to tern.js to factor out server-bootstrapping logic into a different file people can use to easily start a Tern server without the HTTP wrapper

Read a few articles about [Initiative Q](https://initiativeq.com/): https://breakermag.com/initiative-q-is-all-the-bad-parts-of-crypto-and-none-of-the-good-ones/ and https://www.vox.com/the-goods/2018/11/8/18075816/initiative-q-email-cryptocurrency-bitcoin-paypal-saar-wilf, and I feel like it's just a big scam: they are giving away free money in exchange for... your data, and information about your inner circles (you only have 5 invites, so you need to be careful whom you send them to; and when you receive one, since you know the sender very well and you know they only had 5 invites, you might as well simply accept it and join the community, because again... you really trust the sender, and they wouldn't have sent the invite to you if the thing was a scam).  Anyway, let's see how it pans out.

2018-11-10 (permalink)

* finished reading [Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots](https://www.amazon.com/Machines-Loving-Grace-Common-Between/dp/0062266691)

Played around with Tern.js internals to understand: 1) why `findDef` of an `async function` is not returning anything, 2) how to enhance js-langserver to support pretty much all the supported properties one can configure inside .tern-config/.tern-project

Regarding point 1 above, I tried to factor out a bootstrap.js file inside tern.js/lib, containing all the code necessary to start up a Tern server (code gently borrowed from tern.js/bin/tern) -- let's see if the tern.js maintainers are interested in merging my PR

2018-11-06 (permalink)

* submitted PR to rainbow_parentheses.vim for configuring the maximum number of levels to color

2018-11-04 (permalink)

+ tern.js does not recognize `async function foo() ..` as definition

2018-11-03 (permalink)

Finally found some time to play with Microsoft's [Language Server Protocol](https://microsoft.github.io/language-server-protocol/specification)

* js-langserver: submitted PR for loading ESLint from the project directory -- instead of the language server one

* js-langserver: updated the LS manifest based on implemented features

* js-langserver: fixed 'rename' -- the protocol must have changed since the first implementation, and 'rename' wasn't working anymore

2018-11-01 (permalink)

* aadbook: 0.1.2 -- fix potential security vulnerability related to `requests==2.19.1`

2018-10-31 (permalink)

* vim-lsp: use loclist instead of quickfix for document diagnostics https://github.com/prabirshrestha/vim-lsp/pull/195

2018-10-29 (permalink)

* expensio: initial support (server/client) for expenses: list, create

* expensio: edit of group categories

* expensio: fixed exception logging when `json: true`

2018-10-28 (permalink)

* expensio: add logging of HTTP requests using morgan

? expensio: fix logging of warnings/errors with winston

2018-10-27 (permalink)

* expensio: initial support (server/client) for groups: list, create

* expensio: initial support (server/client) for group-members: list, create

* expensio: initial support (server/client) for groupcategories: list, create

? expensio: confirm group membership

? expensio: hierarchical categories

2018-10-26 (permalink)

* expensio: almost complete auth workflow -- missing: emails

* expensio: authenticate paymentmethods-related workflows

? expensio: dynamically make 'code' required

? expensio: intercept UNIQUE CONSTRAINT and return 409

? expensio: stop reloading pm/expenses -- especially when going back after pm/expense edit

2018-10-25 (permalink)

* expensio: added back/front end support for editing paymentmethods

* expensio: created passport/bearer based auth layer

2018-10-23 (permalink)

* expensio: created skeleton Node.js app -- logger, module loader, express, openrecord

* expensio: created skeleton Angular app -- boilerplate, Login/Register pages

? expensio: send auth tokens via email

2018-10-22 (permalink)

* created MR for goobook, fixing pip installation -- setup.py is trying to read a file not included in the manifest

* fixed goobook crash happening when the user's contacts list contained some contacts with neither a name nor an email address

* added support for local `b:goobookprg` to vim-goobook

* added editorconfig integration to dotfiles/vim

2018-10-20 (permalink)

* improved vim-syntax for .plan files

* fixed `aadbook authenticate` bug where the command could fail when initializing the Auth object, leaving the user with nothing to do but wipe everything out

? don't advance mutt cursor when deleting/archiving threads

2018-10-14 (permalink)

* finished setting up GPG w/ Mutt!!

It's been a long journey, but I finally did it.  At first I tried setting up all the pgp_* mutt commands to verify/sign/encrypt/decrypt, but while encrypting seemed to work, when I tried to decrypt my own messages (i.e. messages I had sent to my own address) Mutt errored out complaining that GPG was not able to decrypt the message -- note that manually running the `gpg` command on a copy of the message worked without any issues.  Anyway, I finally gave up and went on and added `set crypt_use_gpgme` to my .muttrc (had to re-install mutt, with the `--with-gpgme` flag).

Few related (and interesting) articles:

  Gnu PGP Intro: https://www.gnupg.org/gph/en/manual.html#INTRO
  Getting Started with PGP and GPG: https://medium.com/@johnnyRose/getting-started-with-pgp-and-gpg-58c8d6c35df5
  Simpler GnuPG & Mutt configuration with GPGME: https://henrytodd.org/notes/2014/simpler-gnupg-mutt-config-with-gpgme/
  GPG / Mutt / Gmail: https://gist.github.com/bnagy/8914f712f689cc01c267

2018-10-11 (permalink)

* disabled caching of html|css files on matteolandi.net

2018-10-10 (permalink)

* finished reading [Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture](https://www.amazon.com/Masters-Doom-Created-Transformed-Culture/dp/0812972155)

2018-10-09 (permalink)

* added personal GPG key to matteolandi.net

* added fuzzy-finding support to aadbook by replacing each whitespace ` ` with `.*`

* added fuzzy-finding support to goobook

2018-10-08 (permalink)

* fix <lastmod>s format on matteolandi.net sitemap

2018-10-07 (permalink)

* yet another matteolandi.net restyle -- ripped of css rules from: http://bettermotherfuckingwebsite.com/

* automatic generation of sitemap.xml

2018-10-06 (permalink)

* cleaned up mail related scripts (personal/work)

* added .plan page to matteolandi.net

* images on matteolandi.net are no more cropped on mobile

2018-10-05 (permalink)

* set up a .plain file, some vim scripts, and a bash script to manage it

+ add a `logfilter` page on matteolandi.net

? fix mutt 'indicator' colors taking over 'index' ones! I want index rules to be more important than indicator one: https://unix.stackexchange.com/questions/138325/mutt-how-to-display-selected-new-messages-differently-in-index

? pin specific gmail/microsoft certificates http://www.offlineimap.org/doc/FAQ.html#does-offlineimap-verify-ssl-certificates

+ PGP mutt https://www.linuxjournal.com/content/encrypt-your-dog-mutt-and-gpg https://medium.com/@johnnyRose/getting-started-with-pgp-and-gpg-58c8d6c35df5

? live notification fetcher plugin on matteolandi.net

2018-10-03 (permalink)

* make matteolandi.net mobile friendly

2018-10-02 (permalink)

* add aadbook page to matteolandi.net

2018-10-01 (permalink)

* aadbook: fix soft-authentication issue

2018-09-29 (permalink)

* vim-goobook completion works even on lines starting with spaces/tabs

2018-09-28 (permalink)

* make vim-goobook platform independent

2018-09-26 (permalink)

* fix certbot on matteolandi.net

2018-09-24 (permalink)

* finished reading [Ghost in the Wires: My Adventures as the World's Most Wanted Hacker](https://www.amazon.com/Ghost-Wires-Adventures-Worlds-Wanted/dp/0316037729)

2018-09-21 (permalink)

* reimplement vim-goobook filtering logic in pure vimscript

2018-09-16 (permalink)

* aadbook: always regenerate tokens when invoking 'authenticate'

* aadbook: don't refresh auth token, unless expired

2018-09-14 (permalink)

* finished reading [When Google Met WikiLeaks](https://www.goodreads.com/book/show/21839908-when-google-met-wikileaks)

2018-09-09 (permalink)

* finished reading [The Art of Invisibility: The World's Most Famous Hacker Teaches You How to Be Safe in the Age of Big Brother and Big Data](https://www.amazon.com/Art-Invisibility-Worlds-Teaches-Brother/dp/0316380504)

2018-09-08 (permalink)

* first aadbook release!

* fork vim-goobook and add support for specifying a custom program (e.g. aadbook)

2018-09-07 (permalink)

* reinstalled docker-ce on Ubuntu

https://docs.docker.com/install/linux/docker-ce/ubuntu/

2018-08-29 (permalink)

A tale of Sinon, Chai, and stubs

Imports:
const chai = require('chai');
const expect = chai.expect;
const sinon = require('sinon');
const sinonChai = require('sinon-chai');
chai.should();
chai.use(sinonChai);
Then inside the tests:
expect(mock.search).to.have.been.calledOnce;
expect(mock.search.firstCall).to.have.been.calledWith(args);

2018-08-26 (permalink)

* finished reading [No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State](https://www.amazon.com/No-Place-Hide-Snowden-Surveillance-ebook/dp/B00E0CZX0G)

* setting up Mutt on Windows

Cannot find mutt-patched, so let's go with the default
apt-cyg mutt
It turns out, default mutt now ships with the sidebar patch -- yay!!!

Installed `offlineimap` using Cygwin
apt-cyg install offlineimap

+ build offlineimap from sources

SSL certificates on Cygwin are put inside a different directory, so I had to customize files a little to point to the right path

+ create a script to inspect the disk and symlink certificates in a known location (e.g. ~/.certs)

Used python's [keyring](https://pypi.org/project/keyring/) library to handle secrets (it works on MacOS too!)

urlview is not available on Cygwin; I tried [urlscan](https://pypi.org/project/urlscan/) for a couple of days, but ended up manually building urlview from sources:
git clone ...
apt-cyg install \
    autoconf \
    automake \
    gettext \
    libcurses-devel
./configure
autoreconf -vfi
make
Created `~/bin/urlview` script delegating to `~/opt/urlview/urlview` (location of my local urlview repo); created `~/.urlview` containing:
COMMAND br
Luckily for me, msmtp is available on Cygwin, so a simple `apt-cyg install ...` did the trick

2018-08-25 (permalink)

* force merge to keep feature branches in the history

This [post](http://www.boekhoff.info/how-to-keep-feature-branches-in-the-git-history/)
does a good job at explaining how to keep feature branches in the git history
(i.e. forcing Git to always create a merge commit).

In a nutshell, to disable _fast-forward_ merges completely:
git config --global --add merge.ff false
While to disable them just for `master`:
git config --global branch.master.mergeoptions  "--no-ff"

2018-08-20 (permalink)

* re-learned how to release on pypi

Make sure your `~/.pypirc` file is current:
[distutils]
index-servers:
    testpypi
    pypi

[testpypi]
repository: https://test.pypi.org/legacy/
username: landimatte

[pypi]
repository: https://upload.pypi.org/legacy/
username: landimatte
Clean up the `dist/` directory:
rm -rf dist/*
Build the source distribution package:
python setup.py sdist
Release to the *test* instance of Pypi, and check everything is in place:
twine upload dist/* -r testpypi
Release to the *public* instance of Pypi:
twine upload dist/* -r pypi

2018-08-19 (permalink)

* finished reading [Cypherpunks: Freedom and the Future of the Internet](https://www.amazon.com/Cypherpunks-Freedom-Internet-Julian-Assange/dp/1944869085)

2018-08-01 (permalink)

* init'd new work windows box -- and added stuff to my-env's README

2018-07-12 (permalink)

* fucking fixed weird Cygwin/SSH username issues

Symptoms: cygwin `USER` and `USERNAME` env variables are different[0]:
$ set | egrep "USER=|USERNAME="
USER=IONTRADING+User(1234)
USERNAME=mlandi
Where does the value of USER come from?  Take a look in '/etc/profile'. You
will find the following:
# Set the user id
USER="$(/usr/bin/id -un)"
`id` gets its data from /etc/passwd (which, if not there, you might have to
create with the following command[1]):
mkpasswd -c > /etc/passwd
You will have to kill all the cygwin processes to see this working.

https://superuser.com/questions/783183/cygwin-user-and-username-are-different
https://stackoverflow.com/questions/19613654/rename-change-cygwin-username

2018-07-05 (permalink)

* finished reading [Modern Vim: Craft Your Development Environment with Vim 8 and Neovim](https://www.amazon.com/Modern-Vim-Development-Environment-Neovim/dp/168050262X)

2018-06-05 (permalink)

* finished reading [The Senior Software Engineer](https://www.amazon.com/Senior-Software-Engineer-Practices-Effective/dp/0990702804)

* make `appcelerator` work again

Unset `_JAVA_OPTS` as scripts are too fragile and the `javac` command is not properly parsed
~/Library/Application Support/Titanium/mobilesdk/osx/7.2.0.GA/node_modules/node-appc/lib/jdk.js
Added log messages because of uncaught exceptions

2018-06-03 (permalink)

* learned how to take a peek at a repo's remote

Fetch incoming changes:
git fetch
Diff `HEAD` with `FETCH_HEAD`:
git diff HEAD..FETCH_HEAD
If the project had a changelog, you could just diff that:
git diff HEAD..FETCH_HEAD -- some/path/CHANGELOG
Bring changes in:
git pull --ff

2018-01-08 (permalink)

* finished reading [Never Split the Difference: Negotiating As If Your Life Depended On It](https://www.amazon.com/Never-Split-Difference-Negotiating-Depended-ebook/dp/B014DUR7L2)

2017-12-27 (permalink)

* fucking fixed MacOS not going to sleep!

As of OS X 10.12, the nfsd process now holds a persistent assertion preventing sleep. This is mostly a good thing, as it lets NFS work on networks where wake-on-lan packets don't work. However, this means that when plugged in, OS X will no longer automatically sleep. If you unload the daemon, sleep works fine, and I noticed that vagrant starts nfsd if required on vagrant up.

This is now worse with 10.12.4 as a call to nfsd stop is no longer supported when system integrity protection is turned on. Thus the only way to let the machine sleep is to prune /etc/exports and restart nfsd. It would be nice to at least add a cli command to have vagrant prune /etc/exports (so we can script in when we want nfs pruned) or even better have it automatically prune exports on suspend and halt and re-add on wake and up.

For this I created the following, and added it to my .bashrc:
function fucking-kill-nfsd() {
    # https://github.com/hashicorp/vagrant/issues/8103
    sudo sh -c "> /etc/exports"
    sudo nfsd restart
}
[Source](https://github.com/hashicorp/vagrant/issues/8103)

2017-12-21 (permalink)

* fix symbolic links on Windows/cygwin

Define the following environment variable:
CYGWIN=winsymlinks:nativestrict
https://stackoverflow.com/questions/18654162/enable-native-ntfs-symbolic-links-for-cygwin/18659632#18659632

2017-12-20 (permalink)

* learned how to clone repos subdirectories

This can be done by using *sparse checkouts* (available in Git since 1.7.0):
mkdir <repo>
cd <repo>
git init
git remote add -f origin <url>
This creates an empty repository with your remote, and fetches all objects but
doesn't check them out.  Then do:
git config core.sparseCheckout true
Now you need to define which files/folders you want to actually check out. This
is done by listing them in `.git/info/sparse-checkout`, eg:
echo "some/dir/" >> .git/info/sparse-checkout
echo "another/sub/tree" >> .git/info/sparse-checkout
Last but not least, update your empty repo with the state from the remote:
git pull origin master
To undo this instead:
echo "/*" > .git/info/sparse-checkout
git read-tree -mu HEAD
git config core.sparseCheckout false
https://stackoverflow.com/a/13738951/348524
https://stackoverflow.com/a/36195275/348524
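To sanity-check the whole recipe end to end, here is a self-contained sketch that builds a throwaway local "remote" (the `src` repo and the `some/dir` / `another/tree` names are made up for the demo) and sparse-checks-out only one of its subdirectories:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Build a throwaway "remote" with two subdirectories.
git init -q src && cd src
git config user.email demo@example.com && git config user.name demo
mkdir -p some/dir another/tree
echo a > some/dir/a.txt
echo b > another/tree/b.txt
git add . && git commit -qm init && git branch -M master
cd "$tmp"

# The recipe from above, pointed at the local repo.
mkdir repo && cd repo
git init -q
git remote add -f origin "$tmp/src"
git config core.sparseCheckout true
echo "some/dir/" >> .git/info/sparse-checkout
git pull -q origin master
```

After the pull, `some/dir/a.txt` is in the working tree while `another/tree/` is not, even though both live in the repository's history.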

2017-12-15 (permalink)

* fixed key-repeat timings on Mac OS

We used to do this with Karabiner/Karabiner-Elements, but that option is long gone, so now we have to do it with `defaults`:
defaults write NSGlobalDomain InitialKeyRepeat -int 11
defaults write NSGlobalDomain KeyRepeat -float 1.3
In particular, the above sets:

- Delay until repeat: 183 ms
- Key repeat: 21 ms

https://github.com/tekezo/Karabiner-Elements/issues/551#issuecomment-277987194

2017-12-12 (permalink)

* fixed slow bash login shells on Mac OS

Unlink old (and slow) bash-completion scripts:
brew unlink bash-completion
Install bash-completion for bash4:
brew install bash-completion@2
Change `.bash_profile` to source the new bash_completion files:
if [ -f $(brew --prefix)/share/bash-completion/bash_completion ]; then
    . $(brew --prefix)/share/bash-completion/bash_completion
fi
Finally, make sure your default `bash` shell is the 4.x one:
sudo bash -c 'echo /usr/local/bin/bash >> /etc/shells'
chsh -s /usr/local/bin/bash
https://troymccall.com/better-bash-4--completions-on-osx/
https://stackoverflow.com/a/44293632/348524

2017-11-26 (permalink)

* finished reading [Don't Make Me Think, Revisited: A Common Sense Approach to Web Usability (Voices That Matter)](https://www.amazon.com/Dont-Make-Think-Revisited-Usability/dp/0321965515)

2017-10-21 (permalink)

* Installed Common Lisp on Mac OS

brew install roswell # https://github.com/roswell/roswell
ros install sbcl/1.3.13

2017-10-13 (permalink)

* finished reading [Programming JavaScript Applications: Robust Web Architecture with Node, HTML5, and Modern JS Libraries](https://www.amazon.com/Programming-JavaScript-Applications-Architecture-Libraries/dp/1491950293)

2017-09-26 (permalink)

* finished reading [Practical Vim: Edit Text at the Speed of Thought](https://www.amazon.com/Practical-Vim-Edit-Speed-Thought-ebook/dp/B018T6ZVPK)

2017-08-27 (permalink)

* edit markdown files with live-reload

Install `markserv`:
npm install markserv -g
Move into the folder containing any markdown file you would like to preview, and run:
markserv

2017-08-08 (permalink)

* web resource caching

Manually add the header via Express (or any other server):
'Cache-Control': 'public, max-age=31536000' // allow files to be cached for a year
Nginx configuration:
location /static {
  alias /site;

  location ~* \.(html)$ {
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
  }
}

2017-07-25 (permalink)

Me of the future, do enjoy this!
" Dear /bin/bash: fuck you and your bullshit, arcane command-line behaviour.
"
" Basically, I want to set this to a non-login, non-interactive bash shell.
" Using a login/interactive bash as Vim's 'shell' breaks subtle things, like
" ack.vim's command-line argument parsing. However, I *do* want bash to load
" ~/.bash_profile so my aliases get loaded and such.
"
" You might think you could do this with the --init-file command line option,
" which is used to specify an init file. Or with --rcfile. But no, those only
" get loaded for interactive/login shells.
"
" So how do we tell bash to source a goddamned file when it loads? With an
" *environment variable*. Jesus, you have multiple command line options for
" specifying files to load and none of them work?
"
" Computers are bullshit.
let $BASH_ENV = "$HOME/.bashrc"

2017-04-26 (permalink)

* finished reading [To Sell is Human: The Surprising Truth About Persuading, Convincing, and Influencing Others](https://www.amazon.com/Sell-Human-Surprising-Perusading-Influencing-ebook/dp/B009OINFH4)

2017-02-19 (permalink)

* finished reading [Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines](https://www.amazon.com/Designing-Mind-Understanding-Interface-Guidelines/dp/0124079148)

2017-01-18 (permalink)

* finished reading [Drive: The Surprising Truth About What Motivates Us](https://www.amazon.com/Drive-Surprising-Truth-About-Motivates/dp/1594484805)

2017-01-08 (permalink)

* finished reading [The Everything Store: Jeff Bezos and the Age of Amazon](https://www.amazon.com/Everything-Store-Jeff-Bezos-Amazon-ebook/dp/B00BWQW73E)

2017-01-06 (permalink)

* downloaded all my instagram photos

- Load your profile page in the browser, and scroll to the bottom until you reach the end of your stream.
- Open the developer console, right-click on the `<html/>` element, copy it, and dump it somewhere (e.g. `../foo.html`)
- Run the following command: `lynx -dump -listonly -image_links ../foo.html | grep cdninstagram | grep 640x640 | awk '{print $2}' | xargs -n 1 -P8 wget`

2017-01-03 (permalink)

Found this nice hack to quickly get the average color of an image:
from PIL import Image

img = Image.open('some-image.jpg')  # placeholder path
img2 = img.resize((1, 1))
color = img2.getpixel((0, 0))
https://www.hackzine.org/getting-average-image-color-from-python.html

2016-12-31 (permalink)

* figured out a way to always update vbguest add-ons: `vagrant plugin install vagrant-vbguest` (http://kvz.io/blog/2013/01/16/vagrant-tip-keep-virtualbox-guest-additions-in-sync/)

Also, you could instruct Vagrant to automatically install required dependencies with the following:
required_plugins = %w( vagrant-hostmanager vagrant-someotherplugin )
required_plugins.each do |plugin|
  system "vagrant plugin install #{plugin}" unless Vagrant.has_plugin? plugin
end
Source: http://stackoverflow.com/a/25918153/348524

2016-12-28 (permalink)

* found a way to debug why a node app refuses to gracefully shut down

// as early as possible
const wtf = require('wtfnode');
setInterval(function testTimer() {
  wtf.dump();
}, 5000);

2016-12-26 (permalink)

* successfully built Vim from sources

sudo apt-get install -y \
    liblua5.1-0-dev \
    libluajit-5.1-dev \
    lua5.1 \
    luajit \
    python-dev \
    ruby-dev

sudo ln -s /usr/include/lua5.1 /usr/include/lua5.1/include

cd ~/opt
git clone https://github.com/vim/vim
cd vim

./configure \
        --with-features=huge \
        --enable-multibyte \
        --enable-largefile \
        --disable-netbeans \
        --enable-rubyinterp=yes \
        --enable-pythoninterp=yes \
        --enable-luainterp=yes \
        --with-lua-prefix=/usr/include/lua5.1 \
        --with-luajit \
        --enable-cscope \
        --enable-fail-if-missing \
        --prefix=/usr
make

2016-09-16 (permalink)

Run the last history command (like `!!`):
fc -s
Run last command without appending it to the history:
HISTFILE=/dev/null fc -s

2016-09-14 (permalink)

Move a pushed commit to a different branch:
git checkout $destination-branch
git cherry-pick $change-done-on-the-wrong-branch
git push
git checkout $origin-branch
git revert $change-done-on-the-wrong-branch
Related reads
- http://christoph.ruegg.name/blog/git-howto-revert-a-commit-already-pushed-to-a-remote-reposit.html
- http://stackoverflow.com/a/9180445
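The sequence above can be exercised against a throwaway local repository; a minimal sketch (branch and file names are made up for the demo, and the `git push` steps are left out since there is no remote here):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Throwaway repo with a base commit and two branches.
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
echo base > file.txt && git add . && git commit -qm base
git branch -M wrong-branch
git branch destination-branch

# A change lands on the wrong branch.
echo oops > oops.txt && git add . && git commit -qm "misplaced change"
bad=$(git rev-parse HEAD)

# Copy it onto the right branch...
git checkout -q destination-branch
git cherry-pick "$bad"

# ...then undo it where it doesn't belong.
git checkout -q wrong-branch
git revert --no-edit "$bad"
```

At the end, `oops.txt` exists on destination-branch, while wrong-branch carries a revert commit and no `oops.txt`.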

2016-09-13 (permalink)

Analyze angular performance:
angular.element(document.querySelector('[ng-app]')).injector().invoke(['$rootScope', function($rootScope) { var a = performance.now(); $rootScope.$apply(); console.log(performance.now()-a); }])
or:
angular.element(document.querySelector('[ng-app]')).injector().invoke(['$rootScope', function($rootScope) {
  var i;
  var results = []
  for (i = 0; i < 100; i++) {
    var a = performance.now();
    $rootScope.$apply();
    results.push(performance.now()-a);
  }
  console.log(JSON.stringify(results));
}])

2016-09-12 (permalink)

* finished reading [The Startup Playbook: Secrets of the Fastest-Growing Startups from their Founding Entrepreneurs](https://www.amazon.com/Startup-Playbook-Fastest-Growing-Startups-Entrepreneurs/dp/1452105049)

2016-08-21 (permalink)

lvh.me is a domain that resolves to localhost, and it works with subdomains too (xxx.lvh.me would still resolve to localhost).  Why is this helpful? Because you could use it when developing applications that need subdomain configurations.

https://reinteractive.net/posts/199-developing-and-testing-rails-applications-with-subdomains

2015-11-05 (permalink)

* finished reading [Clean Code: A Handbook of Agile Software Craftsmanship](https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882)

2015-09-07 (permalink)

* finished reading [The Pragmatic Programmer: From Journeyman to Master](https://www.amazon.com/Pragmatic-Programmer-Journeyman-Master/dp/020161622X)

2015-09-03 (permalink)

* finished reading [Zero to One: Notes on Start Ups, or How to Build the Future Kindle Edition](https://www.amazon.com/Zero-One-Notes-Startups-Future/dp/0804139296)

2015-08-01 (permalink)

* finished reading [The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses](https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous-Innovation/dp/0307887898)

2013-10-01 (permalink)

* finished reading [Java Concurrency in Practice](https://www.amazon.com/Java-Concurrency-Practice-CONCURRENCY-PRACT-ebook/dp/B004V9OA84)

2013-02-01 (permalink)

* finished reading [Python Cookbook](https://www.amazon.com/Python-Cookbook-Third-David-Beazley/dp/1449340377)

2012-11-03 (permalink)

* finished reading [Effective Programming: More Than Writing Code](https://www.amazon.com/Effective-Programming-More-Than-Writing-ebook/dp/B008HUMTO0)

2012-10-24 (permalink)

* finished reading [Simplify](https://www.amazon.com/Simplify-Joshua-Becker-ebook/dp/B006431ADS)

2012-09-12 (permalink)

* finished reading [Design Patterns: Elements of Reusable Object-Oriented Software](https://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612)

2012-08-23 (permalink)

* finished reading [Vim and Vi Tips: Essential Vim and Vi Editor Skills](https://www.amazon.com/Vi-Tips-Essential-Editor-Skills/dp/8360869006)

2012-08-17 (permalink)

* finished reading [Functional Programming for Java Developers: Tools for Better Concurrency, Abstraction, and Agility](https://www.amazon.com/Functional-Programming-Java-Developers-Concurrency/dp/1449311032)

2012-08-10 (permalink)

* finished reading [Effective Java (2nd Edition)](https://www.amazon.com/Effective-Java-2nd-Joshua-Bloch/dp/0321356683)

2011-06-15 (permalink)

* finished reading [Growing Object-Oriented Software, Guided by Tests](https://www.amazon.com/Growing-Object-Oriented-Software-Guided-Tests/dp/0321503627)

2011-04-02 (permalink)

* finished reading [Getting Real: The Smarter, Faster, Easier Way to Build a Successful Web Application](https://www.amazon.com/Getting-Real-Smarter-Successful-Application/dp/0578012812)

2011-03-14 (permalink)

* finished reading [Learning the vi and Vim Editors: Text Processing at Maximum Speed and Power](https://www.amazon.com/Learning-Vim-Editors-Processing-Maximum-ebook/dp/B005EI85BE)

2011-01-29 (permalink)

* finished reading [Rework](https://www.amazon.com/Rework-Jason-Fried/dp/0307463745)