2022-07-25 (permalink)

Prevent layout shifts when body becomes scrollable

(The notes below were ripped from this nicely written post: [React: Preventing Layout Shifts When Body Becomes Scrollable](https://maxschmitt.me/posts/react-prevent-layout-shift-body-scrollable/))

Use [`scrollbar-gutter`](https://developer.mozilla.org/en-US/docs/Web/CSS/scrollbar-gutter). However, if Safari / iOS support is paramount for your business, you can try the following.
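For reference, the modern fix is a one-liner; a sketch of what the linked MDN page suggests:

```css
/* reserve space for the scrollbar even when the content does not overflow */
html {
  scrollbar-gutter: stable;
}
```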

First, calculate the width of the scrollbar:
function getScrollbarWidth() {

  // Creating invisible container
  const outer = document.createElement('div');
  outer.style.visibility = 'hidden';
  outer.style.overflow = 'scroll'; // forcing scrollbar to appear
  outer.style.msOverflowStyle = 'scrollbar'; // needed for WinJS apps
  document.body.appendChild(outer);

  // Creating inner element and placing it in the container
  const inner = document.createElement('div');
  outer.appendChild(inner);

  // Calculating difference between container's full width and the child width
  const scrollbarWidth = (outer.offsetWidth - inner.offsetWidth);

  // Removing temporary elements from the DOM
  outer.parentNode.removeChild(outer);

  return scrollbarWidth;
}
Then create a custom hook around `ResizeObserver` to understand when the document body is scrollable and when it's not:
function useBodyScrollable() {
    const [bodyScrollable, setBodyScrollable] = useState(
        document.body.scrollHeight > window.innerHeight
    )

    useEffect(() => {
        const resizeObserver = new ResizeObserver(() => {
            setBodyScrollable(document.body.scrollHeight > window.innerHeight)
        })
        resizeObserver.observe(document.body)
        return () => {
            resizeObserver.disconnect()
        }
    }, [])

    return bodyScrollable
}
Last, add _some_ padding to the document when its body is **not** scrollable, and remove it (the padding) when the body gets scrollable:
const bodyScrollable = useBodyScrollable()

useLayoutEffect(() => {
    if (bodyScrollable) {
        document.body.style.paddingRight = '0px'
    } else {
        document.body.style.paddingRight = `${scrollbarWidth}px`
    }
}, [bodyScrollable])

2022-07-13 (permalink)

Sending keep-alive messages with Nest.js

SSE keep-alive messages are usually implemented via _comments_, i.e. lines with a colon as first character ([MDN](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#event_stream_format)); unfortunately Nest does not seem to support this type of message, which forces us to get a hold of the current response object and manually write keep-alive messages into it.
sse(@UserWithOrganization() user: AuthenticatedUser, @Res() res: Response): Observable<MessageEvent> {
  const notifications$ = this.notificationsEventService.subscribe(user.profileId, user.organizationId).pipe(map((data) => ({ data })));
  const keepAlive$ = interval(this.configService.get('app.sseKeepAliveInterval')).pipe(
    tap(() => res.write(': keepalive\n\n')),
    filter(() => false)
  ) as Observable<MessageEvent>;
  return merge(keepAlive$, notifications$);
}
Nest expects SSE messages to comply with the following interface:
export interface MessageEvent {
    data: string | object;
    id?: string;
    type?: string;
    retry?: number;
}
I tried to emit objects shaped like `{ data: '' }` as keep-alive messages, but Nest actually wound up emitting:
id: 1

id: 2

id: 3

I did not like that the last event id kept increasing with each keep-alive message, so I opted for the other solution described above, i.e. getting the response object injected and manually writing into it.
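For reference, the wire format makes the distinction clear: comment lines start with a colon and are ignored by `EventSource`, while `data:` lines produce `message` events. A minimal sketch of frame builders (the helper names are made up for illustration):

```javascript
// build an SSE data frame: a "data:" line, terminated by a blank line
const sseData = (payload) => `data: ${JSON.stringify(payload)}\n\n`;

// build an SSE keep-alive frame: a comment line, ignored by EventSource
const sseKeepAlive = () => ': keepalive\n\n';

console.log(JSON.stringify(sseData({ hello: 'world' })));
console.log(JSON.stringify(sseKeepAlive()));
```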

2022-07-12 (permalink)

Which of the `useEffect` dependencies was the one that triggered the effect?

I was recently trying to understand why one of my effects was getting fired even though I _thought_ that none of its dependencies had changed; after a little bit of debugging I realized that I was listing as a dependency an object which was getting re-created every time the hook ran...

The following homemade hook is what really helped me pinpoint the faulty dependency:

- Same signature as `useEffect`: effect function and dependencies array
- Internally calls `useEffect` to register the effect to fire when _any_ of the dependencies changes
- Also registers individual effects, one for each dependency, to log to console the dependency that did change
- Also registers an extra effect to log when the current component is mounted / unmounted
import { EffectCallback, DependencyList, useEffect } from 'react';

const useEffectsVerbose = (effect: EffectCallback, deps: DependencyList) => {
  useEffect(() => {
    console.log('MOUNT');
    return () => console.log('UNMOUNT');
  }, []);

  for (let each of deps) {
    // eslint-disable-next-line react-hooks/rules-of-hooks
    useEffect(() => {
      console.log('CHANGED', each);
    }, [each]);
  }

  useEffect(effect, deps); // eslint-disable-line react-hooks/exhaustive-deps
};

export default useEffectsVerbose;
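The underlying reason an inline object literal kept firing the effect: React compares each dependency against its previous value with `Object.is`, and a freshly created object is never the same reference. A standalone illustration (the `depsChanged` helper is a sketch of the idea, not React's actual code):

```javascript
// roughly how React decides whether to re-run an effect:
// compare each dependency with Object.is against the previous render's value
const depsChanged = (prev, next) =>
  prev.length !== next.length || prev.some((dep, i) => !Object.is(dep, next[i]));

console.log(depsChanged(['a', 1], ['a', 1]));           // false: primitives compare by value
console.log(depsChanged([{ page: 1 }], [{ page: 1 }])); // true: a new object is a new reference
```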

2022-06-09 (permalink)

More handy react-hooks

`useEventSource`, a tiny wrapper around `EventSource` (for SSE):

- The hook's signature exactly matches `EventSource`'s
- It returns the most recently received event / error
import { useEffect, useState } from 'react';

type UseEventSourceReturn<T> = {
  data: T | undefined;
  error: Event | undefined;
};

const useEventSource = <T>(url: string, init?: EventSourceInit): UseEventSourceReturn<T> => {
  const [data, setData] = useState<T>();
  const [error, setError] = useState<Event>();

  useEffect(() => {
    const es = new EventSource(url, init);

    es.onmessage = (e) => {
      setData(JSON.parse(e.data)); // assuming JSON-encoded payloads
    };

    es.onerror = (err) => {
      setError(err);
    };

    return () => {
      es.close();
    };
  }, [url, init]);

  return { data, error };
};

export default useEventSource;

2022-06-08 (permalink)

Couple of useful react-hooks we have created at work, recently

`useIntersectionObserver`, a tiny `IntersectionObserver` wrapper:

- `node` is the `HTMLElement` to observe
- `opts` is the init params to pass to `IntersectionObserver`
- `observedEntry` is the observed entry, as seen by `IntersectionObserver`
import { useEffect, useState } from 'react';

const INTERSECTION_OBSERVER_INIT: IntersectionObserverInit = {
  root: null,
  rootMargin: '0px',
  threshold: 0.75,
};

const useIntersectionObserver = (
  node: HTMLElement | null,
  opts: IntersectionObserverInit | undefined = INTERSECTION_OBSERVER_INIT,
) => {
  const [observedEntry, setObservedEntry] = useState<IntersectionObserverEntry>();

  useEffect(() => {
    const observer = new IntersectionObserver(([entry]) => setObservedEntry(entry), opts);

    if (node) {
      observer.observe(node);
    }

    return () => observer.disconnect();
  }, [node, opts]);

  return { observedEntry };
};

export default useIntersectionObserver;
`useVirtualKeyboardVisibility`, a hook to detect if the virtual keyboard is visible or not, while on mobile:

- handy if you wanted to hide certain elements as the user is typing into a form or something
- works by diffing the screen height with the document height
import { useEffect, useState } from 'react';
import { isMobile } from 'react-device-detect';

const useVirtualKeyboardVisibility = (minBrowserToolbarHeight = 200) => {
  const [visible, setVisible] = useState(isVisible(minBrowserToolbarHeight));

  useEffect(() => {
    if (!isMobile) {
      return;
    }

    function handler() {
      if (visible !== isVisible(minBrowserToolbarHeight)) {
        setVisible(!visible);
      }
    }

    window.visualViewport.addEventListener('resize', handler);
    return () => {
      window.visualViewport.removeEventListener('resize', handler);
    };
  }, [visible, minBrowserToolbarHeight]);

  return visible;
};

function isVisible(minDelta: number) {
  return window.screen.height - document.body.clientHeight > minDelta;
}
export default useVirtualKeyboardVisibility;

2022-04-29 (permalink)

? vim indentexpr that uses neoformat behind the scenes to figure out where the new line should go

  - similar to what my vim-lispindent plugin does

? vim netrw to automatically create the file **and** the parent directory, if missing

2022-04-26 (permalink)

[11 promises from a manager](https://twitter.com/MrEchs/status/1516024322409832450):
1. We’ll have a weekly 1:1. I’ll never cancel this meeting, but you can cancel it whenever you like. It’s your time.
2. Our 1:1 agenda will be in the meeting invite so we remember important topics. But you’re always free to use the time for whatever’s on your mind.
3. When I schedule a meeting with you, I’ll always say *when I schedule it* what it’s meant to be about. I will not schedule meetings without an agenda.
4. When I drop into your DM’s, I’ll always say “hi and why.” No suspense, no small talk while you are wondering what I want.
5. News or announcements that significantly impact you, your work, or your team will come from me directly in a 1:1, not revealed in a big meeting.
6. You’ll get feedback from me when it’s fresh. There will be no feedback in your performance review that you’re hearing for the first time.
7. I trust you to manage your own time. You don’t need to clear with me in advance your time AFK or OOO.
8. Your work gets done your way. My focus is on outcomes, not output. Once we’re clear on where we need to go, how to get there is up to you. If I ever find it necessary to suggest a specific approach, I will supply an example.
9. A team is strongest when it’s working together, looking after one another, and taking care of each other. Please look to your left and to your right for opportunities to help your colleagues. Please ask for help when you need it. Nobody works alone.
10. I trust you to skip level and talk to my manager or other senior management about anything you feel is relevant. You don’t need to clear it with me, and I’m not going to get weird about it when you do.
11. I will attribute credit appropriately to you and your team. I will never exaggerate my own role or minimize your contribution. I’ll be especially certain to nail down attribution when senior management are hearing of our accomplishments.

2022-04-15 (permalink)

[How does React Suspense Work?](https://www.youtube.com/watch?v=onrMjAGY4qA)
  - You **throw a Promise**
  - React will catch that, wait for it to be fulfilled / rejected, and then call the render function again
  - The next time the render function is invoked, the promise's result is returned **synchronously** (i.e. no need to `await` it)
let resource = {
  user: wrapPromise(fetch(...)),
};

function wrapPromise(promise) {
  let status = "pending";
  let result;
  let suspender = promise.then(
    r => {
      status = "success";
      result = r;
    },
    e => {
      status = "error";
      result = e;
    }
  );
  return {
    read() {
      if (status === "pending") {
        throw suspender;
      } else if (status === "error") {
        throw result;
      } else if (status === "success") {
        return result;
      }
    }
  };
}

function ProfileDetails() {
  const user = resource.user.read();
  return <h1>{user.name}</h1>;
}
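The read-or-throw flow can be exercised without React; a self-contained sketch in which two `read()` calls play the role of two renders (re-stating the `wrapPromise` state machine so the snippet runs on its own):

```javascript
// same state machine as wrapPromise, driven by hand instead of by React
function wrapPromise(promise) {
  let status = "pending";
  let result;
  let suspender = promise.then(
    (r) => { status = "success"; result = r; },
    (e) => { status = "error"; result = e; }
  );
  return {
    read() {
      if (status === "pending") throw suspender; // React catches this and waits
      if (status === "error") throw result;
      return result;
    },
  };
}

const resource = wrapPromise(Promise.resolve({ name: "Ada" }));

try {
  resource.read(); // first "render": still pending, throws the promise
} catch (thrown) {
  thrown.then(() => {
    console.log(resource.read().name); // second "render": the result, synchronously
  });
}
```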

2022-04-13 (permalink)

TIL: you can pass `--not --remotes` to `git log` to get the list of local commits not present in any of the remotes
  - This was mentioned in the [tpope/vim-fugitive] Large Unpulled section (Issue #1963), as a way to improve the information shown when firing `:G`

2022-04-10 (permalink)

TIL: [vim-rhubarb](https://github.com/tpope/vim-rhubarb) supports GitHub issues, issue URLs, and collaborator omni-completion
TIL: [Curl](http://curl.haxx.se/) supports reading secrets from .netrc

Git's dotted Range Notations

The `..` (two-dot) Range Notation
The `^r1 r2` set operation appears so often that there is a shorthand for it. When you have two commits r1 and r2 (named according to the syntax explained in SPECIFYING REVISIONS above), you can ask for commits that are reachable from r2 excluding those that are reachable from r1 by `^r1 r2` and it can be written as `r1..r2`.

The `...` (three-dot) Symmetric Difference Notation
A similar notation `r1...r2` is called symmetric difference of r1 and r2 and is defined as `r1 r2 --not $(git merge-base --all r1 r2)`. It is the set of commits that are reachable from either one of r1 (left side) or r2 (right side) but not from both.

Source: https://mirrors.edge.kernel.org/pub/software/scm/git/docs/gitrevisions.html
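A quick way to see the difference on a throwaway repo (branch names are illustrative; assumes git >= 2.28 for `init -b`):

```shell
# build a tiny history: one shared commit, then one commit on each branch
git init -q -b main demo
git -C demo -c user.name=t -c user.email=t@t commit -q --allow-empty -m base
git -C demo branch feature
git -C demo -c user.name=t -c user.email=t@t commit -q --allow-empty -m main-only
git -C demo checkout -q feature
git -C demo -c user.name=t -c user.email=t@t commit -q --allow-empty -m feature-only

git -C demo log --oneline main..feature    # only the feature-only commit
git -C demo log --oneline main...feature   # both main-only and feature-only
```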

2022-04-09 (permalink)

TIL: ECMAScript [bind operator proposal](https://github.com/tc39/proposal-bind-operator)
  - It introduces a new operator `::` which performs `this` binding and method extraction
  - In its binary form, the `::` operator creates a bound function such that the left hand side of the operator is bound as the `this` variable to the target function on the right hand side
// before
let hasOwnProp = Object.prototype.hasOwnProperty;
let obj = { x: 100 };
hasOwnProp.call(obj, "x");
// after
let hasOwnProp = Object.prototype::hasOwnProperty;
obj::hasOwnProp("x");
  - In its unary prefix form, the `::` operator creates a bound function such that the base of the supplied reference is bound as the `this` variable to the target function
// before
document.foo.bind(document);
// after
::document.foo;
  - `document::foo(bar)` is `foo.bind(document)(bar)` (this only makes sense when `foo` is defined like `function foo` and not `foo = () => ...` (`.bind` has no effect on arrow functions))
  - `::document.foo` is `document.foo.bind(document)`
  - The proposal has not been updated since 2018, but newer proposals that try to replace this functionality, namely [extensions](https://github.com/tc39/proposal-extensions) and [call-this](https://github.com/tc39/proposal-call-this), are slowly gaining audience (both Stage 1)
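Since `::` never landed, `Function.prototype.bind` remains the desugaring; a small sketch of the equivalences listed above (including `bind` being a no-op on arrow functions):

```javascript
const hasOwnProp = Object.prototype.hasOwnProperty;
const obj = { x: 100 };

// obj::hasOwnProp("x") would desugar to:
const bound = hasOwnProp.bind(obj);
console.log(bound("x")); // true

// .bind has no effect on an arrow function's `this`
const arrow = () => this;
console.log(arrow.bind(obj)() === obj); // false
```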

* @read [Declarative Shadow DOM](https://web.dev/declarative-shadow-dom/)

  - Shadow DOM, but for SSR
  - Add a `shadowroot` attribute to a `<template>` element, and that's it
data:text/html,<host-element><template shadowroot="open"><slot></slot></template><h2>Light content</h2></host-element>

* ["Unison: a new distributed programming language" by Paul Chiusano](https://www.youtube.com/watch?v=gCWtkvDQ2ZI)

  - Content-addressed code: all the definitions below **are the same**
factorial n = product (range 1 (n+1))
blah      n = product (range 1 (n+1))
blah2     z = product (range 1 (z+1))
  - For each hash, it will store its serialized AST, plus additional metadata, like _names_
  j1ejquc/ # hash
  - Inside the AST file, you will find the original definition with all the dependencies replaced by their hashes, and with all local variables normalized
factorial =
  $arg1 -> #8h7jbs5 (#h9bwx9q 1 (#e2i3biw $arg 1))
  - No builds, easy renames, test caching...
  - No dependency conflicts
  - Easy typed, durable storage
  - Programs that deploy themselves + describe whole elastic distributed systems
  - Further reading: [Unison Programming Language](https://news.ycombinator.com/item?id=27652677), on Hacker News

2022-04-07 (permalink)

WTF: Since when am I using the ancient version of bash bundled with macOS, and not `brew`'s one?
  - default bash shell
$ $BASH --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)
Copyright (C) 2007 Free Software Foundation, Inc.
  - brew's one
$ $(brew --prefix)/bin/bash --version
GNU bash, version 5.1.16(1)-release (x86_64-apple-darwin20.6.0)
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ echo $(brew --prefix)/bin/bash
  - Make sure `/usr/local/bin/bash` is listed inside /etc/shells
  - `chsh -s /usr/local/bin/bash`
  - didn't I do this already? @2017-12-12
  - I am not sure what's going on, but I am going to pretend this never happened...computers are weird
  - https://itnext.io/upgrading-bash-on-macos-7138bd1066ba

2022-04-06 (permalink)

TIL: how to get a customizable entrypoint for a docker image
  - specify in the ENTRYPOINT, the base command only, e.g. `ENTRYPOINT ["node"]`
  - specify in the CMD, all the additional arguments to run the app, e.g. `CMD ["dist/main.js"]`
  - this way, a standard `docker run $image` would run: `node dist/main.js`
  - if you wanted to launch a Node.js REPL instead: `docker run $image -`
  - if you wanted to fire a terminal shell, you will have to use the `--entrypoint` flag instead: `docker run --entrypoint sh $image`
  - [Docker CMD vs. Entrypoint Commands: What's the Difference?](https://phoenixnap.com/kb/docker-cmd-vs-entrypoint)
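Put together, a minimal sketch of such a Dockerfile (image name and paths are illustrative):

```dockerfile
FROM node:16-slim
WORKDIR /app
COPY . .
# base command only: stays fixed across invocations
ENTRYPOINT ["node"]
# default arguments: replaced by whatever follows the image name in `docker run`
CMD ["dist/main.js"]
```

With this, `docker run $image` runs `node dist/main.js`, while `docker run $image --version` runs `node --version` instead.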
TIL: you can point your browser to `data:text/html,...`, and have it process some HTML
  - Three nested div elements, with a slightly opaque black background each (`%23` is just `#`, url-encoded)
data:text/html,<style>* { background: %230001 }</style><div>foo<div>bar<div>baz
TIL: `<a><b><a>` means `<a><b></b></a><b><a>`, i.e. the `<b>` gets duplicated
data:text/html,<style>a { background: rgba(255,255,0,0.5) } * { border: 1px solid black; padding: 1em; margin: 1em } b { outline: 5px dotted red }</style><div id=div><a>foo<b>bar<a>baz</div><pre id=pre></pre><script>pre.innerText = div.innerHTML</script>
  - What if the `<b>` has an ID?  Well, that gets duplicated too!
data:text/html,<style>a { background: rgba(255,255,0,0.5) } * { border: 1px solid black; padding: 1em; margin: 1em } b { outline: 5px dotted red }</style><div id=div><a>foo<b id=b>bar<a>baz</div><pre id=pre></pre><script>pre.innerText = div.innerHTML</script>
TIL: the `<plaintext>` tag can be used to stop the parser from processing the rest of the document
  - You can use `<plaintext>` to make an html preamble to an arbitrary document, and have the whole thing be an html document
<html>[...html document...]<plaintext>[...arbitrary document...]
  - The `<plaintext>` tag just swallows everything coming after it, but you can still interact with the rest of the document just fine
data:text/html,foo<script>onload = () => document.body.append(123)</script><plaintext>bar</plaintext>baz
TIL: module `<script>` are automatically deferred
  - btw, `defer` here basically means the script is executed only once the document has been parsed

2022-04-05 (permalink)

- is it possible to npm install a project with package-lock.json but without package.json? because that might speed up my docker builds a bit
  - say I add a new script to my package.json, but don't change anything else
  - ideally I would not want my CI/CD to re-download dependencies
  - re-building the app is fine, I guess, but re-downloading dependencies really seems like a waste of time, especially if the package-lock.json did not change
  - ideally I would: 1) copy package-lock, 2) run the magic npm command, 3) copy package.json
  - and if the package-lock does not change, docker should be able to use the cached layers for step 1 and 2, and only run 3
  - but does such magic command exist?
  ? https://stackoverflow.com/questions/51110793/bumping-package-json-version-without-invalidating-docker-cache
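For the record, the common (if partial) workaround is to copy both manifests into their own layer, so dependencies are re-downloaded only when a manifest changes; a sketch (this still invalidates the cache when package.json changes, e.g. on a version bump, which is exactly the case the linked question is about):

```dockerfile
FROM node:16-slim
WORKDIR /app
# separate layer: invalidated only when a manifest changes
COPY package.json package-lock.json ./
RUN npm ci
# source changes land here, reusing the cached dependency layer
COPY . .
CMD ["node", "dist/main.js"]
```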

2022-04-04 (permalink)

TIL: Replit.com just launched a @SearchEngine -- https://blog.replit.com/search
  - **We believe that you should be able to find anything on Replit in less than 30 seconds.**
  - First prototype created during what they internally call, the _hack week_
  - It uses Elasticsearch
  - Plus a bunch of data-pipelines whose sole job is to take from the database (or other sources), convert it to a more convenient format, and finally push the result _into_ Elasticsearch
  - It searches across REPLs, posts, comments, snippets of code, doc articles
  - Most importantly: no _commingled_ search: all the different data-stores are searched in parallel, and results are ranked
- It's a difficult problem to be able to rank objects coming from different sources
- It does not surprise me that they avoided this in the first iteration
- I meant to ask about this during the Q&A at ReplCon, but ran out of time...
  - Some personal notes:
- 30 seconds is a lot of time, so that should give you plenty of time to figure out a way to search and retrieve all the data you have
- they most likely have their search engine scattered across different servers, with lots of RAM available for grabs
- they don't have commingled search, so they can search different indexes in parallel, and populate the search results page on a first-come-first-served basis, giving then time to the slowest paths, to catch up

2022-04-03 (permalink)

? @LiveProgramming with [LSX](https://github.com/fukamachi/lsx), and [parenscript](https://parenscript.common-lisp.dev/), and Websockets

  - Clients open a Websocket connection to the liveprogramming server, and everything they receive, they will `eval()` it
  - On the server we will have custom DEFUN, DEFVAR macros that would a) emit actual _JavaScript_ (this way you can save the output to a file, and use that as bootstrap point for new clients), and b) send a message to all the connected clients to evaluate the just updated piece of code
  - I am not sure what a DEFVAR expression gets translated into, but we need to make sure the existing value is **not** overridden
  - Aha: https://github.com/eshamster/proto-cl-hot-loads
  - One could even think of integrating [incremental-dom](https://github.com/google/incremental-dom), and basically roll out your own poor man's version of React

? @book The Programmer's Brain

? @book Pragmatic Thinking and Learning: Refactor Your Wetware

2022-04-02 (permalink)

TIL: You can stop `brew` from updating itself while `brew install`-ing something, by setting the following env variable: `HOMEBREW_NO_AUTO_UPDATE=1`
TIL: In JavaScript, `>>> 0`, is an abuse of the operator to convert any numeric expression to an "integer" or non-numeric expression to zero. -- [SO](https://stackoverflow.com/a/11385688)
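A few data points on the `>>> 0` trick (an unsigned 32-bit shift by zero):

```javascript
// >>> coerces its left operand to an unsigned 32-bit integer;
// shifting by 0 makes it a pure conversion
console.log(12.9 >>> 0);   // 12: the fraction is dropped
console.log(-1 >>> 0);     // 4294967295: two's complement reinterpreted as unsigned
console.log("foo" >>> 0);  // 0: non-numeric input becomes zero
console.log(NaN >>> 0);    // 0
```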

* Always persist the @whiteboard at: https://whiteboard.party/20220402.b4t4fmyrcf

  - Think of this as a _public_ whiteboard, almost like the old guestbook
  - I am sure I am going to regret this ;-)

? @book The Act of Living: Talks on the Alexander Technique

? @book A Pattern Language: Towns, Buildings, Construction (Center for Environmental Structure Series)

2022-04-01 (permalink)

TIL: Vim landed native Language Server Protocol support! [vim/vim@9247a22](https://github.com/vim/vim/commit/9247a221ce7800c0ae1b3487112d314b8ab79f53)

? Give this new vim/lsp plugin a try: https://github.com/yegappan/lsp

2022-03-30 (permalink)

Interesting presentation from @DavidNolen on @live-programming a react-native application: [ClojureScript in the Age of TypeScript](https://www.youtube.com/watch?v=3HxVMGaiZbc)

- Good analogy between a file-system and a program
  - You use a SHELL to interact with the file-system
- You can change directory
- You can see what's inside the current directory
- You can create a new file in the current directory
- You can delete a file from the current directory
- You would **never** use a SHELL that did not offer those 4 basic functionalities
  - REPL is how you interact with the program
- You should be able to change module
- You should be able to inspect the current module
- You should be able to define a new function in the current module
- You should be able to delete a function from the current module
- Why then settle for a REPL that did not offer the same 4 basic functionalities?
- Too much focus on the _static_ part of the development cycle; no affordances for the _dynamic_ part, i.e. "what happens when I actually run the program?"
  - Lisp
  - Smalltalk
- Storybook (to deal with the _static_)
  - Helps you develop your React components in isolation
  - All significant UI component/view states are represented in a story
  - Can be tested via a simulator and on the device
- Krell (to deal with the _dynamic_)
  > In programming there is a wide-spread 1st order theory that one shouldn't build one's own tools, languages, and especially operating systems.  This is true--an incredible amount of time and energy has gone down these ratholes.  On the 2nd hand, if you can build your own tools, languages and operating systems, then you absolutely should because the leverage that can be obtained (and often the time not wasted in trying to fix other people's not quite right tools) can be incredible.  --Alan Kay, The Power of the Context
  - Connect to native devices / simulator
  - Hot code reload without loss of state
  - Possible thanks to: Clojure/ClojureScript, and React/React-Native
- It cannot be done with SwiftUI or Flutter
- Tooling should help with both the static and dynamic elements of our programs
  - debuggers solve only part of this problem
- Given the current state of technology 70s era Lisp/Smalltalk style live programming delivers incredible value over current practice
- You need powerful tools that do not get in the way to enable higher level thoughts

* Process scrollwheel events only when triggered from the canvas -- @whiteboard

* Started collecting in a single place all the keyboard-related customizations I use -- one day I will publish it somewhere

  - Apps layer
  - Motion layer
  - Function layer
  - Capslock
  - ...

2022-03-29 (permalink)

TIL: CSS supports the "parent selector", or as the [spec](https://drafts.csswg.org/selectors/#relational) calls it, "the relational pseudo-class" or simply `:has()`

This could be used for form styling, e.g. giving a form a red border when any of its inputs is invalid:
form:has(input:invalid) {
  border: 2px solid red;
}
But I am sure somebody else will come up with more interesting use cases than this.

A note on its usage though, from the [MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/:has):
The `:has()` pseudo-class takes a relative selector list as an argument. In earlier revisions of the CSS Selectors Level 4 specification, `:has` had a limitation that it couldn't be used within stylesheets. Instead, it could only be used with functions like `document.querySelector()`; this was due to performance concerns. _This limitation has been removed_ because no browser implemented it that way. Instead, browsers currently only support the use of :has() within stylesheets.
So it's fine to use it inside CSS; but it **cannot** be used with `document.querySelector()`...too bad!

Getting livereload to work when deployed on Replit turned out a bit more complicated than expected -- @whiteboard

- I am trying to integrate livereload with an Express application running on [Replit](https://replit.com)
  - When deployed there, Replit assigns you a hostname
  - It proxies all incoming connections to your service, so long as it listened to ``
- To make this all work I then need the Express application to listen on `` (not a big deal, it's the default)
- I also need to tell livereload to listen on a different interface, e.g. ``
  - But this, unfortunately, is currently not supported
  - Hence a fork, to [add support for an additional config, `host`](https://github.com/iamFIREcracker/node-livereload/commit/ac2b26e4d205f5be54c697eb487e21e98368e826) which I can then tweak accordingly (I have also [asked](https://github.com/napcs/node-livereload/issues/161) the maintainer of the project if there was appetite for such a change...let's see what they say)
- So, knowing all this and with our fork: first we spin up the livereload server
var lrServer = livereload.createServer({ port: 35729, host: '' });
lrServer.watch(__dirname + "/public");
- Then we start the application Websocket server
var ioServer = http.createServer((req, res) => {
  res.end();
});
ioServer.listen(23434, '')
var io = socket(ioServer);
- Then the Express application, with middlewares to properly proxy requests to the liveserver and socket-io servers respectively
var app = express();
app.use(createProxyMiddleware('/lr', {
  target: '',
  ws: true,
  pathRewrite: {
    '^/lr': '',
  },
}));
app.use(createProxyMiddleware('/socket.io', {
  target: '',
  ws: true,
}));
var server = http.createServer(app);
server.listen(3000, '');
- Finally, we inject the following code on the client, to have them connect to the livereload server (you can read more about this [here](https://github.com/livereload/livereload-js/issues/98#issuecomment-1081922742))
  '<script src="' +
    location.protocol +
    '//' +
    location.host +
    '/lr/livereload.js?snipver=1' + (location.protocol === 'https:' ? '&port=443' : '') + '&path=lr"></' +
    'script>'

DNS for @whiteboard

- I am going to be hosting this on Replit
- Replit supports custom domains via CNAME records
  - They give you a unique name, e.g. `$id.id.repl.co`
  - They have it resolve to the IP address of the server your Repl runs on
  - You then set up a CNAME record so that `app.domain` gets resolved via `$id.id.repl.co`
- I bought [whiteboard.party](https://whiteboard.party) and I was hoping I could use that to access the app, but
- Google Domains would not let me set up a CNAME on a 2nd level domain
  - OK if you did it for: www.whiteboard.party, or app.whiteboard.party
  - Not OK for whiteboard.party
- So I have settled for [d.whiteboard.party](https://d.whiteboard.party) for now; I have also set up automatic forwarding so that people landing on whiteboard.party get redirected to d.whiteboard.party
- I have not made up my mind yet; other alternatives follow
  - the.whiteboard.party
  - a.whiteboard.party
  - app.whiteboard.party

2022-03-28 (permalink)

[Programmable notes](https://maggieappleton.com/programmatic-notes)
Programmable notes are note-taking systems that allow you to write programmatic rules that facilitate particular ways of working with your notes.
Based on triggers (specific conditions are met, input from the user, time of day), they kick off a sequence of actions. Actions might be prompts for the user to answer, transformations on the notes themselves, or requests to external websites and data sources for information.
Few examples:
When I open my note-taking app first thing in the morning -> ask what I'm currently thinking about. Once I've marked this complete, show me what I said yesterday, 3 days ago, and last week.
When I create a note related to a specific book -> pull in the author, cover, and publication date from the Google Books API.
When a week has passed since I've written notes on a book -> ask me everything I remember about it off the top of my head. Put this into the summary section of my notes as a first draft.
When I mark an essay draft as complete -> ask me all the questions my favourite writing professor would ask.

- Do you actually have a point and have you made it?
- Does your introduction present a compelling problem?
- Are you assuming the audience is thicker than a brick?
- Is this littered with passive voice (to be, is, was, were...) and evasive language?
So the idea is basically figure out a way to make the knowledge base...alive!

From the same article:
This is [end-user programming](https://www.inkandswitch.com/end-user-programming/) in action. The developers and designers of note-taking apps will never be able to create systems that suit the needs of everyone's particular knowledge management workflows. Enabling users to design and share their own is the only way to give people agency over their knowledge bases.

* Change the color of the toolbar when a new marker color is selected -- @whiteboard

  - I have also replaced the `<select>` with a _palette_ icon

* Smaller button sizes on mobile screens -- @whiteboard

  - This (and the change above too) enables us to display more control buttons on mobile

* Start livereload-js from _within_ the server application @whiteboard

  - This way I don't have to mess around with nix scripts to get the livereload server to start while running on Replit

2022-03-27 (permalink)

* @A Add "Welcome" dialog to @whiteboard

  - Quick overview of the existing functionalities, and on how to use it
  - "whiteboards are secret but not private" warning
  - "whiteboards older than 15 days will be nuked" danger

2022-03-25 (permalink)

* @C @whiteboard Show the user how big the eraser (and the marker) is going to be

  - As of now, we are simply rendering an _ellipse_ following the mouse movements

? @B Implement touch gestures @whiteboard

  - Pinch to zoom / double tap and hold a-la Google Maps
  - Two fingers to move around

* @A Fixed a bug where zooming out too much changes the aspect ratio of the @whiteboard

  - [`p5.image()`](https://p5js.org/reference/#/p5/image), when in "destination rectangle mode" (i.e. invoked with 8 parameters as shown below) seems to be stretching the image when source's width or height gets bigger than the buffer's width or height
  // destination
  0, 0, p5.windowWidth, p5.windowHeight,
  // source
  view.x, view.y, view.w, view.h,
  - The solution was to implement the _scaling_ logic ourselves, and simply paint the buffer (i.e. the dashboard) scaled up or down accordingly
  -view.x / scale(),
  -view.y / scale(),
  width / scale(),
  height / scale(),

function scale() {
  return view.w / p5.windowWidth;
}

+ @B @2022-03-28 Implement an API so it's possible to programmatically create a @whiteboard (think about creating a bot for Microsoft Teams)

  - `/` redirects to `/new`
  - `/new` redirects to `/20220325.ga49oukh4i`
  - In the future, we can have our landing page mounted on `/`

+ @A @2022-03-28 @whiteboard forget about network import -- use local modules instead

  - Repeatable builds, please!

2022-03-24 (permalink)

* Integrated [livereload-js](https://github.com/livereload/livereload-js) with @whiteboard

  - `npm run serve` will use `nodemon` to start the server, excluding changes happening inside public/
  - `npm run livereload` will start `livereload` on the public/ folder
  - `npm run start-dev` will run `npm run serve` and `npm run livereload`, in parallel
  - Change something on the express app, `nodemon` will kick in and restart it
  - Change something on the client facing code, `livereload` will kick in and tell any connected client to _refresh_ (note: JavaScript related changes require a full page refresh, while CSS or HTML ones do not)

* @A @whiteboard It was not possible to use the mouse on laptops with touch-screens

  - That's because we were either registering touch events **or** mouse ones
  - Instead, we should have always registered mouse events, and optionally touch ones
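
  A minimal sketch of that registration logic (element and handler names here are illustrative, not the actual @whiteboard code):

```javascript
// Always register mouse events; add touch handlers only when the device
// supports them.  `canvas`, `onPointerDown`, and `onPointerMove` are
// illustrative names.
function registerInputEvents(canvas, onPointerDown, onPointerMove) {
  canvas.addEventListener('mousedown', e => onPointerDown(e.clientX, e.clientY));
  canvas.addEventListener('mousemove', e => onPointerMove(e.clientX, e.clientY));

  if ('ontouchstart' in window) {
    canvas.addEventListener('touchstart', e => {
      const t = e.touches[0];
      onPointerDown(t.clientX, t.clientY);
    });
    canvas.addEventListener('touchmove', e => {
      const t = e.touches[0];
      onPointerMove(t.clientX, t.clientY);
    });
  }
}
```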

? [skypack is breaking font files](https://github.com/skypackjs/skypack-cdn/issues/137), so I was forced to use unpkg.com instead; should I go all in with this one, and forget about skypack.dev?

2022-03-23 (permalink)

* @A Fixed Out of memory exception while on mobile @whiteboard -- I guess saving a canvas **per shape** is a bit too much, isn't it?! Ended up reworking the drawing logic a bit

  - 1 rendering canvas, 1 offscreen graphic
  - all the _remote_ draw operations are done on the offscreen
  - inside p5.draw, first we copy the offscreen graphic over to the canvas, and then we directly draw any local shape not yet acknowledged by the server
  - And with this, it's now decently usable on mobile as well!
  - For the record, this is the out of memory exception I was talking about
Uncaught RangeError: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': Out of memory at ImageData creation
    at d.default.Renderer2D.loadPixels (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:10747:130)
    at s.default.Graphics.h.default.loadPixels (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:13082:73)
    at Object.h.default._copyHelper (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:13071:12)
    at s.default.Graphics.h.default.copy (cdn.skypack.dev/-/p5@v1.4.1-EGzjLa9vkIOCBSEWMsHC/dist=es2019,mode=imports/optimized/p5.js:13069:29)
    at draw (:3000/main.js:391:12)
    at add (:3000/main.js:377:20)
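
  The reworked rendering logic can be sketched as follows (shape structure and method names are illustrative; the real code uses p5.js, here stubbed behind the same method names):

```javascript
// One rendering canvas + one offscreen graphic: acknowledged remote shapes
// live on `offscreen`; `pending` holds local shapes the server has not
// acknowledged yet.
function render(p5, offscreen, pending) {
  // 1. copy the offscreen graphic over to the visible canvas
  p5.image(offscreen, 0, 0);
  // 2. directly draw any local shape not yet acknowledged by the server
  for (const shape of pending) {
    p5.stroke(shape.color);
    p5.line(shape.x1, shape.y1, shape.x2, shape.y2);
  }
}

// On acknowledgement, a shape moves from `pending` onto `offscreen`:
function acknowledge(offscreen, pending, id) {
  const i = pending.findIndex(s => s.id === id);
  if (i === -1) return;
  const [shape] = pending.splice(i, 1);
  offscreen.stroke(shape.color);
  offscreen.line(shape.x1, shape.y1, shape.x2, shape.y2);
}
```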

* @A @whiteboard Prevented touch events from scrolling the window -- only works on Chrome for now, but whatever

? @B @whiteboard Touch events cause the window to scroll while using DuckDuckGo Android browser

  - [javascript + html5 canvas: drawing instead of dragging/scrolling on mobile devices?](https://stackoverflow.com/questions/9975352/javascript-html5-canvas-drawing-instead-of-dragging-scrolling-on-mobile-devic)
  - [prevent browser scrolling while drawing on canvas on mobile](https://kirkdev.blogspot.com/2020/10/prevent-browser-scrolling-while-drawing.html)

* @B @whiteboard Added Zoom-in/Zoom-out explicit buttons/menu

2022-03-22 (permalink)

? @B Immediately scrolling without moving the mouse first causes the @whiteboard camera to close in on (or move away from) the top left corner, instead of on where the mouse currently is

2022-03-21 (permalink)

* Integrated fzf with z, as suggested [on the official wiki](https://github.com/junegunn/fzf/wiki/Examples#z)

* Integrated fzf with bash's history search (CTRL-R) (I actually borrowed all the magic from [the official key bindings](https://github.com/junegunn/fzf/blob/master/shell/key-bindings.bash))

* Complete UI overhaul for @whiteboard

  - We now use Bootstrap
  - Different modes of working: panning, drawing, erasing

* Fixed @whiteboard panning bug, where slight movements would cause the camera view to move **a lot**

  - We were moving the camera based on the point where the dragging had begun, and not just the previous point
  - We are now also taking the zoom level into account, and adjusting the movements accordingly (e.g. the more the user is zoomed in, the smaller the movement)
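
  The fixed panning logic boils down to something like this (a sketch; names mirror the `view` object used elsewhere in these notes):

```javascript
// Pan based on the delta from the *previous* point, not from where the drag
// began, and scale the movement by the current zoom level: view.w / windowWidth
// shrinks below 1 when zoomed in, so movements get smaller accordingly.
function pan(view, windowWidth, prev, curr) {
  const zoom = view.w / windowWidth;
  view.x -= (curr.x - prev.x) * zoom;
  view.y -= (curr.y - prev.y) * zoom;
}
```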

* Adjust the @whiteboard marker size accordingly, based on the current zoom level -- what you see is what you get

2022-03-19 (permalink)

* Shapes are now relative to the top-left corner of the @whiteboard

- Previously they were relative to the client view, and because of that draw commands had to include not just the points, but also the size of the client view
- This also addresses a couple of quirks of the current implementation
  - Lines looked different (i.e. thicker, or thinner) depending on the zoom level of the client that drew them; now they all look the same, at least **after** the server has acknowledged the draw command (while drawing, for the client who is drawing, the line would still look _different_)
  - Shapes would get stretched based on the aspect ratio of the client screen -- definitely a bug

* System generated @whiteboard room IDs now include the creation date

- We were previously using UUIDs
- Now IDs look like: `r20220319.83e8fmmoo1`
  - `r`, to mark that it's a room ID
  - `20220319`, the date when the room was created
  - `.`, a separator
  - `83e8fmmoo1`, some random data [zbase32](https://philzimmermann.com/docs/human-oriented-base-32-encoding.txt) encoded
- One plus of having switched to a _structured_ ID like this one is that we can now parse the room creation date and, for example, decide **not** to persist any room older than, say, 30 days
  - We will still keep it in memory, just in case, but the moment the service is bounced, old rooms will be gone
setInterval(function syncToDisk() {
  const today = moment.utc();
  const copy = {};
  for (const key of Object.keys(state)) {
    const cdate = moment.utc(key.substring(1, 9), "YYYYMMDD");
    if (today.diff(cdate, 'days') < 15) {
      copy[key] = state[key];
    }
  }
  fs.writeFile("state.json", JSON.stringify(copy), function writeFileCb(err) {
    if (err) throw err;
    console.log(`Persisted [state=${JSON.stringify(copy)}]`);
  });
}, seconds(30));

2022-03-18 (permalink)

* @Whiteboard changed the active cursor based on what the user is doing (i.e. pen, eraser, move)

  - Interestingly enough I was not able to use `p5.cursor` because it does not support .svg images; well, it actually lets you use them, it's just that if you also try to specify an x/y offset to identify the _active_ portion of the cursor, p5.js will ignore this additional information

+ @A @2022-03-19 Shapes are wrongly stretched based on the different aspect ratios -- @whiteboard

  - (try to) Draw a square
  - Open developer tools
  - Refresh the page (now that the developer tools are open, the aspect ratio of the canvas will be different)
  - The square should be rendered stretched vertically

+ @A @2022-03-19 Zooming out too much causes the @whiteboard to be rendered with the wrong aspect ratio

~ @C @2022-04-02 @Whiteboard is really janky while accessed from mobile... maybe the following might help: [Touch events](https://developer.mozilla.org/en-US/docs/Web/API/Touch_events)

2022-03-16 (permalink)

? @B Replace `(permalink)`s with `¶`s while generating html for my @plan file -- https://www.w3.org/WAI/EO/Drafts/tutorials/page-structure/in-page-navigation/#anchors

? @B Implement permalinks for sections (or for special markers like `~$DATE.$RANDOM`) -- @plan

TIL about the difference between _direct_ and _indirect_ `eval` in JavaScript
var x = 'outer';
(function() {
  var x = 'inner';
  eval('console.log(x)'); // "inner"
})();

var x = 'outer';
(function() {
  var x = 'inner';
  [eval][0]('console.log(x)'); // "outer"
})();
(You can read more about this here: [Global eval. What are the options?](http://perfectionkills.com/global-eval-what-are-the-options/))

Anyways, I will be playing with this a bit more over the following days, and see how difficult it actually is to implement a system that would let me change a Web page from my IDE, without page refresh (related: [Skewer: live web development with Emacs](https://github.com/skeeto/skewer-mode)).

2022-03-14 (permalink)

I am dropping here a few links I stumbled upon, recently, around ID schemas.

Let's start with [Snowflake IDs](https://en.wikipedia.org/wiki/Snowflake_ID):

- Total of 64 bits (only 63 are actually used)
  - First 41 bits represent milliseconds since a chosen epoch
  - Next 10 bits represent the machine ID (to prevent clashes)
  - Next 12 bits represent a per-machine sequence number (this allows creating multiple IDs in the same millisecond)
- Sortable by time (IDs are based on the time they were created)
- The creation time of the entity using the ID can be calculated from the snowflake
- Introduced by Twitter, in [June 2010](https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake)
  - Later adopted by other companies as well, like Instagram in [December 2012](https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c)
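
The layout above can be sketched with BigInt arithmetic (the epoch below is Twitter's well-known one; any chosen epoch works):

```javascript
// Compose/decompose a Snowflake-style 64-bit ID.
// Layout (from MSB): 1 unused bit | 41 bits ms-since-epoch | 10 bits machine ID | 12 bits sequence.
const EPOCH = 1288834974657n; // Twitter's epoch (2010-11-04)

function makeSnowflake(timestampMs, machineId, sequence) {
  return ((BigInt(timestampMs) - EPOCH) << 22n)
       | (BigInt(machineId) << 12n)
       | BigInt(sequence);
}

// The creation time parses right back out of the top 41 bits:
function snowflakeTimestamp(id) {
  return Number((id >> 22n) + EPOCH); // milliseconds since the Unix epoch
}
```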

[Collision-resistant IDs](https://github.com/ericelliott/cuid):

- Example: **c - h72gsb32 - 0000 - udoc - l363eofy**
  - Literal `c`, identifying that it's a cuid (starting with a letter guarantees that the ID can be used pretty much everywhere, e.g. you cannot have an HTML element ID that starts with a number)
  - Timestamp
  - Counter, representing the number of IDs (modulus the block size) generated by the current process
  - Client fingerprint
  - Random
- Monotonically increasing (when less than 10000 cuids are generated within the same millisecond)
  - Suited for [high-performance database primary keys](http://code.openark.org/blog/mysql/monotonic-functions-sql-and-mysql), because they can be binary searched
- Fast, Secure, and Portable

Interesting [Twitter thread](https://twitter.com/meekaale/status/1503340213342556166) from @meekaale, on how, tagging functions, variables, and slots, with persistent unique IDs could aid with change tracking and hypermedia in general
another thing I started working on for http://Wisp.Town is a kind of Roam-like system where functions, variables, slots can be tagged with a persistent unique ID
those would then be used in hyperlink block reference fashion with help from the IDE or editor mode
I’ve been thinking about this since early last year when I started implementing something similar in Elixir. Persistent identifiers will also be great for understanding source code changes: you can tell computationally when a function is renamed, etc
If your docstrings can reference stable IDs instead of names, it’s easier to refactor the code. Your documentation URLs become stable. Etc
You’ll be able to mention these IDs in comments and other non-semantic little thingies. Use them in commit messages if you want to identify something clearly. Maybe they can be used as bug issue tracking identifiers too, who knows!
I devised a little format for these identifiers, because everyone loves a new ad hoc UUID standard:
It’s today’s date and 48 bits of randomness in a case insensitive base-32, fully URL safe.
These IDs fit in 64 bits if you store the date as days since 20220101. 🤓
This [commit message](https://github.com/mbrock/wisp/commit/6b5cb6599225beab6d0ec5bc0323042336608436) also shows how the author thinks this system might be used:
Make Ctrl-Shift-K insert a fresh key in the editor
This isn't very useful yet, but it will be.

The idea is that you will write e.g.

    (defun foo (x) ~20220314.8ITJ8NXNXY
      (append '(1 2 x) '(a b c)))

to give FOO that key.  Then FOO will have a stable identifier even if
you rename it, which is great for URLs and other hypermedia stuff.
This could be useful, for example, if I wanted to create anchor points in my @plan file and have other notes reference them.  Nothing crazy, really, but it's the simplicity of the solution that really caught my attention.

? @book Rationality: From AI to Zombies -- by Eliezer Yudkowsky

2022-03-14 (permalink)

? @book Lila: An Inquiry Into Morals - Robert M. Pirsig

? @book Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values - Robert M. Pirsig

Feel like listening some chiptune @music, lately? [c64radio.com](https://c64radio.com/) got you covered!

Another @music related note; this time, it's [ThePrimeagen](https://www.twitch.tv/ThePrimeagen)'s [BANGERS](https://www.youtube.com/playlist?list=PLm323Lc7iSW9oSIDihesMJXmMNfh8U59k) playlist!

2022-03-13 (permalink)

+ @B @2022-03-14 @whiteboard Add recenter button

2022-03-12 (permalink)

Getting zoom support for my toy @whiteboard turned out a tiny bit more complicated than I thought it would.

Zooming in or out **without** taking the mouse position into account is quite straightforward: all you have to do is widen or shorten the field of view as the user turns the scroll wheel.  However, users are used to moving the mouse over the area of interest first, and when they use the scroll wheel they expect the application to zoom in or out on that point (and not, say, around the center of the visible area, as the naive solution described above would end up doing).

To implement a smooth zoom in / out animation, the area of the whiteboard / the `<x, y>` point that the user is pointing at with their mouse _needs to stay the same for the whole duration of the animation_, or otherwise the camera would start moving around.  And that's it, really: it's that simple!

The following is some documentation that I attached to my zoom routine; inside of it:

- `view`, represents which part of the whiteboard is currently visible to the current user
  - `view.x` and `view.y` represent the top-left corner of the visible area (relative to the top-left corner of the whiteboard)
  - `view.w` and `view.h` represent the width and height of the visible area
- `tview`, represents the target view, i.e. what do we want the user to be looking at, next
  // What does `p5.mouseX` currently point to?
  //     x = view.x + (p5.mouseX / p5.windowWidth) * view.w
  // While zooming, we don't want to change what `p5.mouseX` currently
  // points at, or otherwise the _camera_ would move.  This means
  // that the following should be true as well:
  //     x = tview.x + (p5.mouseX / p5.windowWidth) * tview.w
  // `x` is known from the first equation, `tview.w` we just
  // calculated it inside the previous block; all we have to do
  // is figure out `tview.x`:
  //     x = view.x + (p5.mouseX / p5.windowWidth) * view.w
  //     x = tview.x + (p5.mouseX / p5.windowWidth) * tview.w
  //     ----------------------------------------------------
  //     0 = view.x - tview.x + (p5.mouseX / p5.windowWidth) * (view.w - tview.w)
  //     ----------------------------------------------------
  //     tview.x = view.x + (p5.mouseX / p5.windowWidth) * (view.w - tview.w)
Again, pretty simple _after_ you realize you cannot move the area the user is zooming in to / out from.
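
Putting the two formulas together, the whole zoom routine is just a handful of lines (a sketch; the `factor` parameter and the symmetric `y` handling are illustrative, the `x` formula is the one derived above):

```javascript
// Zoom keeping the point under the mouse fixed.  `view` is the currently
// visible area of the whiteboard; `factor` > 1 zooms out, < 1 zooms in.
// Returns the target view `tview`.
function zoom(view, p5, factor) {
  const tview = { w: view.w * factor, h: view.h * factor };
  // tview.x = view.x + (p5.mouseX / p5.windowWidth) * (view.w - tview.w)
  tview.x = view.x + (p5.mouseX / p5.windowWidth) * (view.w - tview.w);
  tview.y = view.y + (p5.mouseY / p5.windowHeight) * (view.h - tview.h);
  return tview;
}
```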

+ @C @2022-03-19 @whiteboard: should lines drawn while at different zoom level look "the same"? We are currently using `p5.scale()`, and with it, drawing when zoomed out will result in thick lines!

2022-03-10 (permalink)

* @B @whiteboard Add panning support

* @B @whiteboard Add Export button to export as png

~ @B @2022-04-02 @whiteboard Slow performance due to bigger canvas **and** increasing undo history -- maybe offscreen rendering can help?

  - [Main Thread Canvas vs Web Worker (Offscreen) Canvas](https://codepen.io/newinweb/pen/bxazNg)
  - [Generating the Mandelbrot Set](https://badgerbadgerbadgerbadger.dev/generating-the-mandelbrot-set/)

+ @B @2022-03-12 @whiteboard Scrollwheel to zoom in / zoom out

  - [companje/PanZoom.js](https://gist.github.com/companje/5478fff07a18a1f4806df4cf77ae1048)

2022-03-09 (permalink)

* @A @whiteboard @mobile Set up `<meta>` element accordingly, so it does not start completely zoomed out

* @A @whiteboard @mobile Added support for touch events ([Mobile Touch Draw](https://openprocessing.org/sketch/1398186))

* @A @whiteboard @mobile `crypto.randomUUID()` does not seem to be defined while on mobile (I am not sure why though), so I ended up snatching a [UUID generator](https://stackoverflow.com/a/2117523) from StackOverflow and using that when `crypto.randomUUID` is missing
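
  The fallback can be sketched like this (this is the `Math.random()` variant of that StackOverflow generator, not cryptographically strong but good enough for shape IDs):

```javascript
// Hand-rolled v4 UUID, used only when crypto.randomUUID is unavailable.
function uuidv4() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, c => {
    const r = (Math.random() * 16) | 0;
    const v = c === 'x' ? r : (r & 0x3) | 0x8; // 'y' nibble must be 8, 9, a, or b
    return v.toString(16);
  });
}

// Prefer the native implementation when the browser provides it.
const randomUUID = (typeof crypto !== 'undefined' && crypto.randomUUID)
  ? () => crypto.randomUUID()
  : uuidv4;
```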

? @B @whiteboard Fix flicker effect when sending `draw` commands remotely (1. draw few shapes, rapidly, 2. each shape will disappear only to reappear a few ms right after, when the linked `draw` message is received)

2022-03-08 (permalink)

[Programming, Motherfucker](http://programming-motherfucker.com/), will never get old!
We are a community of motherfucking programmers who have been humiliated by software development methodologies for years.
We are tired of XP, Scrum, Kanban, Waterfall, Software Craftsmanship (aka XP-Lite) and anything else getting in the way of...**Programming, Motherfucker**.
We are tired of being told we're socially awkward idiots who need to be manipulated to work in a Forced Pair Programming chain gang without any time to be creative because none of the 10 managers on the project can do... **Programming, Motherfucker**.
We must destroy these methodologies that get in the way of...**Programming, Motherfucker**.

In the same spirit, there is also: [@Outage party](https://outage.party)
Incident Response Runbook
1. Find out what changed.
2. Rollback everything.
>> My change was in a different datacenter.
Roll that back.
>> I changed the internal network and this is an external problem.
Please be rolling that back.
>> I only changed staging.
Oh really? Roll it back.
>> My change couldn't possibly be related!
Don't care, roll it back.
>> I didn't change code, it's just a backfill
Don't care, roll it back. Turn off the backfill.
>> Nothing changed! I just added some additional app nodes!
Turn them off.
>> Seriously it would be impossible for this change to be related.
You keep using that word, I do not think it means what you think it means. Roll it back.

2022-03-07 (permalink)

* Added a `New` button to initialize a new @whiteboard

+ @2022-03-19 Evict unused @whiteboard's data, after a week

+ @2022-03-10 Make it possible to move around without drawing -- @whiteboard

? Fucking fix pasting text inside a Microsoft Teams private chat; few relevant pointers:

  - [Why does Teams always strip indentation when pasting text?](https://techcommunity.microsoft.com/t5/microsoft-teams/why-does-teams-always-strip-indentation-when-pasting-text/m-p/2349812) -- first opened in May 2021!
  - [From Markdown To Pastable Formatted Text In OS X Terminal](https://www.ostricher.com/2015/08/from-markdown-to-pastable-formatted-text-in-os-x-terminal/)
  - [Clipboard inspector](https://evercoder.github.io/clipboard-inspector/) -- Web page you can paste stuff into, to see what's actually being stored inside the OS clipboard (i.e., plaintext vs html vs rtf)

? @read This other article on how to draw smooth lines in Javascript: http://scaledinnovation.com/analytics/splines/aboutSplines.html @whiteboard

2022-03-06 (permalink)

* Started to put together some notes on the syntax for my @plan files

* Started to put together some notes for a CLI to actually manage my @plan files

? @read patterns of enterprise application architecture

? @read debugging: the 9 indispensable rules for finding even the most elusive software and hardware problems

2022-03-02 (permalink)

Thoughts on implementing undo/redo for my @multiplayer @whiteboard

- Each user keeps a stack of shapes -- the ones they drew
- Each shape has a unique ID attached to it -- generated by the user that created the shape
- "draw" events are broadcast to all the users (the sender of the message included)
  - this way, the sender, seeing a draw command with a _known_ ID, can acknowledge that the message was received by the server, and promote the shape into the "undo history stack"
- Each user keeps an undo-history stack
  - Entries of this stack contain the last received "draw" message
  - They also contain a "buffer", i.e. a copy of the visible canvas, **after** the last "draw" message had been processed
- When a user wants to undo **their** last "draw" action, they would generate an "undo" message, containing the ID of the shape that needs to be undone, and send it to the server which will then broadcast it to all the other clients
- They would pop from their shapes stack, and push it into a different stack (e.g., undone shapes)
- Each client (the sender included) will then
  - Pop elements from the undo-history stack (and push them into a different temporary stack), until the element with the matching ID is found
  - Pop that element too -- but discard it, don't save it anywhere
  - Copy the buffer of the first element of the undo-history stack onto the canvas
  - Re-apply all the "draw" operations, in order
- When re-doing the last operation instead, the client will simply generate a "draw" message, with the exact same shape information as before -- the only difference, is that now the shape will be rendered **on top** of everything else
- Drawing a new shape, should empty the local stack of undone shapes too -- correct, we do not support undo tree ;-)
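
The undo handling described above, sketched (history entries are `{msg, buffer}` pairs as described; the `canvas` methods are illustrative stand-ins for the real drawing code, and the sketch assumes the undone ID is actually in the stack):

```javascript
// Handle an "undo" message: drop the matching entry from the undo-history
// stack, restore the newest surviving buffer, and replay the newer draws.
function handleUndo(history, undoneId, canvas) {
  const replay = [];
  // 1. pop entries into a temporary stack until the matching ID is found
  while (history.length && history[history.length - 1].msg.id !== undoneId) {
    replay.push(history.pop());
  }
  history.pop(); // 2. pop the matching entry too, but discard it
  // 3. copy the buffer of the newest surviving entry onto the canvas
  canvas.restore(history.length ? history[history.length - 1].buffer : null);
  // 4. re-apply the newer "draw" operations, in order
  while (replay.length) {
    const entry = replay.pop();
    canvas.draw(entry.msg);
    entry.buffer = canvas.snapshot(); // refresh the entry's saved buffer
    history.push(entry);
  }
}
```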

2022-03-01 (permalink)

[Copy canvas to graphics object in p5.js](https://stackoverflow.com/questions/70231135/copy-canvas-to-graphics-object-in-p5-js) @multiplayer @whiteboard

2022-02-28 (permalink)

How to disable / enable specific Git branches from triggering a CI, with GitHub actions?

[This](https://github.com/git/git/commit/e76eec35540f9eca7baef3d84c164a1f8d4d3cb9) is how `git` did it, and I am going to summarize the steps here.

The first step is to create a branch, `ci-config`, with no files in it except for `ci/config`, whose sole responsibility is to return `true` or `false` depending on whether the input branch name is enabled for CI:
# Sample script for enabling/disabling GitHub Actions CI runs on
# particular refs. By default, CI is run for all branches pushed to
# GitHub. You can override this by dropping the ".sample" from the script,
# editing it, committing, and pushing the result to the "ci-config" branch of
# your repository:
#   git checkout -b ci-config
#   cp allow-refs.sample allow-refs
#   $EDITOR allow-refs
#   git commit -am "implement my ci preferences"
#   git push
# This script will then be run when any refs are pushed to that repository. It
# gets the fully qualified refname as the first argument, and should exit with
# success only for refs for which you want to run CI.

case "$1" in
# allow one-off tests by pushing to "for-ci" or "for-ci/mybranch"
refs/heads/for-ci*) true ;;
# always build your integration branch
refs/heads/my-integration-branch) true ;;
# don't build any other branches or tags
*) false ;;
esac
The next step is to create a new GitHub action job, `ci-config`, responsible for invoking `ci/config` above and ultimately setting an _output_ variable indicating whether the current commit should trigger CI or not:
  ci-config:
    runs-on: ubuntu-latest
    outputs:
      enabled: ${{ steps.check-ref.outputs.enabled }}
    steps:
      - name: try to clone ci-config branch
        continue-on-error: true
        run: |
          git -c protocol.version=2 clone \
            --no-tags \
            --single-branch \
            -b ci-config \
            --depth 1 \
            --no-checkout \
            --filter=blob:none \
            https://github.com/${{ github.repository }} \
            config-repo &&
            cd config-repo &&
            git checkout HEAD -- ci/config
      - id: check-ref
        name: check whether CI is enabled for ref
        run: |
          enabled=yes
          if test -x config-repo/ci/config/allow-ref &&
             ! config-repo/ci/config/allow-ref '${{ github.ref }}'
          then
            enabled=no
          fi
          echo "::set-output name=enabled::$enabled"
Last, you will have to update all the pre-existing jobs to a) wait for the `ci-config` job to run first, and b) skip execution if `ci-config` decided that CI had to be disabled for the current commit:
    needs: ci-config
    if: needs.ci-config.outputs.enabled == 'yes'
    runs-on: windows-latest
    steps:
    - uses: actions/checkout@v1
And that's it!

2022-02-25 (permalink)

I have been thinking about implementing a @multiplayer @whiteboard lately.

"Why?" you might ask, and well, the answer is two-fold:

- I got sick of the alternatives, e.g. Gimp + screen/window share, or Microsoft Whiteboard, mostly because a) they usually eat up a lot of CPU and my MacBook Air from 2014 can hardly cope with that, and b) their integration with Microsoft Teams, when available, kind of sucks (e.g., on Teams, you can activate Whiteboard only from meetings, but not from private chats)
- I wanted to play around with multiplayer concepts, and this seemed like a good application, with the right size, to get started with it

Here are some ideas of a possible MVP for this (`@A` items are more important than, say, `@C` ones):

+ @A @2022-03-07 Create new @whiteboard

+ @A @2022-03-04 Join existing @whiteboard by URL or something

~ @C @2022-04-02 See where the others are on the @whiteboard, and click to follow-through

+ @A @2022-02-28 Pre-defined colored markers @whiteboard

~ @C @2022-04-02 Color picker for markers @whiteboard

~ @C @2022-04-02 Sliders to increase/decrease marker size @whiteboard

? @B Keyboard shortcuts @whiteboard

+ @A @2022-02-28 Eraser (i.e. a bigger, white, marker) @whiteboard

~ @C @2022-04-02 Erase shapes instead of drawing over them @whiteboard

+ @A @2022-03-02 Undo / Redo @whiteboard

? @B Smooth drawing @whiteboard

? @C Electron / standalone app @whiteboard

Where do we start?! Well, of course the lines drawn will have to look _stunning_ (i.e. smooth), so why not kick this off by searching for ways to draw smooth lines?  Well, it _should_ be possible to do this (either with splines, or point filtering based on age / position), but I will surely have to dig deeper:

- [How to draw a smooth line through a set of points using Bezier Curves](https://stackoverflow.com/questions/9916119/how-to-draw-a-smooth-line-through-a-set-of-points-using-bezier-curves)
- [Canvas Spliner](https://github.com/jonathanlurie/canvasSpliner)
- [Cubic splines in JavaScript (via CoffeeScript)](http://blog.mackerron.com/2011/01/01/javascript-cubic-splines/)
- [understand implementation of cubic interpolation](https://stackoverflow.com/questions/28032958/understand-implementation-of-cubic-interpolation)
- [how to draw smooth curve through N points using javascript HTML5 canvas?](https://stackoverflow.com/questions/7054272/how-to-draw-smooth-curve-through-n-points-using-javascript-html5-canvas)
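
One of the simpler techniques from those links -- drawing quadratic curves through segment midpoints -- sketches down to this (plain canvas 2D; a starting point, not the final smoothing):

```javascript
// Smooth a polyline by drawing quadratic curves to the midpoints of
// consecutive segments, using each recorded point as the control point.
function drawSmooth(ctx, points) {
  if (points.length < 3) return;
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (let i = 1; i < points.length - 1; i++) {
    const midX = (points[i].x + points[i + 1].x) / 2;
    const midY = (points[i].y + points[i + 1].y) / 2;
    ctx.quadraticCurveTo(points[i].x, points[i].y, midX, midY);
  }
  // finish the stroke at the very last recorded point
  ctx.lineTo(points[points.length - 1].x, points[points.length - 1].y);
  ctx.stroke();
}
```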

PS. I put together a simple Repl (it's a fork of another existing one, plus some local edits) which I am going to use to experiment with this: you can play with this [here](https://replit.com/@iamFIREcracker/MulitplayerTest).

2022-02-24 (permalink)

+ @2022-02-27 @read [Zettelkasten](https://en.wikipedia.org/wiki/Zettelkasten)

? @read [Unit of Work + Repository Pattern using TypeScript (NestJs) and TypeOrm](https://jideowosakin.com/unit-of-work-pattern-in-typescript/)

? @read [Node.js + MySQL + Unit of Work pattern](https://medium.com/@vinjenks/node-js-mysql-unit-of-work-pattern-4b20cbc61963)

2022-02-23 (permalink)

Intent-based development environment

(taken from: https://amasad.me/right)
At Replit, we're really passionate about making a simple yet powerful
programming environment. The environment needs to be very intuitive for
newbies and should be enjoyable for experts.
The environment always starts with 3-columns, something we've tested and made
sure that newbies intuitively get: files, editor, console. That's great, but
now what if you want to do web development?
You simply write the code, and the environment will open a new pane to show
you the output.
Web development is not the only thing people want to do on Replit, many would
also like to build games, plot, or boot up old operating systems for fun. In
this case, if you wrote the code, or invoked the program that wants to do
graphics, we will detect that and stream VNC down to your browser. So how do
we do it? Initially, we relied on LD_PRELOAD[4] which allows you to override
arbitrary function calls and inject your own logic. Pretty neat and works for
other intent-based functionality. However, we decided to move away[5] from
that, here is the new approach:
In order to avoid using the LD_PRELOAD trick, we took a page out of
systemd's socket activation feature, so that we detect the intent of
a repl wanting to communicate with X by opening the socket to the
X server, and launch the server + Window Manager at that point in time.
Finally, much of software development today involves open-source packages.
However, package manager can be a bit of pain to use, and honestly they can
take out of the flow of coding. So while we do have native UI for package
management, we also built a system to detect your intent to install an OSS
package and simply do it for you.
Intent detection is hard but worth it.

2022-02-13 (permalink)

NrrwRgn - Region highlight is lost on write

I have been playing with the [NrrwRgn](https://github.com/chrisbra/NrrwRgn) Vim plugin lately, and I have to say, I am quite enjoying it so far.  One small annoyance that I have experienced is that the highlight of the narrowed region inside the original window seems to get lost every time I write changes back from the narrowed region to the original window (read: https://github.com/chrisbra/NrrwRgn/issues/87).

I did spend some time looking into this today, and here are my findings.

First off, here is the content of the .txt file I have used for testing:
Who does not love Lorem Ipsum?

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu
fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in
culpa qui officia deserunt mollit anim id est laborum.
To create the narrowed regions I tried the following steps, and either one of them seems to trigger the bug:

1. Line-wise selection, `:NarrowRegion` command
2. Visual selection, `<leader>nr` mapping

Let's get now to the interesting part:

- On save, while on the narrowed buffer, `:WidenRegion` ends up getting called
- Inside of it, `winnr` points to the ID of the current window, i.e. the ID of the window where the narrowed buffer is currently loaded
- `orig_buf`, instead, points to the ID of the buffer where the narrowed region was created from
- In my execution path, `<sid>JumpToBufinTab` gets invoked, which results in making the original window active (i.e. `winnr()` is now equal to `bufwinnr(orig_buf)`)
- Note: later on, the following is run: `exe ':noa'. orig_win. 'wincmd w'`, but I am not sure what that is for, since `JumpToBufinTab` already took care of focusing the "right" window
- Then the narrowed region is widened accordingly, and highlights optionally reset - this is the guard being used:
" also, renew the highlighted region
" only highlight, if we are in a different window
" then where we started, else we might accidentally
" set a match in the narrowed window (might happen if the
" user typed Ctrl-W o in the narrowed window)
if !(has_key(s:nrrw_rgn_lines[instn], 'single') || winnr != winnr())
But the thing is, `winnr` will **always** be different from `winnr()` (in my execution path at least) which means the "re-highlight" logic will **never** be executed.

Now, by the look of `:help matchadd()`, the new highlight pattern gets defined for the _current window_; doesn't this suggest that the original window (i.e. the one the narrowed region was created from) should be the active one at the time `<sid>AddMatches` is invoked? (which should already be the case, thanks to the call to `JumpToBufinTab`)

For fun, I tweaked the guard logic as follows, and now the narrowed region seems to get re-highlighted correctly after each call to `:WidenRegion`:
@@ -1361,7 +1361,7 @@ fun! nrrwrgn#WidenRegion(force)  abort "{{{1
 		" then where we started, else we might accidentally
 		" set a match in the narrowed window (might happen if the
 		" user typed Ctrl-W o in the narrowed window)
-		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || winnr != winnr())
+		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || bufwinnr(orig_buf) != winnr())
 			call <sid>AddMatches(<sid>GeneratePattern(
 				\ s:nrrw_rgn_lines[instn].start[1:2],
 				\ s:nrrw_rgn_lines[instn].end[1:2],
@@ -1397,7 +1397,7 @@ fun! nrrwrgn#WidenRegion(force)  abort "{{{1
 		" then where we started, else we might accidentally
 		" set a match in the narrowed window (might happen if the
 		" user typed Ctrl-W o in the narrowed window)
-		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || winnr != winnr())
+		if !(has_key(s:nrrw_rgn_lines[instn], 'single') || bufwinnr(orig_buf) != winnr())
 			call <sid>AddMatches(<sid>GeneratePattern(
Again, this seems to work, but I am not sure about all the other use cases that this change might break...

@vim @lisp
Today I finally got around to scratching an itch of my own: getting Vim to highlight the S-Expression surrounding the cursor position.

The implementation is super naive:

- When the cursor moves (both in normal, and insert mode)
- Get `vlime` to return us the start and end position of the surrounding S-expression
- Create a text property for that region (see: `:help textprop`), and highlight that accordingly
augroup vlime_hilight_curr_expr
    autocmd!

    if empty(prop_type_get('lisp_curr_expr'))
        call prop_type_add('lisp_curr_expr', {'highlight': 'LispCurrExpr'})
    endif
    hi link LispCurrExpr MatchParen

    function! s:HilightCurrExpr() abort
        call prop_remove({'type': 'lisp_curr_expr', 'all': v:true})

        let expr = vlime#ui#CurExpr(1)
        if expr[0] == '' || expr[1][0] == 0
            return
        endif

        let [line1, col1] = expr[1]
        let [line2, col2] = expr[2]
        let l:options = {
                    \   'type': 'lisp_curr_expr',
                    \   'end_lnum': line2,
                    \   'end_col': col2,
                    \ }
        call prop_add(line1, col1, l:options)
    endfunction

    autocmd CursorMoved,CursorMovedI <buffer> call s:HilightCurrExpr()
augroup end
I am sure it can be improved (e.g. caching some of the calls to `vlime#ui#CurExpr()`), but I am really enjoying the scope-highlight effect so far!

Thanks [SO](https://vi.stackexchange.com/a/20522) for mentioning Vim's support for text properties (`:help textprop`)!

2022-02-10 (permalink)

? "compose key" for the ultimate hacking keyboard? I know I can use digraphs when in vim...it's just that sometimes it could come in handy outside of vim too!

@vim @plan
`Neoformat` invokes `prettier` as follows:
prettier --stdin-filepath "/Users/matteolandi/plan/NrrwRgn_.faang_1" 2> /var/folders/s1/zl3vtwg54_9_1s8fsxxtxpl00000gn/T/neoformat/stderr.log
But the missing file extension causes `prettier` to error out:
[error] No parser could be inferred for file: /Users/matteolandi/plan/NrrwRgn_.faang_1
One possible solution to this is to rename the buffer so that it includes the proper file extension:
" While setting up the NrrwRgn buffer
let filename_with_ext = expand('%') . '.js'
execute "file" filename_with_ext
filetype detect
Running `:Neoformat` will now finally get `prettier` to properly reformat my code...great!

There is one more problem though: `:Neoformat` is not kicking in automatically on `:write`, as it should per the following autocommand:
augroup neoformat_neoformat
  autocmd!

  " Thanks: https://github.com/sbdchd/neoformat/issues/134#issuecomment-347180213
  au BufWritePre * try | undojoin | Neoformat | catch /E790/ | Neoformat | endtry
augroup END
The same goes for `:Neomake`, or any other `BufWrite`-activated autocommand.

Apparently, `NrrwRgn` uses `BufWriteCmd` to manually sync the narrowed buffer back into the original one, and that seems to cause Vim to skip triggering the `BufWrite` and `BufWritePost` events.

Anyways, long story short I ended up creating a [PR](https://github.com/chrisbra/NrrwRgn/pull/88) to add a new hook for running `:Neomake` in it; let's see what the maintainers think about this.

PS. I abandoned the idea of automatically running `prettier` on save; instead, I will run it manually whenever needed.  On the other hand, I still think it's a good idea to have `eslint` (i.e. `:Neomake`) run every time the buffer is saved, especially because that is **not** changing the content of the buffer, but simply adding annotations to it.

2022-02-07 (permalink)

* Another small PR to fix `scmindent` indentation for `#| ... |#` regions (https://github.com/ds26gte/scmindent/pull/12)

2022-02-02 (permalink)

Consider adding swank-loader.lisp's `dirname` to the location of the directory where fasl files are cached

(This was posted on [slime/slime/issues/704](https://github.com/slime/slime/issues/704))

While testing something out, yesterday, I got into this weird situation where it appeared as if an older version of Slime was getting loaded, and not the one that I was trying to test; after a little bit of debugging, I realized the problem had something to do with where Slime is caching fasl files, hence the enhancement request.

A little bit about my setup (your mileage might vary of course):

- `~/opt/slime/__main__/`, git worktree pointing to my personal fork of Slime
- `~/opt/slime/upstream-master/`, git worktree pointing to the upstream repo, i.e. this one
- Inside my CL init script I have `(pushnew #P"~/opt/slime/__main__/" asdf:*central-registry*)` (or `(pushnew #P"~/opt/slime/upstream-master/" asdf:*central-registry*)`) to easily switch between Slime implementations

With this setup, and due to the default location chosen by Slime to cache its fasl files, every time I changed my init script to switch the _active_ version of Slime, I also had to _touch_ every .lisp file contained in that directory, or chances were the fasls from the _inactive_ worktree would end up being loaded instead (especially if more recent).

Of course this only happens because I have 2 separate directories where Slime can be loaded from: in the case of a single git worktree, switching branches would indeed _touch_ any changed file and invalidate cached fasls accordingly.

At the moment, Slime's fasl files are getting cached inside a directory whose location takes into account:

- The current user's home
- Slime's version
- A _fingerprint_ of the CL implementation / OS where Slime is getting run

For example, on my laptop (MacOS), when I fire up Slime with SBCL I get:
* swank-loader:*fasl-directory*
What if the chosen directory also took into account the location of the swank-loader file itself?
diff --git a/swank-loader.lisp b/swank-loader.lisp
index 5434b9d1..44eb5437 100644
--- a/swank-loader.lisp
+++ b/swank-loader.lisp
@@ -159,7 +159,8 @@ Return nil if nothing appropriate is available."
     :directory `(:relative ".slime" "fasl"
                  ,@(if (slime-version-string) (list (slime-version-string)))
-                 ,(unique-dir-name)))
+                 ,(unique-dir-name)
+                 ,@(cdr (pathname-directory *load-truename*))))
And with the above:
* swank-loader:*fasl-directory*
Alternatively, and as explained at the top of swank-loader.lisp, one could load the loader, override SWANK-LOADER:*FASL-DIRECTORY*, and finally call SWANK-LOADER:INIT.  Unfortunately, even though the above would surely get the job done without the need for further enhancements to Slime, it would also _not_ prevent a different person with a setup similar to mine from having to figure this all out on their own and implement the same workaround; so here I am, picking your brain about this possible enhancement.
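
For clarity, here is the gist of the proposed scheme as a small Python sketch (the function name and exact layout are hypothetical; only the idea of folding the loader's own directory into the cache path comes from the diff above):

```python
from pathlib import PurePosixPath

def fasl_cache_dir(home, version, fingerprint, loader_path):
    """Build a cache directory that also encodes where the loader itself
    lives, so two separate checkouts never share cached artifacts.
    (Hypothetical helper; mirrors the *load-truename* idea above.)"""
    source_dir = PurePosixPath(loader_path).parent
    # drop the leading "/" so the source path nests under the cache root
    relative = PurePosixPath(*source_dir.parts[1:])
    return PurePosixPath(home) / ".slime" / "fasl" / version / fingerprint / relative
```

With this, the two worktrees resolve to two distinct cache directories, which is exactly what my setup needs.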

Let me know what you think about this; if you agree, I will shove the diff above (or a similar one) into a pull request for approval.


2022-02-01 (permalink)

* Use vlime#SYM instead of the undefined vlime#CL #3: https://github.com/phmarek/vlime/pull/3

* Fix input reading #4: https://github.com/phmarek/vlime/pull/4

2022-01-31 (permalink)

? shopping list: shared amongst people (i.e. household); works offline and only syncs up when possible; single list but different item orderings (i.e. one per store); integrates with Alexa or Google Home

2022-01-28 (permalink)

? vlime: buffer with multiple packages: it seems to be picking the last one, and not the one just "above" the evaluated expression

2022-01-22 (permalink)

? scmindent: properly indent lambda lists -- all args are aligned to the second element

2022-01-13 (permalink)

On HASH-TABLEs performance

(posted on [r/Common_Lisp](https://www.reddit.com/r/Common_Lisp/comments/s0g2q3/on_hashtables_performance/))

I [built](https://topaz.github.io/paste/#XQAAAQDdBwAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHR/lSrXrRzt2ru8sshOcmQ5UeY7P5AZQhS4ncXon3a6r8j5K/gr4xuK1jjTG4PHnVajamzB6t54TV5dSPS63MDOK6jD6CsHfC+VkV4bms/+I8MmYXI4GU2JxpblEiXKntPh1orBPfGpyTCfhx4v+kHU29G6mF1Y4EM6ozuoAyzjwxpY0eKSSV1zFprANFIIlsaPqm/nWF9W4i224u0tUm3IHb+tnMDrFtscipbIplaGiZN8B2Q1apClGT/+51hyLVNGM+4prkx8VWxBWeWuN62LCmMrY0iqRvFeCwKqpwg4s4VRogYMgeG3AOyY1DlJ0UJFetXLjJ+7pMA+8jUmYyE6xRJd9R+jn/cPnkNyDceLd1iFB/ODUgVXLcyrYb7jFIIeucRNGqkgQ7lwe0xvmyiBLkHyIe3ML8lZQcmhyinz9Ynab880cOGsBYBuTRjCS7dCvLr5XKDjw8SpdnWXgjVrzLA49wE+NZXVkPz6tTVzFUR36unRBsIJZJKfYRi3ZUToT1eFyKKayvxZpI9V1QkNJvJehPwqf797u2T57qMuE1UrtnhSso1kQh/BEo33aCHrNXfO/H71RV95KbXakyaLfqTAeeKA2hUrQSbAFnewyRS+3eKmTvbe0+XL1ltwMKAccDiSJR+DZ8iMZhQx91qBpU6W4Gu5c+XA4oT8DFrhGWR1aHzdLLf2ICtG8VUVnLizIL/+Wv790=) an implementation of the A* algorithm which internally uses two HASH-TABLEs:

- `cost-so-far`, mapping all the searched states to the cost required to get there from the initial state -- this is used to calculate the final cost to move from the initial state to the goal one
- `come-from`, mapping all the searched states to their previous state -- this is used to calculate the complete path to move from the initial state to the goal one
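
For context, this is roughly how the two tables get used in an A*-style search loop; a minimal Python sketch only (the actual CL code is linked above), with a hypothetical `neighbors` callback standing in for the real move generation and no heuristic, so it is effectively Dijkstra:

```python
import heapq

def shortest_path_cost(neighbors, start, goal):
    """Minimal Dijkstra-style sketch: cost_so_far tracks the cheapest
    known cost to reach each state, come_from its predecessor."""
    cost_so_far = {start: 0}
    come_from = {start: None}
    frontier = [(0, start)]
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == goal:
            path = []                      # rebuild the path via come_from
            while state is not None:
                path.append(state)
                state = come_from[state]
            return cost, path[::-1]
        for nxt, step_cost in neighbors(state):
            new_cost = cost + step_cost
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                cost_so_far[nxt] = new_cost
                come_from[nxt] = state
                heapq.heappush(frontier, (new_cost, nxt))
    return None, []
```

Note that both tables are keyed by the search state itself, which is why the choice of state representation (and its hashing behavior) matters so much below.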

As I was profiling this, I came to the conclusion that representing search states with 2D ARRAYs instead of HASH-TABLEs of the same size would have a huge positive impact on the overall runtime (i.e. it's better to use the former than the latter), but I cannot quite understand why, hence this post.

Now, it does not matter if you are not familiar with A*; in fact, we can completely ignore all its mechanics and simply pretend we have two HASH-TABLEs we need to update.  In addition, let's pretend our search state is a `2`-row by `10`-column grid, with numbers in it.

I came up with (and benchmarked) the following 3 state representations:

1. HASH-TABLE having `(row col)` as keys
2. HASH-TABLE having `(+ (* row 100) col)` as keys -- this to rule out any excessive consing from the above
3. 2D ARRAY

The benchmark is simple:

- We initialize `cost-so-far` and `come-from` hash tables
- We initialize our starting grid
- Then, 10000 times:
  - We copy the grid (I used ALEXANDRIA:COPY-HASH-TABLE and ALEXANDRIA:COPY-ARRAY for this)
  - Move two random elements
  - Save the new state inside `cost-so-far` and `come-from`

Here is the run time for the first state representation, HASH-TABLE with LISTs as keys:
> (time
  (loop with cost-so-far = (make-hash-table :test 'equalp)
        with come-from = (make-hash-table :test 'equalp)
        with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
        initially (loop for row below rows do
                        (loop for col below cols do
                              (setf (gethash (list row col) burrow) (random 100))))
        repeat 10000 do
        (let ((next (copy-hash-table burrow))
              (row1 (random rows)) (col1 (random cols))
              (row2 (random rows)) (col2 (random cols)))
          (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
          (setf (gethash next cost-so-far) (random 1000)
                (gethash next come-from) burrow
                burrow next))
        finally (return cost-so-far)))
Evaluation took:
  8.908 seconds of real time
  8.398207 seconds of total run time (8.174489 user, 0.223718 system)
  94.27% CPU
  20,484,287,455 processor cycles
  13,913,664 bytes consed

Here is the run time for the second state representation, HASH-TABLE with NUMBERs as keys:
> (time
    (flet ((key (row col) (+ (* row 100) col)))
      (loop with cost-so-far = (make-hash-table :test 'equalp)
            with come-from = (make-hash-table :test 'equalp)
            with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'eql)
            initially (loop for row below rows do
                        (loop for col below cols do
                              (setf (gethash (key row col) burrow) (random 100))))
            repeat 10000 do
            (let ((next (copy-hash-table burrow))
                  (row1 (random rows)) (col1 (random cols))
                  (row2 (random rows)) (col2 (random cols)))
              (rotatef (gethash (key row1 col1) next) (gethash (key row2 col2) next))
              (setf (gethash next cost-so-far) (random 1000)
                    (gethash next come-from) burrow
                    burrow next))
            finally (return cost-so-far))))
Evaluation took:
  7.051 seconds of real time
  6.668094 seconds of total run time (6.502108 user, 0.165986 system)
  94.57% CPU
  16,217,080,790 processor cycles
  11,858,336 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9389 {10041E7953}>
While this is the runtime of the third and last state representation, i.e. 2D ARRAY:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 with burrow = (make-array (list rows cols))
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (aref burrow row col) (random 100))))
          repeat 10000 do
          (let ((next (copy-array burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (aref next row1 col1) (aref next row2 col2))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  0.027 seconds of real time
  0.026169 seconds of total run time (0.016391 user, 0.009778 system)
  96.30% CPU
  62,550,110 processor cycles
  4,964,608 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9285 {1004F65393}>
That's a 100x speed-up!

For fun I ran another experiment in which I still used a HASH-TABLE (with `(row col)` as keys) to represent the search state, but also serialized that into a LIST of its elements and used _that_ to update `cost-so-far` and `come-from`:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((state-key (uiop:while-collecting (add)
                               (loop for row below rows do
                                     (loop for col below cols do
                                           (add (gethash (list row col) next)))))))
              (setf (gethash state-key cost-so-far) (random 1000)
                    (gethash state-key come-from) burrow
                    burrow next)))
          finally (return cost-so-far)))
Evaluation took:
  0.093 seconds of real time
  0.090564 seconds of total run time (0.052988 user, 0.037576 system)
  97.85% CPU
  214,038,690 processor cycles
  26,702,832 bytes consed

I was expecting this run to cons up more than the previous ones, but the runtime?! I was definitely not expecting this to be comparable with the run in which we used a 2D ARRAY as our search state!

So, can anyone tell me what's going on here? Are ARRAYs really that easier to EQUALP as compared to HASH-TABLEs, or am I looking at the wrong place?  I thought with such small states using an array or a hash table would not make much of a difference, but apparently it does!

An array is basically the cost of a write to memory + a tiny bit of arithmetic.
A hash table requires walking a list (the key), producing a hash code, finding a suitable location in the underlying storage, reallocating and copying if needed, checking for collisions (also requires walking a list), and storing.
I get what you are saying: HASH-TABLEs require more "bookkeeping" than ARRAYs, and I 100% agree with you on that.  However, most of the things you mentioned are executed in **all** the different scenarios we ran, and yet the runtime seems to increase only when we try to EQUALP hash tables together.  Let's review what each scenario does.

Scenario 1 (HASH-TABLE with `(row col)` as keys):

- Cost to generate a new state

  - Copy a HASH-TABLE 10000 times
  - Rotate two random elements

- Bookkeeping costs for `cost-so-far`

  - Look up the new state, i.e. a HASH-TABLE instance, into the hash table
  - Potentially resize the hash table
  - Potentially insert the new entry in the hash table

- Bookkeeping costs for `come-from` (same as above)

Scenario 3 (2D ARRAY):

- Cost to generate a new state

  - Copy an ARRAY 10000 times
  - Rotate two random elements

- Bookkeeping costs for `cost-so-far`

  - Look up the new state, i.e. an ARRAY instance, into the hash table
  - Potentially resize the hash table
  - Potentially insert the new entry in the hash table

- Bookkeeping costs for `come-from` (same as above)

In one case we get an almost 9s runtime, while in the other a 0.03s one; the difference can be attributed to:

- Different cost required to copy a HASH-TABLE / ARRAY
- Different cost required to rotate elements inside a HASH-TABLE / ARRAY
- Different cost required to see if a HASH-TABLE / ARRAY is contained inside another HASH-TABLE (this last one being `cost-so-far` or `come-from`)

The last scenario though, where we serialize the hash table into a list of its elements, should help us better understand where most of the time is actually spent:

- Cost to generate a new state

  - Copy a HASH-TABLE 10000 times
  - Rotate two random elements

- Bookkeeping costs for `cost-so-far`

  - Serialize the HASH-TABLE into a list of its elements
  - Look up the new serialized state, i.e. a LIST, into the hash table
  - Potentially resize the hash table
  - Potentially insert the new entry in the hash table

- Bookkeeping costs for `come-from` (same as above)

With this, the runtime is now down to 0.09s, and the only differences with the first scenario are that here:

- There is an extra serialization step
- EQUALP for the `cost-so-far` and `come-from` tables works with LISTs and not HASH-TABLEs
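
As an aside, Python makes this constraint explicit: a mutable dict cannot be used as a dict key at all, and the idiomatic workaround there is exactly this serialization step, with a sort thrown in so the key does not depend on insertion order (a standalone sketch, not tied to the CL code above):

```python
def state_key(state):
    """Freeze a dict-shaped search state into a hashable key.
    Sorting makes the key independent of insertion order."""
    return tuple(sorted(state.items()))

cost_so_far = {}
state = {(0, 0): 5, (0, 1): 7}
cost_so_far[state_key(state)] = 42

# the same entries inserted in a different order map to the same key
reordered = {(0, 1): 7, (0, 0): 5}
assert cost_so_far[state_key(reordered)] == 42
```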

I mean, I get that scanning a HASH-TABLE would require more work, but _that_ much more work, really?  Here are two more tests where we try to measure the time to generate the new states, and the time to update `cost-so-far` and `come-from`, separately.

Here we use HASH-TABLEs as keys:
> (let ((cost-so-far (make-hash-table :test 'equalp))
        (come-from (make-hash-table :test 'equalp))
        (states '()))
    ;; Generate all the states
    (time
      (loop with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
            initially (loop for row below rows do
                            (loop for col below cols do
                                  (setf (gethash (list row col) burrow) (random 100))))
            repeat 10000 do
            (let ((next (copy-hash-table burrow))
                  (row1 (random rows)) (col1 (random cols))
                  (row2 (random rows)) (col2 (random cols)))
              (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
              (let ((state-key next)
                    (cost-next (random 1000)))
                (push (list next state-key cost-next) states)
                (setf burrow next)))))
    ;; Update cost-so-far, came-from
    (time
      (loop for (state key cost) in states for prev = nil then state do
            (setf (gethash key cost-so-far) cost
                  (gethash key come-from) prev)))
    cost-so-far)
Evaluation took:
  0.092 seconds of real time
  0.090417 seconds of total run time (0.070339 user, 0.020078 system)
  [ Run times consist of 0.067 seconds GC time, and 0.024 seconds non-GC time. ]
  97.83% CPU
  213,098,600 processor cycles
  12,494,080 bytes consed

Evaluation took:
  7.497 seconds of real time
  7.245294 seconds of total run time (7.097045 user, 0.148249 system)
  96.64% CPU
  17,254,061,811 processor cycles
  2,054,608 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9378 {10092E6233}>
And it takes around 7.5s for key comparisons and resizing.

Here instead we serialize HASH-TABLEs into a LIST first, and then use that as key:
> (let ((cost-so-far (make-hash-table :test 'equalp))
        (come-from (make-hash-table :test 'equalp))
        (states '()))
    ;; Generate all the states
    (time
      (loop with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
            initially (loop for row below rows do
                            (loop for col below cols do
                                  (setf (gethash (list row col) burrow) (random 100))))
            repeat 10000 do
            (let ((next (copy-hash-table burrow))
                  (row1 (random rows)) (col1 (random cols))
                  (row2 (random rows)) (col2 (random cols)))
              (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
              (let ((state-key (uiop:while-collecting (add)
                                 (loop for row below rows do
                                       (loop for col below cols do
                                             (add (gethash (list row col) next))))))
                    (cost-next (random 1000)))
                (push (list next state-key cost-next) states)
                (setf burrow next)))))
    ;; Update cost-so-far, came-from
    (time
      (loop for (state key cost) in states for prev = nil then state do
            (setf (gethash key cost-so-far) cost
                  (gethash key come-from) prev)))
    (values cost-so-far come-from))
Evaluation took:
  0.034 seconds of real time
  0.034214 seconds of total run time (0.033711 user, 0.000503 system)
  100.00% CPU
  80,230,767 processor cycles
  25,301,488 bytes consed

Evaluation took:
  0.008 seconds of real time
  0.008258 seconds of total run time (0.008023 user, 0.000235 system)
  100.00% CPU
  19,813,576 processor cycles
  2,044,816 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9323 {1002275C53}>
#<HASH-TABLE :TEST EQUALP :COUNT 9323 {1002275D13}>
I am still puzzled by this, but it is what it is, so I guess I will have to make peace with it.

In your Common Lisp implementation, which looks like SBCL, ...
That's correct.
...the EQUALP hash of a hash table depends only on the number of elements in the hash table (see SB-IMPL::PSXHASH), which in this case is NEXT and which never changes size because it always has row*col entries (it also depends on the test, but that's the same too). So your gethash on COST-SO-FAR and COME-FROM are not O(1) operations. They are O(N).
What about the scenario in which I serialize `next` into a LIST first, and then use that to GETHASH into `cost-so-far` and `come-from`?  I would expect the hash function to behave similarly to the case where we are hashing hash tables; yet somehow there is almost a 100-fold difference in the overall runtime.
Plus on top of that, the cost of EQUALP on a set of hash tables is quite high. You have to verify that every key+value pair in one hash table is in the other and vice versa.
For _fun_, I even tried to serialize not just `next` values, but their keys too; it conses more of course, but its runtime is still way better than if we used EQUALP on hash tables:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((state-key (hash-table-alist next)))
              (setf (gethash state-key cost-so-far) (random 1000)
                    (gethash state-key come-from) burrow
                    burrow next)))
          finally (return cost-so-far)))
Evaluation took:
  0.034 seconds of real time
  0.033374 seconds of total run time (0.032833 user, 0.000541 system)
  97.06% CPU
  78,876,062 processor cycles
  20,299,376 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9368 {1006C41283}>
"For a _small_ dataset, a LIST performs better than a HASH-TABLE" -cit

I am not sure where I read this, but I guess this rabbit hole I find myself in right now kind of testifies to that ;-)


>> What about the scenario in which I serialize next into a LIST first, and then use that to GETHASH into cost-so-far and come-from?
Then this works fine and the hash table cost is now O(1) for each gethash call and everything is fast.
>> I would expect the hash function to behave similarly to the case where we are hashing hash tables; yet somehow there is almost a 100-fold difference in the overall runtime.
You are wrong with your expectation. The hash of the list version is nicely randomly distributed as it includes all the key value pairs. Just try it! call #'sb-impl::psxhash on NEXT hash tables and then call it on the list you construct with your loop over row and cols. You will find it is the SAME VALUE each time through your loop in the first case, and unique and random in the second case. This means when you use a hash-table as a key in your COST-SO-FAR and COME-FROM equalp hash tables, they behave like lists, wherein each gethash call has to search all the elements in the hash table to find the key that is equal to your hash table, which is now an O(N) operation where N is the number of entries in the COST-SO-FAR and COME-FROM hash tables. Please message me directly if this explanation isn't making sense and I can try again to explain it some different ways.
Oh, I finally see what's going on, and I think I misread your: "So your gethash on COST-SO-FAR and COME-FROM are not O(1) operations. They are O(N)." in the parent comment!

Let's see if I am getting this right -- please, do correct me if I am wrong:

- When writing to / reading from a HASH-TABLE, the provided key, `k`, gets _hashed_ (SBCL uses SB-IMPL::PSXHASH for this), and if an entry with the same hash is already present in the HASH-TABLE, the implementation then goes on and tests for _equality_
- If equal, then we found the entry to update / read; otherwise, the implementation needs to try a different entry.  This usually translates to either scanning a list of all the keys clashing on the same hash value, until one is found which is _equal_ to `k` or until the list is exhausted; or, in a different implementation, to probing `(hash + 1) % size`, `(hash + 2) % size`, ... until an _empty_ entry, or one which is _equal_ to `k`, is found
- Now, when a HASH-TABLE is passed in to SB-IMPL::PSXHASH, the function will return a hash value which is only a function of a) the test function the HASH-TABLE got set up with, and b) the number of entries of the HASH-TABLE (yes, the **number of entries**, and not their values; I wanted to stress this because I originally did not pay too much attention to it while reading the parent comment, as I assumed it was referring to the _complexity_ of calculating this hash, i.e. the more elements, the longer the hash function would take, but clearly I was wrong about it!)
- But the number of entries of our HASH-TABLE instances, is always the same (i.e. `20`), and so is the `test` function we initialized them with (i.e. EQUAL)
- This means that SB-IMPL::PSXHASH, for any given state, would always be returning the same hash number (i.e. `4242822982970596467` for the case of a EQUAL HASH-TABLE with `20` entries)
- This means that GETHASH's performance will quickly degrade from O(1) to O(N), as the implementation is forced to scan the full set of entries until one which is _equal_ to `k` is found, or we wrapped around
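
This degenerate behavior is easy to reproduce outside of SBCL too; a quick Python analogy (not SBCL code): a key type whose `__hash__` is constant, the way PSXHASH effectively is for same-test, same-size hash tables, sends every key into the same bucket, and the dict keeps working correctly only by falling back to equality scans:

```python
class ConstHashKey:
    """Every instance hashes to the same value, the way PSXHASH treats
    same-test, same-size hash tables; lookups degrade to O(N) scans."""
    def __init__(self, data):
        self.data = tuple(data)
    def __hash__(self):
        return 42            # constant: all keys land in one bucket chain
    def __eq__(self, other):
        return self.data == other.data

table = {ConstHashKey([i]): i for i in range(100)}
# still *correct* -- equality saves us -- just linear instead of O(1)
assert table[ConstHashKey([7])] == 7
```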

This also means that under these circumstances, `cost-so-far` and `come-from` are almost behaving as if they were association lists; actually, let's try to confirm this with yet another scenario:
> (time
    (loop with cost-so-far = '()
          with come-from = '()
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((existing (assoc next cost-so-far :test 'equalp)))
              (if existing
                (setf (cdr existing) (random 1000))
                (setf cost-so-far (cons (cons next (random 1000)) cost-so-far))))
            (let ((existing (assoc next come-from :test 'equalp)))
              (if existing
                (setf (cdr existing) burrow)
                (setf come-from (cons (cons next burrow) come-from))))
            (setf burrow next))
          finally (return (length cost-so-far))))
Evaluation took:
  7.588 seconds of real time
  7.153604 seconds of total run time (7.015142 user, 0.138462 system)
  [ Run times consist of 0.043 seconds GC time, and 7.111 seconds non-GC time. ]
  94.28% CPU
  17,452,971,454 processor cycles
  12,449,472 bytes consed

And if we exclude HASH-TABLE resizing costs from the first scenario, we get a very similar runtime!
> (time
    (loop with cost-so-far = (make-hash-table :size 10000 :test 'equalp)
          with come-from = (make-hash-table :size 10000 :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  7.296 seconds of real time
  7.080104 seconds of total run time (6.941640 user, 0.138464 system)
  97.04% CPU
  16,782,132,425 processor cycles
  12,445,328 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9301 {1003339C63}>
A note on your code above. Using hash-table-alist does not guarantee the ORDER of the resulting ALIST, so it CANNOT be used as a replacement: you can now have the case where (EQUALP hash-table-1 hash-table-2) is T, but (EQUALP (hash-table-alist hash-table-1) (hash-table-alist hash-table-2)) is NIL. It only appears to work in this case because the hash table is never resized, and thus the internal order of KV pairs does not change; so it is a bug in your code.
Right: I wanted to simplify the code a little bit, to make it easier for readers to follow, but clearly I did not think this through.  Thanks for noticing this silly mistake of mine!
The 'equalp' hash function (psxhash) when applied to a hash table in SBCL is a number generated from the number of elements in the hash table and the hash table type, and so is always the same number for all your NEXT hash tables and thus your hash table is no longer a hash table and just a slow list doing equalp comparisons. There are reasons psxhash does this and I recommend you think about them and figure them out, along with what equalp means on two hash tables.
Well, there is a comment in the source code covering just that:
        (cond ((hash-table-p key)
               ;; This is a purposely not very strong hash so that it does not make any
               ;; distinctions that EQUALP does not make. Computing a hash of the k/v pair
               ;; vector would incorrectly take insertion order into account.
               (mix (mix 103924836 (hash-table-count key))
                    (sxhash (hash-table-test key))))
              ((pathnamep key) (pathname-sxhash key))
              (t (structure-object-psxhash key depthoid))))
Hashing the k/v pair vector would take insertion order into account; this could lead SB-IMPL::PSXHASH to generate two different hash numbers, even though the two HASH-TABLE instances are _equal_ (as per the EQUALP definition), and that in turn could lead to duplicate data getting stored in the HASH-TABLE.  Instead, we prefer SB-IMPL::PSXHASH to do the minimal amount of work that cannot cause duplicate keys to accidentally be stored in the HASH-TABLE, and let the `test` predicate (i.e. EQUALP) do the rest.
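A quick REPL check makes the constraint concrete (standard CL only, nothing SBCL-specific): two hash tables populated in opposite orders are EQUALP, so a hash computed off the k/v vector would wrongly tell them apart.

```lisp
;; Two hash tables with the same entries, inserted in opposite orders:
;; their internal k/v vectors differ, but EQUALP considers them equal,
;; so any conforming hash must assign them the same code.
(let ((h1 (make-hash-table :test 'equal))
      (h2 (make-hash-table :test 'equal)))
  (setf (gethash "a" h1) 1 (gethash "b" h1) 2)
  (setf (gethash "b" h2) 2 (gethash "a" h2) 1)
  (equalp h1 h2))
;; => T
```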

Thanks /u/PhysicistWantsDonuts for helping me with this!

2022-01-11 (permalink)

Advent of Code: [2021/25](https://adventofcode.com/2021/day/25)
This is it: the bottom of the ocean trench, the last place the sleigh keys could be. Your submarine's experimental antenna **still isn't boosted enough** to detect the keys, but they **must** be here. All you need to do is **reach the seafloor** and find them.
At least, you'd touch down on the seafloor if you could; unfortunately, it's completely covered by two large herds of sea cucumbers, and there isn't an open space large enough for your submarine.
So we call a deep-sea marine biologist (whose number we found on a handwritten note taped to the wall of the submarine's cockpit), who explains to us what's going on:
"Sea cucumbers? Yeah, they're probably hunting for food. But don't worry, they're predictable critters: they move in perfectly straight lines, only moving forward when there's space to do so. They're actually quite polite!"
There are two herds of sea cucumbers sharing the same region; one always moves **east** (`>`), while the other always moves **south** (`v`). Each location can contain at most one sea cucumber; the remaining locations are **empty** (`.`). The submarine helpfully generates a map of the situation (your puzzle input).
For example:
Every **step**, the sea cucumbers in the east-facing herd attempt to move forward one location, then the sea cucumbers in the south-facing herd attempt to move forward one location. When a herd moves forward, every sea cucumber in the herd first simultaneously considers whether there is a sea cucumber in the adjacent location it's facing (even another sea cucumber facing the same direction), and then every sea cucumber facing an empty location simultaneously moves into that location.
OK, kinda makes sense...
Due to **strong water currents** in the area, sea cucumbers that move off the right edge of the map appear on the left edge, and sea cucumbers that move off the bottom edge of the map appear on the top edge. Sea cucumbers always check whether their destination location is empty before moving, even if that destination is on the opposite side of the map.
The task for the day:
Find somewhere safe to land your submarine. **What is the first step on which no sea cucumbers move?**
We are going to simulate this, i.e. move the two herds, the east facing one first, and the south facing one next, until none of them can move any more.

But first, the input; let's parse the input map into a 2D array:
(defun parse-map (lines &aux (rows (length lines)) (cols (length (car lines))))
  (make-array (list rows cols) :element-type 'character :initial-contents lines))
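Quick sanity check (PARSE-MAP repeated so the snippet runs on its own; the tiny 2x3 map is made up for illustration):

```lisp
(defun parse-map (lines &aux (rows (length lines)) (cols (length (car lines))))
  (make-array (list rows cols) :element-type 'character :initial-contents lines))

;; A 2x3 map: one south-facing and one east-facing sea cucumber
(let ((map (parse-map '("v.>" "..."))))
  (list (array-dimensions map) (aref map 0 0) (aref map 0 2)))
;; => ((2 3) #\v #\>)
```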
Now, let's see how we are going to move the two herds:

- All the sea cucumbers from the same herd move at the same time, so we will create a copy of the input map, `next`: we will read from `curr`, and write to `next`
- Then, we will move the **east** facing herd first, and the **south** facing one next
- For each row, for each column
- If the _next_ spot is empty, we swap the content of the two elements
- And we also keep track that at least one sea cucumber did move
(defun tick (curr &aux
                  (rows (array-dimension curr 0))
                  (cols (array-dimension curr 1))
                  (next (copy-array curr))
                  moved)
  (flet ((is (ch row col) (eql ch (aref curr row col)))
         (swap (row1 col1 row2 col2)
           (rotatef (aref next row1 col1) (aref next row2 col2))
           (setf moved t)))
    (dotimes (row rows)
      (dotimes (col cols)
        (let ((col-next (mod (1+ col) cols)))
          (when (and (is #\> row col) (is #\. row col-next))
            (swap row col-next row col)))))
    (setf curr next next (copy-array curr))
    (dotimes (row rows)
      (dotimes (col cols)
        (let ((row-next (mod (1+ row) rows)))
          (when (and (is #\v row col) (is #\. row-next col))
            (swap row-next col row col))))))
  (values next moved))
Then, all that is left to do is call TICK repeatedly until the sea cucumbers stop moving, and return the number of elapsed steps:
(defun part1 (map &aux (moved t))
  (loop while moved count t do (setf (values map moved) (tick map))))
And that's it!
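If you want to see the whole thing in action without the real puzzle input, here is a self-contained toy run: I swapped ALEXANDRIA:COPY-ARRAY for a minimal local COPY-ARRAY*, and made up a 1x4 map where the east-facing cucumber marches right until it gets stuck behind the (already stuck) south-facing one.

```lisp
;; Minimal stand-in for ALEXANDRIA:COPY-ARRAY
(defun copy-array* (array &aux (new (make-array (array-dimensions array)
                                                :element-type (array-element-type array))))
  (dotimes (i (array-total-size array) new)
    (setf (row-major-aref new i) (row-major-aref array i))))

(defun parse-map (lines &aux (rows (length lines)) (cols (length (car lines))))
  (make-array (list rows cols) :element-type 'character :initial-contents lines))

(defun tick (curr &aux (rows (array-dimension curr 0))
                       (cols (array-dimension curr 1))
                       (next (copy-array* curr))
                       moved)
  (flet ((is (ch row col) (eql ch (aref curr row col)))
         (swap (row1 col1 row2 col2)
           (rotatef (aref next row1 col1) (aref next row2 col2))
           (setf moved t)))
    (dotimes (row rows)
      (dotimes (col cols)
        (let ((col-next (mod (1+ col) cols)))
          (when (and (is #\> row col) (is #\. row col-next))
            (swap row col-next row col)))))
    (setf curr next next (copy-array* curr))
    (dotimes (row rows)
      (dotimes (col cols)
        (let ((row-next (mod (1+ row) rows)))
          (when (and (is #\v row col) (is #\. row-next col))
            (swap row-next col row col))))))
  (values next moved))

(defun part1 (map &aux (moved t))
  (loop while moved count t do (setf (values map moved) (tick map))))

(part1 (parse-map '(">..v")))
;; => 3
```

The herds freeze on step 3: two steps of actual movement, plus the step on which nothing moves.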

Final plumbing:
(define-solution (2021 25) (input parse-map) (part1 input))

(define-test (2021 25) (532))
Test run:
> (time (test-run))
Success: 1 test, 1 check.
Evaluation took:
  1.093 seconds of real time
  1.056084 seconds of total run time (1.034671 user, 0.021413 system)
  [ Run times consist of 0.006 seconds GC time, and 1.051 seconds non-GC time. ]
  96.61% CPU
  2,513,864,169 processor cycles
  81,222,576 bytes consed
This is not too far off from what I [used](https://topaz.github.io/paste/#XQAAAQBUBwAAAAAAAAAUGQimgx+p6PZhRh7uIO+WahagjQFtPUiCoxNgFycn2BrqOj/HOBL8Z7DCsyBqFgSXDidjaytfAiVhJKKToKeI2JxiZ1r3xMJuElxpV/LfXi+nfkg/MWFCDwfkS6x35gl2OmlvYLs1AppDHrAR5MIjo5hAvQiA9y9MxVh48Mqj81RrYO7q1GfdyW/pFPj3hl6kTirDyjE5L6TtFsCxNfeCLugBv8cfYt3DY/ia2Epg1VkMZ91iBAFrgeSef1aRh4arkULTSjUqIMa588z4sP9TTTv0PqBamaxPUZGDu1Tr1KRPunpf8tB2zfbmEabkt86WtB7WTltXj6WGRoeFCcg8tGjKV9/bV7Qj4xMu1NVIm94XISUtHE6i7n6xaAHOiMFQyEhLctU51vQxOHfIJwnL81D7j9i6oYdEMRjfNJfahIPYpKmBTlc4V8/AaCLCBumz24LNRMR305NuNFQNX1OmgK2eTToEfjF94iG26pTN5KbOoxpmaUGs56fY7XAU1shggy59mMrVqjquFxuBkorPucWyv4rA00DIw4fSC/UrRvGV5kduUHEkQBxEGGVxenkuzNkZ/cgwceHNxLAEHF3vYIPjbPdBPtFtCtE8tLpm9p4Y+7iqDKi2dpxvYceCYl3bVrxRR3fz4Sm1PMTVhC6KZWlh8Sq+TqhXvrfpqHPDFEqU6zd+0YHxwiK8Iv35pare3idS1FS8oeYm6+vSgDstpS04s/OwxCdMJSADz/VguTxIcg07RU9e0P/pqzT/) to get my star for the day; main differences:

- Used a HASH-TABLE instead of a 2D array for the map
- Kept on forgetting about ROTATEF -- instead, I went with the usual SETF / GETHASH combo

Rest is pretty much the same; and if you wonder how that compares to the solution presented here, well, as expected, the HASH-TABLE one conses up more and takes longer to run:
Success: 1 test, 1 check.
Evaluation took:
  5.764 seconds of real time
  5.592039 seconds of total run time (5.371084 user, 0.220955 system)
  [ Run times consist of 0.221 seconds GC time, and 5.372 seconds non-GC time. ]
  97.02% CPU
  13,256,996,615 processor cycles
  1,541,412,576 bytes consed
And this is really it, folks; this was the last day of Advent of Code 2021, and these are my personal stats for the year:
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
 25   00:35:38   1765      0       >24h   7849      0
 24       >24h   9639      0       >24h   9523      0
 23       >24h  10091      0       >24h   8567      0
 22   00:16:12   1233      0   09:26:47   4814      0
 21   00:36:54   3629      0   06:33:55   6294      0
 20   01:18:22   2795      0   01:18:53   2519      0
 19   13:53:21   5731      0   13:59:27   5473      0
 18   09:38:24   6701      0   09:48:56   6514      0
 17   01:21:38   4923      0   01:47:03   4742      0
 16   01:42:44   3507      0   02:33:20   3958      0
 15   00:10:48    464      0   00:59:10   1927      0
 14   00:24:18   4237      0   01:38:04   4805      0
 13   00:56:19   6101      0   01:02:34   5368      0
 12   00:29:21   2620      0   01:09:05   3880      0
 11   00:37:06   3370      0   00:38:14   2960      0
 10   00:20:08   4481      0   00:30:26   3688      0
  9   00:14:25   3057      0   01:57:27   8388      0
  8   00:22:15   6338      0   11:31:30  25666      0
  7   00:11:03   4999      0   00:13:57   3206      0
  6   00:10:01   3043      0   00:39:17   4775      0
  5   07:28:10  27193      0   07:41:09  24111      0
  4   00:30:52   2716      0   00:41:34   2704      0
  3   00:12:42   4249      0   00:48:51   5251      0
  2   00:05:42   3791      0   00:09:24   3731      0
  1   00:02:45   1214      0   00:07:16   1412      0
What to say:

- Another great year, with lots of interesting challenges
- First year where I tried to be awake at 0600, when the problem unlocked
- First year where I ranked sub-1000 for one of the problems
- A bit upset that I failed to solve day 23 and day 24 on time

Until the next time...

2022-01-10 (permalink)

TIL: You better cache your CL-SOURCE-FILE instances if you don't want ASDF to compile your files...TWICE!

Let's create ourselves a little playground, with a couple dummy files in it, `foo1.lisp` and `foo2.lisp`, each defining an empty package (`:foo1` and `:foo2` respectively):
cd /tmp
mkdir asdf-playground/ && cd asdf-playground/
mkdir foo/
echo '(defpackage :foo1)' > foo/foo1.lisp
echo '(defpackage :foo2)' > foo/foo2.lisp
Create `playground.asd`, and dump the following in it:
(defclass auto-module (module) ())

(defmethod component-children ((self auto-module))
  (mapcar (lambda (p)
            (make-instance 'cl-source-file :type "lisp"
                           :pathname p
                           :name (pathname-name p)
                           :parent (component-parent self)))
          (directory-files (component-pathname self)
                           (make-pathname :directory nil :name *wild* :type "lisp"))))

(asdf:defsystem #:playground
  :components ((:auto-module "foo")))
Before we move on, let's take a closer look at the snippet above:

- We define a new class, AUTO-MODULE
- We specialize COMPONENT-CHILDREN for AUTO-MODULE instances to look for all the `.lisp` files contained in the current directory (i.e. `self`), and create instances of CL-SOURCE-FILE off of them
- We define our system, `:playground`, and add an AUTO-MODULE component to it (this way ASDF will load / compile all the files contained inside `foo` without us having to explicitly list them all)

Pop SBCL open (or any other CL implementation), make sure ASDF can load systems defined in the current working directory, then load `:playground`:
* (pushnew '*default-pathname-defaults* asdf:*central-registry*)
* (asdf:load-system 'playground)
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmpGHU3ALSV.fasl
; compilation finished in 0:00:00.003
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAAURSO1.fasl
; compilation finished in 0:00:00.001
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmp5GEXGEG5.fasl
; compilation finished in 0:00:00.000
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAR3FSGEY.fasl
; compilation finished in 0:00:00.001
We defined 2 packages, `:foo1` and `:foo2`, yet somehow we ended up with 4 different compilation messages, two per package; this wouldn't happen if I manually listed all the files in my .asd file, would it?  Let's confirm this.

Update `playground.asd`:
(defclass auto-module (module) ())

(defmethod component-children ((self auto-module))
  (mapcar (lambda (p)
            (make-instance 'cl-source-file :type "lisp"
                           :pathname p
                           :name (pathname-name p)
                           :parent (component-parent self)))
          (directory-files (component-pathname self)
                           (make-pathname :directory nil :name *wild* :type "lisp"))))

(asdf:defsystem #:playground
  :components ((:module "foo" :components ((:file "foo1") (:file "foo2")))))
Load the system again:
* (asdf:load-system 'playground)
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmpGHU3ALSV.fasl
; compilation finished in 0:00:00.004
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAAURSO1.fasl
; compilation finished in 0:00:00.001
As expected, no double compilation messages.

So what's going on exactly?  Well, COMPONENT-CHILDREN is getting called twice, once from within LOAD-OP and once from within COMPILE-OP, and there is nothing we can do about that (i.e. it's expected); the problem is, we are creating new CL-SOURCE-FILE instances at each COMPONENT-CHILDREN invocation, and that tricks ASDF into believing there are 4 different files instead of 2.

Let's solve this by creating a cache of reusable CL-SOURCE-FILE instances (we will be attaching it to the AUTO-MODULE instance so that the cache will be deleted as soon as the AUTO-MODULE instance object is disposed of):
(defclass auto-module (module)
  ((file-cache :initform (make-hash-table))))

(defmethod component-children ((self auto-module)
                               &aux (file-cache (slot-value self 'file-cache)))
  (mapcar (lambda (p &aux (existing (gethash p file-cache)))
            (or existing
                (setf (gethash p file-cache)
                      (make-instance 'cl-source-file :type "lisp"
                                     :pathname p
                                     :name (pathname-name p)
                                     :parent (component-parent self)))))
          (directory-files (component-pathname self)
                           (make-pathname :directory nil :name *wild* :type "lisp"))))

(asdf:defsystem #:playground
  :components ((:auto-module "foo")))
Reload the system:
* (asdf:load-system 'playground)
; compiling file "/private/tmp/asdf-playground/foo/foo1.lisp" (written 10 JAN 2022 10:54:31 AM):
; processing (DEFPACKAGE :FOO1)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo1-tmp5GEXGEG5.fasl
; compilation finished in 0:00:00.001
; compiling file "/private/tmp/asdf-playground/foo/foo2.lisp" (written 10 JAN 2022 10:54:33 AM):
; processing (DEFPACKAGE :FOO2)

; wrote /Users/matteolandi/.cache/common-lisp/sbcl-2.1.9-macosx-x64/private/tmp/asdf-playground/foo/foo2-tmpAR3FSGEY.fasl
; compilation finished in 0:00:00.001
...and it looks like we fixed it: this time around, `foo1.lisp` and `foo2.lisp` were compiled only once, as if we listed them explicitly inside the system definition.

So yeah, you better cache your CL-SOURCE-FILE instances if you don't want ASDF to compile your files twice!


2022-01-09 (permalink)

Advent of Code: [2021/24](https://adventofcode.com/2021/day/24)
Magic smoke starts leaking from the submarine's arithmetic logic unit (ALU). Without the ability to perform basic arithmetic and logic functions, the submarine can't produce cool patterns with its Christmas lights!
The ALU is a four-dimensional processing unit: it has integer variables `w`, `x`, `y`, and `z`. These variables all start with the value `0`. The ALU also supports **six instructions**:
`inp a` - Read an input value and write it to variable `a`.
`add a b` - Add the value of `a` to the value of `b`, then store the result in variable `a`.
`mul a b` - Multiply the value of `a` by the value of `b`, then store the result in variable `a`.
`div a b` - Divide the value of `a` by the value of `b`, truncate the result to an integer, then store the result in variable `a`. (Here, "truncate" means to round the value toward zero.)
`mod a b` - Divide the value of `a` by the value of `b`, then store the remainder in variable `a`. (This is also called the modulo operation.)
`eql a b` - If the value of `a` and `b` are equal, then store the value `1` in variable `a`. Otherwise, store the value `0` in variable `a`.
In all of these instructions, `a` and `b` are placeholders; `a` will always be the variable where the result of the operation is stored (one of `w`, `x`, `y`, or `z`), while `b` can be either a variable or a number. Numbers can be positive or negative, but will always be integers.
Once you have built a replacement ALU, you can install it in the submarine, which will immediately resume what it was doing when the ALU failed: validating the submarine's **model number**. To do this, the ALU will run the MOdel Number Automatic Detector program (MONAD, your puzzle input).
Submarine model numbers are always **fourteen-digit numbers** consisting only of digits `1` through `9`. The digit `0` cannot appear in a model number.
When MONAD checks a hypothetical fourteen-digit model number, it uses fourteen separate inp instructions, each expecting a **single digit** of the model number in order of most to least significant. (So, to check the model number `13579246899999`, you would give `1` to the first inp instruction, `3` to the second inp instruction, `5` to the third inp instruction, and so on.) This means that when operating MONAD, each input instruction should only ever be given an integer value of at least `1` and at most `9`.
Then, after MONAD has finished running all of its instructions, it will indicate that the model number was **valid** by leaving a `0` in variable `z`. However, if the model number was **invalid**, it will leave some other non-zero value in `z`.
MONAD imposes additional, mysterious restrictions on model numbers, and legend says the last copy of the MONAD documentation was eaten by a tanuki. You'll need to **figure out what MONAD does** some other way.

Finally, the task:
To enable as many submarine features as possible, find the largest valid fourteen-digit model number that contains no `0` digits. **What is the largest model number accepted by MONAD?**

First off, let's take care of the input:

- For each line
- We start reading _forms_ out of it
- Instruction and variable names will be read into symbols
- Integers will be read into...well, integers
(defun read-instructions (lines)
  (loop for string in lines collect
        (loop with start = 0 and value-read while (< start (length string))
              collect (setf (values value-read start)
                            (read-from-string string nil nil :start start)))))
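For example (symbols and integers come out of READ-FROM-STRING exactly as described; the sample program below is made up):

```lisp
(defun read-instructions (lines)
  (loop for string in lines collect
        (loop with start = 0 and value-read while (< start (length string))
              collect (setf (values value-read start)
                            (read-from-string string nil nil :start start)))))

(read-instructions '("inp w" "add x -3" "eql x w"))
;; => ((INP W) (ADD X -3) (EQL X W))
```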
Let's join the instructions together, to see if anything interesting comes up.

First we define a function, EXEC, with associated modify macro EXECF, to symbolically execute the given instructions, i.e. wrap the two operands together with the symbol of the operation the instruction represents:
(define-modify-macro execf (other op) exec)
(defun exec (rand1 rand2 rator)
  (list rator rand1 rand2))

> (exec 1 2 '+)
(+ 1 2)

> (exec 'x 'y '*)
(* x y)
Next we start executing instructions, and inspect the content of the variables (and reset them), every time an INP instruction is found:
(defun b= (n1 n2) (if (= n1 n2) 1 0))

> (let ((vars (make-array 4 :initial-element 0)))
    (labels ((offset (v) (ecase v (w 0) (x 1) (y 2) (z 3)))
             (value (v) (if (numberp v) v (aref vars (offset v)))))
      (loop for (cmd a b) in (read-instructions (uiop:read-file-lines "src/2021/day24.txt")) do
            (ecase cmd
              (inp (pr vars)
                   (setf (aref vars 0) 'w
                         (aref vars 1) 'x
                         (aref vars 2) 'y
                         (aref vars 3) 'z)
                   (setf (aref vars (offset a)) '?))
              (add (execf (aref vars (offset a)) (value b) '+))
              (mul (execf (aref vars (offset a)) (value b) '*))
              (div (execf (aref vars (offset a)) (value b) 'truncate))
              (mod (execf (aref vars (offset a)) (value b) 'mod))
              (eql (execf (aref vars (offset a)) (value b) 'b=))))))
#(0 0 0 0)
#(? #1=(B= (B= (+ (MOD (+ (* X 0) Z) 26) 10) ?) 0)
  #2=(* (+ (+ (* #3=(+ (* (+ (* Y 0) 25) #1#) 1) 0) ?) 12) #1#)
  (+ (* (TRUNCATE Z 1) #3#) #2#))
#(? #1=(B= (B= (+ (MOD (+ (* X 0) Z) 26) 10) ?) 0)
  #2=(* (+ (+ (* #3=(+ (* (+ (* Y 0) 25) #1#) 1) 0) ?) 10) #1#)
  (+ (* (TRUNCATE Z 1) #3#) #2#))
#(? #1=(B= (B= (+ (MOD (+ (* X 0) Z) 26) 12) ?) 0)
  #2=(* (+ (+ (* #3=(+ (* (+ (* Y 0) 25) #1#) 1) 0) ?) 8) #1#)
  (+ (* (TRUNCATE Z 1) #3#) #2#))
Let's see if we can simplify some of these expressions a bit (I am thinking about those `(* ... 0)` expressions):
(defun exec (rand1 rand2 rator)
  (if (and (numberp rand1) (numberp rand2))
    (funcall rator rand1 rand2)
    (or (simplify rator rand1 rand2)
        (list rator rand1 rand2))))

(defun simplify (rator rand1 rand2)
  (case rator
    (* (cond ((or (eql rand1 0) (eql rand2 0)) 0)))))
With this, the output tends to get a bit better, but there is more we can do:
#(0 0 0 0)
#(? #1=(B= (B= (+ (MOD (+ 0 Z) 26) 10) ?) 0) #2=(* (+ (+ 0 ?) 12) #1#)
  (+ (* (TRUNCATE Z 1) (+ (* (+ 0 25) #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD (+ 0 Z) 26) 10) ?) 0) #2=(* (+ (+ 0 ?) 10) #1#)
  (+ (* (TRUNCATE Z 1) (+ (* (+ 0 25) #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD (+ 0 Z) 26) 12) ?) 0) #2=(* (+ (+ 0 ?) 8) #1#)
  (+ (* (TRUNCATE Z 1) (+ (* (+ 0 25) #1#) 1)) #2#))
All those `(+ 0 ...)` expressions can be replaced with their non-zero operand; all those `(truncate z 1)` expressions can be replaced with `z`:
(defun simplify (rator rand1 rand2)
  (case rator
    (+ (cond ((eql rand1 0) rand2)))
    (* (cond ((eql rand2 0) 0)))
    (truncate (cond ((eql rand2 1) rand1)))))
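A few spot checks of SIMPLIFY; note that it returns NIL when no simplification rule applies:

```lisp
(defun simplify (rator rand1 rand2)
  (case rator
    (+ (cond ((eql rand1 0) rand2)))
    (* (cond ((eql rand2 0) 0)))
    (truncate (cond ((eql rand2 1) rand1)))))

(list (simplify '+ 0 'z)          ; (+ 0 z)        => Z
      (simplify 'truncate 'z 1)   ; (truncate z 1) => Z
      (simplify '* '(+ ? 3) 0)    ; (* (+ ? 3) 0)  => 0
      (simplify '+ 'z 'w))        ; no rule        => NIL
;; => (Z Z 0 NIL)
```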
With the above changes we get:
#(0 0 0 0)
#(? #1=(B= (B= (+ (MOD Z 26) 10) ?) 0) #2=(* (+ ? 12) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 10) ?) 0) #2=(* (+ ? 10) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 12) ?) 0) #2=(* (+ ? 8) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 11) ?) 0) #2=(* (+ ? 4) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 0) ?) 0) #2=(* (+ ? 3) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 15) ?) 0) #2=(* (+ ? 10) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 13) ?) 0) #2=(* (+ ? 6) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -12) ?) 0) #2=(* (+ ? 13) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -15) ?) 0) #2=(* (+ ? 8) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -15) ?) 0) #2=(* (+ ? 1) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -4) ?) 0) #2=(* (+ ? 7) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) 10) ?) 0) #2=(* (+ ? 6) #1#)
  (+ (* Z (+ (* 25 #1#) 1)) #2#))
#(? #1=(B= (B= (+ (MOD Z 26) -5) ?) 0) #2=(* (+ ? 9) #1#)
  (+ (* (TRUNCATE Z 26) (+ (* 25 #1#) 1)) #2#))
If you carefully look at this you will notice that:

- `w`, i.e. the first element of the array of variables, will **always** contain `?`, i.e. the value read with the INP instruction
- `z`, i.e. the last element of the array of variables, will **always** contain an expression which is a function of `?` and `z` only

Knowing this, we can _compile_ these expressions into 14 different functions which later on we are going to use to find the largest model number which our monad can validate.  Note: these functions are going to accept two parameters, one for `?`, and one for the previous `z` value:
(defun read-monad (lines &aux
                         (instructions (read-instructions lines))
                         (vars (make-array 4 :initial-element 0)))
  (labels ((offset (v) (ecase v (w 0) (x 1) (y 2) (z 3)))
           (value (v) (if (numberp v) v (aref vars (offset v))))
           (compile-into-lambda (e) (compile nil (pr `(lambda (? z) ,e)))))
    (uiop:while-collecting (add)
      (loop for (cmd a b) in instructions do
        (ecase cmd
          (inp (unless (eql (aref vars (offset 'z)) 0)
                 (add (compile-into-lambda (aref vars (offset 'z)))))
               (setf (aref vars 0) 'w (aref vars 1) 'x
                     (aref vars 2) 'y (aref vars 3) 'z)
               (setf (aref vars (offset a)) '?))
          (add (execf (aref vars (offset a)) (value b) '+))
          (mul (execf (aref vars (offset a)) (value b) '*))
          (div (execf (aref vars (offset a)) (value b) 'truncate))
          (mod (execf (aref vars (offset a)) (value b) 'mod))
          (eql (execf (aref vars (offset a)) (value b) 'b=))))
      ;; Make sure to pick up the last of the 14 ALU stages
      (add (compile-into-lambda (aref vars (offset 'z)))))))
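As a minimal illustration of the COMPILE-INTO-LAMBDA step: COMPILE with NIL as the name returns an anonymous compiled function, and `?` is an ordinary symbol, so it is a perfectly valid parameter name (the stage expression here is made up):

```lisp
;; Compile one symbolic stage into a function of the current digit (?)
;; and the previous z, then call it: (mod 30 26) + 5 = 9
(let ((stage (compile nil '(lambda (? z) (+ (mod z 26) ?)))))
  (funcall stage 5 30))
;; => 9
```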
What next?  Well, we are going to brute-force this, hoping that the simplifications that we discussed above will reduce the complexity of each of the 14 ALU stages.

Here is what we are going to do:

- We pick a value in between `1` and `9` (well, better to start from `9` and go down to `1` if we want to find the **largest** model number)
- We feed that digit and the previous value of `z` (it starts from `0`) to the first function that we compiled, and get the new `z` value back
- We move to the next ALU function and repeat
- Until there are no more functions to execute **and** `z` is 0; in which case, we found our solution and all we have to do is walk backward and construct the model number
- Or we run out of functions to execute, in which case we will have to backtrack as we hit a dead end (i.e. we found an **invalid** 14-digit model number)
(defun first-valid-model (expressions &optional
                                      (input (iota 9 :start 9 :step -1))
                                      (z 0)
                                      (remaining (length expressions)))
  (cond ((and (= remaining z 0)) 0)
        ((= remaining 0) nil)
        (t (loop for w in input for z-next = (funcall (car expressions) w z) do
                 (when-let (model (first-valid-model (rest expressions) input z-next))
                   (return (+ (* (expt 10 (1- remaining)) w) model)))))))
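To convince ourselves that the backtracking skeleton works, here is a self-contained toy run (renamed FIRST-VALID-MODEL*, with ALEXANDRIA:IOTA inlined as IOTA* and WHEN-LET replaced by plain LET + WHEN): two made-up ALU stages that accept a model number only when its two digits are equal, so the largest valid one is 99.

```lisp
;; Minimal stand-in for ALEXANDRIA:IOTA
(defun iota* (n &key (start 0) (step 1))
  (loop repeat n for i = start then (+ i step) collect i))

(defun first-valid-model* (expressions &optional
                                       (input (iota* 9 :start 9 :step -1))
                                       (z 0)
                                       (remaining (length expressions)))
  (cond ((= remaining z 0) 0)
        ((= remaining 0) nil)
        (t (loop for w in input
                 for z-next = (funcall (car expressions) w z)
                 do (let ((model (first-valid-model* (rest expressions) input z-next)))
                      (when model
                        (return (+ (* (expt 10 (1- remaining)) w) model))))))))

;; Two fake stages: z accumulates the first digit, then subtracts the
;; second; z ends at 0 only when the digits match, so 99 wins.
(first-valid-model* (list (lambda (w z) (+ z w))
                          (lambda (w z) (- z w))))
;; => 99
```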
Easy right?!  Well, no: there are a lot of different states to try out, i.e. `9**14`, so the above will take quite some time to run, irrespective of all the simplifications we did before.

So what can we do?  We can give memoization a try, and hope that it would help us quickly prune input choices which will ultimately lead to an invalid model number:
(defun first-valid-model (expressions &optional
                                      (input (iota 9 :start 9 :step -1))
                                      (z 0)
                                      (dp (make-hash-table))
                                      (remaining (length expressions))
                                      (key (+ (* z 100) remaining)))
  (multiple-value-bind (model foundp) (gethash key dp)
    (if foundp
      model
      (setf (gethash key dp)
            (cond ((and (= remaining z 0)) 0)
                  ((= remaining 0) nil)
                  (t (loop for w in input for z-next = (funcall (car expressions) w z) do
                           (when-let (model (first-valid-model (rest expressions) input z-next dp))
                             (return (+ (* (expt 10 (1- remaining)) w) model))))))))))

> (time (first-valid-model (read-monad (uiop:read-file-lines "src/2021/day24.txt"))))
Evaluation took:
  2.168 seconds of real time
  2.104432 seconds of total run time (1.844197 user, 0.260235 system)
  [ Run times consist of 0.271 seconds GC time, and 1.834 seconds non-GC time. ]
  97.05% CPU
  350 lambdas converted
  4,986,893,536 processor cycles
  175,630,864 bytes consed

Let's go!

What about part 2?
**What is the smallest model number accepted by MONAD?**
We lucked out!  All we have to do is tell FIRST-VALID-MODEL to try the different digits starting from `1`, up to `9`:
> (time (first-valid-model (read-monad (uiop:read-file-lines "src/2021/day24.txt"))
                           (iota 9 :start 1)))
Evaluation took:
  0.338 seconds of real time
  0.327866 seconds of total run time (0.311545 user, 0.016321 system)
  97.04% CPU
  350 lambdas converted
  778,391,119 processor cycles
  45,884,672 bytes consed

Final plumbing:
(define-solution (2021 24) (monad read-monad)
  (values (first-valid-model monad) (first-valid-model monad (iota 9 :start 1))))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  2.152 seconds of real time
  2.096319 seconds of total run time (2.027116 user, 0.069203 system)
  [ Run times consist of 0.036 seconds GC time, and 2.061 seconds non-GC time. ]
  97.40% CPU
  350 lambdas converted
  4,950,528,890 processor cycles
  204,097,808 bytes consed
Now, how did I fare while on the spot, as the clock was ticking?  Not so well actually:

- I only managed to complete this a few days after Christmas
- I got stuck and was _forced_ (well, nobody really _forced_ me...) to peek into the day's [solutions megathread](https://www.reddit.com/r/adventofcode/comments/rnejv5/2021_day_24_solutions/) looking for just the minimal amount of help that would unblock me

What did go well?

- Even back then I was able to simplify input expressions quite a bit; some might say I even went [a little overboard](https://topaz.github.io/paste/#XQAAAQCqEAAAAAAAAAAUHMineAofRq+qilruyMI8srv1Rg1uS/cG09uVz6iNOSymsWauBb4HdvwRzUgG3psYwA9ob03xmHHICYRXgzfu2i0nGZpixjzLcDptv6TCLlXXrz98p9BIthGkSVFhdvb6gLoZ1STrdqf969ZW0uN6e9tnSLpzmV/cfNPAa4yZuX8r/QUt5GPLLw9WBxZsvBwVd+iA3jgA0dSOkuWY/Wny+WZ9MDi9HufgXnKGeCV9q3rqEv9KMjQ7gvBB7qXZi3DbHfcXP/Rh5GRrbuKUI+p1fChdFmfPWTq303v6WlVEh9SXwXYEzgQo26eLpHG4d3v+POROD1yIk3I5aLlQQCbalbasGItRN9EpKT8aP5jk9wEX/EBTX64iRSAlXAHUTpTA6S19gXIvN1cX6wbx4iux2hdjw0fcADYjKwMXcNYHV/cBZxaas46sis7VUpI6UEZHIk5SS1s2cq48brLVFAp+OSMpMLN9sOvxwWwjdoql1MQlqAZeZj/lwvJC9+7zYdov52Qpa1Jl12AWN1m2hS8gNsxl+VRv41FtOBqLPxJEvCXUjTPR5SjIc4LRgvS3ZRlfvnrn46LGR86q5IJvYCMP2cYVLmbsWhDTljQQsfIpRaUbQlDnAOjdj441qqd0uq3wCwnKYCckEDY8rZzulQOAuK3CcIbgA1rx4xFqoD6A1E63buHyYhivk3EeBq2AFEEwlWVK3F5WtIxLTFTpUTjN2O2P94/BuIHPFfOBBRCmN9yhrZTpHEcyjHYV3QpstwDdoVaV9ZCW0GoMp6SpCOHDnYg1ApFYBjbxXZXFNOwIwLqs4jbvGZEgucK/Xl5H5FJmi23msTBaKI1KosOczB92Vls4M81OP/LixEsGk1Oxb6++JIcGLQQj1/wAkXBodz9ipb9R3mzuqTD6E7032eMaJH8ZT6VVkpWHdQTY+weIJFMP+2USy+MZcwwsw/E2fJY+khg+iHwavAJzu99224jTDLMR7jGwPjhxmHM62eLj+bngOyhZXLlICMckWQ5qbyjunI1c78GYZoG5Pkc/pRkecAqa1EOe4kLtAFdssxTzRoOescam6+9bAP4Y3+1Zsl98+ToPxb+OimqcU5ogi7jFf27Iq/FkOAH3qwcoc7rqOpeiD4krQRVwwd/gIagp6A7n0oFzBZcMzzfkF/tUAYvLLNk+KIcB+hDqfvqrqAchZTFjh2zj2pAwONRTSW3JQBf14HZ9+BiM26wMCxashcpd26q0Rf/7dksh) with it, especially as I started reducing them based on the minimum and maximum values of each expression
- I was able to notice that each expression wound up being a function of the previous value of `z` and of the current digit of the model number, i.e. `w`

What did not go well?

- I spent time trying to get an ALU runner to work with local bindings (instead of an ARRAY as presented above), only to realize SYMBOL-VALUE cannot be used to access the value of a lexical variable (see: http://clhs.lisp.se/Body/f_symb_5.htm).  I should have used DEFPARAMETER instead, as shown [here](https://topaz.github.io/paste/#XQAAAQBRBAAAAAAAAAAUGQimgulVkMqJePdJ12uYSyrx9HETSDeM7iPwMm5IzbnhqpMgXbIZ4j+wT3G3yM/Phd9tY3YWGUV5vVqRYE41FzNleAVB3CZ1VdavQwqHLspPcheinZ29SxrxBs+UugGbKyMw8Cjyc5tfvwceJi2lK3srvAqCYe8dTUDreC1lbu5JIlNdTDSvkoQ4U6tfizTr5wNdnl/PT+rX4qRsvS/HylH4dGdRVXNkttLCoE3/ZuaGbju5AcTub2qhXsAH2D2KWb7m368oylPdNdaeeElu3w+21SA/wiHJlBPSZgs/daoC4FQiNa7Zvj46biP2W4tShV9lpsclm0+zKm9ADHlF29MYMkpuv6p+JVndGry6/HywWWuO4mKd7jem6ik1BVOJLXPmecX3AKsQFX058TVpucNg+P0QL6xyB1/ISRh3Npa7pBPXMRhsLa4LWseDuqM4RhdnT7B0B5ECH5pler/vvA1RJepfEq/NGbbzTgI+96j4UsPgAarLdXIi5BIUY7n///JNZps=), but go figure why it took me so long to figure this out...
- As already mentioned, I went a little overboard with trying to simplify the given expressions, to the point that I started reducing them based on the minimum and maximum values of each expression.  Why would this be a problem?  Well, it's not a problem per se; it's just that even with these fully reduced expressions, I still did not know what to make of them -- which leads me to the next two points
- I could not "connect the dots": the monad is made of 14 expressions, seven of which always multiply by 26, and seven of which either multiply **or** divide by 26; we do not have any control over the former (i.e. the ones that always multiply by 26), but in order for the final number to be `0`, the remaining seven expressions need to execute a division and not a multiplication.  Well, with this hint, I was then able to put together a [brute-force solution](https://topaz.github.io/paste/#XQAAAQBDBAAAAAAAAAAUGQimgx+p6Nd2KTQBipD+ebHgWB2KYDhg0p5cT2fB6T5DWjG1mGzImJ/kMxYYski2R0EnsbD9DF3grfTnWe90IMbzhaDW4WiGLYZjXrM+/SmzHR9JwBqvEb4zsQ2dj3FKSP6OxJxreUccJIEvgbi4LjKXtUhxNe40qDl8DSS9h7DUMOup9fUxuPTx0mL4cgxRxXwzP1SXolNrWIqaIeRySLL+xqHk5qdYz4YgF8M+Yji7vb1/s6fj6MkkleWB1ugxORngfKvDSR5y4EeQRytPcHZCqWY+zNzGh7KciXn3Q+23tQvK6tEmHvKq9K/QcUn1aYxKHJLQHQup8Kd60xBQo81hImkTexgQfd6UbIZeJmAbxizkRmGiuCbYHgLcnDzMT30Fimb3sM4s9nFYbr7W4rAxzNVhn+Bd/NDK8JtQDB2OFPRC47sERu/1Cm5gKJWCrv9XpNfIdUDYx8ei1hh5ESnFidaugWaEhu+N1rWGbu+j0osqaMSX4OSV/82ZuEs=) that _ultimately_ got me the two stars for the day
- Last but not least, I could not see that the monad would do the same operations over and over again and that memoization could have helped prune lots of the dead ends; instead, I let the thing run for hours (true story: I went outside, had a few drinks with friends, and on my return the solution was there, ready for me to submit).  Cherry on top: using ERROR to notify when a valid model number was found -- a clear sign of burn-out from my side lol
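In hindsight, the memoization point is easy to sketch: since each MONAD block is a function of only the previous `z` and the current digit `w`, the search state collapses to `(block-index, z)`, so each dead end needs to be explored only once.  A minimal sketch in Python, using a toy `step` transition as a stand-in for the real, input-derived expressions:

```python
from functools import lru_cache

def step(i, z, w):
    # Toy stand-in for block i of the real MONAD: push a digit on even
    # steps, pop it on odd steps when the current digit matches the "top".
    if i % 2 == 0 or z % 26 != w:
        return z * 26 + w
    return z // 26

@lru_cache(maxsize=None)
def search(i, z, digits=tuple(range(9, 0, -1))):
    """Best digit suffix reaching z == 0 after 14 blocks, memoized on (i, z)."""
    if i == 14:
        return "" if z == 0 else None
    for w in digits:
        rest = search(i + 1, step(i, z, w), digits)
        if rest is not None:
            return str(w) + rest
    return None
```

Trying digits `9..1` yields the largest accepted model number, `1..9` the smallest -- exactly the part 1 / part 2 split above.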

Well, what else to say? Another great problem, with quite a few things to learn from!

2022-01-05 (permalink)

Advent of Code: [2021/23](https://adventofcode.com/2021/day/23)
A group of amphipods notice your fancy submarine and flag you down. "With such an impressive shell," one amphipod says, "surely you can help us with a question that has stumped our best scientists."
They go on to explain that a group of timid, stubborn amphipods live in a nearby burrow. Four types of amphipods live there: **Amber** (`A`), **Bronze** (`B`), **Copper** (`C`), and **Desert** (`D`). They live in a burrow that consists of a **hallway** and four **side rooms**. The side rooms are initially full of amphipods, and the hallway is initially empty.
We are given a diagram of the situation (your puzzle input), for example:
The amphipods would like a method to organize every amphipod into side rooms so that each side room contains one type of amphipod and the types are sorted `A`-`D` going left to right, like this:
Amphipods can move up, down, left, right, but there is a cost attached to each movement:
Amphipods can move up, down, left, or right so long as they are moving into an unoccupied open space. Each type of amphipod requires a different amount of **energy** to move one step: Amber amphipods require `1` energy per step, Bronze amphipods require `10` energy, Copper amphipods require `100`, and Desert ones require `1000`. The amphipods would like you to find a way to organize the amphipods that requires the **least total energy**.
In addition, amphipods are not free to move wherever they want:
However, because they are timid and stubborn, the amphipods have some extra rules:
- Amphipods will never **stop on the space immediately outside any room**. They can move into that space so long as they immediately continue moving. (Specifically, this refers to the four open spaces in the hallway that are directly above an amphipod starting position.)
- Amphipods will never **move from the hallway into a room** unless that room is their destination room **and** that room contains no amphipods which do not also have that room as their own destination. If an amphipod's starting room is not its destination room, it can stay in that room until it leaves the room. (For example, an Amber amphipod will not move from the hallway into the right three rooms, and will only move into the leftmost room if that room is empty or if it only contains other Amber amphipods.)
- Once an amphipod stops moving in the hallway, **it will stay in that spot until it can move into a room**. (That is, once any amphipod starts moving, any other amphipods currently in the hallway are locked in place and will not move again until they can move fully into a room.)
OK, so given all these rules, and given the energy that each amphipod uses to make a move, we need to help the amphipods organize themselves and at the same time minimize the energy required to do that:
**What is the least energy required to organize the amphipods?**
Input first:

- we will be parsing the burrow into a CHARACTER 2D array
- we will also keep track, separately, of the `(row col)` locations of all the siderooms -- bottom ones first!
(defun parse-input (lines &aux
                          (rows (length lines))
                          (cols (length (car lines)))
                          (burrow (make-array (list rows cols)
                                              :element-type 'character
                                              :initial-element #\Space))
                          rooms)
  (loop for row from 0 for string in lines do
        (loop for col from 0 for ch across string do
              (setf (aref burrow row col) ch)
              (when (and (< 1 row (1- rows)) (member col '(3 5 7 9)))
                (let* ((type (case col (3 #\A) (5 #\B) (7 #\C) (9 #\D)))
                       (existing (assoc type rooms)))
                  (if existing
                    (push (list row col) (second existing))
                    (push (list type (list (list row col))) rooms))))))
  (cons burrow rooms))
(defun siderooms (rooms type) (second (assoc type rooms)))
Next, a little constant for all the _valid_ locations in the hallway (i.e. the ones **not** facing any rooms):
(defparameter *hallway* '((1 1) (1 2) (1 4) (1 6) (1 8) (1 10) (1 11)))
We are going to treat this as a search problem:

- Starting from the input situation
- We try to move every amphipod that can actually be moved (as per the rules above) -- POSSIBLE-MOVES
- We keep on doing this until each amphipod is in the right room

Note: we don't have a heuristic to play with, hence we will go with Dijkstra's algorithm, and not A*.
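The DIJKSTRA helper lives in my utilities and is not shown here; the generic shape it implements can be sketched in Python as follows (`goalp` and `neighbors` mirror the keyword arguments, and `neighbors` is assumed to return `(state, cost)` pairs):

```python
import heapq
from itertools import count

def dijkstra(start, goalp, neighbors, key=lambda s: s):
    """Generic Dijkstra: `neighbors` returns (state, cost) pairs; the
    result is the minimal total cost to reach a state satisfying `goalp`."""
    tie = count()                               # tie-breaker: states never get compared
    frontier = [(0, next(tie), start)]
    cost_so_far = {key(start): 0}
    while frontier:
        cost, _, state = heapq.heappop(frontier)
        if goalp(state):
            return cost
        if cost > cost_so_far.get(key(state), float("inf")):
            continue                            # stale queue entry, skip it
        for nxt, step_cost in neighbors(state):
            new_cost = cost + step_cost
            if new_cost < cost_so_far.get(key(nxt), float("inf")):
                cost_so_far[key(nxt)] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), nxt))
    return None
```

ORGANIZE below simply wires the parsed burrow, the goal predicate, and the move generator into a loop of this shape.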
(defun organize (input &aux (burrow (car input)) (rooms (cdr input)))
  (dijkstra burrow :test 'equalp :goalp (partial-1 #'donep rooms)
            :neighbors (partial-1 #'possible-moves rooms)))
Let's now take a look at DONEP, the function used to break out of the search loop:

- For each sideroom -- a sideroom is characterized by a type (i.e. `#\A`, `#\B`, `#\C`, and `#\D`) and a location
- We check if it's occupied by an amphipod of matching type
- If not, it means we are not done yet
(defun donep (rooms burrow)
  (loop for (type siderooms) in rooms always
        (loop for (row col) in siderooms
              always (char= (aref burrow row col) type))))
We now have to figure out, given the current amphipod configuration, all the possible moves that do not violate any of the rules above; we will do this in two steps: first, we will try to move amphipods from their current (and likely wrong) room into the hallway; next, from the hallway into the _correct_ room / sideroom.

For the first part, i.e. out of the current sideroom, and into the hallway:

- For each sideroom location
- If it's empty or occupied by an amphipod already in the right place, we move on
- Otherwise, for each valid position in the hallway
- If not blocked, we move into that location
    (loop for (_ positions) in rooms do
          (loop for pos in positions for type = (aref burrow (car pos) (cadr pos))
            unless (or (char= type #\.)
                       (in-place-p rooms burrow pos)) do
            (loop for target in *hallway*
                  unless (blockedp burrow pos target) do (add-move type pos target))))
For the second step instead, i.e. out of the hallway, and into the _right_ room / sideroom:

- For each hallway location
- If occupied by an amphipod (of type `type`)
- For each of the siderooms for type `type` (from the bottom one, up to the top)
- If not blocked, we move into that location
- As we scan the siderooms of type `type`, bottom up, we stop as soon as we bump into a location occupied by an amphipod of the wrong type -- it makes no sense to move an amphipod into a sideroom above one occupied by the wrong type, since later on we would be forced to make space to let that amphipod out and into its correct room
    (loop for pos in *hallway* for type = (aref burrow (car pos) (cadr pos))
          unless (char= type #\.) do
          (loop for target in (siderooms rooms type)
            unless (blockedp burrow pos target) do (add-move type pos target)
            always (eql (aref burrow (car target) (cadr target)) type))))
We add a locally defined function, ADD-MOVE, responsible for:

- Creating a copy of the current situation, i.e. the burrow
- Swapping the content of `pos` with `target`, i.e. we move `pos` into `target`
- Calculating the cost to move there, i.e. manhattan distance times the amphipod unitary energy cost

We wrap this all into a DEFUN, and here is our complete POSSIBLE-MOVES definition, in all its glory:
(defun possible-moves (rooms burrow &aux moves)
  (labels ((add-move (type pos target &aux
                           (next (copy-array burrow))
                           (cost (* (manhattan-distance pos target)
                                    (ecase type (#\A 1) (#\B 10) (#\C 100) (#\D 1000)))))
             (rotatef (aref next (car pos) (cadr pos))
                      (aref next (car target) (cadr target)))
             (push (cons next cost) moves)))
    ;; From the wrong sideroom to the hallway
    (loop for (_ positions) in rooms do
          (loop for pos in positions for type = (aref burrow (car pos) (cadr pos))
            unless (or (char= type #\.)
                       (in-place-p rooms burrow pos)) do
            (loop for target in *hallway*
                  unless (blockedp burrow pos target) do (add-move type pos target))))
    ;; From the hallway to the right sideroom
    (loop for pos in *hallway* for type = (aref burrow (car pos) (cadr pos))
          unless (char= type #\.) do
          (loop for target in (siderooms rooms type)
            unless (blockedp burrow pos target) do (add-move type pos target)
            always (eql (aref burrow (car target) (cadr target)) type)))
    moves))
Let's now take a look at IN-PLACE-P, the function responsible for figuring out if the amphipod at position `pos` needs to move out of the way or not:

- We get the amphipod type first
- Then for each sideroom of the same type (again, bottom up)
- If the location of the room matches the position of the amphipod, then the amphipod is already in place
- Otherwise we check if that sideroom is occupied by an amphipod of the right type
- If the types match, we move to the next sideroom (i.e. the one above)
- Otherwise it means the amphipod needs to move into the hallway
(defun in-place-p (rooms burrow pos
                         &aux (type (aref burrow (car pos) (cadr pos))))
  (loop for r in (siderooms rooms type)
        thereis (equal r pos)
        ;; if not in the room, at least let's make sure the one below is
        ;; occupied by an amphipod of the right type
        always (eql (aref burrow (car r) (cadr r)) type)))
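For illustration, here is the same bottom-up scan sketched in Python (the burrow as a list of strings and `rooms` as a type-to-locations mapping are hypothetical stand-ins for the Lisp structures above):

```python
def in_place_p(rooms, burrow, pos):
    """True if the amphipod at pos is already home: scanning its siderooms
    bottom-up, we reach pos before any room holding the wrong type."""
    typ = burrow[pos[0]][pos[1]]
    for r, c in rooms[typ]:              # bottom sideroom first
        if (r, c) == pos:
            return True                  # found ourselves: already in place
        if burrow[r][c] != typ:
            return False                 # wrong type below us: must move out
    return False
```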
The last missing piece of the puzzle, BLOCKEDP; we know our starting position, `pos`, and we know where we are headed, `target`; all we have to do is check all the locations _along the way_ and break out if any of them is occupied by another amphipod.  But what are all the locations along the way?  If we are inside a room, we need to move vertically first to get to the hallway, and then horizontally; otherwise, horizontally first to get in front of the room, and then vertically into the sideroom:

- We destructure `pos` and `target` into their row / col components
- Then, using the spaceship operator `<=>`, we figure out in which direction we need to move horizontally to get from `pos` to `target`
- Then, we check if `pos` is in the hallway
- If it is, we move up first, then horizontally
- Otherwise, horizontally first, and then down into the room
(defun blockedp (burrow pos target)
  (destructuring-bind (row1 col1) pos
    (destructuring-bind (row2 col2) target
      (let ((col-step (<=> col2 col1)))
        (flet ((check-all (row1 col1 row2 col2 row-step col-step)
                 (loop do (incf row1 row-step) (incf col1 col-step)
                       thereis (char/= (aref burrow row1 col1) #\.)
                       until (and (= row1 row2) (= col1 col2)))))
          (if (= row1 1)
            (or (check-all row1 col1 row1 col2 0 col-step)
                (check-all row1 col2 row2 col2 1 0))
            (or (check-all row1 col1 row2 col1 -1 0)
                (check-all row2 col1 row2 col2 0 col-step))))))))
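The same two-leg walk, sketched in Python (burrow as a list of strings; `sign` plays the role of the `<=>` spaceship operator; row `1` is the hallway, as above):

```python
def sign(a):
    return (a > 0) - (a < 0)             # -1, 0 or 1, like <=>

def path(pos, target):
    """Cells stepped through from pos to target, excluding pos itself:
    across-then-down when leaving the hallway, up-then-across otherwise."""
    (r1, c1), (r2, c2) = pos, target
    step = sign(c2 - c1)
    across = [(1, c) for c in range(c1 + step, c2 + step, step)] if step else []
    if r1 == 1:                          # hallway -> room
        return across + [(r, c2) for r in range(2, r2 + 1)]
    return [(r, c1) for r in range(r1 - 1, 0, -1)] + across  # room -> hallway

def blockedp(burrow, pos, target):
    return any(burrow[r][c] != "." for r, c in path(pos, target))
```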
And that's it:
> (organize (parse-input (uiop:read-file-lines "src/2021/day23.txt")))
Let's take a look at part 2.  It begins with:
As you prepare to give the amphipods your solution, you notice that the diagram they handed you was actually folded up. As you unfold it, you discover an extra part of the diagram.
Uh oh...
Between the first and second lines of text that contain amphipod starting positions, we are asked to insert the following lines:
Using the initial configuration from the full diagram, **what is the least energy required to organize the amphipods?**
OK, let's _massage_ our input as instructed:
(defun massage (lines)
  (append
    (subseq lines 0 3)
    (list "  #D#C#B#A#"
          "  #D#B#A#C#")
    (subseq lines 3)))
And that's it -- our part 1 solution should be well equipped to deal with the bigger burrow!

Final plumbing:
(define-solution (2021 23) (lines)
  (values (organize (parse-input lines)) (organize (parse-input (massage lines)))))

(define-test (2021 23) (18282 50132))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  3.197 seconds of real time
  3.089562 seconds of total run time (2.924923 user, 0.164639 system)
  [ Run times consist of 0.134 seconds GC time, and 2.956 seconds non-GC time. ]
  96.65% CPU
  7,353,654,160 processor cycles
  249,187,888 bytes consed

Unfortunately, I was not able to solve this problem on time; actually, I could not solve this until a couple of days after Christmas.  In addition, the [solution](https://topaz.github.io/paste/#XQAAAQDMIAAAAAAAAAAUGQimgulVPYrJhMVjyYOkoNMRrPzuMVk/CekO/LruVC9Ms1jGXD3R2Xlv5UJlKv+s2h7+RTSP3QyatBEOZf/6fJFeUYm3COdf7TjD+XiU8nnIhdlmU0sqbnEce3WEEZT8fVJzKzYGaenESgPH65+4oNltMKuRZg8ao2GtPWXf8DnPXqqfbnFIsTeDvaTSGfGvdU9GUnQcBljQ9K6Nw62ag26yS8cwUZM/bCRnajawkIc2weuEY/hmFsnGIZxPhCSIQv7WZjvc+kaKuCbmoC7YAP3d0c7vaOVDEMl8PwCANg/SZTB/sfaFOPtnGTv8Aw/ZllojmEAwPe6v5CCwiNzIMs2pWUE3AuVriSTxOZYEb2kkrzpIzLelBDL3h1xDxMmaEMTyF6FcxebFSEUbdi5gCjNtETqx/6YsOJ2izvZyVGVV20ZX8pvHfkrZF2dDiK2CsQUpshHYh70NioDVWqVT6C5+MIxmuSWYE3gMVGJ+34O+ZspJOAiEsGsKofsG6U/l1up9CudsT49cOlUVLNwxS417crs99EZqzLUdVNcV202tbGHGYKxjKr+C/1s6IBE2lOSYkovTgEjPHj1LqrzyXSXDCOAcdwaLHbm/3xICgVMBOjUZX4YnbmWpuU4K7Lz2h/1aMnZ0jd1fE/UMwug8thiL1zUnk/xM3/5JcYhbrJ8QpDQCOLP1CL2VVj8BdN20hldXz/YXxAgGC/vpaIp88uS9l6km595sHdXAQlV1uc+tY3Dw/HpL7nt1Oy2tUSzsmTbSuSt3gVq3zrspMLvorORLUL/1OP/8ptBJ/7I3ZKDbIkRp31hQBuhUGtchpJWrCelDqLHBZS9196+ilHJ1YnjAoHVc2gBDDtplDUcqd9nBF/AT9Gxi9oOmQGgBSPkQveQxrQiyWjQU1rXHWKDO9KHvjb8m4vXBCladiph1PygKi/cNvmHEJeo+wtwDX8RL/T69B3AAi6Xy9y+/aF2KF3USpeYkGl9bXX/UV8GzI4dNnqJXS3VEunEjaK50B/Z7ZjLcSisiNdH1mcgp35yzegmbULYgn6m3X6UMGMrwENtR2CAXXX0renSLm0GZYQ5t1sao7iazmv26Q4KCjg66PQPlw33Z8loRnLhzQ5CvDeW2RFWb6Bx92mDaNroqrhYhirq5jmtwaLFCIM7/UYtNvmSCxL5bIzqfjvycajG/0G3gkv4aeASr2kdbQuET1wOky/53hlsJmzG2GR7gF1gzSZwHWiu/PscBM3Qme2VA9kV2CiQx0HHEd9VM1LjmO7os/I7tUUspmwzuH3RFdX/YSOJ2U5Til55ns4/ki0qhjPcU8z4CFgvqrSIyv4gh9muPYP2OHN2duPrNh3T1OV9OfDnnSIYj6sPDBzFvfsCd84Fcj/LioSVKTkAsggTg/bw4/5bSlHr8LZpZS+CTOT7qRGOIfXKCLNAMQ9kXPYNW9wB3tsnlPy10RZIE/Q99PG6ALFJeknjKSDd1gBC6LCYQHTJyTFLXt3FPkQw8tq/HEyP3yuXtimfm2pWAmNXP9oCyuf73v2uHS8clt6EPMSTMOcVfv+6kI5ZrAHDKk80hlbGq8OvyG5Rmitr1an8gOoHbmUTAkjc36GpdGhS0ILoAR4wDIVwLeuGwggQeNNep1JLCKN5D5bN1pyfgwI0Q0Qpj8p8d/Is7vkgZrPW6JApPNAsRvhHB5aUVr6W4/gEcCTWn+KHUQ7Bj4AqnwHLwXmWmzKdRULMM+4OGs+xYMaHB0GR9Dbt7oHqpRng8Sv0e68LlP2rWZO2kq9zkxunEYf2KA6QOB2pfAz7B5DdDtVGFWLIra/WG7ODKfeMthR14vY3foNDwg2B7nQE1ePGzmIyLALxf/c1u8efki3ERj8PMwrok+eJ/HR0RmwP35YGYbhfGq94R9sry1amPvlquBI8ipNx1IG3ain0JX0172IgZpvsPe/jbIyzCu/EUZaUGIOSPp/eoVpkNZ62E8VHLtpi23n52xRdYMj0cA64EHSciXL169QdRwYTTw4iluj/D3tVkce92ZOMW2F/CQs4mJUSZiZWw31euDnYDVZWO7yH98PFM6XajEsV3J74muooEA89szjItMuBrx8Zax6pmznTZfHaIh/9nW73Lzmf82wsd1pKIu6ylYjWbijMXWlFyPKG7jFpOTUOQjN0LEESDkjOOhhneuoKbbKWJjiOYkt8gpgNNWjz1Zsiy//Mj1U55JIHMfEe79iA7fcmiMckT9T0I6c1KrgZklYNErsOuF9bVQfAxqWUZaJ7gZCshwckMI7STzOsupvQV6JEMk2dsMPOuuDmjKuND93MBuEb9AyV4MoyoR3PyYWbaL6JRbLj291lXd+XyDAtUMdpSAvPKtN13KAgHuywMslkYFHdjSP8uLWvYcupGKzHYG3ceUpnBgh1lqweJOxnZ4rmvOOApFgqnIXh4ww3yH9MgvTw8gEwoNFZU186OyIzOFrJFPkRJ+DcBWSMi5weqOBEKb1zdCqvEIrRnPCOR9J51976qmOV5bNXfyUKAEgARdjgoEeA85Wh5A6Rzf/+DdtYAz/BLd44yRpnAUTQ6IjWNoW4ySzZBtOUAIkOy/tO8ArYaY5nNYMKxGkICAEUEWh1K4ULTEupaF0m2QeMtPufvwwOmVHdINMF4YKobMNondrDtyWKwgRQ9xvS5jnLtRTIv4OWMKVUS1bRD0rynV4f/knwKtg==) I came up with back then was not nearly as efficient as this one: around 8 seconds for part 1, while for part 2...
Evaluation took:
  2742.203 seconds of real time
  2570.780518 seconds of total run time (2510.998329 user, 59.782189 system)
  [ Run times consist of 17.916 seconds GC time, and 2552.865 seconds non-GC time. ]
  93.75% CPU
  6,306,877,691,085 processor cycles
  130,990,228,720 bytes consed

What did I get so wrong, back then, to achieve such a huge runtime? Well, a few things actually.

First, I did not realize I was _re-calculating_ all the costs to move between any two locations, over and over again, every time A* would invoke my heuristic function... WTF?!  Well, I did realize my solution was taking forever to run, so I came up with the following heuristic hoping that this would speed things up a little:

- For each amphipod type
- We find where these amphipods are
- We find the minimal cost to move them in place, as if no other amphipod could get into our way (e.g. with two amphipods, `A1` and `A2`, and the two siderooms, `R1` and `R2`, we have two possible scenarios: `A1` moves into `R1` and `A2` into `R2`, or `A1` moves into `R2` and `A2` into `R1`; we try both, and pick whichever costs less)
- We sum all these minimum costs together, and that's it
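The per-type minimum described above is a tiny assignment problem; with only a handful of amphipods per type, brute-forcing the permutations is fine.  A Python sketch (the `cost` function here is a hypothetical stand-in for the pre-calculated lookups):

```python
from itertools import permutations

def min_assignment_cost(cost, positions, rooms):
    """Cheapest pairing of one type's amphipods with its siderooms,
    ignoring every other amphipod: try all pairings, keep the minimum."""
    return min(sum(cost(p, r) for p, r in zip(positions, perm))
               for perm in permutations(rooms))
```

The heuristic is then just the sum of these minima over the four types.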

_But we cannot calculate all these costs, all the time, right?_  So we are going to pre-calculate these, _once_,  and then look the values up:
(defun heuristic (costs amphipods &aux by-type)
  (loop for (type) being the hash-values of amphipods using (hash-key pos)
        for existing = (assoc type by-type) do
        (if existing
          (push pos (second existing))
          (push (list type (list pos)) by-type)))
  (loop for (type (pos1 pos2 pos3 pos4)) in by-type for m = (cost type) sum
        (loop for (room1 room2 room3 room4) in (all-permutations (rooms type))
              minimize (* (+ (gethash (cons pos1 room1) costs)
                             (gethash (cons pos2 room2) costs)
                             #+#:part2 (gethash (cons pos3 room3) costs)
                             #+#:part2 (gethash (cons pos4 room4) costs))
                          m))))
Except... I made a silly mistake at the call site:
(defun part1 (input &aux (opens (car input)) (amphipods (cdr input)))
  (multiple-value-bind (best cost path all-costs)
      (a* amphipods :test 'equalp :goalp #'donep
          :neighbors #'next
          :heuristic (partial-1 #'heuristic (calculate-all-costs opens)))
    ...))
Spot the problem?  It's the use of PARTIAL-1!  A quick MACROEXPAND-1 makes the issue obvious.
I am not exactly sure what I was thinking, but I guess I thought it would create a new local binding, assign to it the result of `(calculate-all-costs opens)`, and then pass that along to HEURISTIC; instead, PARTIAL-1 is keeping all its input forms as they are, meaning CALCULATE-ALL-COSTS would be invoked over and over again!  Oh boy...
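The difference is the classic eager-versus-deferred evaluation trap, easy to reproduce in Python (hypothetical stand-ins, not the actual PARTIAL-1 expansion):

```python
calls = 0

def calculate_all_costs():
    """Stand-in for the expensive pre-computation; counts its invocations."""
    global calls
    calls += 1
    return {"anywhere": 1}

# What I effectively built: the expensive form stays inside the closure,
# so it is re-evaluated on every single invocation...
heuristic_slow = lambda state: len(calculate_all_costs()) + state

# ...versus what I meant: evaluate once, close over the result.
costs = calculate_all_costs()
heuristic_fast = lambda state: len(costs) + state

for s in range(1000):
    heuristic_slow(s)
    heuristic_fast(s)
```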

Something else that I did, and that only later I realized was not beneficial at all (in terms of performance), was keeping track of the number of moves each amphipod had made (and of course backtracking when any of them had done two and still found itself _not in place_).  The thing is, we only allow two types of moves, from a room to the hallway and vice versa, and each move has a cost attached to it; so why care if an amphipod keeps on bouncing back and forth between the same two locations, especially if it does not make the burrow any more organized than it was before?  The cost to get to that state would simply increase over time, and after a bit A* would simply pick a different, cheaper branch.  My original solution (**without** the heuristic!) would take around 316s to organize the example burrow, and when I changed it to stop keeping track of the amphipod moves and simply let A* do its thing, the runtime went down to 264s.  Still a lot, but definitely better.

Last but not least: search state representation.  My [A*](https://topaz.github.io/paste/#XQAAAQDdBwAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHR/lSrXrRzt2ru8sshOcmQ5UeY7P5AZQhS4ncXon3a6r8j5K/gr4xuK1jjTG4PHnVajamzB6t54TV5dSPS63MDOK6jD6CsHfC+VkV4bms/+I8MmYXI4GU2JxpblEiXKntPh1orBPfGpyTCfhx4v+kHU29G6mF1Y4EM6ozuoAyzjwxpY0eKSSV1zFprANFIIlsaPqm/nWF9W4i224u0tUm3IHb+tnMDrFtscipbIplaGiZN8B2Q1apClGT/+51hyLVNGM+4prkx8VWxBWeWuN62LCmMrY0iqRvFeCwKqpwg4s4VRogYMgeG3AOyY1DlJ0UJFetXLjJ+7pMA+8jUmYyE6xRJd9R+jn/cPnkNyDceLd1iFB/ODUgVXLcyrYb7jFIIeucRNGqkgQ7lwe0xvmyiBLkHyIe3ML8lZQcmhyinz9Ynab880cOGsBYBuTRjCS7dCvLr5XKDjw8SpdnWXgjVrzLA49wE+NZXVkPz6tTVzFUR36unRBsIJZJKfYRi3ZUToT1eFyKKayvxZpI9V1QkNJvJehPwqf797u2T57qMuE1UrtnhSso1kQh/BEo33aCHrNXfO/H71RV95KbXakyaLfqTAeeKA2hUrQSbAFnewyRS+3eKmTvbe0+XL1ltwMKAccDiSJR+DZ8iMZhQx91qBpU6W4Gu5c+XA4oT8DFrhGWR1aHzdLLf2ICtG8VUVnLizIL/+Wv790=) implementation, makes use of two HASH-TABLEs internally:

- `cost-so-far`, mapping from a given search state, to the cost required to get there -- required to build up the final cost
- `come-from`, mapping from a given search state, to the state before it -- required to build up the list of steps to get to the goal state

And as it turns out, putting HASH-TABLEs (my original representation of the burrow) inside another HASH-TABLE (e.g. `cost-so-far` or `come-from`) can slow things down...**a lot**; I am not entirely sure why, but I guess it has something to do with the EQUALP function performing way better on ARRAYs than HASH-TABLEs.  Let's put together a little experiment to showcase this:

- We initialize our `cost-so-far` and `come-from` hash tables
- We initialize our burrow, with 20 elements
- Then, 10000 times
- We copy the burrow
- Move two random elements
- Save the new state inside `cost-so-far` and `come-from`

First we do this, with the burrow backed by a HASH-TABLE:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  8.590 seconds of real time
  7.674754 seconds of total run time (7.435813 user, 0.238941 system)
  [ Run times consist of 0.117 seconds GC time, and 7.558 seconds non-GC time. ]
  89.35% CPU
  19,757,710,348 processor cycles
  4 page faults
  13,891,952 bytes consed

Then, with the burrow backed by a 2D ARRAY:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 with burrow = (make-array (list rows cols))
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (aref burrow row col) (random 100))))
          repeat 10000 do
          (let ((next (copy-array burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (aref next row1 col1) (aref next row2 col2))
            (setf (gethash next cost-so-far) (random 1000)
                  (gethash next come-from) burrow
                  burrow next))
          finally (return cost-so-far)))
Evaluation took:
  0.016 seconds of real time
  0.016474 seconds of total run time (0.014154 user, 0.002320 system)
  100.00% CPU
  38,822,509 processor cycles
  4,955,568 bytes consed

#<HASH-TABLE :TEST EQUALP :COUNT 9472 {10053A5973}>
/me gulps...

Ok... let's tweak the first experiment a little; let's serialize the burrow into a list of values (i.e. one value per location of the burrow), and see whether or not that affects the runtime -- it surely will cons more than before:
> (time
    (loop with cost-so-far = (make-hash-table :test 'equalp)
          with come-from = (make-hash-table :test 'equalp)
          with rows = 2 and cols = 10 and burrow = (make-hash-table :test 'equal)
          initially (loop for row below rows do
                          (loop for col below cols do
                                (setf (gethash (list row col) burrow) (random 100))))
          repeat 10000 do
          (let ((next (copy-hash-table burrow))
                (row1 (random rows)) (col1 (random cols))
                (row2 (random rows)) (col2 (random cols)))
            (rotatef (gethash (list row1 col1) next) (gethash (list row2 col2) next))
            (let ((state-key (uiop:while-collecting (add)
                               (loop for row below rows do
                                     (loop for col below cols do
                                           (add (gethash (list row col) next)))))))
              (setf (gethash state-key cost-so-far) (random 1000)
                    (gethash state-key come-from) burrow
                    burrow next)))
          finally (return cost-so-far)))
Evaluation took:
  0.082 seconds of real time
  0.079201 seconds of total run time (0.065240 user, 0.013961 system)
  96.34% CPU
  190,095,780 processor cycles
  26,703,456 bytes consed

It does cons more indeed, but the overall runtime is now comparable to the second experiment, the one in which we used the 2D array!
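The takeaway generalizes beyond Lisp: when search states are backed by a structure that is expensive to hash or compare, derive a cheap canonical key and index the bookkeeping tables by that key instead.  A Python sketch of the idea (dict-backed state, tuple key):

```python
def state_key(burrow):
    """Canonical, cheap-to-hash key for a dict-backed burrow:
    a sorted tuple of (position, occupant) pairs."""
    return tuple(sorted(burrow.items()))

# Two dicts representing the same state, built in different orders, map to
# the same key -- and unlike the dicts themselves, the key is hashable.
a = {(1, 1): "A", (2, 3): "B"}
b = {(2, 3): "B", (1, 1): "A"}
cost_so_far = {state_key(a): 0}
```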

So I went on and [updated](https://topaz.github.io/paste/#XQAAAQBXCAAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHQi11t+qsyRC3uWF4xf7knr94XFFlmAeHILKPJwTwqqiga/1AEzyjEmZm8Yje6+ZHdKsFZGwHMCe/PZHtbXN/DaFKTz/nMcvv0X0lc8AlYgDi14Hn9ISQyH9XRgO+JbXFpgPy71pVPWxKSeE9inD39mrgND27XXv7js03AL688VbWW8oZQyLz1u5RAXqcKElyltwGhlBEB6XVIKJAuaP/FuFdbS3SHHBNncyV7y7P6mMraqzuyU4MEyUWrNiUkhkRpwZbu/s+fvqjI9sdiDoQKZbnjy7V+/gv1v4wkhkkv1NrFjXEjD2+jiiJVJc431kuzbcB/juIWu+jV0SBCzURL96i8kES2htSfzAyxPuOqzuPoWZ+Q72smhX9RJXjeTq2pVRlpu0D1LNOiakkvMJxGtK1+HdQOGRD3UBQhw9POB14yS+g4KS8NBTn0hBS9r2IRYc/LmAfNjy9aBys2QEtgSx/fl03ASL6xd6y4VSCgeus+A+GGfcdRsNfXIvKnVjMx7XWksMgNbFGkpzejqIunal82kOYfSuIRv+cBTydZ4w5WCjVkePqmn+COe7U6k5LneS/8KgPSmFl+W0GjKFiOtWzF7yILs7M9ErGHAUEeuCpCcp+fMb5NXkNsedoDxc7zQNP0FO+XOTsktOJ1EWnf7RMjcC+XZr+kYITTwOplIy23GfpjIiQN7pwWJAF5O5hf03PlvWKriOw/3zKT9MbMIhP33+f4viqU/auPXhPLIpnkOX3IUyD2WjKYQVh5y7vs0s7/8K2ADA=) A* to accept a new `:state-key` argument to be used to transform search states into easier to EQUALP structures; I then 
[changed](https://topaz.github.io/paste/#XQAAAQA0EAAAAAAAAAAUGQimgulVPYrJhMVjyYOkoNMRrPzuMVk/CekO/LruVC9Ms1jGXD3R2Xlv5UJlKv+s2h7+RTSP2wDdB5SpbuTIPB6pqSVlqYlUsNiH6bdTOE5QKqdFl92a7Ozx1/mb/dNm2aI7MI+ZxX5n9plD5QGYOpO2b4mm1/FiuMP2shOFemQ/Wfwh8V+p129h/QDVtymp0AdnAEQOhyl+bp4E9+3xL2NcfGe1BSlhs0YyeE3zE8wJrVNf9GvX3K72f0cvPEq7Y3F5Bv4AxMtxLwKd6gOTIQXo/WttDCftDO9XmR3inU3/uaIviMgdOQt+DqZBy95/qjG9yOXUUmBYvAsF1fSiVBxiKa4/fNkL//VVW7gJnOGymMv+JdnfRgnPuKXr6K5ldamg5rh1TUe24OBDzxAyGrjcc5J0nAG/ug8002ZOFrZEVa09UhGe1gYuGG3zmGD7zXGZIzmOefMWykGdUaZkRMWV1EyTvmPUu45wHk06qlY8VO4v5k9eZBWnlqvubobMmlUsYNfyP+ytl89kzRG9llQFOeHy5wj4qPrWbKDWOxIMyLMoufo6F9RQwcQWugoxqZ2+bN8agbTYfs5jbEMFuysB8OkPXs18Da6Lp0MJVk758pxSaCtHmJNHG2a0jc5hdLQ3a72g6+i/b/RaARny+CxcMX6ZszC3fWJzQq+33Xx2IPKvAnwWpDCQ9cVJ+AwPEWe/m06hVs4gpSJGVSfM9M5Gkigl9erK6cesztVVhlGAGUOX5C7f/Nug2IwTH/xAUEonOzKIUofdgkxExwuMwh3+7P1UQpbP4cccrjSP8BJGwgjWEn2KpF0j6Q3xIx82P3kUkiY3JIiWY+bV7WhJNGSdoqcMYNOLTZdAykAQ6hIHFqfV8JRz3ml17yG6GroygkZ3lO7d9X93p9TzrbKibQ3cfZ8l5M1Dm3uY+wPggdZj9mx//MrifxUqQulfVnTS4tUQWlqqnQ6rGpUdbjc34k2U5Wdb+PEwf+YA5rC7+jYTbUz+wZG8v/yPHuQchRr0w4wI6yvg14V+cRthX5AemM9pUDhBFjRFCerJ9XlNVdvU2UIrYhtqKDF63OintpoEnlyA9iqtheBIdRxiUsWyZnfhI9U49ahnaOjO2l21fMOZzwzJmT/5BxIvU91gDUlqJ/VnLI/KV2gIVlJulaigucc77sDzEkxB/5GW0lbXE9hXBbMoDehA3Sx6VsSzAQfRde1tklW+v+YyiAnhaD4i57j+W6KZOBd5lrS0On+gXcQSX9JJdjqkrB5MzOmQTb5+88+i5V1uU+8pwZmXxjMEuEGE0+eIFkmAOyzWI0ggGkCIuWNtnadXjWG3bLKcPFRWCl01YIuWBp8qVe7txAggenOaGnnAKF6PJsuP4QNaPY64+Omb5jkmK+i7oli8mTzBv93BVS9csxalztzUF/zlzveWfRWTm3fSR2dJbXI6o5c6m1TtBgdH0CsJRv4lebrKOlFIE98fYubgxL9XkKSOA0vuQtWeFjzq9tAb6Sl4UXnKWutcUnbZtsysRHEj36u7Hktd8QtA8+UpLkXKaDx6vZ5E95IbVCshSbw7irz3/bIVOAcxyExnprnrFmSneaPwOTpwFddbYcKhMJhSKuOqCn4JIhwz40VXkRd/W38RZo9DKS3PHAqQD+T5isN95R4WP7BlrzlDB/L+j9lB3GKTRcLDvEz7e9m+2dsMTb9aWaB5/HRMnZDvzu0x00SuBfQiQiP3NydjG1M/ECIhDvkVbDt563KmpWj/GDTXU/vFg9m3URYO9LD606zyyF9P00OGUDlvsnywI7BxrOr/O7/8AA==) the calling site of A* to leverage this new argument and serialize the burrow into a mere list of amphipods; and with 
this:

- Part 1 with the example configuration would now complete in less than 1 second (that's a 100x speed up!!!)
- Part 1 with my input would complete in less than 8 seconds, while part 2 in less than 11 seconds

That's huge.  It's still better to use a 2D array instead of a HASH-TABLE, but at least I would not have to wait minutes to see an answer!

From 40 minutes, to 3 seconds...what a ride!

2022-01-04 (permalink)

Advent of Code: [2021/22](https://adventofcode.com/2021/day/22)

We overloaded the submarine's reactor, and we need to reboot it:
The reactor core is made up of a large 3-dimensional grid made up entirely of cubes, one cube per integer 3-dimensional coordinate (`x,y,z`). Each cube can be either **on** or **off**; at the start of the reboot process, they are all **off**.
To reboot the reactor, you just need to set all of the cubes to either **on** or **off** by following a list of **reboot steps** (your puzzle input). Each step specifies a cuboid (the set of all cubes that have coordinates which fall within ranges for `x`, `y`, and `z`) and whether to turn all of the cubes in that cuboid **on** or **off**.
For example, this is what our input is going to look like:
on x=10..12,y=10..12,z=10..12
on x=11..13,y=11..13,z=11..13
off x=9..11,y=9..11,z=9..11
on x=10..10,y=10..10,z=10..10
Let's begin by parsing this; for each line, we are going to be creating a tuple containing:

- :ON/:OFF as first element
- Min and max x values, as second and third elements
- Min and max y values, as fourth and fifth elements
- Min and max z values, as sixth and seventh elements
(defun parse-instructions (data)
  (mapcar #'parse-instruction data))
(defun parse-instruction (string)
  (cl-ppcre:register-groups-bind ((#'as-keyword state) (#'parse-integer x1 x2 y1 y2 z1 z2))
      ("(on|off) x=(-?\\d+)\\.\\.(-?\\d+),y=(-?\\d+)\\.\\.(-?\\d+),z=(-?\\d+)\\.\\.(-?\\d+)" string)
    (list state (min x1 x2) (max x1 x2) (min y1 y2) (max y1 y2) (min z1 z2) (max z1 z2))))
Let's keep on reading:
The initialization procedure only uses cubes that have `x`, `y`, and `z` positions of at least `-50` and at most `50`. For now, ignore cubes outside this region.
Execute the reboot steps. Afterward, considering only cubes in the region `x=-50..50,y=-50..50,z=-50..50`, **how many cubes are on?**
Easy peasy:

- For each instruction
- _Clip_ the cuboid to the given -/+50 units region
- For each 1x1x1 cuboid in the resulting region, turn it on or off accordingly
- At the end, count the 1x1x1 cuboids which are on
(defun part1 (instructions &aux (cuboids (make-hash-table :test 'equal)))
  (loop for (state x1 x2 y1 y2 z1 z2) in instructions do
        (setf x1 (max x1 -50) x2 (min x2 50)
              y1 (max y1 -50) y2 (min y2 50)
              z1 (max z1 -50) z2 (min z2 50))
        (loop for x from x1 to x2 do
              (loop for y from y1 to y2 do
                    (loop for z from z1 to z2 do
                          (setf (gethash (list x y z) cuboids) state)))))
  (loop for state being the hash-values of cuboids count (eql state :on)))
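Just to double-check the logic, here is the same clip-and-enumerate idea as a quick Python sketch (Python only because it makes for a convenient standalone snippet, not the code above); on the 4-instruction example it reports `39` cubes on:

```python
def part1(instructions):
    # instructions: (state, x1, x2, y1, y2, z1, z2) tuples, bounds inclusive
    cubes = set()  # the 1x1x1 cubes currently on
    for state, x1, x2, y1, y2, z1, z2 in instructions:
        # clip each cuboid to the -50..50 initialization region
        x1, x2 = max(x1, -50), min(x2, 50)
        y1, y2 = max(y1, -50), min(y2, 50)
        z1, z2 = max(z1, -50), min(z2, 50)
        for x in range(x1, x2 + 1):
            for y in range(y1, y2 + 1):
                for z in range(z1, z2 + 1):
                    if state == 'on':
                        cubes.add((x, y, z))
                    else:
                        cubes.discard((x, y, z))
    return len(cubes)

example = [('on', 10, 12, 10, 12, 10, 12), ('on', 11, 13, 11, 13, 11, 13),
           ('off', 9, 11, 9, 11, 9, 11), ('on', 10, 10, 10, 10, 10, 10)]
```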
Part 2 starts with:
Now that the initialization procedure is complete, you can reboot the reactor.
Starting again with all cubes **off**, execute all reboot steps. Afterward, considering all cubes, **how many cubes are on?**
Note: the text contains an example, the solution of which seems to be: `2758514936282235`.  So I am afraid we won't be able to bruteforce this -- not naively at least!

The solution that I ended up implementing -- which honestly took me way too long to get to, and which, as I only found out later, goes in the literature under the name of [**coordinate compression**](https://stackoverflow.com/a/29532057) -- builds on the idea that we could _merge_ adjacent cuboids together if a) they are in the same state, i.e. :ON or :OFF, and b) the final shape is still a cuboid.

So, if we figured out a way to _split_ all the input cuboids (possibly overlapping) into another set of **non-overlapping** cuboids, we could then solve the problem as follows:

- First we build a 3D array, where each element represents the state of a **non-overlapping** cuboid -- we will use a BIT array for this
- Then, for each input cuboid
- We figure out which **non-overlapping** cuboids it covers, and turn those on / off accordingly -- here we flip whole NxMxO cuboids at once, not each of their 1x1x1 cubes
- When done processing the input instructions, we scan the grid of **non-overlapping** cuboids, and sum up the _volumes_ of the ones which are on

The above should translate into the following function -- we are going to be looking into COMPRESS-COORDINATES in a sec:
(defun part2 (instructions)
  (destructuring-bind (xx yy zz) (compress-coordinates instructions)
    (let ((grid (make-array (list (length xx) (length yy) (length zz))
                            :element-type 'bit
                            :initial-element 0)))
      (flet ((index-of (item vector &aux (start 0) (end (length vector)))
               (binary-search start end (partial-1 #'<=> (aref vector _) item))))
        (loop for (state x1 x2 y1 y2 z1 z2) in instructions
              for i-min = (index-of x1 xx) for i-max = (index-of (1+ x2) xx)
              for j-min = (index-of y1 yy) for j-max = (index-of (1+ y2) yy)
              for k-min = (index-of z1 zz) for k-max = (index-of (1+ z2) zz) do
              (loop for i from i-min below i-max do
                (loop for j from j-min below j-max do
                      (loop for k from k-min below k-max do
                            (setf (aref grid i j k) (ecase state (:on 1) (:off 0))))))))
      (loop with i-max = (1- (length xx)) and j-max = (1- (length yy)) and k-max = (1- (length zz))
            for i below i-max for x1 = (aref xx i) for x2 = (aref xx (1+ i)) sum
            (loop for j below j-max for y1 = (aref yy j) for y2 = (aref yy (1+ j)) sum
                  (loop for k below k-max for z1 = (aref zz k) for z2 = (aref zz (1+ k))
                        when (= (aref grid i j k) 1) sum (* (- x2 x1) (- y2 y1) (- z2 z1))))))))
So how can we split the complete 3d space into a set of **non-overlapping** cuboids?  Well, we could use all the distinct `x1`, `x2`, `y1`, `y2`, `z1` and `z2` values mentioned in the input instructions; unfortunately though, this choice would still result in a list of overlapping cuboids, and that's because all the `*2` values mentioned in the input instructions turn out to be inclusive.  So, what can we do instead? We could use all the `v1` and `(1+ v2)` values, and _that_ would for sure result in a set of **non-overlapping** cuboids.
(defun compress-coordinates (instructions)
  (flet ((unique&sorted (list &aux (list (remove-duplicates list)))
           (make-array (length list) :initial-contents (sort list #'<))))
    (loop for (_ x1 x2 y1 y2 z1 z2) in instructions
          collect x1 into xx collect (1+ x2) into xx
          collect y1 into yy collect (1+ y2) into yy
          collect z1 into zz collect (1+ z2) into zz
          finally (return (list (unique&sorted xx) (unique&sorted yy) (unique&sorted zz))))))
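To convince ourselves that the half-open `v1` / `(1+ v2)` trick works end to end, here is a compact Python sketch of the same compress-then-scan idea (an illustrative re-implementation, not the Lisp above); on the 4-instruction example it reports `39` cubes on, matching the brute-force count over the same instructions:

```python
from bisect import bisect_left

def count_on(instructions):
    # instructions: (state, x1, x2, y1, y2, z1, z2) tuples, bounds inclusive
    # one sorted coordinate list per axis, built from every v1 and (1+ v2)
    axes = []
    for d in range(3):
        vals = set()
        for _, *bounds in instructions:
            vals.add(bounds[2 * d])
            vals.add(bounds[2 * d + 1] + 1)
        axes.append(sorted(vals))
    xs, ys, zs = axes
    lit = set()  # (i, j, k) indices of the non-overlapping cuboids which are on
    for state, x1, x2, y1, y2, z1, z2 in instructions:
        for i in range(bisect_left(xs, x1), bisect_left(xs, x2 + 1)):
            for j in range(bisect_left(ys, y1), bisect_left(ys, y2 + 1)):
                for k in range(bisect_left(zs, z1), bisect_left(zs, z2 + 1)):
                    (lit.add if state == 'on' else lit.discard)((i, j, k))
    # each cell [xs[i], xs[i+1]) x [ys[j], ys[j+1]) x [zs[k], zs[k+1])
    # contributes its full volume
    return sum((xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j]) * (zs[k + 1] - zs[k])
               for i, j, k in lit)

example = [('on', 10, 12, 10, 12, 10, 12), ('on', 11, 13, 11, 13, 11, 13),
           ('off', 9, 11, 9, 11, 9, 11), ('on', 10, 10, 10, 10, 10, 10)]
```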
And that's it!

Final plumbing:
(define-solution (2021 22) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

(define-test (2021 22) (570915 1268313839428137))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  14.835 seconds of real time
  13.695186 seconds of total run time (13.327857 user, 0.367329 system)
  [ Run times consist of 0.162 seconds GC time, and 13.534 seconds non-GC time. ]
  92.32% CPU
  34,120,998,148 processor cycles
  231,646,416 bytes consed
Unfortunately this takes quite some time to generate the correct answer for part 2, and I am not sure what we could do to speed things up.  When I timed the two LOOP forms inside PART2:
(defun part2 (instructions)
  (destructuring-bind (xx yy zz) (compress-coordinates instructions)
    (let ((grid (make-array (list (length xx) (length yy) (length zz))
                            :element-type 'bit
                            :initial-element 0)))
      (flet ((index-of (item vector &aux (start 0) (end (length vector)))
               (binary-search start end (partial-1 #'<=> (aref vector _) item))))
        (time (loop for (state x1 x2 y1 y2 z1 z2) in instructions
                for i-min = (index-of x1 xx) for i-max = (index-of (1+ x2) xx)
                for j-min = (index-of y1 yy) for j-max = (index-of (1+ y2) yy)
                for k-min = (index-of z1 zz) for k-max = (index-of (1+ z2) zz) do
                (loop for i from i-min below i-max do
                      (loop for j from j-min below j-max do
                            (loop for k from k-min below k-max do
                                  (setf (aref grid i j k) (ecase state (:on 1) (:off 0)))))))))
      (time (loop for i below (1- (length xx)) for x1 = (aref xx i) for x2 = (aref xx (1+ i)) sum
                  (loop for j below (1- (length yy)) for y1 = (aref yy j) for y2 = (aref yy (1+ j)) sum
                        (loop for k below (1- (length zz)) for z1 = (aref zz k) for z2 = (aref zz (1+ k))
                              when (= (aref grid i j k) 1) sum (* (- x2 x1) (- y2 y1) (- z2 z1)))))))))
and ran the test again, I got the following:
> (time (test-run))
Evaluation took:
  4.088 seconds of real time
  3.189448 seconds of total run time (3.073847 user, 0.115601 system)
  [ Run times consist of 0.037 seconds GC time, and 3.153 seconds non-GC time. ]
  78.01% CPU
  9,402,713,194 processor cycles
  65,504 bytes consed

Evaluation took:
  10.920 seconds of real time
  9.741171 seconds of total run time (9.536960 user, 0.204211 system)
  89.20% CPU
  25,115,619,739 processor cycles
  416 bytes consed

Success: 1 test, 2 checks.
Evaluation took:
  15.704 seconds of real time
  13.589492 seconds of total run time (13.193504 user, 0.395988 system)
  [ Run times consist of 0.084 seconds GC time, and 13.506 seconds non-GC time. ]
  86.53% CPU
  36,120,260,013 processor cycles
  231,686,928 bytes consed
Which means that two thirds of the run time is spent scanning the grid of non-overlapping cuboids (FYI, it's an `828x836x828` 3D array, with `573148224` elements in it).  I am going to leave this for now, but if anyone has any ideas on how to optimize this, do reach out.

For comparison, my [original solution](https://topaz.github.io/paste/#XQAAAQDzDQAAAAAAAAAUGQimgx+p6PZhRh7uIO5BVe7pcMkiG6eOV22j2MYKuFs91c+ybCSTgFinLxYMDBog9+D9wK6Mym3ryCQmnaAMvpJKQHyEdtVsPNjjYnuytyD7Qrf6bGu6LFg1BiAQ72xs23DoX1Ss84dXIY34R1KEstD5kaS3TZaBhOkCRt1rui/F8X46S4FNkH8lmNyXe65HGJxbLmM5AxC70fIrc01PnIeYXoBYDXOWDzVB/LWRj/aG+TH+ub2IlKnO2F+v6DYfX2gwfzmZLlgQw+QhaHr0GPGmPWgaMnfHgKe7blNOvRP1NRvQ6FTbxw5mMV1TIJxFCc/lapiFsx6hxF5SPmxBQe6Btb2qY+fbFBRVnoSuvtD2+klndh7SHKu9rsBkp+sXSacQCv1uey4f7dheHjB0H+G4gQK3CvxjXd1ePXR972yZ/R5KQ789h7lPj5gU+zhSC3ckrECpA9D0pqD5j3hr2RXNun3RxEY9/u9KM29QW8U24xifKkVT9ugCuaB1JcAOFPluLox+xz+5z0lFEyXVGnLPN6i+zx2xnZiJ1Hwxt/irXSdaV2Y/hqGyxagGeuOSFBUxhropP0WtGMcggJ7h+uTFi03SUVFSdAMdE89APmygn4tZ8Ao54unEWsiD9ePqp/LcGKnOcgdq3fJkE47Alx9QR2F8DQKau80zfX3qYOOOTzXqAxVk8aUbZOtMV6iz5GUr5798oXsSWXKn0/ghAnO7GjQrOCbeGz7uYBi4u1E7ehUSfpJG+MKlo4fpL7Jyf7Nmn1nAACLn/he+SIWsa/6VkYIQT0IJis/C3tuiGeOEicXwvpjNNuR4n5czJY0Ta0UyWnbyftmHNNakZXKJcGWEZq4p0XFuLuzZqfp22bwwxOJWEIPTmPCrYL3bgpoaDpFbJhAc8T07WauEsQbDJA4UzfSAA1yClQpeW3HG6LUOTdM5s+q5ef7uEkxOZ8lqdbwVVJqh1R5NI5ITL2u2tXpWlH7thLThrYBccNXPxqFWlQLlFWWkgwQbkxqrx5NrygPJ2qqvBUHE2mFQic9ovGOTTF1T9Dj9FMQ7OL1TZ46ZtHbZynWaYYwnPpTfyZQDXPjuvpgPtYA9TWuq0dXupWhex6Nfi+OqXuzxtnDtLUgUpHl+ps8Wq1jEGk4jWijdseIIRMaur9U8U7Sv2nMAr0JFrbS7b8AOc+jkFxDNtOwNvkkn4oIMzrRiTKAecYWCMpERpIDgJEYI6RbyzeWrv073RkOY00F9LYH2uIP86bmY5cBcapqcu4cDzpbgIv6HZGV50onnbTTFfJ5pKTHt4mGefrymPqzdSKJF9uXVM+EMtJw6iSFT4pt6XOf/PbBBAA==) took around twice as long to get the answer for part 2.  What did I do wrong there?! Well, as I mopped things up for this entry, I realized that we could speed things up a little bit by a) using **binary search** instead of a linear scan to find the min / max indices of the **non-overlapping** cuboids, but most importantly, b) we could do this search **once** per input instruction (i.e. once for each `x1`, `(1+ x2)`, `y1`, `(1+ y2)`, `z1`, `(1+ z2)` value) instead of doing this over and over again inside each one of the nested LOOP forms.
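The binary-search part maps directly onto a standard library call in most languages; in Python terms (made-up coordinate values, just to show the idea):

```python
from bisect import bisect_left

# hypothetical compressed (sorted) x coordinates, for illustration only
xs = [9, 10, 11, 12, 13, 14]

# locate the slice of non-overlapping cuboids an instruction spans,
# in O(log n) per bound -- and only once, outside the nested loops
i_min = bisect_left(xs, 10)      # where x1 = 10 sits
i_max = bisect_left(xs, 12 + 1)  # where (1+ x2) sits, with x2 = 12
```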

Live and learn!

2022-01-03 (permalink)

Advent of Code: [2021/21](https://adventofcode.com/2021/day/21)

We have been challenged by the computer to play a game with it:
This game consists of a single die, two pawns, and a game board with a circular track containing ten spaces marked `1` through `10` clockwise. Each player's **starting space** is chosen randomly (your puzzle input). Player 1 goes first.
Players take turns moving. On each player's turn, the player rolls the die **three times** and adds up the results. Then, the player moves their pawn that many times **forward** around the track (that is, moving clockwise on spaces in order of increasing value, wrapping back around to `1` after `10`). So, if a player is on space `7` and they roll `2`, `2`, and `1`, they would move forward 5 times, to spaces `8`, `9`, `10`, `1`, and finally stopping on `2`.
After each player moves, they increase their **score** by the value of the space their pawn stopped on. Players' scores start at `0`. So, if the first player starts on space `7` and rolls a total of `5`, they would stop on space `2` and add `2` to their score (for a total score of `2`). The game immediately ends as a win for any player whose score reaches **at least `1000`**.
Since the first game is a practice game, the submarine opens a compartment labeled **deterministic dice** and a 100-sided die falls out. This die always rolls `1` first, then `2`, then `3`, and so on up to `100`, after which it starts over at `1` again. Play using this die.
The task:
Play a practice game using the deterministic 100-sided die. The moment either player wins, **what do you get if you multiply the score of the losing player by the number of times the die was rolled during the game?**
As usual, let's worry about the input first; this is what we are expecting:
Player 1 starting position: 4
Player 2 starting position: 8
And we are going to parse this into a CONS cell, with accessor functions for each player's starting position:
(defun parse-positions (data)
  (flet ((player-position (string)
           (cl-ppcre:register-groups-bind ((#'parse-integer pos))
               ("Player \\d+ starting position: (\\d+)" string)
             pos)))
    (cons (player-position (first data)) (player-position (second data)))))
(defun player1 (positions) (car positions))
(defun player2 (positions) (cdr positions))
Now...to get the answer for part 1, we are just going to have to simulate the game:

- Each player's score starts at `0`
- We initialize `last-die` to `100`, and `roll-count` to `0`
- We roll the die thrice, and store its sum, taking care of applying the _right_ wrap around logic
- We increase `last-die` by 3 -- again, taking care of applying the _right_ wrap around logic
- We move player 1 forward
- We increase its score
- We increase the die roll count
- If player 1 won, we stop; otherwise, we recurse, swapping player 1's data with player 2's
(defun play (p1 p2 &optional (s1 0) (s2 0) (last-die 100) (roll-count 0))
  (let* ((result (mod1 (+ (+ last-die 1) (+ last-die 2) (+ last-die 3)) 10))
         (last-die (mod1 (+ last-die 3) 100))
         (p1 (mod1 (+ p1 result) 10))
         (s1 (+ s1 p1))
         (roll-count (+ roll-count 3)))
    (if (>= s1 1000) (* s2 roll-count) (play p2 p1 s2 s1 last-die roll-count))))
MOD1's job is simply to apply the _right_ wrap around logic, i.e. if the new value is equal to the maximum value, then set it to `1`:
(defun mod1 (n max)
  (loop while (> n max) do (decf n max))
  n)
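As a sanity check, the whole practice game fits in a few lines of Python (an illustrative re-implementation, not the code above, with the wrap-around done arithmetically rather than with a loop); with the example starting positions `4` and `8` it yields `739785`, the losing score `745` times the `993` die rolls:

```python
def mod1(n, m):
    # same wrap-around as MOD1 above: values live in 1..m instead of 0..m-1
    return (n - 1) % m + 1

def play(p1, p2):
    # practice game with the deterministic 100-sided die
    pos, scores = [p1, p2], [0, 0]
    rolls, turn = 0, 0
    while True:
        move = mod1(rolls + 1, 100) + mod1(rolls + 2, 100) + mod1(rolls + 3, 100)
        rolls += 3
        pos[turn] = mod1(pos[turn] + move, 10)
        scores[turn] += pos[turn]
        if scores[turn] >= 1000:
            return scores[1 - turn] * rolls
        turn = 1 - turn
```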
Now that we're warmed up, it's time to play the real game -- part 2:
A second compartment opens, this time labeled **Dirac dice**. Out of it falls a single three-sided die.
As you experiment with the die, you feel a little strange. An informational brochure in the compartment explains that this is a **quantum die**: when you roll it, the universe **splits into multiple copies**, one copy for each possible outcome of the die. In this case, rolling the die always splits the universe into **three copies**: one where the outcome of the roll was `1`, one where it was `2`, and one where it was `3`.
The game is played the same as before, although to prevent things from getting too far out of hand, the game now ends when either player's score reaches at least `21`.
The task:
Using your given starting positions, determine every possible outcome. **Find the player that wins in more universes; in how many universes does that player win?**
Again...we are going to simulate this.

Let's suppose we had a COUNT-WINS function, capable of telling us, given each player's position, how many times each player would win; with it, getting the answer for part 2 would be as simple as calling MAX with each player's win count:
(defun part2 (positions)
  (destructuring-bind (wc1 . wc2) (count-wins (player1 positions) (player2 positions))
    (max wc1 wc2)))
Nice! Let's now see what our COUNT-WINS function is going to look like:

- The current player rolls the three-sided die thrice, moves, and collects their score
- If they won, we increase that player's win counter
- Otherwise, we let the other player play, and collect the win counts accordingly (when recursing, player 1 and player 2 data is swapped, so the win counts will be swapped as well)
(defun count-wins (p1 p2 &optional (s1 0) (s2 0))
  (let ((wc1 0) (wc2 0))
    (loop for d1 from 1 to 3 do
          (loop for d2 from 1 to 3 do
                (loop for d3 from 1 to 3
                      for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                      for s1-next = (+ s1 p1-next) do
                      (if (>= s1-next 21)
                        (incf wc1)
                        (destructuring-bind (w2 . w1) (count-wins p2 p1-next s2 s1-next)
                          (incf wc1 w1)
                          (incf wc2 w2))))))
    (cons wc1 wc2)))
The problem with this naive solution is that it will take forever to run; the text for part 2 should have hinted at this:
Using the same starting positions as in the example above, player 1 wins in **`444356092776315`** universes, while player 2 merely wins in `341960390180808` universes.
Let's think about this:

- each player's score, in any state we recurse on, is at most `20` -- had it reached `21`, the game would have already ended
- positions can only range from `1` to `10`, so there are at most `10 * 10 * 21 * 21 = 44100` distinct states

So maybe there aren't that many _distinct_ states after all; maybe we can _memoize_ all these calls?!

Well, it turns out we can, and it will work pretty fast too!
(defun count-wins (p1 p2
                      &optional (s1 0) (s2 0) (dp (make-hash-table :test 'equal))
                      &aux (key (list p1 s1 p2 s2)))
  (uiop:if-let (wins (gethash key dp))
    wins
    (setf (gethash key dp)
          (let ((wc1 0) (wc2 0))
            (loop for d1 from 1 to 3 do
                  (loop for d2 from 1 to 3 do
                        (loop for d3 from 1 to 3
                              for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                              for s1-next = (+ s1 p1-next) do
                              (if (>= s1-next 21)
                                (incf wc1)
                                (destructuring-bind (w2 . w1) (count-wins p2 p1-next s2 s1-next dp)
                                  (incf wc1 w1)
                                  (incf wc2 w2))))))
            (cons wc1 wc2)))))
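For comparison, here is the same memoized recursion sketched in Python (an illustrative re-implementation, with `functools.lru_cache` doing the `dp` bookkeeping); it reproduces the win counts quoted above for the example positions `4` and `8`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_wins(p1, p2, s1=0, s2=0):
    # returns (wins of the player about to move, wins of the other player)
    w1 = w2 = 0
    for d1 in (1, 2, 3):
        for d2 in (1, 2, 3):
            for d3 in (1, 2, 3):
                p1_next = (p1 + d1 + d2 + d3 - 1) % 10 + 1
                s1_next = s1 + p1_next
                if s1_next >= 21:
                    w1 += 1
                else:
                    # roles swap on recursion, so the win counts swap too
                    other, mine = count_wins(p2, p1_next, s2, s1_next)
                    w1 += mine
                    w2 += other
    return w1, w2
```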
Final plumbing:
(define-solution (2021 21) (positions parse-positions)
  (values (part1 positions) (part2 positions)))

(define-test (2021 21) (998088 306621346123766))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.066 seconds of real time
  0.064617 seconds of total run time (0.062712 user, 0.001905 system)
  98.48% CPU
  153,951,731 processor cycles
  20,318,160 bytes consed
And that's it!

As I was trying to clean this up and make use of my custom DEFUN/MEMO macro, I realized it was not properly supporting functions with `&optional` and `&aux` arguments:
;     (LIST AOC/2021/21::P1 AOC/2021/21::P2 &OPTIONAL (AOC/2021/21::S1 0)
;           (AOC/2021/21::S2 0))
; caught WARNING:
;   undefined variable: COMMON-LISP:&OPTIONAL

;     (AOC/2021/21::S1 0)
;   undefined function: AOC/2021/21::S1

;     (AOC/2021/21::S2 0)
;   undefined function: AOC/2021/21::S2
; compilation unit finished
;   Undefined functions:
;     S1 S2
;   Undefined variable:
;   caught 1 WARNING condition
;   caught 2 STYLE-WARNING conditions
WARNING: redefining AOC/2021/21::COUNT-WINS in DEFUN
So I went on, and fixed it -- pay attention to EXPAND-DEFUN/MEMO-KEYABLE-ARGS, in which we:

- Skip `&optional`
- Skip `&key`
- Skip `&rest` and the following parameter
- Exit on `&aux`
- _Unwrap_ optional / key parameter
(defmacro defun/memo (name args &body body)
  (expand-defun/memo name args body))

(defun expand-defun/memo (name args body)
  (let ((memo (gensym (mkstr name '-memo))))
    `(let ((,memo (make-hash-table :test 'equalp)))
       ,(expand-defun/memo-memoized-fun memo name args body)
       ,(expand-defun/memo-clear-memo-fun memo name))))

(defun expand-defun/memo-memoized-fun (memo name args body)
  (with-gensyms (key result result-exists-p)
    `(defun ,name ,args
       (let ((,key (list ,@(expand-defun/memo-keyable-args args))))
         (multiple-value-bind (,result ,result-exists-p) (gethash ,key ,memo)
           (if ,result-exists-p
               ,result
               (setf (gethash ,key ,memo) (progn ,@body))))))))

(defun expand-defun/memo-keyable-args (args)
  (uiop:while-collecting (collect)
    (loop while args for a = (pop args) do
          (cond ((eq a '&optional) nil)
                ((eq a '&key) nil)
                ((eq a '&rest) (pop args))
                ((eq a '&aux) (return))
                ((consp a) (collect (car a)))
                (t (collect a))))))

(defun expand-defun/memo-clear-memo-fun (memo name)
  (let ((clear-memo-name (intern (mkstr name '/clear-memo))))
    `(defun ,clear-memo-name ()
      (clrhash ,memo))))
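The lambda-list walk in EXPAND-DEFUN/MEMO-KEYABLE-ARGS can be mimicked in Python (a hypothetical stand-in, with tuples playing the role of `(var default)` pairs), which makes the skip/stop/unwrap rules easy to poke at:

```python
def keyable_args(args):
    # which lambda-list entries end up in the memoization key:
    # skip the &optional/&key markers, skip &rest and its parameter,
    # stop at &aux, and unwrap (var default) pairs
    out = []
    args = list(args)
    while args:
        a = args.pop(0)
        if a in ('&optional', '&key'):
            continue
        if a == '&rest':
            if args:
                args.pop(0)
            continue
        if a == '&aux':
            break
        out.append(a[0] if isinstance(a, tuple) else a)
    return out
```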
With this, COUNT-WINS can be updated as follows:
(defun/memo count-wins (p1 p2 &optional (s1 0) (s2 0))
  (let ((wc1 0) (wc2 0))
    (loop for d1 from 1 to 3 do
          (loop for d2 from 1 to 3 do
                (loop for d3 from 1 to 3
                      for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                      for s1-next = (+ s1 p1-next) do
                      (if (>= s1-next 21)
                        (incf wc1)
                        (destructuring-bind (w2 . w1) (count-wins p2 p1-next s2 s1-next)
                          (incf wc1 w1)
                          (incf wc2 w2))))))
    (cons wc1 wc2)))
(Yes, all we did was replace DEFUN with DEFUN/MEMO)

2021-12-20 (permalink)

Advent of Code: [2021/20](https://adventofcode.com/2021/day/20)
With the scanners fully deployed, you turn their attention to mapping the floor of the ocean trench. When you get back the image from the scanners, it seems to just be random noise. Perhaps you can combine an image enhancement algorithm and the input image (your puzzle input) to clean it up a little.
For example:

The first section is the **image enhancement algorithm**.  The second section is the **input image**, a two-dimensional grid of light pixels (`#`) and dark pixels (`.`).
The image enhancement algorithm describes how to enhance an image by **simultaneously** converting all pixels in the input image into an output image. Each pixel of the output image is determined by looking at a 3x3 square of pixels centered on the corresponding input image pixel. So, to determine the value of the pixel at (5,10) in the output image, nine pixels from the input image need to be considered: (4,9), (4,10), (4,11), (5,9), (5,10), (5,11), (6,9), (6,10), and (6,11). These nine input pixels are combined into a single binary number that is used as an index in the image **enhancement algorithm string**.
The task for the day:
Start with the original input image and apply the image enhancement algorithm twice, being careful to account for the infinite size of the images. **How many pixels are lit in the resulting image?**
OK, first off, the input:

- A tuple
- First element is the algorithm, with all the `#` and `.` characters replaced with Ts and NILs
- The second element, a HASH-TABLE mapping from `(row col)` pairs, to whether the pixel was on (T), or not (NIL)
(defun parse-input (data)
  (list (parse-enhancement-algorithm (first data)) (parse-image (cddr data))))

(defun parse-enhancement-algorithm (string)
  (map 'vector (partial-1 #'eql #\#) string))

(defun parse-image (data &aux (image (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string
              when (eql ch #\#) do (setf (gethash (list r c) image) t)))
  image)

(defun enhancement-algorithm (input) (first input))
(defun image (input) (second input))
Next, the core of the enhance algorithm:

- All the pixels are enhanced at the same time, so start off by copying the current image
- Then, for each pixel -- plus some extra ones, to properly account for the enhancements happening on the border of the image
- We fetch the 3x3 block centered on _that_ pixel -- NEIGHBORS9
- We convert that into an index
- We use that to access the algorithm and figure out whether the pixel will be on or off
(defun enhance-step (algo curr &aux
                          (next (make-hash-table :test 'equal))
                          (row-min (loop for (r _) being the hash-keys of curr minimize r))
                          (row-max (loop for (r _) being the hash-keys of curr maximize r))
                          (col-min (loop for (_ c) being the hash-keys of curr minimize c))
                          (col-max (loop for (_ c) being the hash-keys of curr maximize c)))
  (flet ((block-3x3 (pos)
           (mapcar (partial-1 #'gethash _ curr) (neighbors9 pos))))
    (loop for row from (- row-min 3) to (+ row-max 3) do
          (loop for col from (- col-min 3) to (+ col-max 3)
            for pos = (list row col)
            for i = (as-binary (block-3x3 pos)) do
            (setf (gethash pos next) (aref algo i))))
    next))
NEIGHBORS9 simply checks all the adjacent positions (including the current one), and does this in the _right_ order -- fail to do so, and we would be accessing elements of the enhancement algorithm wrong!
(defparameter *nhood* '((-1 -1) (-1  0) (-1 1)
                        ( 0 -1) ( 0  0) ( 0 1)
                        ( 1 -1) ( 1  0) ( 1 1)))

(defun neighbors9 (pos)
  (loop for d in *nhood* collect (mapcar #'+ pos d)))
Inside AS-BINARY, we scan the input list of surrounding pixels, and thanks to DPB and BYTE we construct the index to use while accessing the enhancement algorithm:
(defun as-binary (list &aux (number 0))
  (loop for ch in list for i from 8 downto 0 when ch do
        (setf number (dpb 1 (byte 1 i) number)))
  number)
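The same fold, in Python, with the three index values worth keeping in mind for this puzzle: an all-dark block gives `0`, an all-lit block gives `511`, and the `000100010` pattern gives `34`:

```python
def as_binary(bits):
    # fold 9 pixel states (top-left to bottom-right) into a 0..511 index
    n = 0
    for b in bits:
        n = n * 2 + (1 if b else 0)
    return n
```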
The last bit: the guy that does the enhancement, a given number of times (`2` in our case), and that finally returns the number of pixels which are on:
(defun enhance (iterations input &aux (algo (enhancement-algorithm input)) (curr (image input)))
  (dotimes (_ iterations)
    (setf curr (enhance-step algo curr)))
  (loop for state being the hash-values of curr count state))
Feed this our input, and we should be getting our result back...right?!  Well...no!

As it turns out, the first entry of the enhancement algorithm is a `#`; that entry is accessed with index `0`, which can only be obtained for a pixel whose surrounding 3x3 block is completely off; now, since our image is _infinite_, and since all the pixels not specified by our input are expected to be off, that means that after the first enhancement step...**all the background pixels will be on...WTF?!**

Well, lucky for us, the last element of the enhancement algorithm is a `.`, which means that a 3x3 block with all the pixels turned on will cause the middle one to be turned off; so maybe, if after the first iteration all the _background_ pixels are turned on, after the second one they will all be turned off again.

OK, let's start by changing ENHANCE to check for the current enhancement step, and use that information to decide what the background state is going to be (i.e. it should be on, T, for all the odd steps):
(defun enhance (iterations input &aux (algo (enhancement-algorithm input)) (curr (image input)))
  (dotimes (i iterations)
    (setf curr (enhance-step algo curr (oddp i))))
  (loop for state being the hash-values of curr count state))
Next we update ENHANCE-STEP to use `background-lit-p` accordingly (i.e. use that when accessing a pixel which is outside of the current image):
(defun enhance-step (algo curr background-lit-p &aux
                          (next (make-hash-table :test 'equal))
                          (row-min (loop for (r _) being the hash-keys of curr minimize r))
                          (row-max (loop for (r _) being the hash-keys of curr maximize r))
                          (col-min (loop for (_ c) being the hash-keys of curr minimize c))
                          (col-max (loop for (_ c) being the hash-keys of curr maximize c)))
  (flet ((block-3x3 (pos)
           (mapcar (partial-1 #'gethash _ curr background-lit-p)
                   (neighbors9 pos))))
    (loop for row from (- row-min 3) to (+ row-max 3) do
          (loop for col from (- col-min 3) to (+ col-max 3)
            for pos = (list row col)
            for i = (as-binary (block-3x3 pos)) do
            (setf (gethash pos next) (aref algo i))))
    next))
Calling ENHANCE on the input image should now work as expected.
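To convince ourselves the neighborhood ordering really matters, here is a toy Python version of a single enhancement step (an illustrative sketch, not the code above), run with a synthetic "identity" algorithm: a made-up 512-entry table whose entry `i` is lit exactly when bit 4 of `i` -- the center pixel -- is set. Enhancing any image with it must return the image unchanged. It ignores the infinite-background twist, which is fine here since entry `0` is dark:

```python
def enhance_step(algo, image):
    # image: set of lit (row, col) pairs; algo: sequence of 512 booleans
    rows = [r for r, _ in image]
    cols = [c for _, c in image]
    nxt = set()
    for r in range(min(rows) - 1, max(rows) + 2):
        for c in range(min(cols) - 1, max(cols) + 2):
            i = 0
            for dr in (-1, 0, 1):       # top row first...
                for dc in (-1, 0, 1):   # ...left to right: bit 8 down to bit 0
                    i = i * 2 + ((r + dr, c + dc) in image)
            if algo[i]:
                nxt.add((r, c))
    return nxt

# made-up "identity" algorithm: entry i is lit iff bit 4 (the center) is set
identity = [(i >> 4) & 1 == 1 for i in range(512)]
image = {(0, 0), (0, 2), (1, 1)}
```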

What about part 2?  It looks like `2` steps were not enough:
Start again with the original input image and apply the image enhancement algorithm 50 times. **How many pixels are lit in the resulting image?**
Easy peasy -- the code we have for part 1 will work for part 2 as well!

Final plumbing:
(define-solution (2021 20) (input parse-input)
  (values (enhance 2 input) (enhance 50 input)))

(define-test (2021 20) (5503 19156))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  8.300 seconds of real time
  7.358258 seconds of total run time (6.756652 user, 0.601606 system)
  [ Run times consist of 0.756 seconds GC time, and 6.603 seconds non-GC time. ]
  88.65% CPU
  19,090,743,941 processor cycles
  2,622,069,728 bytes consed
Well, that conses a lot, but again...it gets the job done, so I am going to keep it as is for now!  Maybe I could use a 2D BIT array, rather than a HASH-TABLE of Ts and NILs?
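For the record, a rough sketch of that BIT array idea -- purely hypothetical, this is not the code used above.  Pixels would live in a (SIMPLE-ARRAY BIT (rows cols)), AREF on it does not cons, and counting the lit ones would become a couple of nested DOTIMES:

(defun count-lit (image &aux (count 0))
  ;; IMAGE is assumed to be a (SIMPLE-ARRAY BIT (rows cols)),
  ;; with 1 for lit pixels and 0 for dark ones
  (destructuring-bind (rows cols) (array-dimensions image)
    (dotimes (row rows count)
      (dotimes (col cols)
        (incf count (aref image row col))))))

The catch of course is that the image grows at every step, so the array would have to be allocated big enough upfront (or re-allocated at each step), and the infinite lit/unlit background would need special handling.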

PS. As usual, my [REPL buffer](https://topaz.github.io/paste/#XQAAAQBRCgAAAAAAAAAUGQimgx+p6PZhRh7uIO5BVNPIdM7CxikuU7ERkd0dQuCDrZBhpxZTCueZNLCEOlFkCLmeqCJZFrxgbEMOXRzMoYJB/FMrFs8T4gJnMYA3TPqI3D5vjVHPr1ZfdXnTvjYJBkm6A/55PkeXgxNmxzw3LW7bJ0kUiMhOp6QbHb2yWa8RAgVsSF0ty2N7peZQ76w6a8ZVyDteIUd5jJxoqqPOE8oBWOjlb8NquqBmrq4rss2WDa1qyYznBxpJBUT/pZULLTPsKTfDwp5rT5gp+dGNnfoqSwkGzc32cfjmmEYQQiT7cqMgAgDNOAX5vWFDGdudGXY9Qu7SaTQdPgBCVOc8r/y6qgUKDvREdOixMDwECLpDGRtnBhJL9EJ76Fxmf1R7alP2C58s9yw8gzkzvM3EJzGmiABCfwiS7T/g6gx7gFz/32oGCvVFtgQ65rq3JfWMtNvooXIuGCBZ+fdsXzUv8UC+0GIejPoaYe3EIyXdBr4I//Pb3c0qM+TeCcKGEkSV32QnI6xshkfMIC98CwvKFXZaZLNjXfw52gPUFiKoZAomS3hYxzlPf4x7jP5e9LrWxGIEf2SGl69YuU2r2R1QJ9rAQIJ9/HBbX1+sh/TIBQGI+8t71PWDMHI7VzpUGU8RRQ9kjsd/MIDb2xNnLlzKatDYnhq+NUIhXfvIs4Q8nU+ptkfQd0HEtYHOE6mkea/iymvu72Wmqt0NMgI5vlFtuD+jj+QQRAksMjtVtA2dnqSaxbAet5W37ws30UvbcpI1zxZ47/tC6pc1OQmtdq2tdnAzToPU1ZDBryRsDy5Cc+d2s2pLa7LjCbrjFnGY+6GGqOB/Nidfzbt+nlTkxXydENby7ITRAiyv/JgmR3U9ve44B7lSntPwWR39Lbpk4exY/0AABHYeou36vUEfEJF/HfT0L+DAFQicFBvYteYCgrehEwePyMQLTu/jgUux+SRXDBG0s7n7yRZjbiYWIHuw84v3bYwxPI/cVvEAbizScOWYoBCDZj7whw2pZZ99hNv8f9RIBCeXuAbno+SRNCNpH+O1DrfA515YQCl4ep23IsSZxKx3o2FESuCqlcH+ZpTJaAO8MYjJb//VGo7k) -- it's not too bad...I have seen (and authored) worse!

2021-12-19 (permalink)

Advent of Code: [2021/19](https://adventofcode.com/2021/day/19)

Solving today's problem took me way too long, so I am not going to put together a full write-up for it, not yet, especially because my solution takes around 2 minutes to complete, and conses up like hell!
Evaluation took:
  124.405 seconds of real time
  92.907883 seconds of total run time (90.247127 user, 2.660756 system)
  [ Run times consist of 0.972 seconds GC time, and 91.936 seconds non-GC time. ]
  74.68% CPU
  286,124,018,756 processor cycles
  13,399,299,136 bytes consed
Very high level:

- We pick a scanner and check it against every other one, and see if they overlap
- To see if the two scanners overlap...
- ...we keep one as is, while rotating the other in all the 24 possible orientations
- ...for each beacon of the first scanner, we calculate the relative distances with all the other beacons seen by the scanner
- ...for each beacon of the second scanner (the rotated one), we calculate the relative distances with all the other beacons seen by the scanner
- ...if these two sets of relative distances have at least 12 elements in common, then the two scanners overlap!
- ...do some math to return the scanner point, and the rotation matrix that we used to _orient_ the second scanner
- We now have a new scanner (with a new origin, and a new rotation matrix), so we can:
- ...convert the positions of the local beacons to the system of coordinates of the scanner used as reference
- ...use the new scanner to check against all of the others

Input parsing:
(defun parse-scanner (paragraph &aux (paragraph (cl-ppcre:split "\\n" paragraph)))
  (flet ((numbers (string)
           (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "-?\\d+" string))))
    (cons
      (first (numbers (first paragraph)))
      (mapcar #'numbers (rest paragraph)))))
(defun id (scn) (car scn))
(defun beacons (scn) (cdr scn))
Some utilities to deal with matrices first:
(defun row (i m) (nth i m))
(defun col (j m) (loop for row in m collect (nth j row)))
(defun dot-product (v1 v2) (apply #'+ (mapcar #'* v1 v2)))
(defun invert (m) (transpose m))
(defun transpose (m) (loop for i below (length m) collect (col i m)))

; Courtesy of: https://www.euclideanspace.com/maths/algebra/matrix/orthogonal/rotation/index.htm
(defparameter *rotate-90-x*
  '((1 0  0)
    (0 0 -1)
    (0 1  0)))
(defparameter *rotate-90-y*
  '(( 0 0 1)
    ( 0 1 0)
    (-1 0 0)))
(defparameter *rotate-90-z*
  '((0 -1 0)
    (1  0 0)
    (0  0 1)))

(defun m* (m1 m2)
  (loop for i below (length m1) collect
        (loop for j below (length (first m1)) collect
              (dot-product (row i m1) (col j m2)))))

(defun compute-all-rotation-matrices (&aux
                                       (midentity '((1 0 0) (0 1 0) (0 0 1)))
                                       rez)
  (loop repeat 4
        for rz = midentity then (m* rz *rotate-90-z*) do
        (loop repeat 4
              for ry = (m* rz midentity) then (m* ry *rotate-90-y*) do
              (loop repeat 4
                    for rx = (m* ry midentity) then (m* rx *rotate-90-x*) do
                    (push rx rez))))
  (reverse (remove-duplicates rez :test 'equal)))

(defparameter *all-rotation-natrices* (compute-all-rotation-matrices))
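A quick sanity check at the REPL (expected values shown, assuming the definitions above are loaded):

> (length *all-rotation-natrices*)
24
> (m* *rotate-90-x* (invert *rotate-90-x*))
((1 0 0) (0 1 0) (0 0 1))

64 products get generated, but only 24 distinct rotations survive REMOVE-DUPLICATES; and since rotation matrices are orthogonal, TRANSPOSE really does behave as INVERT.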
The beef for part 1:
(defun find-beacons (scanners &aux
                       (queue (list (list (first scanners) '(0 0 0) '((1 0 0) (0 1 0) (0 0 1)))))
                       (rez (beacons (first scanners)))
                       (scanners (rest scanners)))
  (loop while queue
        for (scn1 d10 r10) = (pop queue) do
        (loop for scn2 in scanners do
              (multiple-value-bind (p1 r21) (locate-scanner scn1 scn2)
                (when p1
                  (flet ((transform21 (p2) (v+ (rotate p2 r21) p1))
                         (transform10 (p1) (v+ (rotate p1 r10) d10)))
                    (let ((d20 (transform10 p1))
                          (r20 (m* r10 r21)))
                      (prl (id scn1) d10 r10 (id scn2) p1 r21 d20 r20)
                      (setf rez (append rez (mapcar (lambda (p2)
                                                      (transform10 (transform21 p2)))
                                                    (beacons scn2)))
                        scanners (remove scn2 scanners :test 'equal)
                        queue (cons (list scn2 d20 r20) queue))))))))
  (remove-duplicates rez :test 'equal))

(defun locate-scanner (scn1 scn2 &aux (bb1 (beacons scn1)))
  (dolist (b1 bb1)
    (let ((dd1 (relative-to b1 bb1)))
      (dolist (m *all-rotation-natrices*)
        (let ((bb2 (rotate-all (beacons scn2) m)))
          (dolist (b2 bb2)
            (let ((dd2 (relative-to b2 bb2)))
              (let ((common (intersection dd1 dd2 :test 'equal)))
                (when (>= (length common) 12)
                  (let ((b2-orig (rotate b2 (invert m))))
                    (return-from locate-scanner
                                 (values (locate b1 b2-orig m) m))))))))))))

(defun relative-to (b bb) (mapcar (lambda (o) (mapcar #'- o b)) bb))
(defun rotate-all (points matrix) (mapcar (partial-1 #'rotate _ matrix) points))
(defun rotate (point matrix)
  (loop for row in matrix collect (dot-product row point)))
(defun locate (b1 b2 m) (v- b1 (rotate b2 m)))
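To make the conventions concrete: ROTATE takes the dot product of each matrix row with the point, so for example (expected output shown):

> (rotate '(1 2 3) *rotate-90-z*)
(-2 1 3)

i.e. a 90 degree rotation around the `z` axis.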

#; DO IT!
(find-beacons (mapcar #'parse-scanner (cl-ppcre:split "\\n\\n" (uiop:read-file-string "src/2021/day19.txt"))))
(setq beacons *)
(length *)
With part 1 done (and working!), part 2 should be pretty easy:

- With the same algorithm we used for part 1, we figure out the list of scanners -- FIND-SCANNERS is 99% copy-pasta of FIND-BEACONS
- Then we check each scanner against each other, and maximize the manhattan distance
(defun find-scanners (scanners &aux
                       (queue (list (list (first scanners) '(0 0 0) '((1 0 0) (0 1 0) (0 0 1)))))
                       (rez (list '(0 0 0)))
                       (scanners (rest scanners)))
  (loop while queue
        for (scn1 d10 r10) = (pop queue) do
        (loop for scn2 in scanners do
              (multiple-value-bind (p1 r21) (locate-scanner scn1 scn2)
                (when p1
                  (flet ((transform21 (p2) (v+ (rotate p2 r21) p1))
                         (transform10 (p1) (v+ (rotate p1 r10) d10)))
                    (let ((d20 (transform10 p1))
                          (r20 (m* r10 r21)))
                      (prl (id scn1) d10 r10 (id scn2) p1 r21 d20 r20)
                      (setf rez (cons d20 rez)
                        scanners (remove scn2 scanners :test 'equal)
                        queue (cons (list scn2 d20 r20) queue))))))))
  rez)

#; DO IT!
(find-scanners (mapcar #'parse-scanner (cl-ppcre:split "\\n\\n" (uiop:read-file-string "src/2021/day19.txt"))))
(setq scanners *)
(loop for (s1 . remaining) on scanners maximize
      (loop for s2 in remaining maximize (manhattan-distance s1 s2)))
Things have gotten...interesting!

2021-12-18 (permalink)

Advent of Code: [2021/18](https://adventofcode.com/2021/day/18)

First off, it's the last _serious_ weekend before December 25th, so expect this **not** to be easy!

We descend into the ocean trench and encounter some snailfish; they say they saw the sleigh keys; they will tell us which direction the keys went only if we help them with a little math problem...sigh!
Snailfish numbers aren't like regular numbers. Instead, every snailfish number is a **pair** - an ordered list of two elements. Each element of the pair can be either a regular number or another pair. Pairs are written as `[x,y]`, where `x` and `y` are the elements within the pair.
This snailfish homework is about **addition**. To add two snailfish numbers, form a pair from the left and right parameters of the addition operator. For example, `[1,2]` + `[[3,4],5]` becomes `[[1,2],[[3,4],5]]`.
There's only one problem: **snailfish numbers must always be reduced**, and the process of adding two snailfish numbers can result in snailfish numbers that need to be reduced.
To reduce a snailfish number, you must repeatedly do the first action in this list that applies to the snailfish number:
- If any pair is **nested inside four pairs**, the leftmost such pair **explodes**.
- If any regular number is **10 or greater**, the leftmost such regular number **splits**.
Once no action in the above list applies, the snailfish number is reduced.
During reduction, at most one action applies, after which the process returns to the top of the list of actions. For example, if **split** produces a pair that meets the **explode** criteria, that pair **explodes** before other **splits** occur.
To **explode** a pair, the pair's left value is added to the first regular number to the left of the exploding pair (if any), and the pair's right value is added to the first regular number to the right of the exploding pair (if any). Exploding pairs will always consist of two regular numbers. Then, the entire exploding pair is replaced with the regular number `0`.
To **split** a regular number, replace it with a pair; the left element of the pair should be the regular number divided by two and rounded **down**, while the right element of the pair should be the regular number divided by two and rounded **up**. For example, `10` becomes `[5,5]`, `11` becomes `[5,6]`, `12` becomes `[6,6]`, and so on.
The homework assignment involves adding up a **list of snailfish numbers** (your puzzle input). The snailfish numbers are each listed on a separate line. Add the first snailfish number and the second, then add that result and the third, then add that result and the fourth, and so on until all numbers in the list have been used once.
To check whether it's the right answer, the snailfish teacher only checks the **magnitude** of the final sum. The magnitude of a pair is 3 times the magnitude of its left element plus 2 times the magnitude of its right element. The magnitude of a regular number is just that number.
Add up all of the snailfish numbers from the homework assignment in the order they appear. **What is the magnitude of the final sum?**
Well, well, well...

OK, first off we need a way to represent snailfish numbers; nothing fancy, we are just going to parse these as lists of tokens (i.e. `[`, `]`, or a digit):
(defun parse-snumbers (data) (mapcar #'sread data))

(defun sread (string)
  (flet ((ch->number (ch) (- (char-code ch) (char-code #\0))))
    (loop for ch across string
          unless (eql ch #\,) collect (if (find ch "[]") ch (ch->number ch)))))
Let's confirm it's working as expected:
> (sread "[[1,2],[3,4]]")
(#\[ #\[ 1 2 #\] #\[ 3 4 #\] #\])
Now, adding two snailfish numbers together translates to:

- Append the second number to the first one
- Wrap it all up inside `[` and `]` characters
- Then _reduce_ the number

Let's do this:
(defun s+ (number1 number2)
  (sreduce (append
             (list #\[)
             number1
             number2
             (list #\]))))
To reduce a snailfish number:

- Try to _explode_ it
- On success, go back to the first step
- Otherwise, try to _split_ it
- On success, go back to the first step
- Otherwise the number is fully reduced, and there is not anything else we can do with it
(defun sreduce (number)
  (loop
    (uiop:if-let (exploded (sexplode number))
      (setf number exploded)
      (uiop:if-let (split (ssplit number))
        (setf number split)
        (return number)))))
_Exploding_ a snailfish number...now that's a tricky one!

To recap what we have to do:

- Find the first pair nested inside 4 pairs, say `[left, right]`
- Replace the current pair with `0`
- Add `left` to the first regular number _to the left_ of the exploded pair
- Add `right` to the first regular number _to the right_ of the exploded pair

We are going to be iterating over the tokens of the number; as we do it, we are going to keep track of:

- the depth of the current element -- increase it by `1` on each `[` element, decrease it by `1` on each `]` one
- the list of processed elements, i.e. _to the left_, in the form of a stack
- the list of yet to process elements, i.e. _to the right_

This way, as soon as the depth of the current element equals `4` we will know we found the number to explode.

When this happens:

- The first element of the yet to process list (i.e. _to the right_) will be `left` in the recap above, i.e. the value to add to the first regular number _to the left_ of the exploded pair
- The second element instead will represent `right`
- The third one will be a `]` -- which we can discard, as the pair just exploded

The last step is to re-assemble the number:

- We add `left` _to the left_, and reverse it (remember, it's a stack)
- Then we add the `0` in place of the exploded number
- We add `right` _to the right_, and append the result

It sounds a bit more complicated than it actually is, so hopefully the following will help clarify the general idea:
(defun sexplode (number &aux (depth 0) to-the-left)
  (loop for (v . to-the-right) on number do
        (cond ((numberp v) (push v to-the-left))
              ((eql v #\]) (push v to-the-left) (decf depth))
              ((and (eql v #\[) (/= depth 4)) (push v to-the-left) (incf depth))
              (t (let ((left (pop to-the-right))
                       (right (pop to-the-right)))
                   (assert (and (eql v #\[) (= depth 4)))
                   (assert (eql (pop to-the-right) #\]))
                   (flet ((add-first (addend list)
                            (loop for (v . to-the-right) on list
                                  collect (if (numberp v) (+ v addend) v) into vv
                                  when (numberp v) append to-the-right into vv and return vv
                                  finally (return vv))))
                     (return (append
                               (reverse (add-first left to-the-left))
                               (list 0)
                               (add-first right to-the-right)))))))))
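Let's check this against one of the explode examples from the problem statement, `[[[[[9,8],1],2],3],4]`, which should become `[[[[0,9],2],3],4]`:

> (sexplode (sread "[[[[[9,8],1],2],3],4]"))
(#\[ #\[ #\[ #\[ 0 9 #\] 2 #\] 3 #\] 4 #\])

The `9` has no regular number to its left, so it simply vanishes, while the `8` gets added to the `1` right after the exploded pair.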
_Splitting_ a snailfish number on the other hand, should hopefully be easier:

- For each element of the number
- If it's not a regular number, or if it's smaller than `10`, we keep it as is
- Otherwise we found our number to split -- so we split it, and assemble things back together
(defun ssplit (number)
  (loop for (v . to-the-right) on number
        if (or (not (numberp v)) (< v 10)) collect v into to-the-left
        else return (append
                      to-the-left
                      (list #\[)
                      (list (floor v 2) (ceiling v 2))
                      (list #\])
                      to-the-right)))
OK, the last missing piece of the puzzle is calculating the _magnitude_ of a snailfish number:

- We are going to push the number elements into a stack
- When we find a `]` it means we just completed a _pair_
- So we pop one item from the stack -- the `right` part of the pair
- We pop one more -- the `left` part of the pair
- We pop and discard one more -- the `[` char
- We do the _math_, i.e. `left * 3 + right * 2`, and push the result into the stack
- Eventually, the only element inside the stack will be the magnitude of the number:
(defun smagnitude (number &aux stack)
  (loop for (ch . remaining) on number do
        (case ch
          (#\[ (push ch stack))
          (#\] (let ((right (pop stack))
                     (left (pop stack)))
                 (assert (eql (pop stack) #\[))
                 (push (+ (* left 3) (* right 2)) stack)))
          (otherwise (push ch stack))))
  (pop stack))
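Quick check, using one of the examples from the problem statement (the magnitude of `[[1,2],[[3,4],5]]` is `143`):

> (smagnitude (sread "[[1,2],[[3,4],5]]"))
143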
Alright, time to implement the solution for part 1:
(defun part1 (numbers) (smagnitude (reduce #'s+ numbers)))

Off to part 2:
What is the largest magnitude of any sum of two different snailfish numbers from the homework assignment?
We are just going to brute-force this, taking into account that addition of snailfish numbers is not commutative:
(defun part2 (numbers)
  (loop for (n1 . remaining) on numbers maximize
        (loop for n2 in remaining
              maximize (smagnitude (s+ n1 n2))
              maximize (smagnitude (s+ n2 n1)))))
And that's it!

Final plumbing:
(define-solution (2021 18) (numbers parse-snumbers)
  (values (part1 numbers) (part2 numbers)))

(define-test (2021 18) (3987 4500))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.197 seconds of real time
  0.187851 seconds of total run time (0.179690 user, 0.008161 system)
  [ Run times consist of 0.020 seconds GC time, and 0.168 seconds non-GC time. ]
  95.43% CPU
  453,704,694 processor cycles
  403,829,360 bytes consed
It conses up a bit more than I thought it would, but there are a lot of APPEND operations going on in there, so it's all right.

PS. I spent way too much time trying implement a _recursive_ solution for this, but I simply could not wrap my head around it so eventually I gave up and implemented the solution described above.  I did come back to it later though, and finally made it work, though I am not sure if the [recursive solution](https://topaz.github.io/paste/#XQAAAQCbDQAAAAAAAAAUGQimgx+p6PxOfQ0HAVSqOhDGRhLYS4AIXHNjFDrj6MgyqfWSlccS4l0btc+HyT+rEaJLGW07vOLFN859pjh40yell1zYamhcJsv23wn7PYA+QotQJxAOejK0GThz/3JyaALDPO1FOA2Oh9rE+UCIBjPgLv1KgU8B2d4Isir5AoL77R1bJFucR5fG0jnx96p8ADEKuWZ0eer/W2cSVC139qTnga4EZ4U7kIB2ZPkwHwcWifaKp8TShG6LsU1H1LWhjnTRVZ97yPEYWIUM5l91rQVYL2cYFl2IL6hQfK/lafy5migGPSHgDKGRfk0jH0OUfo11MRRUc9kABZzHtKemqfpmgoT2U90kOC4nxD/cTPqX7rvVZoQ0/ECO3Tl/TvbKf9vpSYdGa/6EWYLATSBYI+cHf+kDDhD02OabvQOUVgTmxPiadyqWbi5m4BF6cvHQiANd8pYaxslsvqWp6vVmBMST0Ruvyolglg6x26k3rEqar5xl9RccmxrU1T6xCOSFTGdkbZqAYwk7Yhtw8Tl2eKoC0URbBgWZgVeueZ8aVKnXKdgNRf1BJnuh0Lw1yKOseKlNNkROWE29DU6hDMKwB4WDCs6DRB9qaxN3he3YUQvXPyi3Zh9YQSWQPsjAuLkoKC0iEtjlqitQqdvrv83NcCFHwpnp8OQfOXrY4nKT6RZUnHR+vWaowZVr/51xUkrqatpUJpAGwlvcV4tz9xf5RE742xu0gbuImB1KcpaOM0InJmY+WuibPdpeT8WIAvXAxgsIB3g1aFwsf1MdWJFJhmlQ4B6TOHcBYEaBonWnQVlu+TwEnYeNTxY9qtUKG6qo1qzuhj/7/ziyiNM3ODOCsXMJmXRifRLAuk9Xl28hYHzB6nJtzvARYkM6tUhNBagB+gmsBqQPh4sW2Tv9e9usJv+vxDZQg69XxtW6Vc93rp4snBoDm+n/UzMZBywC1aq5T4//hie7AA==) reads any better than the _stack-based_ one!
PPS. Looking for today's REPL buffer?  It's a [mess](https://topaz.github.io/paste/#XQAAAQD4JwAAAAAAAAAUGQimgulVPYrJhMVjyYOkoNMRrPzuCgpk2vvB8zuHocU/UmwqVZ1sTCnaK0NdCN8xvWnzuO3o0oJ7tsGitzMD6VSohAXfG4WdBUVsQWHtwENe+DuLgLzypkLjE9jdVpaLDtkB1SAZOluLi6VUwc+cK6L7r5mNZ45uwXhWAEB+he1ylnjo1og34aLT4Olg97TfLchGgGxx69H0lWGppixzj8/Ye3xH7xNWjMK5AKgdRw4re0xAJ12W1NjC64skEPhp33qlfUuZhNG6AIF6HoaVw7ihxxO9DRx4hL6+/Q5/ymImMiOBk1M4n5RAa6xXaDGf8sN/YAETfEx688/kVaEJBiYf+ZgnAsMzGn1GLJp/KZipYjReUBGBkbnOzmIvQTaV2s96zAqBDtQz4A+vdGeIvc9QW+3OniYGyCYx5w4FTrh0CmhwVGq0Xk1v/jX/OYGOLx4ESnecHbJS3oH0Pxfivzi2dXqN2GByKB0HAZ3FXbN5icYs+DD3Ba0LGo3G7ajf+ivbM3aozm/vMnTVBCCTTwV9Nawdp5LKKANekBeTqyCKORZ3wDcbYMm5rIvIev2DzXKAj0t2VMFRCaSumE4EtKmgOk0/3+WYz/rFwh8WWBbmM1NijdBixwE2q6qmOdi+wU0r1Fagpga1vRO8F/x2Xn2S7R2QainlYiBMUO6KBbT2Tlow/gB4yrrllQA7NJEiEAxW9y+lgnRnanp94p0FJmT24eFQqahr1/JWUlZN3MWV0iXenX0DVa8u8RM9Sgptk8Ps8WrhBXJVvukOuSUy25jn8prYL3xSNeE9PVK5Y69m1X/AE0bwCX8pW0DLT7a80tuWRtsKamx8kXly2LUFTWCMYkSoZclYlQ3nChB0rrSqKpdpjpQU60/21r7hpA1MUbFI9n7BvDjfH6jhEdKxjt0AJxUEURu7zR2V/l+HH4A3ELNN99xN75pFQ9W1927WtkJdDn8RSSFvqGhIikV2nfrMc/fQZaJUsOfQ8XNxJM3/saiNKUj+zomYIKuVKsBPzgEc15uveinzmXFvOoATeXLAq4ASljHgaBJa9irXVtU1F58/rGkuipGoCSDDR0TGEL9MAgFPDFNAjPmwnSKcJoA9YthyXAhYj4oxhLqNXVVbQXuKD28S9RHQHZet1Zc06nVDixvsX1x7OYXb8eBX1avSMhBRqtczjUrxqMcOiWyQBkpqn/M8xVY/8FaoD1ELMH7Dqvxh7y6tLt+22UGMBIByrC89D7G0JVag4lseMvSbwpLkdScJqAz4TBoOpfG+PRmAqxcvWooCD3Bf0lRwXtzHEXN4brr2lAs1DMrU1nrDFhUy222kG1oUZNe9MckjVRAvlh6YSdmkI4pdNMEP+qrSd1mSQ+lNelL0ihRfpl7sxc+O7sQApbtMrEVOcWxTHaRcEgGmmVkLuIYmZ8HG4rnosSaOZTNLzV0nIRbB8L7Se03CQ6/qx6WFZip/LPW1TpsrtBlbJAPW/XTbMLyTXZT9QWxt9vKvRkdxBwauTXzPWnWkTWLuEmpOA0Jb8nHUpKs8F7qByf1iRlQlM7gea5zHT/tXeWitAZ7AewZK3FKMSOkf9xp9THmed/POhsAmt1jOqTSE2wmY/qgdv5HiVrm80LaESczOeI8l9E1VskJ/RVJ/RueYlI0fZaBoT9Li0DkJnkob6VeIfk1Lxwhp00besZ/7zMgYdp6QoTYIBpiUYlgyzaz757ooHonsb5QurgJRDaT91Bax2xP1gNODPr+y5UEv8cyo4JJgNgIWJVJkugn5Fi3dbln6aemOjc44/9RJ2zBLZ132TECJJ9gTPnpDruCrNDqroA962YmLDLQwPXH0YBJwu9GnUqvP3z24wyk93ytkIrPws7q42H7/6PnpI3uPN3XHwdHkeMw2
KN/gCVcCoIJDZo8EFZek6xGROnZYPN8iaLYfDf5b8WH83awmetYvsO60RHx9mURTXwX8OpR+OVYALCbKj8J1wx2WyC/xvdb1TKoleEw5ReKdz+2s5plyO9SDudzNItxWtPDMAvuzSoJ5imnAsOoiuahS3W7LuKoxR60U85Ar34I62BWxxuGLmsZgNEwW0blsRRZpAm288VPHr/Lek9IcV3dXkl5USQJdKFBVrbw/knrnGgRFC6G4SF/joMXVq6fDUKPd/TmhYyUZcmmLmz2r4x8T2g5i18KkDEgkvamZ+WitpTMQ5wARz47L6z5PCSd74ble3VDQ+aFHAVK++mmkDfx2KdCMJCK+82s22cyO1G3j5Vn5R44EsUmyk8k3smaRoY71KYt4wWIbMsg/8mgdmr++pZd1bxNLWhzIhxIGqBVwIV0BQzherLiDMSZY9nc85mCzvBcrZz3ar+5b70cRXbnNjlwYRphta5ilxiY8Zjjj0okzz6OONYfCVCoPx5G3sNsq/7BBSvVnBJVDVZ56HSPdmnG0qQvQoScEmhaGa2MggqwdcbbbInyB3HZ3IITNI2qWBgkyl6gwTYlfcK9z4f+mGCONhMDtT/5oA/POEXuF0TciMfGpbqCG0+etGupHsaL79B2OABKIMuyFdaCiW7c6oJaCPnUpjvK2NPnRPNTq6CkvBm3yiNsjPdcMIg2hbeO+2lbbqzZDxW9gzre5wcOS2+saBulbuqijxa60wF/oyM7y1yJQfKhZqAYJKDOrpJKhFvtksepJ4IRcQ/WyzIaPPSSc2cTlpWS5OijgRUavVerCAT1ebCepnnQoG+D5BX+LOhURz4CrwoCSmkDfsqATXL4KciCKrHG60Yxa4K9cQLYO30xTSFKSKlA+l7oyxJ2DoBuHS6fece+pJtjmb2au8luaTqcTuQhMpboORdZlsKhDeLk2QlLibtwUOBROxRiBIGifPaaxC6FeYm2ykUtGDTHsDYnsktG8kkRfs5g5Ir97/Fsu8//8jFu7)...
PPPS. People on [r/adventofcode](https://www.reddit.com/r/adventofcode/comments/rizw2c/2021_day_18_solutions/hp14ia0/) have been mentioning [Zippers](https://en.wikipedia.org/wiki/Zipper_(data_structure)) as a way to solve today's problem; it's the first time I hear of them, so I might as well look into this again and implement an alternate solution!

2021-12-17 (permalink)

Advent of Code: [2021/17](https://adventofcode.com/2021/day/17)

We treat ourselves to a little physics problem today:
Ahead of you is what appears to be a large ocean trench. Could the keys have fallen into it? You'd better send a probe to investigate. The probe launcher on your submarine can fire the probe with any integer velocity in the `x` (forward) and `y` (upward, or downward if negative) directions. For example, an initial `x`,`y` velocity like `0,10` would fire the probe straight up, while an initial velocity like `10,-1` would fire the probe forward at a slight downward angle.
The probe's `x`,`y` position starts at `0,0`. Then, it will follow some trajectory by moving in **steps**. On each step, these changes occur in the following order:
- The probe's `x` position increases by its `x` velocity.
- The probe's `y` position increases by its `y` velocity.
- Due to drag, the probe's x velocity changes by `1` toward the value `0`; that is, it decreases by `1` if it is greater than `0`, increases by `1` if it is less than `0`, or does not change if it is already `0`.
- Due to gravity, the probe's `y` velocity decreases by `1`.
OK, let's create a PROBE structure with `x`, `y`, `vx` and `vy` slots, and let's add a MOVE function as well, implementing the _step_ logic above:
(defstruct (probe (:conc-name) (:constructor %make-probe)) x y vx vy)

(defun make-probe (vx vy) (%make-probe :x 0 :y 0 :vx vx :vy vy))

(defun move (probe)
  (with-slots (x y vx vy) probe
    (let ((dx (<=> vx 0)))
      (incf x vx)
      (incf y vy)
      (decf vx dx)
      (decf vy))))
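For example, shooting a probe with velocity `7,2` and moving it one step (expected output shown; note how drag and gravity have already kicked in):

> (let ((probe (make-probe 7 2)))
    (move probe)
    probe)
#S(PROBE :X 7 :Y 2 :VX 6 :VY 1)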
For the probe to successfully make it into the trench, the probe must be on some trajectory that causes it to be within a **target area** after any step. The submarine computer has already calculated this target area (your puzzle input).
Here is an example input:
target area: x=20..30, y=-10..-5
For this, we are going to create a new structure, AREA, having slots for the min / max `x` and `y` values it represents; we are also going to implement PARSE-INPUT to extract `x1`, `x2`, `y1`, and `y2` values from our input, and create a new AREA instance off of it:
(defstruct (area (:conc-name) (:constructor %make-area)) x1 x2 y1 y2)

(defun make-area (x1 x2 y1 y2)
  (%make-area :x1 (min x1 x2) :x2 (max x1 x2)
              :y1 (min y1 y2) :y2 (max y1 y2)))

(defun parse-input (data)
  (flet ((parse (string)
           (cl-ppcre:register-groups-bind ((#'parse-integer x1 x2 y1 y2))
               ("target area: x=(-?\\d+)..(-?\\d+), y=(-?\\d+)..(-?\\d+)" string)
             (make-area x1 x2 y1 y2))))
    (parse (first data))))
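Running this against the example input:

> (parse-input '("target area: x=20..30, y=-10..-5"))
#S(AREA :X1 20 :X2 30 :Y1 -10 :Y2 -5)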
Find the initial velocity that causes the probe to reach the highest y position and still eventually be within the target area after any step. **What is the highest y position it reaches on this trajectory?**
To recap:

- We know the position where the probe is shot from, i.e. `0,0`
- We know where the target area is, i.e. our input
- We know how the probe moves
- We need to figure out the initial velocity which a) maximises the `y` value, and b) causes the probe to eventually hit the target area (without overshooting!)

We are going to brute-force it:

- For all the horizontal velocities in a _sensible_ range
- For all the vertical velocities in a _sensible_ range
- Shoot the probe
- Check that the probe does not miss the target area
- Maximise `y` values
(defun solve (target)
  (loop for vx from (vx-min target) to (vx-max target) maximize
        (loop for vy from (vy-min target) to (vy-max target)
              when (shoot target vx vy) maximize it)))
The tricky part here is of course figuring out some _sensible_ ranges to use for the horizontal and vertical velocities, but let us worry about the simulation part first:

- We shoot
- We keep on moving the probe until it gets past the target
- We keep track of the maximum reached `y` value, and if the probe ever makes it to the target area, we return it
(defun shoot (target vx vy &aux (probe (make-probe vx vy)))
  (loop until (past-target-p target probe)
        when (on-target-p target probe) return y-max
        do (move probe)
        maximize (y probe) into y-max))
A probe gets past the target area if its `y` value is smaller than the area's _smallest_ `y` value:
(defun past-target-p (target probe) (< (y probe) (y1 target)))
On the other hand, a probe hits the target if its `x` and `y` coordinates are contained within the box defined by the target area min / max `x` and `y` values:
(defun on-target-p (target probe)
  (and (<= (x1 target) (x probe) (x2 target))
       (<= (y1 target) (y probe) (y2 target))))
Now, let's do some guesswork to figure out some _sensible_ ranges to use for our horizontal and vertical velocities.

The maximum horizontal velocity we can shoot the probe at is the maximum `x` value of the target area; one more, and we are going to miss the target:
(defun vx-max (target) (x2 target))
What about its minimum value? Well, the probe loses horizontal speed at each step, so the smallest usable velocity is the one that just barely reaches the target area before its horizontal speed drops to `0`; if we start from the minimum `x` value of the target area and keep _accelerating_ back towards `x=0`, we find exactly that value:
(defun vx-min (target &aux (x (x1 target)))
  (loop while (> x 0) for vx from 0 do (decf x vx) finally (return vx)))
We lose speed on the vertical axis too; what's interesting is that if we shoot up with a vertical speed of `10`, by the time the probe is back at `y=0` (i.e. after its speed has decreased step by step, reached `0` at the highest point, and turned negative on the way down), its vertical speed will be exactly `-10`; past `y=0` the probe only moves faster and faster, so if we don't want to miss the target we cannot set the vertical speed any higher than _minus the target's min `y` value_:
(defun vy-max (target) (- (y1 target)))
The minimum value?  Well, `0` of course, given that we are trying to shoot the probe as high as possible:
(defun vy-min (target) (declare (ignore target)) 0)
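With the example target area, `x=20..30, y=-10..-5`, these bounds come out as:

> (vx-min (make-area 20 30 -10 -5))
6
> (vy-max (make-area 20 30 -10 -5))
10

`6` is the smallest velocity whose triangular sum, `6 + 5 + 4 + 3 + 2 + 1 = 21`, lands within `20..30`, while anything above `10` would zip past `y=-10` in a single step on the way down.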
> (solve (parse-input (uiop:read-file-lines "src/2021/day17.txt")))
We change our mind for part 2, and rather than focusing on shooting as high as possible, we want to count all the distinct velocities that cause the probe to hit the target:
How many distinct initial velocity values cause the probe to be within the target area after any step?
First we change SOLVE to cater for both part 1 and part 2 requirements:

- For each horizontal velocity
- For each vertical velocity
- If we eventually hit the target
- Maximize the highest `y` value reached by the trajectory
- And add `1` to a counter
(defun solve (target &aux (part1 0) (part2 0))
  (loop for vx from (vx-min target) to (vx-max target) do
        (loop for vy from (vy-min target) to (vy-max target) do
              (uiop:if-let (highest (shoot target vx vy))
                (setf part1 (max part1 highest)
                      part2 (1+ part2)))))
  (values part1 part2))
We are not done yet!  We were trying to maximize the `y` value of our trajectories previously, so we had our VY-MIN always return `0`; well, for part 2 we need to look into negative vertical velocities too, and the smallest usable value is the one aiming directly at the smallest `y` value of the target area -- any lower, and we would miss it!
(defun vy-min (target) (y1 target))
And that's it!

Final plumbing:
(define-solution (2021 17) (target parse-input) (solve target))

(define-test (2021 17) (5565 2118))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.338 seconds of real time
  0.307870 seconds of total run time (0.297925 user, 0.009945 system)
  91.12% CPU
  779,177,010 processor cycles
  1,669,536 bytes consed
PS. Here is my [REPL buffer](https://topaz.github.io/paste/#XQAAAQDNCgAAAAAAAAAUGQimgx+p6PZhRh7uIPIyuyAIK6ZLP67xEPqz09kRR9BiodGX5LsCfgvy7BcuypYSfKKqx+AtXqQWFxe0nzhH61j8l3QKC7iUulObpvghgkmWSzGbWXfAo2DfFk/uD/VFsTKmOVQ8gM51BSHXEnuVpXLu3En+2I7QlQuk+v/qdHpkX1k2pvnzU3FY17neHkMiN07lznDaUuQ9FveRKi8KRjH0Tkbz6+e4b1GESJw0SPeXeqbQJIVjsbogeqPejiqBNkvWzwHsvNNDCUMoVFWzeK+lYgUPLHrDPQDnia5mR2YZnmkldkqJN8i1H6o70I47UqxX+Sr37anfTCLucm4vhiVOo9DnaqRoazpfALZhTmwAFxV1NNbaVoXrJtfuEDApdHKT1UPG9PBVcHC/huBEqL7oWGklWU7yaCvsjVgokBwUxOc8XFzV+HSM4BhNrlDcpVLIZGWnG2Ga1SM3IXYUV3/Dcmsux4713urVdHE84mZoY0ZsUmkxYlH5mGLtAL/kop2VawVsveSpBF4ZD7v3a+5fhP88J90bg2dxPi3Dj+wKaQqwq/CACIWwXVWmt+Ea1RstX9fcvOOqp2LY5FhBAaR0TyrEPa+iVawT/eboY2CiYDuQq33e/ENI3a2LYll5GBvMycU2RlaTL7xVXHfjt+lxmu74+DYOq0UosOCaou9aQaMwYlH2X8ecR9j0/t1Fsp+Yfuc8VZUE2jt9DQ1Ix1s3b63dHAG8V4YdtSjHutjGjrDsIZ2a5irbaD83PZsecsxhBgAXouytNJOBp5nwEAc1QFe3Z78PB6PuVU8JWvYQZn9sgVzGouy6LaVcfJfjvKbbTFnCFquXDpYrUf+mbDZak+pxubXUYZs+N6L9iua9OEBYhysjz7cCffsi9zHJI6aMkDC0H3on3i8sQwuhLC2vt/0FPvFLy6ubHhHS/BaKfoMu+whNzHLXFxr1Xf6a3B9yzxDFPpVB92TqaCebEaIG1rb0aTXIjl664T+qAiRBvwTbteQk3EIoKStPd7Cj1MOfuedOXhM/k9xXmxZglPi1LS9dPJlfqCHf3TgQFieew3CzCe0xkGTmzEQTQ2BkxH1Ijuv7/v1CKw==) for the day, and I kind of feel ashamed by the things you can see there:

- `x1` and `x2` are min / max, while `y1` and `y2` are max / min -- why?!
- PAST-TARGET-P was not working, so inside SIMULATE I ended up breaking out of the loop if the probe `y` value is smaller than `-200` -- lol
- With pen and paper I (thought I) figured out the _optimal_ vertical speed for part 1; but it was not working with the input, hence the 1+
- For part 2 instead, I made the search space big enough to make it work with the example, and then fed the actual input to it -- and `304.470` seconds later, it spit out the right answer!

It's clear my brain is not _functioning_ too well under time pressure, but _whatever it takes_ right?! ;-)

2021-12-16 (permalink)

Advent of Code: [2021/16](https://adventofcode.com/2021/day/16)

We reached open waters; we received a transmission from the Elves; the transmission uses the Buoyancy Interchange Transmission System (BITS), a method for packing numeric expressions into a binary sequence; we need to figure out what the Elves want from us.

(It's one of those wordy problems, so sit tight!)
The first step of decoding the message is to convert the hexadecimal representation into binary. Each character of hexadecimal corresponds to four bits of binary data:
(defun parse-input (data)
  (hexadecimal->binary (first data)))
The BITS transmission contains a single packet at its outermost layer which itself contains many other packets. The hexadecimal representation of this packet might encode a few extra 0 bits at the end; these are not part of the transmission and should be ignored.
Let's update PARSE-INPUT to account for that:
(defun parse-input (data)
  (parse-packet (hexadecimal->binary (first data))))
Now, we know we are going to be fiddling with bits a lot (well, `0` and `1` characters more than bits), so let's create ourselves BIN, a utility function to:

- extract a range of characters from a given string
- parse that into a binary number

PARSE-INTEGER already does all this, so BIN will just be a tiny wrapper around it:
(defun bin (s start &optional (end (length s)))
  (parse-integer s :start start :end end :radix 2))
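For example, on the binary expansion of the puzzle's `D2FE28` example packet, whose header is `110` (version) followed by `100` (type ID):

```lisp
(bin "110100101111111000101000" 0 3) ; => 6, the packet version
(bin "110100101111111000101000" 3 6) ; => 4, the type ID (a literal packet)
```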
Now we got everything we need to implement the parser.  Let's begin:
Every packet begins with a standard header: the first three bits encode the packet **version**, and the next three bits encode the packet **type ID**. These two values are numbers; all numbers encoded in any packet are represented as binary with the most significant bit first. For example, a version encoded as the binary sequence `100` represents the number `4`.
Packets with type ID `4` represent a **literal value**. Literal value packets encode a single binary number. To do this, the binary number is padded with leading zeroes until its length is a multiple of four bits, and then it is broken into groups of four bits. Each group is prefixed by a `1` bit except the last group, which is prefixed by a `0` bit. These groups of five bits immediately follow the packet header.
OK, let's give PARSE-LITERAL a shot:

- We pop `1` bit from the stream, interpret it -- MOREP
- We pop `4` more bits from the stream, and _add_ them to `literal` -- we are reading `4` bits at a time, starting from the most significant ones, so for every group we need to multiply the current result by `16` and then add the current group to it
- We do this until there are no more groups to read
- When done, we return the literal value and the number of consumed characters
(defun parse-literal (s &optional (start 0) &aux (literal 0))
  (loop
    (let ((morep (= (bin s start (incf start)) 1)))
      (setf literal (+ (* literal 16) (bin s start (incf start 4))))
      (unless morep (return (values literal start))))))
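To make sure the group arithmetic checks out, here is the puzzle's `D2FE28` example again, which encodes the literal `2021`; skipping the `6` header bits, PARSE-LITERAL should consume three 5-bit groups and stop at index `21`:

```lisp
(parse-literal "110100101111111000101000" 6) ; => 2021, 21
```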
Every other type of packet (any packet with a type ID other than `4`) represent an **operator** that performs some calculation on one or more sub-packets contained within. Right now, the specific operations aren't important; focus on parsing the hierarchy of sub-packets.
An operator packet contains one or more packets. To indicate which subsequent binary data represents its sub-packets, an operator packet can use one of two modes indicated by the bit immediately after the packet header; this is called the **length type ID**:
- If the length type ID is `0`, then the next **15** bits are a number that represents the **total length in bits** of the sub-packets contained by this packet.
- If the length type ID is `1`, then the next **11** bits are a number that represents the **number of sub-packets immediately contained** by this packet.
Finally, after the length type ID bit and the 15-bit or 11-bit field, the sub-packets appear.
OK, that's a lot to unpack, but let's see what we can do:

- We pop `1` bit and do a switch/case on it
- If `0`, we pop `15` bits (the number of bits used by the current packet sub-packets), and recurse into PARSE-PACKET until we cannot parse packets anymore (note: the `3` there is used to discard any padding value introduced by the hex to bin conversion)
- If `1` instead, we pop `11` bits and recurse into PARSE-PACKET a number of times equal to the number we just read
- Lastly, we return all the sub-packets we read, plus the number of consumed characters
(defun parse-operator (s &optional (start 0) &aux pkts)
  (ecase (bin s start (incf start))
    (0 (let* ((remaining (bin s start (incf start 15))))
         (loop while (> remaining 3) do
               (multiple-value-bind (pkt consumed) (parse-packet s start)
                 (setf remaining (- remaining (- consumed start))
                       pkts (cons pkt pkts)
                       start consumed)))))
    (1 (dotimes (_ (bin s start (incf start 11)))
         (multiple-value-bind (pkt consumed) (parse-packet s start)
           (setf pkts (cons pkt pkts) start consumed)))))
  (values (reverse pkts) start))
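As a sketch of how this composes (relying on HEXADECIMAL->BINARY from the parsing step above), the puzzle's length-type-0 example, `38006F45291200`, contains exactly two sub-packets, the literals `10` and `20`; calling PARSE-OPERATOR right after the `6` header bits should return them both:

```lisp
(parse-operator (hexadecimal->binary "38006F45291200") 6)
; => the two parsed sub-packets (literals 10 and 20), plus the final stream index
```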
All that's left now is to implement PARSE-PACKET:

- The first `3` bits represent the packet version
- The next `3` bits represent the packet type
- Next is the call to either PARSE-LITERAL or PARSE-OPERATOR
- The result is all wrapped up inside a PLIST with: `:version`, `:type`, `:literal`, and `:sub-packets`
(defun parse-packet (s &optional (start 0))
  (let ((version (bin s start (incf start 3)))
        (type (bin s start (incf start 3))))
    (multiple-value-bind (parsed start)
        (if (= type 4) (parse-literal s start) (parse-operator s start))
      (values (list :version version
                    :type type
                    :literal (if (= type 4) parsed)
                    :sub-packets (if (/= type 4) parsed))
              start))))
OK, but what's the task for part 1?
Decode the structure of your hexadecimal-encoded BITS transmission; **what do you get if you add up the version numbers in all packets?**
- We take the `:version` of the current packet
- We recursively calculate the version-sum of all the sub-packets
- Finally we add it all up together
(defun part1 (pkt)
  (labels ((recur (pkt)
             (+ (getf pkt :version)
                (loop for next in (getf pkt :sub-packets)
                      sum (recur next)))))
    (recur pkt)))
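A quick smoke test, using one of the puzzle's worked examples, whose version numbers add up to `16`:

```lisp
(part1 (parse-input (list "8A004A801A8002F478"))) ; => 16
```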

For part 2 instead, we are asked to calculate the value of the expression the packet represents:
Literal values (type ID `4`) represent a single number as described above. The remaining type IDs are more interesting:
- Packets with type ID `0` are **sum** packets - their value is the sum of the values of their sub-packets. If they only have a single sub-packet, their value is the value of the sub-packet.
- Packets with type ID `1` are **product** packets - their value is the result of multiplying together the values of their sub-packets. If they only have a single sub-packet, their value is the value of the sub-packet.
- Packets with type ID `2` are **minimum** packets - their value is the minimum of the values of their sub-packets.
- Packets with type ID `3` are **maximum** packets - their value is the maximum of the values of their sub-packets.
- Packets with type ID `5` are **greater than** packets - their value is **1** if the value of the first sub-packet is greater than the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets.
- Packets with type ID `6` are **less than** packets - their value is **1** if the value of the first sub-packet is less than the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets.
- Packets with type ID `7` are **equal to** packets - their value is **1** if the value of the first sub-packet is equal to the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets.
Of all the operators listed above, only three behave differently from the builtins: `>`, `<`, and `=`; the builtins return T or NIL, while here we want `1` or `0`; let's fix that:
(defun b> (a b) (if (> a b) 1 0))
(defun b< (a b) (if (< a b) 1 0))
(defun b= (a b) (if (= a b) 1 0))
**What do you get if you evaluate the expression represented by your hexadecimal-encoded BITS transmission?**
Again, another recursive solution:

- If the current packet is a _literal_ one, we return the value
- Otherwise we pick the _right_ operator function based on the type
- We recursively _evaluate_ all the sub-packets
- Finally apply the operator to the values returned by the sub-packets
(defun part2 (pkt)
  (labels ((recur (pkt &aux (literal (getf pkt :literal)))
             (cond (literal literal)
                   (t (apply
                        (aref #(+ * min max identity b> b< b=) (getf pkt :type))
                        (loop for next in (getf pkt :sub-packets)
                              collect (recur next)))))))
    (recur pkt)))
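Testing it with the last of the puzzle's examples, `9C0141080250320F1802104A08`, which represents `1 + 3 = 2 * 2`:

```lisp
(part2 (parse-input (list "9C0141080250320F1802104A08"))) ; => 1
```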
And that's it!

Final plumbing:
(define-solution (2021 16) (pkt parse-input)
  (values (part1 pkt) (part2 pkt)))

(define-test (2021 16) (943 167737115857))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.002 seconds of real time
  0.002154 seconds of total run time (0.001430 user, 0.000724 system)
  100.00% CPU
  5,031,393 processor cycles
  1,062,096 bytes consed
Note: replacing APPLY with CONS inside PART2 will cause the function to actually output the AST of the expression stored inside the packet:
> (defun part2 (pkt)
    (labels ((recur (pkt &aux (literal (getf pkt :literal)))
               (cond (literal literal)
                     (t (cons
                          (aref #(+ * min max identity b> b< b=) (getf pkt :type))
                          (loop for next in (getf pkt :sub-packets)
                                collect (recur next)))))))
      (recur pkt)))

> (part2 (parse-input (uiop:read-file-lines "src/2021/day16.txt")))
(+ (MIN 2479731930) 7 (* (B< 10381 100) 8) (* 188 242 135) (* 236 42)
   (MAX 57613) (* 189 (B> 2027 2269116)) (* 131 104 201 105 155)
   (* 2920854314 (B> (+ 6 3 7) (+ 13 3 15))) 6
   (* 2165 (B= (+ 14 8 5) (+ 2 15 7))) 38575271553
   (* (B< (+ 12 5 3) (+ 11 6 12)) 41379) (* (B> 727598 727598) 752530)
   (* 2939 (B< 1007934 660540)) (MAX 498405352 113 1358) (* 3146 (B> 5 5))
   17083744931 (* 11) (MIN 214 46980) 4 (MIN 1 25270 47 240)
   (MAX 247 45266 27 147874 49322) (* 45822 (B< 2491 2491)) 12
   (* (B= 129 1004426) 4030837) 2368 172 (MAX 186 479 31 225) (+ 62424214260)
   (* 3722699 (B= 232 232)) (* 220 (B< 3667 50625)) (+ 960 9)
   (+ 852362 59034 3 10186466 44654) (* (B> 33431 39079) 712787)
   (* 146 140 4 55) (+ 3682 248 76) (MAX 14 184) (MIN 2 1206 15 1476 5005)
   (MIN (MIN (+ (MIN (+ (MIN (+ (+ (MAX (* 2673))))))))))
   (* 1343 (B> 38853 33177)) (* 119 (B< (+ 3 14 2) (+ 13 8 3)))
   (+ (* 4 5 7) (* 6 7 10) (* 11 3 3)) (+ 202 2004893107 2899 107)
   (* 901241660305 (B> (+ 4 4 8) (+ 13 6 3)))
   (* (+ 9 3 12) (+ 9 3 15) (+ 11 4 12)) (* (B= 217 40) 2513437316) 13
   (* 11 (B< 2706 2706)) (MIN 63 216 14909) (* (B< 3296 812743) 52657) 72078562
   (* (B> 144588154 652519) 5077920))
Feed that to EVAL, and there you have the answer for part 2:
> (eval *)
For comparison, here is the [code](https://topaz.github.io/paste/#XQAAAQDICwAAAAAAAAAUGQimgx+p6GAb9hHoZ3fvJbGJPjDLntuErw51NVAnlOviYJRwBSse/MwTGbUr6VRb/SGDBS7kDEyk5SGdBerzuXLz99I6xz4UE07I7T9xro7z8E3/5FvKuhBM1EyjH89HLViGE1wOfzuYF4dSuAU0kZjDxsAacWmaJ2GqNu7x7WpDV0pNg5DozIPKhOWVzh70FgJjwFGNxhijdV7azmNxn8SZ008PetsCLXUXQ1FT/jqVGAmTyqN/2vDApllBSrD13UxM0cWt63WWmkbx/xQzqCBsb4DoZb/w2+FqOQd83gKDS/YWR23w3BaG2T67KyKI1Zu0dbqWc3rr5M7rqtcyJM84KNZqz2hW3W27glGGzniv8ThG+pxCKZfptUY1/G2ZDQdAVOyVHeZN06972GEJJNtm6PD0PtRWy/Vac7vRmCYPh5K8FmWkQuEBYJAGqOD7jwn/8L9wX+E6jc35rN7bcf4phh/4OjrifPuV89u2hmK0juizfB24T9DBvuStiUgbAWf4pGLPMkQEDw2ADeL5tU121zh/Agza6vISnSjJ5efeWPNvWo68dqURjJRtO8l1mAWTKCsbsOe7VX3cvnu57rEbt6tfLejUcaUIdOuO+91RP4DtJ9n15vAiwuH3xMrz2q+8XT2A3EFbeh/ASEfcssk4hPcnopj7c6sVWLbY7sQpKjcCIYBRqxVR1aHZvV+XPIoix1x5SxSWv2Uj0hGfQesMP8fbKFNlflpC6tMebZ6u09GY+wnvWGBAiQV8HQL3SNF5kYOuqvaIy+2dX7cRtMKM0/sgg3SFKh75KtxYCY2RNSZE1jgTZtiLlsN+1Kh9WJNfMQhEUU0VdD1h24ZUPmQy7jDvXKGs1E/ddB1zxHVmNRz1L1poE18CJIcEOiFIqfgj4gubRTzFJsp8J1ZMJxi4h2N4ZNOersMTz7iRvzx1UkPK1uQS9WftxQ7CWueVHa6GGM3lABVXI0vW/GS2MZ1MSvc4/ZPSqrvx50dQIT4AR2fYE+s3R6bCAM612MP+q1lHcegVLv6Z2E5+Z+rVH1eEOk/3wFkc31NjP98pIJPoil/i6m9Yfuuh5tscRUqoDRUF58qlwxU2zXIlnjv9T3KcBlsZagbElfimHt9YpMUiEM13iJo8rH6K+TUCJMARFj/wJDdwMdo2ECMGwye3sDPpfFbV1D4ZPequ7prPK+VCCltmSKCugTNi/UPntF73fQI2AyMNzOxqXPm/LiUnL7d9fX8EUqLRe5chvLeMt0bvrJw8Tt1MSjBCNRTotImsijgWTz/91cNA) that I used to get the stars; in there:

- The _parser_ and the _evaluator_ are the same thing, so to sum the versions and evaluate the expression (part 1 and part 2 respectively) I had to resort to global ^Wspecial variables to keep track of the sum so far
- I was creating new strings all over the place, instead of simply playing with indices

Note for the future self: when implementing a parser, think about PARSE-INTEGER:

- It uses a string as its input stream
- It optionally can be told to skip a certain amount of characters from the beginning of the string
- It returns not only the parsed object, but also the index of the next unprocessed character -- and that makes for easier composition of the different parsing functions you most likely are going to implement


2021-12-15 (permalink)

Advent of Code: [2021/15](https://adventofcode.com/2021/day/15)
You've almost reached the exit of the cave, but the walls are getting closer together. Your submarine can barely still fit, though; the main problem is that the walls of the cave are covered in chitons, and it would be best not to bump any of them.
The cavern is large, but has a very low ceiling, restricting your motion to two dimensions. The shape of the cavern resembles a square; a quick scan of chiton density produces a map of risk level throughout the cave (your puzzle input).
For example:
You start in the top left position, your destination is the bottom right position, and you cannot move diagonally. The number at each position is its **risk level**; to determine the total risk of an entire path, add up the risk levels of each position you **enter** (that is, don't count the risk level of your starting position unless you enter it; leaving it adds no risk to your total).
OK; as usual, let's start with the input -- let's parse ours into a HASH-TABLE:
(defun parse-risk-levels (data &aux (levels (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string do
              (setf (gethash (list r c) levels)
                    (- (char-code ch) (char-code #\0)))))
  levels)
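As an aside, the character arithmetic above could also be written in terms of the built-in DIGIT-CHAR-P, which returns the weight of a digit character (or NIL if it's not a digit):

```lisp
(digit-char-p #\7) ; => 7
(digit-char-p #\a) ; => NIL
```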
Next we need to find the path, starting from the top left corner and ending to the bottom right one, that minimizes the total risk, i.e. the sum of all the risk levels along the path.

Well, it looks like all we have to do is run A* on this, with the usual MANHATTAN-DISTANCE to the end state as the heuristic:
(defun lowest-risk (levels &aux
                           (row-max (loop for (r _) being the hash-keys of levels maximize r))
                           (col-max (loop for (_ c) being the hash-keys of levels maximize c))
                           (end (list row-max col-max)))
  (search-cost (a* '(0 0)
                   :goal-state end
                   :test 'equal
                   :neighbors (partial-1 #'neighbors levels)
                   :heuristic (partial-1 #'manhattan-distance end))))
Pretty standard NEIGHBORS function (note: we are not allowed to move diagonally):
(defparameter *nhood* '((-1 0) (0 1) (1 0) (0 -1)))

(defun neighbors (levels p)
  (loop for d in *nhood* for n = (mapcar #'+ p d)
        for level = (gethash n levels)
        when level collect (cons n level)))
Et voilà!  Let's take a look at part 2 now:
The entire cave is actually **five times larger in both dimensions** than you thought; the area you originally scanned is just one tile in a 5x5 tile area that forms the full map. Your original map tile repeats to the right and downward; each time the tile repeats to the right or downward, all of its risk levels **are 1 higher** than the tile immediately up or left of it. However, risk levels above `9` wrap back around to `1`.
Using the full map, what is the lowest total risk of any path from the top left to the bottom right?
So to repeat:

- We need to expand the input map, both horizontally and vertically
- Every time we do this (again, 5 times horizontally, and 5 times vertically), we use the risk levels of the input map, incremented by 1, 2, 3...(0 for the original tile, +1 for the second one, +2 for the third one...)
- When risk levels get higher than `9`, they wrap around starting from `1` (and not `0`)

Let's have the MASSAGE function take care of all that:
(defun massage (levels &aux
                       (rows (1+ (loop for (r _) being the hash-keys of levels maximize r)))
                       (cols (1+ (loop for (_ c) being the hash-keys of levels maximize c)))
                       (rez (make-hash-table :test 'equal)))
  ;; Extend the map -- horizontally first
  (loop for (r c) being the hash-keys of levels using (hash-value risk) do
        (loop for i below 5
              for c1 = (+ c (* cols i))
              for risk1 = (+ risk (* 1 i)) do
              (setf (gethash (list r c1) rez) (if (> risk1 9) (- risk1 9) risk1))))
  ;; Then vertically
  (setf levels (copy-hash-table rez)) ;; copy because we are changing it as we scan it
  (loop for (r c) being the hash-keys of levels using (hash-value risk) do
        (loop for i below 5
              for r1 = (+ r (* rows i))
              for risk1 = (+ risk (* 1 i)) do
              (setf (gethash (list r1 c) rez) (if (> risk1 9) (- risk1 9) risk1))))
  rez)
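A quick sanity check of the wrap-around rule in isolation (BUMP here is a throwaway helper, not part of the solution): a risk of `8`, bumped by increasing tile offsets, wraps past `9` back to `1`:

```lisp
(flet ((bump (risk i)
         (let ((r (+ risk i)))
           (if (> r 9) (- r 9) r))))
  (list (bump 8 0) (bump 8 1) (bump 8 2) (bump 8 3))) ; => (8 9 1 2)
```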
We call MASSAGE first, then pass the result to LOWEST-RISK, and that's it!

Final plumbing:
(define-solution (2021 15) (risk-levels parse-risk-levels)
  (values (lowest-risk risk-levels) (lowest-risk (massage risk-levels))))

(define-test (2021 15) (745 3002))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  1.256 seconds of real time
  1.193335 seconds of total run time (1.131199 user, 0.062136 system)
  [ Run times consist of 0.097 seconds GC time, and 1.097 seconds non-GC time. ]
  94.98% CPU
  2,890,660,005 processor cycles
  206,780,240 bytes consed
Few closing notes:

- I already got a [A*](https://topaz.github.io/paste/#XQAAAQDdBwAAAAAAAAAUGQimgx+p6Nchs0/Hc3cRxcmyfeV+Bzi2FxeDDBZEkgZaDBSyRjbUkuFXP6dhVAaOH8AG99AzdDUHR/lSrXrRzt2ru8sshOcmQ5UeY7P5AZQhS4ncXon3a6r8j5K/gr4xuK1jjTG4PHnVajamzB6t54TV5dSPS63MDOK6jD6CsHfC+VkV4bms/+I8MmYXI4GU2JxpblEiXKntPh1orBPfGpyTCfhx4v+kHU29G6mF1Y4EM6ozuoAyzjwxpY0eKSSV1zFprANFIIlsaPqm/nWF9W4i224u0tUm3IHb+tnMDrFtscipbIplaGiZN8B2Q1apClGT/+51hyLVNGM+4prkx8VWxBWeWuN62LCmMrY0iqRvFeCwKqpwg4s4VRogYMgeG3AOyY1DlJ0UJFetXLjJ+7pMA+8jUmYyE6xRJd9R+jn/cPnkNyDceLd1iFB/ODUgVXLcyrYb7jFIIeucRNGqkgQ7lwe0xvmyiBLkHyIe3ML8lZQcmhyinz9Ynab880cOGsBYBuTRjCS7dCvLr5XKDjw8SpdnWXgjVrzLA49wE+NZXVkPz6tTVzFUR36unRBsIJZJKfYRi3ZUToT1eFyKKayvxZpI9V1QkNJvJehPwqf797u2T57qMuE1UrtnhSso1kQh/BEo33aCHrNXfO/H71RV95KbXakyaLfqTAeeKA2hUrQSbAFnewyRS+3eKmTvbe0+XL1ltwMKAccDiSJR+DZ8iMZhQx91qBpU6W4Gu5c+XA4oT8DFrhGWR1aHzdLLf2ICtG8VUVnLizIL/+Wv790=) implemented in my utilities file, so part 1 was just plumbing things together -- first time I ranked below 1000!!!
- For part 2 instead, I had to use my brain a little more, but apparently neither me nor my brain were actually ready for it: first I got the wrap-around logic wrong, then I realized I was only expanding the map diagonally, and last, I bumped into some weird errors because I was iterating and changing the same HASH-TABLE at the same time

      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
 15   00:10:48    464      0   00:59:10   1927      0
_"50 fucking minutes to _expand_ a map?!"_ Yes, I know, that's ridiculous!

2021-12-14 (permalink)

Advent of Code: [2021/14](https://adventofcode.com/2021/day/14)
The incredible pressures at this depth are starting to put a strain on your submarine. The submarine has polymerization equipment that would produce suitable materials to reinforce the submarine, and the nearby volcanically-active caves should even have the necessary input elements in sufficient quantities.
The submarine manual contains instructions for finding the optimal polymer formula; specifically, it offers a **polymer template** and a list of pair **insertion** rules (your puzzle input). You just need to work out what polymer would result after repeating the pair insertion process a few times.
Here is what the puzzle input looks like:

CH -> B
HH -> N
CB -> H
NH -> C
HB -> C
HC -> B
HN -> C
NN -> C
BH -> H
NC -> B
NB -> B
BN -> B
BB -> N
BC -> B
CC -> N
CN -> C
A note about how the polymer is being created:
The following section defines the pair insertion rules. A rule like `AB -> C` means that when elements `A` and `B` are immediately adjacent, element `C` should be inserted between them. These insertions all happen simultaneously.
And finally, the task for part 1:
Apply 10 steps of pair insertion to the polymer template and find the most and least common elements in the result. What do you get if you take the quantity of the most common element and subtract the quantity of the least common element?
As we need to make room for new elements, I figured I would use a list of characters instead of a STRING; so we are going to COERCE the first line into a LIST; the remaining ones instead, we are going to store inside a HASH-TABLE:
(defun parse-input (data)
  (list (coerce (first data) 'list)
        (parse-insertion-rules (cddr data))))

(defun parse-insertion-rules (data &aux (rules (make-hash-table :test 'equal)))
  (flet ((rule (string)
           (cl-ppcre:register-groups-bind (from to)
               ("(\\w+) -> (\\w)" string)
             (setf (gethash (coerce from 'list) rules) (coerce to 'list)))))
    (dolist (string data rules)
      (rule string))))
I think we are going to _simulate_ this:

- For 10 times in a row -- DOTIMES
- We are going to evolve the current polymer -- TICK
- And at the end, we are going to return the expected result -- RESULT
(defun evolve (times input &aux (polymer (first input)) (rules (second input)))
  (dotimes (_ times) (setf polymer (tick rules polymer)))
  (result polymer))
What does TICK look like?

- We traverse the polymer by pairs
- See if the set of insertion rules contain one for the current pair
- If it does, we _collect_ the first element of the pair, plus the output of the insertion rule (i.e. this is how we insert the new element between the two existing ones)
- Otherwise, we _collect_ the first element of the pair only
(defun tick (rules curr)
  (loop for (a b) on curr
        for (z) = (gethash (list a b) rules)
        if z collect a and collect z
        else collect a))
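For instance, assuming the example rules above have been parsed into a HASH-TABLE bound to `rules`, one step of `NNCB` should yield `NCNBCHB` (`NN` inserts `C`, `NC` inserts `B`, and `CB` inserts `H`):

```lisp
(coerce (tick rules (coerce "NNCB" 'list)) 'string) ; => "NCNBCHB"
```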
Last, we are asked for the difference of the number of occurrences of the most and least common element:

- We calculate the frequency of each element, and sort them
- Pick the last one, the first one, and do the subtraction
(defun result (polymer &aux (freqs (sort (frequencies polymer) #'< :key #'cdr)))
  (- (cdar (last freqs)) (cdar freqs)))
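With the example template `NNCB` and the rules above (saved to a hypothetical `day14-example.txt`), 10 steps should produce `1588`: the puzzle states the most common element, `B`, occurs `1749` times and the least common, `H`, `161` times:

```lisp
(evolve 10 (parse-input (uiop:read-file-lines "src/2021/day14-example.txt"))) ; => 1588
```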
Cool, let's take a look at part 2 now:
Apply **40** steps of pair insertion to the polymer template and find the most and least common elements in the result. What do you get if you take the quantity of the most common element and subtract the quantity of the least common element?
Uh oh...I am pretty sure the above is not going to work:
(defun evolve (times input &aux (polymer (first input)) (rules (second input)))
  (dotimes (i times)
    (setf polymer (tick rules polymer))
    (pr i (length polymer)))
  (result polymer))
Let's take this new version of EVOLVE out for a spin:
> (evolve 20 (parse-input (uiop:read-file-lines "src/2021/day14.txt")))
0 39
1 77
2 153
3 305
4 609
5 1217
6 2433
7 4865
8 9729
9 19457
10 38913
11 77825
12 155649
13 311297
14 622593
15 1245185
16 2490369
17 4980737
18 9961473
19 19922945
I am afraid we are going to need something cleverer than that!  But what exactly?

Well, if we think about it, insertion rules are defined for pairs of adjacent elements only; also, if the same pair is present `10` times in the polymer and an insertion rule for that pair exists, that rule will produce `10` new output elements; so what if we simply kept track of the number of occurrences of each pair of elements in the polymer?

Let's begin by _massaging_ the original polymer representation (i.e. the LIST of characters), into a HASH-TABLE counting the frequencies of each pair of adjacent elements:
(defun massage (input &aux (freqs (make-hash-table :test 'equal)))
  (destructuring-bind (template rules) input
    (loop for (a b) on template do (incf (gethash (list a b) freqs 0)))
    (list freqs rules)))
With this, we are then going to update TICK as follows:

- For each pair of adjacent elements, `a` and `b` (and their number of occurrences, `n`)
- We check if an insertion rule for the pair exists (an insertion rule producing the new element `z`)
- If it does, then we know the new polymer will contain both `(a z)` and `(z b)`; how many of each will it contain? Well, we started from `n` occurrences of `(a b)`, so we will have `n` occurrences of `(a z)` and `n` of `(z b)`
- Otherwise, if no insertion rule is defined for `(a b)`, we simply carry that pair (and all its occurrences) along
(defun tick (rules freqs &aux (next (make-hash-table :test 'equal)))
  (loop for (a b) being the hash-keys of freqs using (hash-value n)
        for (z) = (gethash (list a b) rules)
        if z do (incf (gethash (list a z) next 0) n) and do (incf (gethash (list z b) next 0) n)
        else do (incf (gethash (list a b) next 0) n))
  next)
Last but not least, we have to update RESULT to account for the fact that the polymer now has a different representation.  How do we do this?

- We know the frequency of each pair, `(a b)`
- We know pairs are overlapping, i.e. for the polymer `(a b c)` we are going to have one entry for `(a b)`, one for `(b c)`, and one for `(c nil)`
- If we don't want to double count frequencies, we then have to consider only the first element of each entry
(defun result (freqs &aux (ch-freqs (make-hash-table)))
  (loop for (a) being the hash-keys of freqs using (hash-value n) do
        (incf (gethash a ch-freqs 0) n))
  (- (loop for v being the hash-values of ch-freqs maximize v)
     (loop for v being the hash-values of ch-freqs minimize v)))
And that's it!

Final plumbing:
(define-solution (2021 14) (input parse-input)
  (let ((input (massage input)))
    (values (evolve 10 input) (evolve 40 input))))

(define-test (2021 14) (5656 12271437788530))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.012 seconds of real time
  0.004045 seconds of total run time (0.003179 user, 0.000866 system)
  33.33% CPU
  28,206,847 processor cycles
  976,656 bytes consed

2021-12-13 (permalink)

Advent of Code: [2021/13](https://adventofcode.com/2021/day/13)
You reach another volcanically active part of the cave. It would be nice if you could do some kind of thermal imaging so you could tell ahead of time which caves are too hot to safely enter.
The submarine has a thermal camera, but it has never been activated; to activate it we need to enter the code found on page 1 of the manual, a page which falls out as soon as we open the manual; luckily for us, the paper is marked with dots and includes instructions on how to fold it up.  For example:

fold along y=7
fold along x=5
The first set of data represent dots, i.e. `x`, `y` pairs, with `0, 0` representing the top-left corner of the sheet; while the second set of data, are the folding instructions.

What are we asked to do with this?
How many dots are visible after completing just the first fold instruction on your transparent paper?
OK, first things first, the input.  We are going to store all the dots into a HASH-TABLE (with `(row col)` as keys, and Ts as values); for the folding instructions instead, we are simply going to use a list:
(defun parse-input (data &aux
                         (dots (make-hash-table :test 'equal))
                         folds)
  (dolist (s data (list dots (reverse folds)))
    (when-let (point (parse-dot s))
      (setf (gethash point dots) t))
    (when-let (fold (parse-fold s))
      (push fold folds))))
(defun dots (data) (car data))
(defun folds (data) (cadr data))

(defun parse-dot (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer col row))
      ("(\\d+),(\\d+)" string)
    (list row col)))

(defun parse-fold (string)
  (cl-ppcre:register-groups-bind ((#'as-keyword axis) (#'parse-integer n))
      ("fold along (\\w)=(\\d+)" string)
    (list axis n)))
Now, applying the folding instructions should not take that long:

- For each fold instruction (only one in our case)
- For each dot
- If it's to the left of a vertical folding line, or above a horizontal folding line, leave it where it is
- Otherwise, project it left / up, using the folding line as rotation center (it sounds more complicated than it actually is)
- Lastly, count the number of distinct dots
(defun fold (input &aux (curr (dots input)))
  (dolist (f (folds input))
    (let ((next (make-hash-table :test 'equal)))
      (destructuring-bind (axis n) f
        (loop for (row col) being the hash-keys of curr do
              (ecase axis
                (:x (when (> col n) (decf col (* (- col n) 2))))
                (:y (when (> row n) (decf row (* (- row n) 2)))))
              (setf (gethash (list row col) next) t)))
      (setf curr next)
      (return-from fold (hash-table-count curr)))))
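The projection step is just a reflection across the folding line; for example, folding along `y=7` sends a dot at row `10` (i.e. `3` rows past the line) up to row `4`:

```lisp
(let ((row 10) (n 7))
  (decf row (* (- row n) 2))) ; => 4
```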
Cool!  What about part 2?  Well, we are asked to run all the folding instructions, and to _decode_ the message to use to activate the thermal camera:
Finish folding the transparent paper according to the instructions. The manual says the code is always eight capital letters. What code do you use to activate the infrared thermal imaging camera system?
OK, let's update FOLD to run all the instructions (and still store the answer for part 1), and then see how we could decode the message:
(defun fold (input &aux (curr (dots input)) part1)
  (dolist (f (folds input))
    (let ((next (make-hash-table :test 'equal)))
      (destructuring-bind (axis n) f
        (loop for (row col) being the hash-keys of curr do
              (ecase axis
                (:x (when (> col n) (decf col (* (- col n) 2))))
                (:y (when (> row n) (decf row (* (- row n) 2)))))
              (setf (gethash (list row col) next) t)))
      (setf curr next)
      (unless part1
        (setf part1 (hash-table-count curr)))))
  (values part1 (print-paper curr)))
PRINT-PAPER?  Nothing crazy:

- We figure out the max row and col values
- Then for each point from `(0 0)` to `(row-max col-max)`
- If a dot exists, we output a `#`, otherwise a ` `
(defun print-paper (dots &aux
                         (rows (loop for d being the hash-keys of dots maximize (car d)))
                         (cols (loop for d being the hash-keys of dots maximize (cadr d))))
  (with-output-to-string (s)
    (terpri s) ; print an additional newline, so I can better format the expected string
    (dotimes (row (1+ rows))
      (dotimes (col (1+ cols))
        (princ (if (gethash (list row col) dots) #\# #\Space) s))
      (terpri s))))
And that's it!

Final plumbing:
(define-solution (2021 13) (input parse-input) (fold input))

(define-test (2021 13) (724 "
 ##  ###    ## ###  #### ###  #  # #
#  # #  #    # #  # #    #  # #  # #
#    #  #    # ###  ###  #  # #  # #
#    ###     # #  # #    ###  #  # #
#  # #    #  # #  # #    # #  #  # #
 ##  #     ##  ###  #### #  #  ##  ####
"))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.022 seconds of real time
  0.011407 seconds of total run time (0.005706 user, 0.005701 system)
  50.00% CPU
  51,063,965 processor cycles
  1,038,608 bytes consed
PS. [This morning](https://topaz.github.io/paste/#XQAAAQDUBQAAAAAAAAAUGQimgx+p6PZhRh7uIPCirs4j6zlpXvdP2y9WdJ6sK9eG/VBt3Z4fnkHxTu3NuCMJ2nmF+B13yA+9sGhCoAHyJ2ZXjZpF6phEUDagCs+h8Ufu+I4B+Xi9tITOLsXEvZfLoH40qd+kCDFX2l8GZ6e6N7gbel/1Cdrm2GooklZBGu591PFeRTEdeTCp9z/uectoPBE3rIyaSwHho1c4uTLfVtTHQrRsa2lxYHWf1y1QGSZ/HceOadgyazkIf6sjipC2net/qdX0s8EM+giC1iAp0f0pJDMBVHJncpyD7r+jix3nC0Fx3vmEBtRvkGTbxtpSPL2FFPaH766gk2rfa5U5djoRM+qJFHzhSDrTOGhUq8QB8+KleQsKs+cswBfYJI3+IrJJrRCN8ilo8rJhMp76rJOzDQ/A3KtynWTFA4mc5+HUaTAhbYrNJdjV9exUecAeMv8UMH96PU4xRZPbKAD7fYPqGs2Q9AQlu6vUT/Aa1t25huhQyhVgWIh7Xxit7AMMMX5/pJLvpuh18R+1vqXSHZh0BrfpGdmJCt03ejSNjizp1lNKQNawGyAJNJerOw09ENby0Nxap8j5WzGHlVzj6kyIu87klBKwrs62U6qb+g4Q3Lc5hg0397CqQxaBTh9mL7irkrlMwVLkDuyYCNZCNT9RrpFBWxXWlpkpIoX96shVBJnpiW+Ma/idPjPGhHXplUMNzgpA7cg1YjcYckkvH0+yT8i0W3Qxvj8aUedIlTCyEHEVcTW2DG/tTHgL7thj9k/dX/YDjyb9lZr/q7Jc0miXU9VitIDMCbbMS8l6WUpt/qD1khDJf661iTz7C8bIK1uucMrS9Dc89Gr+2jDd), while solving this, I forgot to swap `x` and `y` while reading dots; all my `(row col)` were flipped, and because of that:

- Had to flip the bindings inside the for loop
- Ended up swapping rows for columns inside PRINT-PAPER (see `(dotimes (c rows) ...)` and `(dotimes (r cols) ...)`)

Plain and simple: I am just not meant to be productive at 0600!

2021-12-12 (permalink)

Advent of Code: [2021/12](https://adventofcode.com/2021/day/12)
With your submarine's subterranean subsystems subsisting suboptimally, the only way you're getting out of this cave anytime soon is by finding a path yourself. Not just a path - the only way to know if you've found the **best** path is to find **all** of them.

So, we are given the map of the cave:
    /   \
    \   /
Except that it's encoded, like this:
And we are asked to find all the paths connecting `start` to `end` that don't visit _small_ caves more than once:
Your goal is to find the number of distinct **paths** that start at `start`, end at `end`, and don't visit small caves more than once. There are two types of caves: big caves (written in uppercase, like A) and small caves (written in lowercase, like b). It would be a waste of time to visit any small cave more than once, but big caves are large enough that it might be worth visiting them multiple times. So, all paths you find should visit small caves at most once, and can visit big caves any number of times.
First off, let's load this map into memory.  We are going to implement an adjacency list storing which cave is connected to which other one, and we are going to do this using a HASH-TABLE instead of classic association lists:
(defun parse-map (data &aux (map (make-hash-table)))
  (dolist (s data map)
    (destructuring-bind (from . to) (parse-line s)
      (push to (gethash from map))
      (push from (gethash to map)))))

(defun parse-line (string)
  (cl-ppcre:register-groups-bind ((#'make-keyword from to))
      ("(\\w+)-(\\w+)" string)
    (cons from to)))
Next, let's walk through the cave.  We are going to recursively build up all the possible paths starting from `start`, and count the ones that lead up to `end`:

- If we reached `end` then we found one more path to count -- add `1`
- Otherwise, check all the caves reachable from the current position
- And if _visitable_, add the new cave to the current path, and _recur_
(defun walk (map)
  (while-summing (path#)
    (labels ((recur (curr)
               (cond ((eq (car curr) :|end|) (path#))
                     (t (loop for next in (gethash (car curr) map)
                              when (or (big-cave-p next) (not (member next curr))) do
                              (recur (cons next curr)))))))
      (recur '(:|start|)))))
And when is a cave _visitable_?  If it's a big one; or, if it's a small one, only if we haven't set foot in it already:
(defun visitablep (path next) (or (big-cave-p next) (not (member next path))))
(defun big-cave-p (cave) (not (small-cave-p cave)))
(defun small-cave-p (cave) (loop for ch across (string cave) thereis (lower-case-p ch)))
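Not part of the Lisp solution, but here's the same counting logic redone in throwaway Python against the first small example from the puzzle text (10 distinct paths):

```python
from collections import defaultdict

def count_paths(edges):
    """Count start->end paths visiting small (lowercase) caves at most once.
    Assumes no two big caves are adjacent (true for AoC inputs)."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
        graph[b].append(a)

    def recur(path):
        curr = path[-1]
        if curr == 'end':
            return 1
        return sum(recur(path + [nxt])
                   for nxt in graph[curr]
                   if nxt.isupper() or nxt not in path)

    return recur(['start'])

# First small example from the puzzle text
edges = [('start', 'A'), ('start', 'b'), ('A', 'c'),
         ('A', 'b'), ('b', 'd'), ('A', 'end'), ('b', 'end')]
assert count_paths(edges) == 10
```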
Cool!  Let's now take a look at part 2:
After reviewing the available paths, you realize you might have time to visit a single small cave **twice**. Specifically, big caves can be visited any number of times, a single small cave can be visited at most twice, and the remaining small caves can be visited at most once. However, the caves named `start` and `end` can only be visited **exactly once each**: once you leave the `start` cave, you may not return to it, and once you reach the `end` cave, the path must end immediately.
So, it looks like we will have to tweak VISITABLEP a little; for part 2, we can visit a cave if:

- It's a big one (as before)
- Or when a small one, if it's not `start`
- Or if the number of small caves, visited more than once, is less than or equal to `1`
(defun part2-visitable-p (path next)
  (cond ((big-cave-p next) t)
        ((eq next :|start|) nil)
        (t (<= (loop for p on (cons next path) for (c) = p
                     when (small-cave-p c) count (> (count c p) 1))
               1))))
Let's update WALK to accept `visitablep` from the caller:
(defun walk (map &optional (visitablep #'visitablep))
  (while-summing (path#)
    (labels ((recur (path)
               (cond ((eq (car path) :|end|) (path#))
                     (t (loop for next in (gethash (car path) map)
                              when (visitablep path next) do
                              (recur (cons next path)))))))
      (recur '(:|start|)))))
And that's it!

Final plumbing:
(define-solution (2021 12) (map parse-map)
  (values (walk map) (walk map #'part2-visitable-p)))

(define-test (2021 12) (3000 74222))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  1.981 seconds of real time
  1.867072 seconds of total run time (1.818760 user, 0.048312 system)
  [ Run times consist of 0.007 seconds GC time, and 1.861 seconds non-GC time. ]
  94.25% CPU
  4,557,591,987 processor cycles
  29,753,344 bytes consed
It's not the fastest solution, I will give you that, but I love how self contained the changes for part 2 were, especially if you compare that to my [original solution](https://topaz.github.io/paste/#XQAAAQAvCAAAAAAAAAAUGQimgx+p6PZhRh7uIO9C+TaXA2WvmEoBbmvcTSA2hNnhLNH9+NrXNpDAj8zGpwdcigC0xrwLmwiSZZAqQM7Oije2QNF71VjFvbAaFekLQoen1k9WQoYxw2xUV7tO5VVf85sY5Oa1paBH1vKEy0XuNJQkr4d4ZBPPgQpnUP2/M0gvBS5jcjfPws1bA77znBuGdmednXnc7webfWPkjzoFmI2/JMLMWyd9ChVUdsl89VQNxsrW7nLnAV/fzJVBrXBveYAs8qmrgh28XeCtT3uF6Vnk8jodHq5fjM/wz+fQScMyhH/q/m+SdBAJ+Qn9YYOqoAJqjY/0LdPJOraJAy5Jrox9keHXNZOGNx9qO4m56n2/b0y9V7AkAkeFftoVYskrcdqFROPXVP98FdUvcXv/uNdduoJbcIKeNa4XwMh0v7pSr0yV+U2ZJfUH57G+8812kphcBYJYnwoF8YJMbskQlannpznyiPa2QsPaK9sOYK8uahPK8kuWymzKkl3+R1aMs4bGo+dpfO5D5eUqW4gZ6ZBHfD4FJncXQYPCDmlCUp5dLYIDL8l911ZnaQYBnPGcLCsR7D9mQ+lBOG0ccOety6yKlGTujVWaz9MUKEYXkHBhgmiK8FcuklQlsG68lJTMp/engA6bpjJqilPIXGVU7aZEVTG5bILS7BQNSwuRbcD+iaylfEpg3amoDfSD7KGN+wIU/EEjpBihZPT7mWNKQG3P7H1eeIh+rFNTsg6VgyqvsIQ4orijcjh4V7l+OdeL0ShFU4IsMy4cHeCUzxPIEp7asE97eQH8q9g1JNuNXJ/1opC/61n1SV0F1AwHwGeN1Sfh3pI/HMtGgxe46JzXHl8d9znhNDMeGmpUZ6jcQk87N9mhuziYsKnh3g//wFhpgw==), in which I ended up copying PART1 into PART2 because I needed to pass around a new boolean flag to keep track of whether walking into a small cave was allowed or not.

PS. I was using UIOP:WHILE-COLLECTING to collect all the paths first, and then pass the result to LENGTH to get the answers to today's problems; well, I did not like that, and since I found myself with a little bit of time I decided to create the WHILE-SUMMING macro:
(defmacro while-summing (summers &body body)
  (expand-while-summing summers body))

(defun expand-while-summing (summers body)
  (let* ((gen-syms (mapcar #'expand-while-summing-gensym summers))
         (let-bindings (mapcar #'expand-while-summing-let-binding gen-syms))
         (flet-bindings (mapcar #'expand-while-summing-flet-binding summers gen-syms)))
    `(let* (,@let-bindings)
      (flet (,@flet-bindings)
        ,@body
        (values ,@gen-syms)))))

(defun expand-while-summing-gensym (name) (gensym (string name)))

(defun expand-while-summing-let-binding (gensym) (list gensym 0))

(defun expand-while-summing-flet-binding (name binding)
  `(,name (&optional (delta 1))
    (incf ,binding delta)))
I am sure there are already 100+ different versions of this, better ones for sure, but still, one more would not hurt, so here is mine!
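The macro is obviously Lisp, but the pattern its expansion wires up -- zero-initialized counters (the LET* bindings), a bump function per counter (the FLET bindings), and the final tallies returned at the end (the VALUES form) -- can be sketched in any language with closures; here's a rough Python analogue:

```python
def make_summers(*names):
    """Rough analogue of WHILE-SUMMING's expansion: one zero-initialized
    counter per name, one bump callable per counter, plus a totals()
    closure to read the final tallies."""
    counts = dict.fromkeys(names, 0)

    def make_bump(name):
        def bump(delta=1):
            counts[name] += delta
        return bump

    def totals():
        return tuple(counts[n] for n in names)

    return [make_bump(n) for n in names] + [totals]

path, totals = make_summers('path')
for _ in range(5):
    path()
path(10)
assert totals() == (15,)
```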

+ untrace function does not work with Vlime

? while-counting

2021-12-11 (permalink)

Advent of Code: [2021/11](https://adventofcode.com/2021/day/11)
You enter a large cavern full of rare bioluminescent dumbo octopuses! They seem to not like the Christmas lights on your submarine, so you turn them off for now.
There are 100 octopuses arranged neatly in a 10 by 10 grid. Each octopus slowly gains energy over time and flashes brightly for a moment when its energy is full. Although your lights are off, maybe you could navigate through the cave without disturbing the octopuses if you could predict when the flashes of light will happen.
Each octopus has an energy level whose value we receive as our puzzle input:
You can model the energy levels and flashes of light in steps. During a single step, the following occurs:
- First, the energy level of each octopus increases by `1`.
- Then, any octopus with an energy level greater than `9` flashes. This increases the energy level of all adjacent octopuses by `1`, including octopuses that are diagonally adjacent. If this causes an octopus to have an energy level greater than `9`, it also flashes. This process continues as long as new octopuses keep having their energy level increased beyond `9`. (An octopus can only flash at most once per step.)
- Finally, any octopus that flashed during this step has its energy level set to `0`, as it used all of its energy to flash.
Note, adjacent flashes can cause an octopus to flash on a step even if it begins that step with very little energy.
Before any steps:

After step 1:

After step 2:
OK, our task for today:
Given the starting energy levels of the dumbo octopuses in your cavern, simulate 100 steps. How many total flashes are there after 100 steps?
As usual, let's parse our input first.  Like we did for [2021/09](https://matteolandi.net/plan.html#day-2021-12-09), we are going to parse this into a HASH-TABLE, mapping from `(row col)` pairs, to the octopus energy level:
(defun parse-octopuses (data &aux (octopuses (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string do
              (setf (gethash (list r c) octopuses)
                    (- (char-code ch) (char-code #\0)))))
  octopuses)
With the puzzle input taken care of, we can now try to sketch our solution:

- We safely copy our input -- we are going to mutate it, and one too many times I have been bitten by accidentally using an already mutated list / hash-table, so when in doubt, I copy things
- Then we let the octopuses dance for 100 times, and keep track of the number of flashes at the end of each dance
(defun flash-dance (octopuses &aux (octopuses (copy-hash-table octopuses)))
  (loop repeat 100 sum (dance octopuses)))
Great, now we have to figure out how flashing octopuses influence adjacent ones during the dance; here is what we are going to do:

- First off, we increment the energy level of each octopus, and if the octopus energy level is greater than `9`, we FLASH it
- FLASH-ing an octopus means adding the octopus itself to the list of octopuses that already flashed (remember, octopuses can flash only once, per step...err dance), plus scheduling all its neighbors for an additional energy bump
- Next, until there are octopuses scheduled to be processed -- see `remaining` -- we increase their energy level, and if that winds up being greater than `9`, we call FLASH on them -- causing the octopus to be marked as having flashed, plus its neighbors to be added to `remaining`
- Note: this is very similar to the classic DFS / BFS, with the exception that nodes (i.e. octopuses) are marked as _visited_ not the first time we process them, but the first time their energy becomes greater than `9`
- At the end, we check all the octopuses, count the ones that flashed, and reset their energy level to `0`
(defun dance (curr &aux remaining (flashed (make-hset nil :test 'equal)))
  (flet ((flashesp (energy) (> energy 9))
         (flash (p)
           (hset-add p flashed)
           (setf remaining (append (neighbors curr p) remaining))))
    (loop for p being the hash-keys of curr
          when (flashesp (incf (gethash p curr))) do (flash p))
    (loop while remaining
          for n = (pop remaining)
          unless (hset-contains-p n flashed)
          when (flashesp (incf (gethash n curr))) do (flash n))
    (loop for p being the hash-keys of curr using (hash-value e)
          count (when (flashesp e) (setf (gethash p curr) 0)))))
NEIGHBORS is the exact same function we used for day 9; the only difference is that *NHOOD* includes deltas for the diagonal locations too:
(defparameter *nhood* '((-1 0) (-1 1) (0 1) (1 1) (1 0) (1 -1) (0 -1) (-1 -1)))

(defun neighbors (energies p)
  (loop for d in *nhood* for n = (mapcar #'+ p d)
        when (gethash n energies) collect n))
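As a sanity check of the dance logic (outside the Lisp solution), here's the same step redone in throwaway Python against the small 5x5 example from the puzzle text, where 9 octopuses flash on step 1:

```python
def dance(grid):
    """One step: +1 everywhere, propagate flashes, reset flashers to 0.
    `grid` maps (row, col) -> energy; returns the number of flashes."""
    flashed = set()
    remaining = []

    def flash(p):
        flashed.add(p)
        r, c = p
        remaining.extend((r + dr, c + dc)
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0) and (r + dr, c + dc) in grid)

    for p in grid:
        grid[p] += 1
        if grid[p] > 9:
            flash(p)
    while remaining:
        p = remaining.pop()
        if p not in flashed:
            grid[p] += 1
            if grid[p] > 9:
                flash(p)
    for p in flashed:
        grid[p] = 0
    return len(flashed)

rows = ["11111", "19991", "19191", "19991", "11111"]
grid = {(r, c): int(ch) for r, row in enumerate(rows) for c, ch in enumerate(row)}
assert dance(grid) == 9  # step 1 of the 5x5 example
after = ["34543", "40004", "50005", "40004", "34543"]
assert grid == {(r, c): int(ch) for r, row in enumerate(after) for c, ch in enumerate(row)}
```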
Cool, now let's take a look at part 2 instead:
If you can calculate the exact moments when the octopuses will all flash simultaneously, you should be able to navigate through the cavern. What is the first step during which all octopuses flash?
This should be easy enough: we let the octopuses dance until the number of the ones that flashed in a single step is 100 (i.e. the size of the grid).

Let's update FLASH-DANCE accordingly:

- We use `:for step from 1` instead of `:repeat 100`
- We accumulate flashes into a loop variable, `flash-count`
- When the current step is `100`, then we save `flash-count` onto a new `part1` variable
- When the number of flashes in the current step is `100`, we break out of the function returning `(values part1 step)`
(defun flash-dance (octopuses &aux
                              (octopuses (copy-hash-table octopuses))
                              (part1 0))
  (loop for step from 1 for flashes = (dance octopuses)
        sum flashes into flash-count
        when (= step 100) do (setf part1 flash-count)
        when (= flashes 100) return (values part1 step)))
And that's it!

Final plumbing:
(define-solution (2021 11) (octopuses parse-octopuses) (flash-dance octopuses))

(define-test (2021 11) (1665 235))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.026 seconds of real time
  0.026022 seconds of total run time (0.022644 user, 0.003378 system)
  [ Run times consist of 0.011 seconds GC time, and 0.016 seconds non-GC time. ]
  100.00% CPU
  59,922,835 processor cycles
  2,554,240 bytes consed
PS. [Here](https://topaz.github.io/paste/#XQAAAQC/BgAAAAAAAAAUGQimgx+p6PZhRh7uIOzWurUlN7fbFBw1HHgfNfGNNS7rNArrqFq/zq9awLKNdO5kmayCD7rk2HvFkonJ+07rAMPt3bdfbDoWgkRF4z+uYA8UeujpEPQ4K6D1P0e5Jc9C40O+I6B0/H+FvCYHKQ2vdlF2PoWqAiAZLsmHiW8H0ADVNJ3BSrpxO2VGYR+ONxGjvs9/lwjYPMB33adgSL9lbXZldKw10ruhWD45sMcoe5VPxiUdPM85iMzaG68++Xt0oUPct+MTdMyqSiq2cKJyUhJ3B4mQWzLZIuISnXaZOodJSkkRypBcS8jnZ7eOuxAGjWrUss2UTlwW//L913A6XkSnGc0y3XzvNySeS/uMMPMdwOdEOgzDtHvKA+jESWWoedLrMtaL/57EpEu9fdxCUrsDMTRdqzzw0aa7EkSUyqyMhjOHsR1eZfiL9DyQ2nOOyh05dQwGefb6r3/5PNMcLxk1QouVVOxfPTKrrEKiEpW3HXKfJqvyBaIvoH4684AQtXFJ9hE7RyR18SM1GWev+HGdr166bvCGmbKRA+YLXHJ4QXAh3F+rwJRa3mfJOJhWD4/8rsCXp2ciYlNiLPB1mBldaSnNOKLNGB06Pc3uDrcq6QJFxPGcQbcCwOlPvpzJbdy46dA6ZhFNOCADnkjEA9pMglyaxBPEnu5jNBV81/8akcYU3hfOJcGNiGFpBGJ/u8RBoH2Cg6mjSms9JWkwCHQ73RWuGvqrc3+IrJ2xAQjyu83lMTliIMGH9iD5/ICfBitI8NVyKcZfYE214UWNZM9gAQ6ExCvl87yEgOs2cVb1G10bI0lQnIyZqv/kKwU4O7so7K0nHHuREfaMONIM0FIMiUmX1waZlPKukXnK/PEJ0Q==) you can find my pre-refactoring REPL buffer, and as you can see, as I cleaned up my solution I mainly renamed things, and combined the _cowboy_ loops into FLASH-DANCE!

2021-12-10 (permalink)

Advent of Code: [2021/10](https://adventofcode.com/2021/day/10)
You ask the submarine to determine the best route out of the deep-sea cave, but it only replies:
Syntax error in navigation subsystem on line: all of them
All of them?! The damage is worse than you thought. You bring up a copy of the navigation subsystem (your puzzle input).
The navigation subsystem syntax is made of several lines containing chunks. There are one or more chunks on each line, and chunks contain zero or more other chunks. Adjacent chunks are not separated by any delimiter; if one chunk stops, the next chunk (if any) can immediately start. Every chunk must open and close with one of four legal pairs of matching characters:
- If a chunk opens with `(`, it must close with `)`.
- If a chunk opens with `[`, it must close with `]`.
- If a chunk opens with `{`, it must close with `}`.
- If a chunk opens with `<`, it must close with `>`.
It goes on explaining the difference between _corrupted_ lines (our focus for part 1) and _incomplete_ lines:
Some lines are incomplete, but others are corrupted. Find and discard the corrupted lines first.
A corrupted line is one where a chunk closes with the wrong character - that is, where the characters it opens and closes with do not form one of the four legal pairs listed above.
What we are asked to do is process each _corrupted_ line, stop at the first invalid character, and generate a _score_ as described below:
To calculate the syntax error score for a line, take the first illegal character on the line and look it up in the following table:
- `)`: `3` points.
- `]`: `57` points.
- `}`: `1197` points.
- `>`: `25137` points.
To recap:
Find the first illegal character in each corrupted line of the navigation subsystem. What is the total syntax error score for those errors?
To solve this, we are going to use a stack; given a line, for each of its characters:

- If an open parenthesis, push the matching closing one onto the stack (e.g. if `(` is read, we push `)`)
- If a close parenthesis: if it matches the value at the top of the stack, we pop that from the stack; otherwise it's an error, so we calculate the syntax error score

Let's start by defining a table to map between open and close parentheses:
(defparameter *closing*
  '((#\( . #\))
    (#\[ . #\])
    (#\{ . #\})
    (#\< . #\>)))
Next, let's define our main function, SYNTAX-SCORING, which we will use to:

- Process all the lines in our file
- Implement the stack based algorithm mentioned above
- Sum all the error scores together...and return it
(defun syntax-scoring (file &aux (error-score 0))
  (dolist (line file error-score)
    (let (stack)
      (loop for ch across line do
            (cond ((find ch "([{<") (push (cdr (assoc ch *closing*)) stack))
                  ((eq ch (car stack)) (pop stack))
                  (t (return (incf error-score (syntax-error-score ch)))))))))
Last missing bit, SYNTAX-ERROR-SCORE, which all it does, is looking up the given character into another table mapping from _invalid_ characters to their score:
(defparameter *syntax-error-table*
  '((#\) . 3)
    (#\] . 57)
    (#\} . 1197)
    (#\> . 25137)))

(defun syntax-error-score (ch) (cdr (assoc ch *syntax-error-table*)))
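The same stack-based scan, redone in throwaway Python (outside the Lisp solution) and checked against a couple of the corrupted and valid chunks listed in the puzzle text:

```python
OPEN = {'(': ')', '[': ']', '{': '}', '<': '>'}
ERROR_SCORE = {')': 3, ']': 57, '}': 1197, '>': 25137}

def syntax_error_score(line):
    """Score of the first illegal character; 0 for valid/incomplete lines."""
    stack = []
    for ch in line:
        if ch in OPEN:
            stack.append(OPEN[ch])       # remember the expected closer
        elif stack and ch == stack[-1]:
            stack.pop()                  # matching closer: all good
        else:
            return ERROR_SCORE[ch]       # corrupted: score and stop
    return 0

# Chunks listed in the puzzle text
assert syntax_error_score("(]") == 57
assert syntax_error_score("{()()()>") == 25137
assert syntax_error_score("([])") == 0
```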
For part 2 instead, we are asked to focus on _incomplete_ entries:
Incomplete lines don't have any incorrect characters - instead, they're missing some closing characters at the end of the line. To repair the navigation subsystem, you just need to figure out the sequence of closing characters that complete all open chunks in the line.
Once we figure out these completion strings, we need to calculate their score:
Then, for each character, multiply the total score by 5 and then increase the total score by the point value given for the character in the following table:
- `)`: `1` point.
- `]`: `2` points.
- `}`: `3` points.
- `>`: `4` points.
Then we need to collect all the scores, sort them, and pick the one in the middle:
Autocomplete tools are an odd bunch: the winner is found by sorting all of the scores and then taking the middle score. (There will always be an odd number of scores to consider.)
Find the completion string for each incomplete line, score the completion strings, and sort the scores. What is the middle score?
OK, first off, let's update SYNTAX-SCORING to start processing _incomplete_ lines as well.

In our case, an entry is _incomplete_ if it's not _corrupted_ (i.e. it did not cause an _early_ return from the LOOP form), _and_ if, when done processing all the line's characters, `stack` turns out not to be empty; in which case:

- We calculate the completion score -- see the :FINALLY clause added to the LOOP form
- We collect all the scores together -- see `completion-scores`, added to the function's &AUX parameters
- We sort them and pick the middle value -- see the final LET* form
(defun syntax-scoring (file &aux (error-score 0) completion-scores)
  (dolist (line file)
    (let (stack)
      (loop for ch across line do
            (cond ((find ch "([{<") (push (cdr (assoc ch *closing*)) stack))
                  ((eq ch (car stack)) (pop stack))
                  (t (return (incf error-score (syntax-error-score ch)))))
            finally (when stack
                      (push (completion-score stack) completion-scores)))))
  (let* ((completion-scores (sort completion-scores #'<))
         (n (floor (length completion-scores) 2)))
    (values error-score (nth n completion-scores))))
Note that in our case `stack` actually represents the _completion string_ mentioned above, so we can simply go ahead and calculate its score as instructed:
(defparameter *point-value-table*
  '((#\) . 1)
    (#\] . 2)
    (#\} . 3)
    (#\> . 4)))

(defun completion-score (stack &aux (score 0))
  (dolist (ch stack score)
    (setf score (+ (* score 5) (cdr (assoc ch *point-value-table*))))))
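The scoring rule is easy to get wrong (multiply first, then add), so here's a quick cross-check in throwaway Python against the completion strings worked out in the puzzle text:

```python
POINTS = {')': 1, ']': 2, '}': 3, '>': 4}

def completion_score(completion):
    """Fold left: score = score * 5 + points, one character at a time."""
    score = 0
    for ch in completion:
        score = score * 5 + POINTS[ch]
    return score

assert completion_score("])}>") == 294         # last example in the text
assert completion_score("}}]])})]") == 288957  # first example in the text
```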
And that's it!

Final plumbing:
(define-solution (2021 10) (data) (syntax-scoring data))

(define-test (2021 10) (323613 3103006161))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.000 seconds of real time
  0.000799 seconds of total run time (0.000505 user, 0.000294 system)
  100.00% CPU
  1,872,677 processor cycles
  97,968 bytes consed
And we are good to go!

PS. Today's problem was a breath of fresh air, really, especially if we compare it to the problems of the last two days, and even my [pre-refactoring REPL buffer](https://topaz.github.io/paste/#XQAAAQA4BQAAAAAAAAAUGQimgulVkMqJePdJ12sc+KZM7LOyBuIC1dzJtzX33KNaMXj9jmVZTPyfQOmX9PpTg5GvWdrvb7nTb9ehlqWqzZpjVnVnMGhfrX1xdaAgSNaXDCXu2M2WHr9Ka+4Bd6xYmaJlsoLibJAMVSflYybG5/E4vkNUo9nyCiaOJft/b2FbN3paNVzj8HO6fbrKik9iPjcV2n7jQfOxB05bL4ApX44oMtesKX1WwNluA6y2i7r90FAeHyyaMNak929zF5LGhGiARa+nmXNv+6suXMQZiCOAD8O2O7R7OTmdtXH7GCK+Selm1SwZ1vC0BGB4xzp8njvpRtUewleaWkBHZu5aCElvzg9/Hn7gHgXkrzWzL9ONANarpyCBZGmIwIQAmm47PQvTUh1YLAUhSXnIOm6Yyy0lbHzdfEwLmq5PnbuthOsEWJ/aiTtxFGv7r9LvfcfwgxUT1dIyvqHoy6ktffQZGdaMKGy7xzLg/LPwnTzhPHjDbCGmPcTJdDLro1/iXCNwXfDowju2lnfal26uMNhzeBb/Y6GvH3lNlzKH4Yq4tBzQbJOwXUFMGrw/XY3GqH8BUK5TRxRhGBqi2vpQa2vHjRfwkn+oR3Lis3R6kK4BjEBeJK9J5OqyXern7uZkiOhAoKOvNHwIcTY0xQkVqOZJYnXEuI3JCDHiN/wpn1E=), seems to agree with that!

2021-12-09 (permalink)

Advent of Code: [2021/09](https://adventofcode.com/2021/day/9)
These caves seem to be lava tubes. Parts are even still volcanically active; small hydrothermal vents release smoke into the caves that slowly settles like rain.
Rain...underwater? We are still on the submarine, aren't we?! Anyways...
If you can model how the smoke flows through the caves, you might be able to avoid it and be that much safer. The submarine generates a heightmap of the floor of the nearby caves for you (your puzzle input).
Smoke flows to the lowest point of the area it's in. For example, consider the following heightmap:
Each number corresponds to the height of a particular location, where 9 is the highest and 0 is the lowest a location can be.
The first step is to parse the input heightmap; we are going to use a HASH-TABLE, mapping from `(row col)` to the corresponding height:
(defun parse-heights (data &aux (heights (make-hash-table :test 'equal)))
  (loop for r below (length data)
        for string in data do
        (loop for c below (length string)
              for ch across string do
              (setf (gethash (list r c) heights)
                    (- (char-code ch) (char-code #\0)))))
  heights)
Now, here is what we are asked to do for part 1:
Find all of the low points on your heightmap. What is the sum of the risk levels of all low points on your heightmap?
So, for each low point, we add one to its height, and sum everything together:
(defun part1 (heights)
  (loop for p in (low-points heights) sum (1+ (gethash p heights))))
But what are the low points of our heightmap? All the points surrounded by locations with a higher height:

- For each point in the heightmap
- For each of its neighbor locations
- Check if the height of the center is smaller
- If it is, and for all the neighbor locations, then it's a low point so we should collect it
(defun low-points (heights)
  (loop for p being the hash-keys of heights using (hash-value h)
        when (every (lambda (n) (< h (gethash n heights))) (neighbors heights p))
        collect p))
But given a point, what are all its neighbor locations?  The ones north, east, south, west, which do not fall off the grid:
(defparameter *nhood* '((-1 0) (0 1) (1 0) (0 -1)))

(defun neighbors (heights p)
  (loop for d in *nhood* for n = (mapcar #'+ p d)
        when (gethash n heights) collect n))
Now, for part 2 instead, we are asked to calculate the size of the largest _basin_:
A basin is all locations that eventually flow downward to a single low point. Therefore, every low point has a basin, although some basins are very small. Locations of height 9 do not count as being in any basin, and all other locations will always be part of exactly one basin.
And in particular:
What do you get if you multiply together the sizes of the three largest basins?
OK, so:

- We find the basins
- Sort them by size (biggest first)
- Take the first 3 values
- Multiply them together
(defun part2 (heights)
  (reduce #'* (subseq (sort (basins heights) #'>) 0 3)))
Let's dig a bit deeper, and see how we are going to calculate the size of the basins:

- For each low point (we know each basin expands around one of the low points from part 1)
- We apply BFS, scanning all the surrounding area, until we either fall off the grid (NEIGHBORS already implements this) or hit a location with a height of `9` (from the text: _locations of height `9` do not count as being in any basin_)
- As we navigate adjacent cells (i.e. our :NEIGHBORS function), we increase a counter representing the size of the current basin

I already have BFS available in my utility file, so implementing the above was quite straightforward:
(defun basins (heights)
  (uiop:while-collecting (acc!)
    (dolist (lp (low-points heights))
      (let ((size 0))
        (bfs lp :test 'equal :neighbors (lambda (p)
                                          (incf size)
                                          (remove-if-not (lambda (n)
                                                           (< (gethash n heights) 9))
                                                         (neighbors heights p))))
        (acc! size)))))
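For good measure, here are both parts redone in throwaway Python (outside the Lisp solution) against the sample heightmap from the puzzle text (risk sum `15`, product of the three largest basins `1134`):

```python
from math import prod

# Sample heightmap from the puzzle text
rows = ["2199943210", "3987894921", "9856789892", "8767896789", "9899965678"]
heights = {(r, c): int(ch) for r, row in enumerate(rows) for c, ch in enumerate(row)}

def neighbors(p):
    r, c = p
    return [n for n in ((r - 1, c), (r, c + 1), (r + 1, c), (r, c - 1))
            if n in heights]

low_points = [p for p, h in heights.items()
              if all(h < heights[n] for n in neighbors(p))]
assert sum(heights[p] + 1 for p in low_points) == 15  # part 1

def basin_size(lp):
    # BFS from the low point, stopping at 9s (and at the grid's edge)
    seen, frontier = {lp}, [lp]
    while frontier:
        for n in neighbors(frontier.pop()):
            if n not in seen and heights[n] < 9:
                seen.add(n)
                frontier.append(n)
    return len(seen)

sizes = sorted((basin_size(lp) for lp in low_points), reverse=True)
assert prod(sizes[:3]) == 1134  # part 2
```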
Final plumbing:
(define-solution (2021 09) (heights parse-heights)
  (values (part1 heights) (part2 heights)))

(define-test (2021 09) (502 1330560))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.050 seconds of real time
  0.046314 seconds of total run time (0.035313 user, 0.011001 system)
  92.00% CPU
  116,896,357 processor cycles
  9,133,328 bytes consed
I wished I were able to code this up, on the spot, as the clock was ticking; instead, my [REPL buffer](https://topaz.github.io/paste/#XQAAAQDHCwAAAAAAAAAUGQimgx+p6PZhRh7uIO+WahagjQFtPUroo2DHmQaY6XQY1BYJ+FMCLkj+Xpf+GXwi1mkfiahNXzGziXL6bKmTzHNvb+uG5oA435oUBU5eqNnZv0QZ7grjrReQCKmII7Qfq3wAMvM+VOxZt/n90rZ8BZYGbFNoRn3y2DYmMyM/VImdJxNjfq0yhV53UoWNkQL61HpdCStQHM2EJ1Mh9VhTk6oNhbfFah6BArXTOVwK1xvMxv4DAhqQ4F1DcTE/vpl9OinZXp1CKRSxMIHv9k4fkV8F6i1damTN7gRXQS/ahc1TkeJm3e28uw0DJM/hxRuUvrRAzQkX9WQzNqVqvYxNjp2T5c+rPq4uIoBap1pQsiGqBxJ+462us1oa8mDQijvvmNUm+B7F9wYizQfra3lEedhNyea50YTCG7eVi9cXU0+w1AszW7+Qc4N+EF/YHYOQ6pNO9oMWkXKLNwtFxFFZhv0lISG3hjowK5FMycdOZussyrF9M20IO/mq6feOWp309AO/T+Q83Z+iah4GzohefjT05OR0sv8+EWlDOoPadylXaPdXW4kV2Kz3SFXCriiTUkYpr8Nmu3ZSbtBNaEHn0k19uQ9XR8vqw6FKbpfslM+2WzNY7gpxi1TqZ4UtetR/30sSoYE8uly4hHPmd0U0IboW52c4PVg2wUluyUZuk2Dt5hNHUPh2CTR2out6u4EhJ6MOT2eEdtmhDrfHoHhWiuBcTJUVMbtnJstp9hBY/ce15UMEVKqgKQEXu2n8tOLALvezzfOq03TXkC6v2vpL2I3dvBkSkxY1DjGjSxNxpN3wAnsdtYpS5vuHZZnEnQ6YnP4k2QH5hOH8OZVg3kKY7AyAnajGyFtK0UbiF5TlP09/aYAbkBWGibTLWApQYfLyJmtYP7r5H6dZBzUBJiWQSzPcDnHED2m7Rzot9n8X/zXGfZvi3Ie8oRgu2vr1kovWfbzkkHFkt+OhjPMsgybIVfMGbk0lzRgJGHuVTPSfqiICgE/oPBxHkPQbjz8zUgj5IbmBoVKOo2ABq4EFzeYY1b5RpUtkL9/yZEIzyG9z0XLEbnAuo6RJPmKHdHii5xAUWUG1FGR6YEwFNFXoqpB0EeMWBBj8rt3JwNUSaUAGXJTYF6NcYGRpi3accSCK/X8JkA==) for today wound up looking anything but clean:

- Used a 2D array instead of a HASHTABLE to represent the map, and _that_ made pretty much all the rest really **really** verbose
- I was only checking the locations above and to the left of the current one in my first BASINS attempt, but that clearly generated the wrong answer
- It took me a while before I decided a NEIGHBORS function would have helped with the overall readability of the whole solution -- very important, especially when things ain't working as they should, but go figure why I had not thought about that sooner
- Last but not least, I forgot I already had AREF-OR in my utilities file, for safely accessing array values even with out of bound indices; so, for part 1 I ended up sprinkling a bunch of OR / IGNORE-ERRORS forms all over the place, while for part 2, I wound up re-implementing AREF-OR as SAREF...

Still got my 2 stars home anyway, and _that_ alone is a reason to celebrate!

PS. As it turns out, part 2 could also be solved using disjoint sets / union find:

- Each location starts as a disjoint-set / basin of its own
- For each location, for each of its neighbors, if the neighbor height is not 9 then we join the two sets / basins
- For each set, we find the root element and start counting how many locations roll up to the same set / basin

And these counters, one per basin, actually represent the list of sizes we were looking for:
(defun basins (heights &aux
                       (basins (make-hash-table :test 'equal))
                       (sizes (make-hash-table :test 'eq)))
  (loop for p being the hash-keys of heights using (hash-value h)
        unless (= h 9) do (setf (gethash p basins) (make-dset h)))
  (loop for p being the hash-keys of basins using (hash-value ds) do
        (loop for np in (neighbors heights p)
              for nds = (gethash np basins)
              when nds do (dset-union ds nds)))
  (loop for ds being the hash-values of basins do (incf (gethash (dset:dset-find ds) sizes 0)))
  (hash-table-values sizes))
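And a quick cross-check of the union-find idea, in throwaway Python, on the same sample heightmap (whose four basins have sizes `3`, `9`, `9`, and `14`):

```python
from collections import Counter

rows = ["2199943210", "3987894921", "9856789892", "8767896789", "9899965678"]
heights = {(r, c): int(ch) for r, row in enumerate(rows) for c, ch in enumerate(row)}

# Every non-9 location starts as its own disjoint set
parent = {p: p for p, h in heights.items() if h != 9}

def find(p):
    while parent[p] != p:
        parent[p] = parent[parent[p]]  # path halving
        p = parent[p]
    return p

def union(p, q):
    parent[find(p)] = find(q)

for (r, c) in list(parent):
    for n in ((r + 1, c), (r, c + 1)):  # right/down neighbors are enough
        if n in parent:
            union((r, c), n)

sizes = Counter(find(p) for p in parent)
assert sorted(sizes.values()) == [3, 9, 9, 14]
```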
And just to be 100% sure, let's confirm we did not break anything:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.021 seconds of real time
  0.020646 seconds of total run time (0.017324 user, 0.003322 system)
  100.00% CPU
  49,747,608 processor cycles
  6,147,472 bytes consed
We didn't!

2021-12-08 (permalink)

Advent of Code: [2021/08](https://adventofcode.com/2021/day/8)

We managed to escape from the whale, but something is not quite working as it should:
As your submarine slowly makes its way through the cave system, you notice that the four-digit seven-segment displays in your submarine are malfunctioning; they must have been damaged during the escape. You'll be in a lot of trouble without them, so you'd better figure out what's wrong.
Each digit of a seven-segment display is rendered by turning on or off any of seven segments named a through g:
  0:      1:      2:      3:      4:
 aaaa    ....    aaaa    aaaa    ....
b    c  .    c  .    c  .    c  b    c
b    c  .    c  .    c  .    c  b    c
 ....    ....    dddd    dddd    dddd
e    f  .    f  e    .  .    f  .    f
e    f  .    f  e    .  .    f  .    f
 gggg    ....    gggg    gggg    ....

  5:      6:      7:      8:      9:
 aaaa    aaaa    aaaa    aaaa    aaaa
b    .  b    .  .    c  b    c  b    c
b    .  b    .  .    c  b    c  b    c
 dddd    dddd    ....    dddd    dddd
.    f  e    f  .    f  e    f  .    f
.    f  e    f  .    f  e    f  .    f
 gggg    gggg    ....    gggg    gggg
So, to render a `1`, only segments `c` and `f` would be turned on; the rest would be off. To render a `7`, only segments `a`, `c`, and `f` would be turned on.

The problem is that the signals which control the segments have been mixed up on each display. The submarine is still trying to display numbers by producing output on signal wires a through g, but those wires are connected to segments randomly. Worse, the wire/segment connections are mixed up separately for each four-digit display! (All of the digits within a display use the same connections, though.)
So, you might know that only signal wires `b` and `g` are turned on, but that doesn't mean segments `b` and `g` are turned on: the only digit that uses two segments is `1`, so it must mean segments `c` and `f` are meant to be on. With just that information, you still can't tell which wire (`b`/`g`) goes to which segment (`c`/`f`). For that, you'll need to collect more information.
For each display, you watch the changing signals for a while, make a note of all ten unique signal patterns you see, and then write down a single four digit output value (your puzzle input). Using the signal patterns, you should be able to work out which pattern corresponds to which digit.
For example, here is what you might see in a single entry in your notes:
acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf
Each entry consists of ten unique signal patterns, a `|` delimiter, and finally the four digit output value. Within an entry, the same wire/segment connections are used (but you don't know what the connections actually are). The unique signal patterns correspond to the ten different ways the submarine tries to render a digit using the current wire/segment connections. Because `7` is the only digit that uses three segments, `dab` in the above example means that to render a `7`, signal lines `d`, `a`, and `b` are on. Because `4` is the only digit that uses four segments, `eafb` means that to render a `4`, signal lines `e`, `a`, `f`, and `b` are on.
_OK, I kind of get it, but what exactly are we supposed to do?_
In the output values, how many times do digits `1`, `4`, `7`, or `8` appear?
The following algorithm should get the job done:

- For each entry
- For each output value
- If its length is either `2`, `4`, `3`, or `7` (i.e. the number of segments on, for the digits: `1`, `4`, `7`, and `8` respectively), then it means the entry is trying to render one of those digits, so we add 1

However, let's parse our entries first:
(defun parse-entries (data)
  (loop for string in data
        for parts = (cl-ppcre:all-matches-as-strings "[a-z]+" string)
        collect (cons (subseq parts 0 10) (subseq parts 10))))
(defun inputs (entry) (car entry))
(defun outputs (entry) (cdr entry))
Now, the algorithm to solve part 1:
(defun part1 (entries)
  (loop for e in entries sum
        (loop for d in (outputs e) count (member (length d) '(2 4 3 7)))))
An easy part 1 is usually followed by a somewhat complicated part 2, and today is no exception:
Through a little deduction, you should now be able to determine the remaining digits.
So yeah, somehow we need to figure out which wire is connected to which segment, decode each output value, and add them up together:
For each entry, determine all of the wire/segment connections and decode the four-digit output values. What do you get if you add up all of the output values?
OK, before we actually figure out a way to find this wire / segment mapping, let's pretend we had it, and see what we would do with it.

Given the following wiring:
 dddd
e    a
e    a
 ffff
g    b
g    b
 cccc
Our mapping would look something like: `(#\d #\e #\a #\f #\g #\b #\c)`.

Now, if we _had_ this, we would go on and _decode_ the output signals:
(defun decode (mapping signals &aux (rez 0))
  (dolist (s signals rez)
    (let ((d (signal->digit mapping s)))
      (setf rez (+ (* rez 10) d)))))
Before looking at SIGNAL->DIGIT though, we are going to have to define a table mapping from digits to the list of segments which are on when the given digit is rendered; and instead of coding the list of segments as a list, we are actually going to use a bitmask instead (e.g. segment `a` -> `1`; segment `b` -> `2`; segment `c` -> `4`):
(defparameter *digits->segments* '((0 . #b1110111)
                                   (1 . #b0100100)
                                   (2 . #b1011101)
                                   (3 . #b1101101)
                                   (4 . #b0101110)
                                   (5 . #b1101011)
                                   (6 . #b1111011)
                                   (7 . #b0100101)
                                   (8 . #b1111111)
                                   (9 . #b1101111)))
And with this, to convert a signal to a digit all we have to do is:

- For each character of the signal -- DOVECTOR
- Find the segment it's connected to -- POSITION
- Set the _right_ bit in the bitmask -- DPB
- Look up the bitmask in the table above -- RASSOC
(defun signal->digit (mapping s &aux (segments-mask 0))
  (dovector (ch s)
    (let ((i (position ch mapping)))
      (setf segments-mask (dpb 1 (byte 1 i) segments-mask))))
  (car (rassoc segments-mask *digits->segments*)))

> (decode '(#\d #\e #\a #\f #\g #\b #\c) '("cdfeb" "fcadb" "cdfeb" "cdbaf"))
5353
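The same bitmask trick, sketched in Python for clarity (segment `a` is bit 0, `b` bit 1, and so on; the table mirrors *DIGITS->SEGMENTS*):

```python
DIGITS_TO_SEGMENTS = {
    0: 0b1110111, 1: 0b0100100, 2: 0b1011101, 3: 0b1101101, 4: 0b0101110,
    5: 0b1101011, 6: 0b1111011, 7: 0b0100101, 8: 0b1111111, 9: 0b1101111,
}
SEGMENTS_TO_DIGITS = {v: k for k, v in DIGITS_TO_SEGMENTS.items()}

def signal_to_digit(mapping, signal):
    # mapping[i] is the wire connected to segment number i
    mask = 0
    for ch in signal:
        mask |= 1 << mapping.index(ch)
    return SEGMENTS_TO_DIGITS[mask]

def decode(mapping, signals):
    rez = 0
    for s in signals:
        rez = rez * 10 + signal_to_digit(mapping, s)
    return rez
```

Running it on the example mapping, `decode("deafgbc", ["cdfeb", "fcadb", "cdfeb", "cdbaf"])` yields `5353`, the value the puzzle expects.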
Now, let's take another look at what exactly we are asked to do for part 2:
For each entry, determine all of the wire/segment connections and decode the four-digit output values. What do you get if you add up all of the output values?
If we gloss over the "determine all of the wire/segment connections" part, and consider the fact that we already know how to decode a message (once we have the mapping), it should be pretty easy to wire things together and find the answer for part 2:
(defun part2 (entries)
  (loop for e in entries for m = (find-mapping (inputs e))
        sum (decode m (outputs e))))
Now we have to figure out what FIND-MAPPING will actually do.

Well, even though the following algorithm produces the right answer (for my input at least), I honestly think I just have been super lucky because the more I think about this, the more I question _why_ this would work.  But it does seem to work, so...

Anyway, we begin by saying that each segment could be linked to any wire:
0 -> abcdefg
1 -> abcdefg
2 -> abcdefg
3 -> abcdefg
4 -> abcdefg
5 -> abcdefg
6 -> abcdefg
From here, we process each signal pattern, from the _easy_ ones to the most difficult ones, and try to _refine_ our mapping until any of the following conditions happen:

- Each segment maps to a _different_ wire -- in which case we found our mapping!
- We cannot map a segment to any wire -- in which case we will have to backtrack!

Given a signal, first we figure out based on its length the different digits it could refer to; for example:

- `ab` uniquely refers to the digit: `1`
- `abc` uniquely refers to the digit: `7`
- `abcd` uniquely refers to the digit: `4`
- `abcde` could refer to any of the following digits: `2`, `3`, `5`
- `abcdef` could refer to any of the following digits: `0`, `6`, `9`
- `abcdefg` uniquely refers to the digit: `8`

Then, for each of these options (e.g. a single option in case of `ab`, three different options for `abcde`), we see which segments the signal uses, and refine our mapping as follows:

- If a segment is used by the current signal, we know it can only be linked to one of the wires used by the signal; we can then safely drop every other option for that segment (i.e. take the INTERSECTION of the segment's current options and the signal's wires), as choosing any of them would break the constraint defined by the current signal
- If a segment is not used by the current signal instead, we know it will never be linked to any of the wires used by the signal; we can then safely remove all the signal's wires from the segment's options (i.e. take the SET-DIFFERENCE of the segment's current options and the signal's wires), as again, choosing any of them would break the constraint defined by the current signal
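A single refinement step could be sketched like this (Python; each segment's options is a set of candidate wires, and `segs_mask` uses the same bit layout as *DIGITS->SEGMENTS*):

```python
def refine(options, signal, segs_mask):
    """Narrow down, for each of the 7 segments, which wires it could map to.

    options:   list of 7 sets of candidate wires (index = segment a..g)
    signal:    the wires that are on (e.g. "ab")
    segs_mask: bitmask of the segments the candidate digit turns on
    """
    wires = set(signal)
    return [opts & wires if (segs_mask >> i) & 1 else opts - wires
            for i, opts in enumerate(options)]
```

For example, refining the all-wires starting state with the signal `ab` against digit `1` (mask `0b0100100`) leaves only `{a, b}` as options for segments `c` and `f`, and removes `a` and `b` everywhere else.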

We then continue to the next signal until either we find a valid mapping, or we hit a dead end and have to step back.

Putting it all together:
- If each segment is linked to a single wire, we found our mapping
- If a segment, for whatever reason, winds up not being linked to any single wire, then we hit a dead end
- If we run out of signals (and none of the above hold true), we hit another dead end
- Otherwise, for each digit that the signal could render -- POSSIBLE-DIGITS
- And for each segment of this digit
- We update the list of possible options as described above (i.e. INTERSECTION for the segments which are on, and SET-DIFFERENCE for the remaining ones)
(defun find-mapping (signals)
  (labels ((recur (curr remaining)
             (cond ((loop for x in curr always (= (length x) 1))
                    (return-from find-mapping (mapcar #'car curr)))
                   ((loop for x in curr thereis (zerop (length x))) nil)
                   ((null remaining) nil)
                   (t (let ((s (first remaining)))
                        (dolist (d (possible-digits s))
                          (let ((segs (cdr (assoc d *digits->segments*))))
                            (recur (loop for c in curr for i below 7
                                         collect (if (= (ldb (byte 1 i) segs) 1)
                                                     (intersection c (coerce s 'list))
                                                     (set-difference c (coerce s 'list))))
                                   (rest remaining)))))))))
    (recur (loop repeat 7 collect (coerce "abcdefg" 'list))
           (sort (copy-seq signals) #'possible-digits<))))
But what are the digits that a signal could possibly try to render?  As mentioned above, we answer this by looking at the length of the signal:
(defparameter *length->digits* '((2 1)
                                 (3 7)
                                 (4 4)
                                 (5 2 3 5)
                                 (6 0 6 9)
                                 (7 8)))
(defun possible-digits (s) (cdr (assoc (length s) *length->digits*)))
Last, we said we wanted to process the _easy_ signals first; here is the predicate function to pass to SORT to make sure signals with the least number of _renderable_ digits are processed first:
(defun possible-digits< (s1 s2)
  (< (length (possible-digits s1)) (length (possible-digits s2))))
And that's it!

Final plumbing:
(define-solution (2021 08) (entries parse-entries)
  (values (part1 entries) (part2 entries)))

(define-test (2021 08) (330 1010472))
Test run:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.005 seconds of real time
  0.005222 seconds of total run time (0.004867 user, 0.000355 system)
  100.00% CPU
  13,017,280 processor cycles
  2,160,512 bytes consed
PS. As it turns out, part 2 could also be solved by brute-force:

- Generate all the possible mappings (it's only seven wires!)
- For each entry, try all the mappings until one successfully decodes the inputs
> (time
    (let ((all-mappings (all-permutations (coerce "abcdefg" 'list)))
          (sum 0))
      (dolist (e (parse-entries (uiop:read-file-lines "src/2021/day08.txt")))
        (dolist (m all-mappings)
          (when (every (partial-1 #'signal->digit m) (inputs e))
            (return (incf sum (decode m (outputs e)))))))
      sum))
Evaluation took:
  0.384 seconds of real time
  0.370098 seconds of total run time (0.362925 user, 0.007173 system)
  96.35% CPU
  884,284,986 processor cycles
  2,194,816 bytes consed

WTF?! It runs pretty damn fast too!  I guess next time I will try to remember that sometimes an inefficient no-brainer is better than an efficient solution that requires a big brain.

2021-12-07 (permalink)

Advent of Code: [2021/07](https://adventofcode.com/2021/day/7)

- A whale has decided to eat our submarine
- Lucky us, a swarm of crabs (each in its own tiny submarine...LOL) is willing to help
- They want to drill a hole in the ocean floor so we can escape through it
- They need some help to figure out the exact position where to drill

In particular, given the horizontal position of each crab, we are asked to align them while minimizing the total amount of fuel their tiny submarines need to get there.

Parsing the input is a no brainer; it's a comma separated list of numbers, so the usual CL-PPCRE:ALL-MATCHES-AS-STRINGS, and MAPCAR with PARSE-INTEGER will do the trick:
(defun parse-crabs (data)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" (first data))))
Now, to minimize the amount of fuel required to align all the crabs, I am just going to brute-force it:

- For each point between the min and max crab positions
- Calculate the amount of fuel to move all the crabs there
- Minimize this
(defun minimize-fuel (crabs)
  (loop for p from (find-min crabs) to (find-max crabs) minimize
        (loop for c in crabs sum (abs (- c p)))))
For part 2, the situation changes a little bit:
As it turns out, crab submarine engines don't burn fuel at a constant rate. Instead, each change of 1 step in horizontal position costs 1 more unit of fuel than the last: the first step costs 1, the second step costs 2, the third step costs 3, and so on.
Determine the horizontal position that the crabs can align to using the least fuel possible so they can make you an escape route! How much fuel must they spend to align to that position?
If a crab needs to move 5 steps, it will consume: 1 + 2 + 3 + 4 + 5 units of fuel; and [in general](https://en.wikipedia.org/wiki/Arithmetic_progression#Derivation), to move `n` steps it will consume: `n * (n + 1) / 2` units of fuel:
(defun minimize-fuel (crabs)
  (loop for p from (find-min crabs) to (find-max crabs) minimize
        (loop for c in crabs sum (let ((n (abs (- c p))))
                                   (floor (* n (1+ n)) 2)))))
Little bit of refactoring plus final plumbing:
(defun minimize-fuel (crabs distance-fun)
  (loop for p from (find-min crabs) to (find-max crabs) minimize
        (loop for c in crabs sum (funcall distance-fun c p))))

(define-solution (2021 07) (crabs parse-crabs)
  (values (minimize-fuel crabs (lambda (p1 p2) (abs (- p1 p2))))
          (minimize-fuel crabs (lambda (p1 p2 &aux (n (abs (- p1 p2))))
                                 (floor (* n (1+ n)) 2)))))

(define-test (2021 07) (359648 100727924))
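The same brute-force scan, sketched in Python and run against the example crabs from the puzzle (`16,1,2,0,4,2,7,1,2,14`):

```python
def minimize_fuel(crabs, distance):
    # Try every candidate position and keep the cheapest total
    return min(sum(distance(c, p) for c in crabs)
               for p in range(min(crabs), max(crabs) + 1))

crabs = [16, 1, 2, 0, 4, 2, 7, 1, 2, 14]  # example input from the puzzle
part1 = minimize_fuel(crabs, lambda c, p: abs(c - p))
part2 = minimize_fuel(crabs, lambda c, p: abs(c - p) * (abs(c - p) + 1) // 2)
```

This gives `37` and `168`, matching the example answers.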
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.093 seconds of real time
  0.083847 seconds of total run time (0.081319 user, 0.002528 system)
  90.32% CPU
  215,462,807 processor cycles
  96,608 bytes consed
PS. In the heat of the moment, i.e. at something past 6 in the morning, I could not really remember the formula for the sum of an arithmetic progression; so what did I do, instead? [I simulated it!](https://topaz.github.io/paste/#XQAAAQC3AQAAAAAAAAAUHMind5jo50a72d6QS46YciZNqOxJy4V2WTiur+Qo5cWe6U9r6SyOJAjWcaSIJPsSzysjospBanW0NWsGyUEVOXTNYugyPi3UXtxVSE8V+6fKq9IN0ZrRch0UCsc5gmMVc2vkQkX2qH6njUoOmlI6Xm2D3GZKxJ+qvujI4z6GepPH5hvGuXJR4XqsTpmHnO9yonbOUFSylej8rV9O3jJvc7kMJNFPhxuzkK3ck//MJx9hg/flYZrAMJ/oKg+iWjJBqyliY44AHrsEmGNm2Aa4A7pc1ncIOIHxtWLrvtPRFIT/YnyyAA==), adding 1 unit of fuel at each step...
> (time (loop for p from (find-min crabs) to (find-max crabs) minimize
              (loop for c in crabs sum
                    (loop repeat (- (max c p) (min c p))
                          for f from 1 sum f))))
Evaluation took:
  18.734 seconds of real time
  10.576204 seconds of total run time (10.243097 user, 0.333107 system)
  56.45% CPU
  43,088,527,879 processor cycles
  16 bytes consed


2021-12-06 (permalink)

Advent of Code: [2021/06](https://adventofcode.com/2021/day/6)

Today we are asked to study a massive school of lanternfish; in particular, we are tasked to see how big this group will be, after 80 days:
Although you know nothing about this specific species of lanternfish, you make some guesses about their attributes. Surely, each lanternfish creates a new lanternfish once every 7 days.
However, this process isn't necessarily synchronized between every lanternfish - one lanternfish might have 2 days left until it creates another lanternfish, while another might have 4. So, you can model each fish as a single number that represents the number of days until it creates a new lanternfish.
Furthermore, you reason, a new lanternfish would surely need slightly longer before it's capable of producing more lanternfish: two more days for its first cycle.
So, to recap, we are given a comma separated list of numbers, each representing the age of each fish, and we are asked to measure the size of this group after 80 days, knowing that:

- A lanternfish will produce a new lanternfish every 7 days
- A newly spawned lanternfish will require 9 days before it can produce a new lanternfish

As usual, let's start from the input (it's a comma separated list of numbers):
(defun parse-ages (data)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d" (first data))))
With this, we can evolve the school of lanternfish as follows:

- For each fish in the school
- If it's 0 it means it can reproduce; we add the lanternfish itself back into the school (with its timer reset to 6), and then we add a new lanternfish with its timer set to 8: it's a new fish, so it will have to wait two extra days
- Otherwise, we simply decrease the reproduction counter
- We do this 80 times
(defun part1 (ages)
  (let ((an (copy-seq ages)))
    (dotimes (_ 80 (length an))
      (setf an (uiop:while-collecting (next)
                 (dolist (a an)
                   (cond ((= a 0) (next 6) (next 8))
                         (t (next (1- a))))))))))
For part 2 instead:
How many lanternfish would there be after 256 days?
Before you try it, replacing 80 with 256 is not going to work -- you will run out of memory real quick!

The big realization here is that lanternfish of the same age are all alike, and they are all going to reproduce at the same time; we don't care which one comes first; we only care about when in the future they are going to reproduce.

Let's parse the input again; this time though, let's output a 9-element array, representing the number of fish reproducing in 0 days, 1 day, 2 days...
(defun parse-generations (data &aux
                               (generations (make-array 9 :initial-element 0))
                               (ages (cl-ppcre:all-matches-as-strings "\\d" (first data))))
  (loop for v in ages do (incf (aref generations (parse-integer v))))
  generations)
Evolving the school of lanternfish for a given number of days (so we can use this for part 1 as well) would then become a matter of:

- Rotating array values to the left (i.e. aging + reproduction)
- Increasing the `6-days` slot by the number of fish that just reproduced (i.e. they will be able to reproduce again in 6 days)
(defun evolve (generations days &aux (gg (copy-seq generations)))
  (dotimes (_ days (reduce #'+ gg))
    (let ((n (aref gg 0)))
      (dotimes (i 8)
        (setf (aref gg i) (aref gg (1+ i))))
      (setf (aref gg 8) n)
      (incf (aref gg 6) n))))
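The rotate-and-bump idea translates almost one-to-one to Python's `collections.deque` (a sketch, run here against the example school `3,4,3,1,2` from the puzzle):

```python
from collections import deque

def evolve(ages, days):
    gen = deque([0] * 9)
    for a in ages:
        gen[a] += 1
    for _ in range(days):
        spawning = gen[0]
        gen.rotate(-1)       # aging; timer-0 fish wrap around to slot 8 (newborns)
        gen[6] += spawning   # parents restart their 7-day cycle
    return sum(gen)
```

`evolve([3, 4, 3, 1, 2], 80)` gives `5934` and `evolve([3, 4, 3, 1, 2], 256)` gives `26984457539`, the example answers from the puzzle.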
Final plumbing:
(define-solution (2021 06) (generations parse-generations)
  (values (evolve generations 80) (evolve generations 256)))

(define-test (2021 06) (371379 1674303997472))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.000 seconds of real time
  0.000528 seconds of total run time (0.000354 user, 0.000174 system)
  100.00% CPU
  1,277,524 processor cycles
  32,768 bytes consed

2021-12-05 (permalink)

Advent of Code: [2021/05](https://adventofcode.com/2021/day/5)
You come across a field of hydrothermal vents on the ocean floor! These vents constantly produce large, opaque clouds, so it would be best to avoid them if possible.
Uh oh!  It continues:
They tend to form in lines; the submarine helpfully produces a list of nearby lines of vents (your puzzle input) for you to review. For example:
0,9 -> 5,9
8,0 -> 0,8
9,4 -> 3,4
2,2 -> 2,1
7,0 -> 7,4
6,4 -> 2,0
0,9 -> 2,9
3,4 -> 1,4
0,0 -> 8,8
5,5 -> 8,2
The task for the day:
Consider only horizontal and vertical lines. At how many points do at least two lines overlap?
Let's start by reading our input; we are going to parse each line into a 4-element list, i.e. `(x1 y1 x2 y2)`:
(defun parse-lines (data)
  (mapcar #'parse-line data))

(defun parse-line (string)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" string)))
With this, the solution to our problem would be a simple:
(defun part1 (lines)
  (count-overlaps (remove-if #'diagonalp lines)))
"The evil is in the details" they use to say, so let's take a look at the details.  DIAGONALP simply checks if the points describing a line have the same `x` or `y` value:
(defun diagonalp (line)
  (destructuring-bind (x1 y1 x2 y2) line
    (and (not (= x1 x2)) (not (= y1 y2)))))
Inside COUNT-OVERLAPS instead:

- For each line in our input
- We generate all its points
- We store these inside a HASH-TABLE, keeping track of duplicates (as that represents our answer)
- A naive `(loop for xx from x1 to x2 ...)` would fail if `x1` is greater than `x2`; similarly, a naive `(loop for yy from y1 to y2 ...)` would fail if `y1` is greater than `y2`
- <=>, a.k.a. the spaceship operator, returns `-1` if the first value is smaller than the second, `+1` if greater, and `0` otherwise; this represents how much to move in a given direction
- Once we know how much in each direction to move, we just have to figure out the number of times we have to move -- `n` in the snippet below
(defun count-overlaps (lines &optional (grid (make-hash-table :test 'equal)))
  (dolist (l lines)
    (destructuring-bind (x1 y1 x2 y2) l
      (let ((dx (<=> x2 x1))
            (dy (<=> y2 y1))
            (n (max (abs (- x2 x1)) (abs (- y2 y1)))))
        (loop repeat (1+ n)
              for xx = x1 then (+ xx dx)
              for yy = y1 then (+ yy dy) do
              (incf (gethash (list xx yy) grid 0))))))
  (loop for o being the hash-values of grid count (> o 1)))
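Here is the same walk, sketched in Python, with a tiny `sign` helper playing the role of `<=>` (tested against the example lines above, which have 5 horizontal/vertical overlaps and 12 overall):

```python
from collections import Counter

def sign(a, b):
    # the "spaceship": +1 if a > b, -1 if a < b, 0 otherwise
    return (a > b) - (a < b)

def count_overlaps(lines):
    grid = Counter()
    for x1, y1, x2, y2 in lines:
        dx, dy = sign(x2, x1), sign(y2, y1)
        n = max(abs(x2 - x1), abs(y2 - y1))
        for i in range(n + 1):
            grid[(x1 + i * dx, y1 + i * dy)] += 1
    return sum(v > 1 for v in grid.values())
```

Feeding it only the horizontal/vertical example lines gives `5`; feeding it all of them gives `12`, the two example answers.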
For part 2 instead, we are asked to consider diagonal lines as well; we could not have been any luckier, given that all we have to do is feed COUNT-OVERLAPS with the original list of lines (and not just the horizontal/vertical ones like we did for part 1):
(defun part2 (lines) (count-overlaps lines))
Final plumbing:
(define-solution (2021 05) (lines parse-lines)
  (values (part1 lines) (part2 lines)))

(define-test (2021 05) (6397 22335))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.155 seconds of real time
  0.131351 seconds of total run time (0.107549 user, 0.023802 system)
  84.52% CPU
  356,617,473 processor cycles
  44,400,176 bytes consed
PS. This is the code I originally used to solve this:

- Part 1: MIN / MAX to figure out where to start and where to stop, when using `(loop for v from ... to ...)` kind of loops
- Part 2: consed up all the ranges of points, and iterated over them
;; Part 1
(loop with grid = (make-hash-table :test 'equal)
      for line in input for (x1 y1 x2 y2) = line
      when (or (= x1 x2) (= y1 y2)) do
      (loop for xx from (min x1 x2) to (max x1 x2) do
            (loop for yy from (min y1 y2) to (max y1 y2) do (incf (gethash (list xx yy) grid 0))))
      finally (return (loop for v being the hash-values of grid count (> v 1))))

;; Part 2
(loop with grid = (make-hash-table :test 'equal)
      for line in input for (x1 y1 x2 y2) = line do
      (loop for xx in (range x1 x2)
            for yy in (range y1 y2) do
            (incf (gethash (list xx yy) grid 0)))
      finally (return (count-if (partial-1 #'> _ 1) (hash-table-values grid))))

(defun range (v1 v2)
  (cond ((= v1 v2) (ncycle (list v1)))
        ((< v1 v2) (loop for v from v1 to v2 collect v))
        (t         (loop for v from v1 downto v2 collect v))))
PPS. I fucking bricked my laptop yesterday while upgrading to Big Sur, so I had to sort that issue out before even starting to think about today's problem -- what a bummer!
      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
  5   07:28:10  27193      0   07:41:09  24111      0
  4   00:30:52   2716      0   00:41:34   2704      0
  3   00:12:42   4249      0   00:48:51   5251      0
  2   00:05:42   3791      0   00:09:24   3731      0
  1   00:02:45   1214      0   00:07:16   1412      0

2021-12-04 (permalink)

Advent of Code: [2021/04](https://adventofcode.com/2021/day/4)

Today, we are playing Bingo with a giant squid (yes, apparently it is on the submarine with us!):
Bingo is played on a set of boards each consisting of a 5x5 grid of numbers. Numbers are chosen at random, and the chosen number is marked on all boards on which it appears. (Numbers may not appear on all boards.) If all numbers in any row or any column of a board are marked, that board wins. (Diagonals don't count.)
This is the kind of input we are expected to process:
7,4,9,5,11,17,23,2,0,14,21,24,10,16,13,6,15,25,12,22,18,20,8,19,3,26,1

22 13 17 11  0
 8  2 23  4 24
21  9 14 16  7
 6 10  3 18  5
 1 12 20 15 19

 3 15  0  2 22
 9 18 13 17  5
19  8  7 25 23
20 11 10 24  4
14 21 16 12  6

14 21 17 24  4
10 16 15  9 19
18  8 23 26 20
22 11 13  6  5
 2  0 12  3  7
I am going to create a new structure, BINGO, with slots to keep track of the numbers still left to draw, and each board; for each board representation instead, I am going to use a 5x5 2D array:
(defstruct (bingo (:conc-name nil))
  to-draw
  boards)

(defun parse-bingo (data)
  (let ((to-draw (extract-integers (car data)))
        (boards (parse-boards (cdr data))))
    (make-bingo :to-draw to-draw :boards boards)))

(defun parse-boards (data)
  (when data
    (cons
      (make-array '(5 5)
                  :initial-contents
                  (let ((lines (subseq data 1 6)))
                    (uiop:while-collecting (rows)
                      (dolist (s lines)
                        (let ((row (extract-integers s)))
                          (rows row))))))
      (parse-boards (subseq data 6)))))

(defun extract-integers (s)
  (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" s)))
Now, let's play some bingo!

- For each number to be drawn
- For each board
- Let's mark the number on the board
- If it turns out to be a winning board, let's return its score:
(defun play (game)
  (with-slots (to-draw boards) game
    (dolist (n to-draw)
      (dolist (b boards)
        (when (mark-number b n)
          (return-from play (* (score b) n)))))))

Inside MARK-NUMBER instead:

- For each number of the input board
- Check if equal to the number just drawn, and in case, mark it (i.e. remove it)
- Finally return T if the move made the board a winning one
(defun mark-number (board n)
  (loop for i below 5 do
        (loop for j below 5
              when (eql (aref board i j) n) do
              (setf (aref board i j) nil)
              (return-from mark-number (board-won-p board i j)))))

(defun board-won-p (board last-marked-i last-marked-j)
  (or (loop for i below 5 never (aref board i last-marked-j))
      (loop for j below 5 never (aref board last-marked-i j))))
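In case the row/column check is not obvious, here is the same idea sketched in Python (marked cells are replaced by `None`, and only the row and column of the last marked cell need checking):

```python
def board_won(board, i, j):
    # board: 5x5 list of lists; a marked number is replaced by None
    return (all(board[r][j] is None for r in range(5)) or
            all(board[i][c] is None for c in range(5)))
```

Checking just one row and one column keeps each mark O(5) instead of re-scanning the whole board.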
Lastly, the score function, i.e. the sum of all the remaining numbers of a given board:
(defun score (board)
  (loop for i below 25 for v = (row-major-aref board i) when v sum v))
For part 2 instead, we are asked to keep on playing until there aren't any boards left.  All we have to do is update PLAY as follows:

- Collect the scores of each winning board, instead of simply returning it
- Stop processing winning boards
(defun play (game)
  (uiop:while-collecting (scores)
    (with-slots (to-draw boards) game
      (dolist (n to-draw)
        (dolist (b boards)
          (when (mark-number b n)
            (scores (* (score b) n))
            (setf boards (remove b boards))))))))
Final plumbing:
(define-solution (2021 04) (game parse-bingo)
  (let ((scores (play game)))
    (values (first scores) (car (last scores)))))

(define-test (2021 04) (82440 20774))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.006 seconds of real time
  0.005100 seconds of total run time (0.004883 user, 0.000217 system)
  83.33% CPU
  15,920,633 processor cycles
  851,856 bytes consed
PS. If you are curious, [this](https://topaz.github.io/paste/#XQAAAQCcBwAAAAAAAAAUGQimgwpCHOTIZ4cAdwSqqsL7YgkUjovizS3Rs1wUzEvkHpHd8o1Y7gEL0qcj1QOhqpL5g1eXcPnMqtKzYd+JHJvk5TCuqeIZwbOUxyOKy+tnAnfUMPqoLtTHNC71/R6Sv4Ls2YYxCXO/8YKt36evcoX5GU0U8VRI0OwWzBYML+rGFdtD2Wgh8+vxKgohgOBuhIjO89Cyj0VFzH9aSXZkIiuK42nn0rL9Oz6eaYbjUjjV7VU1AqKc5EV43BJVefv7+4HYe3lCBRu+BqyO4PcGrUOYVR4npFsgz7gDvTlOgryFzKpD2E0Vlf6A+k9bRwheXOXGZMgS38+zHzWN8biiDw6u9l6gdfonmBkGDdTyTCB4i6+yJyZrVPhM0FBsHMPe/+D5wPudKuZ13Az6U3B+NyZi5i1p14Uob+U9hA15Yprs3LLfyvdac8MfNXf1QKNoSOEOmZ+EjxBkpHuPWDWsgbDDeJZ5Xr/WPiGZvWrJrBj5am+ohpzDckuRSWyCd4CZGQobh8xSeadJNb9kQ5zuqJRDQiJRxy5UTeKWJTONi3R1EXI1wudHfDOAHlxpri78iWY0XWbCU8HxArT8vLP8RM6A+IdlaMWV8V6YdA5USqjI4v5l/UVc3+k1h0UUcikLRf/C8xetTSJTIfUXQaY4cTHsxnEiB7Gt0GfONV+gkYGPpy+k/hzA+v5mIVUNxAR6YI8WoHuELRuJ/1sNo4oHplkOBrWrHcrLjnKlzJH5YlnNvg0P65FtPnFRkY7OsyHmYBf4KxilEtZzhKxJAkxKegEbY6fcTpXWVTsHISzNLh8sv8OikMGNXYdsklMi9p7KRSXjD5kr01EdP0L0/VuDRDZfdvidseczEawqgtz1EOrCpm1T6VLBu+OiblOolwhUocYqZdndLv9j1pU5LTIvuSa/s+ErqCwh6K36jUEP1E/ZAIKE6lxXXo/+f8ce) is the non-cleaned-up of the code that I used to take my 2 stars home (see how I went overboard and used conditions to notify the runner about winning boards).

2021-12-03 (permalink)

Advent of Code: [2021/03](https://adventofcode.com/2021/day/3)

The submarine has been making some odd creaking noises today, and after we ask it to produce a diagnostic report (our input), we are tasked to _decode_ it and see if everything is all right (or not!):
The diagnostic report (your puzzle input) consists of a list of binary numbers which, when decoded properly, can tell you many useful things about the conditions of the submarine. The first parameter to check is the power consumption.
You need to use the binary numbers in the diagnostic report to generate two new binary numbers (called the gamma rate and the epsilon rate). The power consumption can then be found by multiplying the gamma rate by the epsilon rate.
Each bit in the gamma rate can be determined by finding the most common bit in the corresponding position of all numbers in the diagnostic report.
The epsilon rate is calculated in a similar way; rather than use the most common bit, the least common bit from each position is used.
We are not going to do anything crazy with our input; in fact, we are going to keep it as is (i.e. as a list of strings), and jump right into the core of our solution:

- We calculate the gamma rate...as instructed
- We calculate the epsilon rate starting from the gamma rate (i.e. it's just its complement)
- We parse the two strings into numbers, and multiply them
(defun part1 (strings)
  (let* ((gamma (gamma-rate strings))
         (epsilon (map 'string #'ch-complement gamma)))
    (* (parse-binary gamma) (parse-binary epsilon))))

(defun ch-complement (ch) (if (eq ch #\1) #\0 #\1))

(defun parse-binary (s) (parse-integer s :radix 2))
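To sanity-check this (assuming the functions above are loaded), we can feed PART1 the example report from the puzzle text, whose answer the puzzle states explicitly:

```lisp
;; Example report from the puzzle text: gamma = 10110 (22),
;; epsilon = 01001 (9), power consumption = 22 * 9 = 198
(part1 '("00100" "11110" "10110" "10111" "10101" "01111"
         "00111" "11100" "10000" "11001" "00010" "01010"))
;; => 198
```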
Note: we could have returned a number and not a string from GAMMA-RATE; but then, to calculate the epsilon rate, we would have had to mess with bits "a bit more" to account for any leading 0 (i.e. 0001, complemented, should be 1110, and not simply 0).

Inside GAMMA-RATE we collect the most common char at each position, and then coerce the result to a string:
(defun gamma-rate (strings &aux (n (length (first strings))))
  (coerce
    (uiop:while-collecting (bit!)
      (dotimes (i n)
        (bit! (most-common-ch-at strings i))))
    'string))

(defun most-common-ch-at (strings i)
  (loop for s in strings for ch = (aref s i)
        count (eq ch #\1) into ones
        finally (return (if (>= ones (/ (length strings) 2)) #\1 #\0))))
Things are going to get messy for part 2:
Next, you should verify the life support rating, which can be determined by multiplying the oxygen generator rating by the CO2 scrubber rating.
Both the oxygen generator rating and the CO2 scrubber rating are values that can be found in your diagnostic report - finding them is the tricky part. Both values are located using a similar process that involves filtering out values until only one remains. Before searching for either rating value, start with the full list of binary numbers from your diagnostic report and consider just the first bit of those numbers. Then:
- Keep only numbers selected by the bit criteria for the type of rating value for which you are searching. Discard numbers which do not match the bit criteria.
- If you only have one number left, stop; this is the rating value for which you are searching.
- Otherwise, repeat the process, considering the next bit to the right.
The bit criteria depends on which type of rating value you want to find:
- To find oxygen generator rating, determine the most common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 1 in the position being considered.
- To find CO2 scrubber rating, determine the least common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 0 in the position being considered.
Thaaaaaaaaaat is a lot, with lots of details too; hopefully we are going to figure this out -- well, if you are reading this, it means I already "figured something out" ;-)

Let's start from the top:
(defun part2 (strings)
  (* (oxygen-generator-rating strings) (co2-scrubber-rating strings)))
Now, to calculate the oxygen generator rating, let's proceed as follows:

- Until there is only one string left
- Figure out what the most common char is -- at a given position
- Remove all the strings having a different char at that position
(defun oxygen-generator-rating (strings &optional (n (length (first strings))))
  (loop until (= (length strings) 1) for i below n
        for ch = (most-common-ch-at strings i) do
        (setf strings (remove ch strings :key (partial-1 #'aref _ i) :test-not 'eq)))
  (parse-binary (first strings)))
To calculate the CO2 scrubber rating instead, we just have to consider least common bits; this is done by wrapping MOST-COMMON-CH-AT inside a call to CH-COMPLEMENT (the rest is the same):
(defun co2-scrubber-rating (strings &optional (n (length (first strings))))
  (loop until (= (length strings) 1) for i below n
        for ch = (ch-complement (most-common-ch-at strings i)) do
        (setf strings (remove ch strings :key (partial-1 #'aref _ i) :test-not 'eq)))
  (parse-binary (first strings)))
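Running PART2 against the same example report should yield 230, the answer given in the puzzle text (oxygen generator rating `10111`, i.e. 23, times CO2 scrubber rating `01010`, i.e. 10):

```lisp
(part2 '("00100" "11110" "10110" "10111" "10101" "01111"
         "00111" "11100" "10000" "11001" "00010" "01010"))
;; => 230, per the puzzle text (23 * 10)
```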
Final plumbing:
(define-solution (2021 03) (data)
  (values (part1 data) (part2 data)))

(define-test (2021 03) (3985686 2555739))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.001 seconds of real time
  0.001470 seconds of total run time (0.000941 user, 0.000529 system)
  100.00% CPU
  3,440,150 processor cycles
  131,024 bytes consed
PS. My original solution was not as clean as this one: it was super messy, I coded it fully in the REPL, i.e. there is not a single function in my solution (I am not proud of this, I am just stating a fact), and even though messy, it still took me forever to come up with it; lastly, cherry on top, to actually get the answer for part 1, I manually converted an ARRAY of Ts and NILs into 1s and 0s!  Anyways, if you really, really want to take a look...[have at it](https://topaz.github.io/paste/#XQAAAQBxBwAAAAAAAAAUHMineAofMgXCOJWpWvfWz/ftuK+H3bIoU+EY9K2IiV45Fs/OqcTa6MWLBv78pFRraVXpcVqiedY9084OCmjcb/QJchZ9UEBYH48m0joS9Aw/nxLo/p6sl85X0fHkzN7M1g4UHvmCT6m5+RQoVi5WzXyVLb8BMgsVfyznajX7BnSwi1DY1pm/mdUrqcbzLt9S1/EJXnn20zKLNFUc8D7Z2GvFp03xL3z5WfIH+9bsPqUQr/7K2MDl21y6zGf0PPLzklkVeuxlkw7W2uXWL4clSS1jDWeSF1v3eiZ700d6El2Z3GzO3VmHKX24dPBwOrqJABV1tUILx7gUs/vmlmOHsSSBLtDMj1bmBC8bR12iX9oiz72csFMK/a5koBQ/mm1HpUShSuhVnevc1bGd0+HJfTqsV5LBRVVguP8fJcC4J+hiPcmv29AP8lM/4KUVH6UobmQLEyNR479SlukyI7cKuFPuO91W6TqvYHCKp810/eiS+28dL6RaktTLtEpYJRjLDSUMt5SyuYqAVGCKPHryDzeqHLvcwsngsZ7wKRIA3X/ZB9iJej0JlNMCYHxLV/FL0hfVWJoAJErxH+XQqATKi5ibPlo+ppSkj+kbNQejQq65jKdk2nkFiXMo5FjY4G/7SJj49Z9xywnPUqSy/NVX3OMUiJ43r8JSF/ypZvk=)

2021-12-02 (permalink)

Advent of Code: [2021/02](https://adventofcode.com/2021/day/2)

It seems like the submarine can take a series of commands like `forward 1`, `down 2`, or `up 3`:
- `forward X` increases the horizontal position by X units.
- `down X` increases the depth by X units.
- `up X` decreases the depth by X units.
Note that since you're on a submarine, down and up affect your depth, and so they have the opposite result of what you might expect.
Calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth?
OK, let's start by parsing our input, i.e. the list of instructions, into `<direction,unit>` pairs:

- CL-PPCRE:REGISTER-GROUPS-BIND to parse our instructions using regexps (we could have also used SPLIT-SEQUENCE)
- AS-KEYWORD to convert a string into a _keyword_, i.e. `:forward`, `:up`, and `:down`
(defun parse-instructions (data) (mapcar #'parse-instruction data))

(defun parse-instruction (string)
  (cl-ppcre:register-groups-bind ((#'as-keyword dir) (#'parse-integer delta))
      ("(\\w+) (\\d+)" string)
    (cons dir delta)))
(defun dir (i) (car i))
(defun delta (i) (cdr i))
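For example, at the REPL (assuming CL-PPCRE and the AS-KEYWORD utility are loaded):

```lisp
(parse-instruction "forward 5")    ; => (:FORWARD . 5)
(dir (parse-instruction "up 3"))   ; => :UP
(delta (parse-instruction "up 3")) ; => 3
```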
Now, to get the answer for part one:

- Iterate over the instructions
- If `:forward`, we move our horizontal position
- If `:up`, we reduce our depth
- If `:down`, we increase our depth
- Last, we return the product of our horizontal position and depth
(defun part1 (instructions &aux (horiz 0) (depth 0))
  (dolist (i instructions (* horiz depth))
    (ecase (dir i)
      (:forward (incf horiz (delta i)))
      (:up (decf depth (delta i)))
      (:down (incf depth (delta i))))))
For part 2 instead, the `:up` and `:down` instructions seem to assume a different meaning:
In addition to horizontal position and depth, you'll also need to track a third value, aim, which also starts at 0. The commands also mean something entirely different than you first thought:
- `down X` increases your aim by X units.
- `up X` decreases your aim by X units.
- `forward X` does two things:
- It increases your horizontal position by X units.
- It increases your depth by your aim multiplied by X.
Again note that since you're on a submarine, down and up do the opposite of what you might expect: "down" means aiming in the positive direction.
Using this new interpretation of the commands, calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth?
Nothing fancy, let's just implement this:

- Iterate over the instructions
- If `:forward`, we move our horizontal position and descend / ascend based on the current aim
- If `:up`, we increase our aim
- If `:down`, we decrease our aim
- Last, we return the product of our horizontal position and depth
(defun part2 (instructions &aux (horiz 0) (depth 0) (aim 0))
  (dolist (i instructions (* horiz depth))
    (ecase (dir i)
      (:forward (incf horiz (delta i)) (decf depth (* (delta i) aim)))
      (:up (incf aim (delta i)))
      (:down (decf aim (delta i))))))
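Trying both parts against the example course from the puzzle text (whose stated answers are 150 and 900):

```lisp
(let ((instructions (parse-instructions
                     '("forward 5" "down 5" "forward 8"
                       "up 3" "down 8" "forward 2"))))
  (list (part1 instructions)    ; => 150 (horizontal 15, depth 10)
        (part2 instructions)))  ; => 900 (horizontal 15, depth 60)
```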
Final plumbing:
(define-solution (2021 02) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

(define-test (2021 02) (1648020 1759818555))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.002 seconds of real time
  0.002600 seconds of total run time (0.001565 user, 0.001035 system)
  150.00% CPU
  6,082,778 processor cycles
  327,536 bytes consed
PS. My [original solution](https://topaz.github.io/paste/#XQAAAQD2AgAAAAAAAAAUGQimgx+p6PZhRh7uIO5BVe7pcMkiG6eOV22j2MYKuFs91c+ybCSTgFinLxYMDBog9+B+g/uA1iuHBXT5kPl5FbfHsjTt/UH6LeDhwtxS++LtE/cfL47hqbZilPQVUp7nWL834NkL1xo124ObhgTHG4Vaq0sDvXEWYVT61bdI9njY95YkUyYD8jeF6mdT+45tB+epQP+OgkkUyyWgSGXG3mXlBPAuGtfQ2d6F6X56fu266chYGK/yaubPY1GuajRiidCeYijfEAPoeMsgyIZcp6U4hgNM0D1M6vPzyThdMrcJtAwVid8BqQUE8ebzylpWag4uZPUQgHKdoZTgteopbArgU34gzK34OzvfXyvwL063kO+uz3rYxgePqqnFxROOQdSe0tseMzpTFeHv+lLiE+efMBoId4Ao8BsmGov79aT/8r1yLg==) parsed instructions into COMPLEX numbers (e.g. `down 4` into `#C(0 -4)`); this way, for part 1 all I had to do was summing all the deltas together; for part 2 instead, I had to check if the delta was horizontal or vertical, and adjust the aim accordingly.  I figured a more readable solution would be better, hence the rewrite.

2021-12-01 (permalink)

Advent of Code: [2021/01](https://adventofcode.com/2021/day/1)

- An elf tripped, and accidentally sent the sleigh keys flying into the ocean
- All of a sudden we found ourselves inside a submarine which has an antenna that hopefully we can use to track the keys
- The submarine performs a sonar sweep of the nearby sea floor (our input)
For example, suppose you had the following report:
This report indicates that, scanning outward from the submarine, the sonar sweep found depths of 199, 200, 208, 210, and so on.
We already have a utility function to parse a list of integers, PARSE-INTEGERS, so there isn't much we have to do here, except for calling it:
(parse-integers (uiop:read-file-lines "src/2021/day01.txt"))
Now, here is what we are tasked to do for part 1:
The first order of business is to figure out how quickly the depth increases, just so you know what you're dealing with - you never know if the keys will get carried into deeper water by an ocean current or a fish or something.
To do this, count **the number of times a depth measurement increases** from the previous measurement. (There is no measurement before the first measurement.) [...]
Nothing crazy; we simply do as told:
(defun part1 (numbers)
  (loop for (a b) on numbers
        when b count (< a b)))
For part 2 instead:
Instead, consider sums of a three-measurement sliding window. [...]
Here we can re-use PART1 completely, except that we need to pass to it a different list of numbers, each representing the sum of a three-measurement sliding window; again, LOOP/ON makes for a very readable solution:
(defun part2 (numbers)
  (part1 (loop for (a b c) on numbers
               when c collect (+ a b c))))
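Against the example report from the puzzle text, PART1 should count 7 increases, and PART2 should count 5 (both values are stated in the puzzle):

```lisp
(defvar *example* '(199 200 208 210 200 207 240 269 260 263))

(part1 *example*) ; => 7
(part2 *example*) ; => 5
```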
Final plumbing:
(define-solution (2021 01) (numbers parse-integers)
  (values (part1 numbers) (part2 numbers)))

(define-test (2021 01) (1557 1608))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.001 seconds of real time
  0.001252 seconds of total run time (0.000647 user, 0.000605 system)
  100.00% CPU
  2,873,667 processor cycles
  163,808 bytes consed

2021-11-26 (permalink)

Vlime: Make swank_repl a prompt buffer

(This was posted on [vlime/pull/55](https://github.com/vlime/vlime/pull/55))

Alright, since these changes got merged into @phmarek's fork, and since _that_ branch is the one I am using these days, I would like to provide some feedback about this; also, please note that I am running this with Vim, so expect some of the following feedback not to be relevant to Neovim users.

I have no numbers to back this up, but I noticed that forms generating some non-trivial output are more likely to cause Vim to freeze if the REPL is made a prompt buffer than when it's not; don't get me wrong, it's a bit sluggish anyway, with or without these changes, but more so with the prompt buffer than without.

I have a [project](https://github.com/iamFIREcracker/adventofcode) with quite a few packages, and every time I touch the main .asd file and QL:QUICKLOAD it, Vim becomes very sluggish.

**Stealing focus**
The currently focused window seems to be losing focus every time a new line of output is added to the prompt buffer; this causes all sorts of issues, especially if you were in insert mode when the focus got stolen.

I was messing around with a Web application of mine, and had the client fire a HEAD request every second (request which got logged by the Web server on the standard output, i.e. on the REPL buffer); that alone seemed to make editing in a different buffer impossible (e.g. missing characters, unable to move around the buffer), to the point that I had to disable the polling on the client!

I am not exactly sure what's going on, and I would not be surprised if this wasn't somehow related to the performance issue above; however, the moment I disabled this PR's changes, it all became responsive again.

**Current package**
While Vim itself might be the cause of the two problems mentioned above, this one is most definitely a bug with the current implementation.  Previously, forms typed into `vlime_input` buffer would be evaluated inside the package specified in the closest preceding IN-PACKAGE form; for example, imagine you had the following buffer open:
1: (in-package :cl-user)
5: (in-package :foo)
Opening up an input buffer (i.e. `<localleader>si`) from line 2, and typing `*package*` into it would cause `#<PACKAGE "COMMON-LISP-USER">` to be logged to the REPL buffer; opening up the input buffer from line number 6 instead, and typing in the same would output `#<PACKAGE "FOO">`.  Well, with this pull request, it would always output `#<PACKAGE "COMMON-LISP-USER">`.

**Only works while in insert mode (minor, opinionated)**
I am not sure whether the fault lies with Vim or with the plugin's implementation, but evaluation from the prompt buffer only seems to work while in insert mode, i.e. pressing enter while in insert mode seems to be the only way to trigger the evaluation to happen.  This unfortunately clashes with some of my settings / how I am used to working; I can work around this of course, re-wire my muscle memory, but I wanted to flag this anyway.

I use [delimitMate](https://github.com/Raimondi/delimitMate) to keep my parens balanced (yes, it's a poor man's solution, but gets the job done most of the times); this means that if I want to type in the form:
          1         2
(format t "Hello world")
By the time I type the closing quote at position 22, the closing parenthesis would be there already, so I would just leave insert mode and expect the form to be evaluated; however, as soon as I leave insert mode, a new line is added to the prompt buffer and then I am forced to type it all again, the closing parenthesis included, and then press `Enter`.

**Empty prompt (opinionated)**
Maybe it's just me, but the first time I entered insert mode while in the prompt buffer I started wondering what that initial space was for.  I thought it was a bug, having something to do with Vlime wrongly calculating the indentation level; but then I looked at the code and realized that that was how we told the prompt buffer to look.

I think we should either set this to an empty string, or maybe the classic `> ` or `* `, as that would make it super clear what's going on.

**Next steps**
I am OK with tinkering with this a bit more, but here is what I think we should do:

- Understand if the performance issue mentioned above, as well as the focus stealing one, are getting experienced by everybody, i.e. Vim and Neovim users, or if it's just one man's problem (me...sigh); this will help us understand if switching to a prompt buffer for the REPL is a sound idea or not
- If it was, maybe put everything behind a config flag, so people can enable / disable it, at least until it's a bit more stable (not everybody wants to deal with this)
- Work around some of the usability issues mentioned above, like setting the current package, or changing the prompt string, etc.

Let me know what you guys think about this.


2021-11-25 (permalink)

Building a _local_ system with :TRIVIAL-BUILD

(This was posted on [ceramic/trivial-build/issues/2](https://github.com/ceramic/trivial-build/issues/2))

Currently, there are at least two different ways of dealing with locally defined systems, i.e. systems defined in the working directory:

1. Create a symlink inside ~/quicklisp/local-projects (or ~/common-lisp), pointing to the current directory
2. Add the current directory to ASDF:\*CENTRAL-REGISTRY\* (`(pushnew '*default-pathname-defaults* asdf:*central-registry*)`)

Now, with the first option, everything works as expected: you call TRIVIAL-BUILD:BUILD, it spawns a new CL image, it runs ASDF:LOAD-SYSTEM in it, and since the current directory is inside ~/quicklisp/local-projects (or ~/common-lisp), your system will be loaded just fine.

Unfortunately the current implementation does not seem to play nicely with the second option, i.e. updating ASDF:\*CENTRAL-REGISTRY\*; that's because the newly spawned CL image, the one in which ASDF:LOAD-SYSTEM is called, has ASDF:\*CENTRAL-REGISTRY\* in its default state, i.e. without our customizations.

Is there any interest in supporting this use case, or instead you prefer to push people to use option 1 instead?  Because if there was, here is how I _patched_ LOAD-AND-BUILD-CODE (note the line right above the ASDF:LOAD-SYSTEM call):

- I serialize the current value of ASDF:*CENTRAL-REGISTRY*, in reverse order, so that, when the entries are later re-added with PUSH, the original order is preserved; note: this happens on the host image, i.e. the one where :CERAMIC is run
- For each entry, I push it back into ASDF:*CENTRAL-REGISTRY* -- this instead is run on the final app image, the newly spawned one
(defun load-and-build-code (system-name entry-point binary-pathname)
  "Return a list of code strings to eval."
  (list
   "(setf *debugger-hook* #'(lambda (c h) (declare (ignore h)) (uiop:print-condition-backtrace c) (uiop:quit -1)))"
   (format nil "(dolist (entry '~W) (push entry asdf:*central-registry*))" (reverse asdf:*central-registry*))
   (format nil "(asdf:load-system :~A)" system-name)
   (format nil "(setf uiop:*image-entry-point* #'(lambda () ~A))" entry-point)
   (format nil "(uiop:dump-image ~S :executable t
  #+sb-core-compression :compression #+sb-core-compression t)"
           binary-pathname)))
Let me know if you see any problems with this and most importantly if you find this somewhat interesting, in which case, I would not mind creating a pull request for this.


2021-11-23 (permalink)

Advent of Code: [2018/10](https://adventofcode.com/2018/day/10)

In today's problem, we are given a sky map: a list of stars, characterized by their position and velocity:
position=< 9,  1> velocity=< 0,  2>
position=< 7,  0> velocity=<-1,  0>
position=< 3, -2> velocity=<-1,  1>
We are also told, that after some time, stars would align to form a message; that message will be the answer to our part 1.

Let's start by parsing the sky map; we are going to define a custom structure, STAR, having one slot for the star position and one for its velocity; we are also going to store all these stars into a simple list:
(defstruct (star (:conc-name nil)) pos vel)

(defun parse-sky (data)
  (mapcar #'parse-star data))

(defun parse-star (string)
  (destructuring-bind (x y vx vy)
      (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "-?\\d+" string))
    (make-star :pos (list x y) :vel (list vx vy))))
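For example, feeding PARSE-STAR the first line of the sample sky map (with `:conc-name nil`, the struct prints with its slot names):

```lisp
(parse-star "position=< 9,  1> velocity=< 0,  2>")
;; => #S(STAR :POS (9 1) :VEL (0 2))
```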
Now, how can we figure out when the stars are aligned? My idea is to take a look at the bounding box containing all the stars: as the velocity of the stars does not change, I would expect the size of this box to become smaller and smaller, until it starts getting bigger again.  Well, the state of the sky when the bounding box is the smallest, is probably the one in which all the stars are aligned; so dumping it on the standard output would hopefully show us the message!

FIND-MESSAGE accepts the sky map as input, and keeps on moving stars (see SKY-NEXT) minimizing the bounding box (I am using the area of the bounding box, SKY-AREA, as a proxy for its size); lastly, SKY-MESSAGE is responsible for rendering the final sky map, printing a `#` wherever a star is:
(defun find-message (input)
  (loop :for sky-prev = input :then sky
        :for sky = input :then (sky-next sky)
        :when (> (sky-area sky) (sky-area sky-prev))
        :return (sky-message sky-prev)))

(defun sky-next (sky) (mapcar #'star-move sky))

(defun star-move (star)
  (with-slots (pos vel) star
    (make-star :pos (mapcar #'+ pos vel) :vel vel)))

(defun sky-area (sky)
  (destructuring-bind ((x-min x-max) (y-min y-max)) (sky-box sky)
    (* (- x-max x-min) (- y-max y-min))))

(defun sky-message (sky)
  (destructuring-bind ((x-min x-max) (y-min y-max)) (sky-box sky)
    (let ((output (loop :repeat (1+ (- y-max y-min))
                        :collect (make-string (1+ (- x-max x-min)) :initial-element #\Space))))
      (loop :for star :in sky :for (x y) = (pos star)
            :for row = (nth (- y y-min) output)
            :do (setf (aref row (- x x-min)) #\#))
      (with-output-to-string (s)
        (format s "~%~{~a~^~%~}" output)))))

(defun sky-box (sky)
  (loop :for star :in sky :for (x y) = (pos star)
        :minimize x :into x-min
        :maximize x :into x-max
        :minimize y :into y-min
        :maximize y :into y-max
        :finally (return (list (list x-min x-max) (list y-min y-max)))))
Let's give this a go:
(setq input (parse-sky (uiop:read-file-lines "src/2018/day10.txt")))
> (find-message input)
#####   #    #  #####      ###   ####   #       #####   ######
#    #  #    #  #    #      #   #    #  #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#####   ######  #####       #   #       #       #####   #####
#    #  #    #  #           #   #  ###  #       #       #
#    #  #    #  #           #   #    #  #       #       #
#    #  #    #  #       #   #   #    #  #       #       #
#    #  #    #  #       #   #   #   ##  #       #       #
#####   #    #  #        ###     ### #  ######  #       ######"
Good!  Now part 2:
Impressed by your sub-hour communication capabilities, the Elves are curious: exactly how many seconds would they have needed to wait for that message to appear?
We should be able to answer this quite easily, by slightly tweaking FIND-MESSAGE:

- Keep track of how much time has elapsed
- And return it, as second value (the call to 1- is to offset the fact that the first iteration is a dummy one, given that `sky-prev` and `sky` are the same)
(defun find-message (input)
  (loop :for time :from 0
        :for sky-prev = input :then sky
        :for sky = input :then (sky-next sky)
        :when (> (sky-area sky) (sky-area sky-prev))
        :return (values (sky-message sky-prev) (1- time))))
Final plumbing:
(define-solution (2018 10) (sky parse-sky)
  (find-message sky))

(define-test (2018 10) ("
#####   #    #  #####      ###   ####   #       #####   ######
#    #  #    #  #    #      #   #    #  #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#    #  #    #  #    #      #   #       #       #    #  #
#####   ######  #####       #   #       #       #####   #####
#    #  #    #  #           #   #  ###  #       #       #
#    #  #    #  #           #   #    #  #       #       #
#    #  #    #  #       #   #   #    #  #       #       #
#    #  #    #  #       #   #   #   ##  #       #       #
#####   #    #  #        ###     ### #  ######  #       ######" 10831))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.440 seconds of real time
  0.416292 seconds of total run time (0.406274 user, 0.010018 system)
  [ Run times consist of 0.029 seconds GC time, and 0.388 seconds non-GC time. ]
  94.55% CPU
  1,012,629,537 processor cycles
  309,925,952 bytes consed

Advent of Code: [2018/7](https://adventofcode.com/2018/day/7)

Given a list of steps and requirements about which steps are blocked by which other step or steps, we are tasked to find the order in which the steps are going to be completed.

Step C must be finished before step A can begin.
Step C must be finished before step F can begin.
Step A must be finished before step B can begin.
Step A must be finished before step D can begin.
Step B must be finished before step E can begin.
Step D must be finished before step E can begin.
Step F must be finished before step E can begin.
Visually, these requirements look like this:
  -->A--->B--
 /    \      \
C      -->D----->E
 \           /
  ---->F-----
And the order in which the steps are going to be completed is: `CABDFE` (in the event of multiple steps being unblocked, we should select based on the lexicographical order).

We are going to parse our input into an association list, mapping from steps to their dependencies:
(defun parse-instructions (data)
  (let (rez)
    (loop :for string :in data
          :for (step . dep) = (parse-instruction string)
          :for step-entry = (assoc step rez)
          :for dep-entry = (assoc dep rez)
          :if step-entry :do (push dep (cdr step-entry))
          :else :do (push (cons step (list dep)) rez)
          :unless dep-entry :do (push (cons dep nil) rez))
    rez))

(defun parse-instruction (string)
  (cl-ppcre:register-groups-bind ((#'parse-char dep step))
      ("Step (\\w) must be finished before step (\\w) can begin." string)
    (cons step dep)))
Feeding the example above into PARSE-INSTRUCTIONS, should result in the following alist:
((#\E #\F #\D #\B)
 (#\D #\A)
 (#\B #\A)
 (#\F #\C)
 (#\A #\C))
Now, to find the order in which the different steps are completed, we are going to proceed as follows:

- We sort the instructions, placing entries with the fewest number of dependencies (ideally, 0) first, or lexicographically in case of ties (see SORT-INSTRUCTIONS, and INSTRUCTION<)
- We pop the first item -- this is the next step to complete
- We unblock any other instruction that might be depending on the step we just completed (see REMOVE-DEPENDENCY)
- We do this until we run out of instructions
(defun part1 (instructions &aux completed)
  (loop
    (if (null instructions)
      (return (format nil "~{~C~}" (reverse completed)))
      (destructuring-bind ((step) . rest) (sort-instructions instructions)
        (setf completed (cons step completed)
              instructions (remove-dependency rest step))))))

(defun sort-instructions (instructions)
  (sort (copy-seq instructions) #'instruction<))

(defun instruction< (i1 i2)
  (destructuring-bind (step1 . deps1) i1
    (destructuring-bind (step2 . deps2) i2
      (or (< (length deps1) (length deps2))
          (and (= (length deps1) (length deps2))
               (char< step1 step2))))))

(defun remove-dependency (instructions dependency)
  (loop :for (step . deps) :in instructions
        :collect (cons step (remove dependency deps))))
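Fed the example instructions, PART1 should produce the order given in the puzzle text:

```lisp
(part1 (parse-instructions
        '("Step C must be finished before step A can begin."
          "Step C must be finished before step F can begin."
          "Step A must be finished before step B can begin."
          "Step A must be finished before step D can begin."
          "Step B must be finished before step E can begin."
          "Step D must be finished before step E can begin."
          "Step F must be finished before step E can begin.")))
;; => "CABDFE", per the puzzle text
```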
Things are going to be a tiny bit more complicated for part 2:

- Each step takes a certain amount of time to complete: 60 seconds, plus a certain amount, depending on the step name (1 second for `A`, 2 seconds for `B`, ...)
- We are not alone in executing these instructions -- there are 4 elves helping us out, which means multiple steps can run in parallel

We are going to solve this by simulation; to do this, we are going to define a new structure, WORKERS, with the following slots:

- `'BUSY-FOR`, an array as big as the number of workers we have, to keep track of how much time each worker needs to complete the assigned step
- `'BUSY-WITH`, another array, this time keeping track of the step each worker is currently busy with
(defstruct (workers (:conc-name nil)
                    (:constructor make-workers%))
  busy-for
  busy-with)

(defun make-workers (n)
  (make-workers% :busy-for (make-array n :initial-element 0)
                 :busy-with (make-array n :initial-element nil)))
With this, the skeleton of our solutions is pretty simple: we initialize our WORKERS state, and until there are workers..._working_...we do another tick of our simulation; when done, we simply return the elapsed time:
(defparameter *workers-count* 5)

(defun part2 (instructions)
  (let ((workers (make-workers *workers-count*))
        (time 0))
    (loop
      (setf instructions (workers-tick workers instructions))
      (if (every #'null (busy-with workers))
        (return time)
        (incf time)))))
Let's now take a look at the meat of this, WORKERS-TICK:

- We let all the workers progress on their assigned step, and keep track of the completed ones (as well as the workers that finished them)
- We keep the workers busy until there are workers available and unblocked instructions
- Finally, we return the updated list of instructions -- i.e. instructions we could not assign because either we run out of workers, or because active workers were still working on some dependencies
(defun workers-tick (workers instructions)
  (with-slots (busy-for busy-with) workers
    (flet ((try-release-workers ()
             (loop :for i :from 0
                   :for left :across busy-for
                   :for step :across busy-with
                   :if (> left 1) :do (decf (aref busy-for i))
                   :else :collect i :and :do
                   (setf instructions (remove-dependency instructions step)
                         (aref busy-for i) 0
                         (aref busy-with i) nil)))
           (try-assign-work (available)
             (loop :while (and available instructions) :do
                   (let ((i (first available)))
                     (setf instructions (sort-instructions instructions))
                     (destructuring-bind ((step . deps) . rest) instructions
                       (if (> (length deps) 0)
                         (return)
                         (setf (aref busy-for i) (step-time step)
                               (aref busy-with i) step
                               available (rest available)
                               instructions rest)))))))
      (let ((available (try-release-workers)))
        (try-assign-work available)
        instructions))))

(defparameter *step-duration* 60)

(defun step-time (step)
  (+ *step-duration* (1+ (- (char-code step) (char-code #\A)))))
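For example, with *STEP-DURATION* at its default of 60:

```lisp
(step-time #\A) ; => 61 (60 + 1)
(step-time #\Z) ; => 86 (60 + 26)
```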
Final plumbing:
(define-solution (2018 7) (instructions parse-instructions)
  (values (part1 instructions) (part2 instructions)))

(define-test (2018 7) ("GRTAHKLQVYWXMUBCZPIJFEDNSO" 1115))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.005 seconds of real time
  0.005701 seconds of total run time (0.004362 user, 0.001339 system)
  120.00% CPU
  13,260,235 processor cycles
  2,686,944 bytes consed

2021-11-22 (permalink)

Advent of Code: [2018/12](https://adventofcode.com/2018/day/12)

Today we are given a bunch of pots, each containing (or not) plants; and some notes, describing how these pots change over time, based on the state of the adjacent pots:
The pots are numbered, with 0 in front of you. To the left, the pots are numbered -1, -2, -3, and so on; to the right, 1, 2, 3.... Your puzzle input contains a list of pots from 0 to the right and whether they do (#) or do not (.) currently contain a plant, the initial state. (No other pots currently contain plants.) For example, an initial state of #..##.... indicates that pots 0, 3, and 4 currently contain plants.
Your puzzle input also contains some notes you find on a nearby table: someone has been trying to figure out how these plants spread to nearby pots. Based on the notes, for each generation of plants, a given pot has or does not have a plant based on whether that pot (and the two pots on either side of it) had a plant in the last generation. These are written as LLCRR => N, where L are pots to the left, C is the current pot being considered, R are the pots to the right, and N is whether the current pot will have a plant in the next generation. For example:
- A note like ..#.. => . means that a pot that contains a plant but with no plants within two pots of it will not have a plant in it during the next generation.
- A note like ##.## => . means that an empty pot with two plants on each side of it will remain empty in the next generation.
- A note like .##.# => # means that a pot has a plant in a given generation if, in the previous generation, there were plants in that pot, the one immediately to the left, and the one two pots to the right, but not in the ones immediately to the right and two to the left.
Here is what our input is going to look like:
initial state: #..#.#..##......###...###

...## => #
..#.. => #
.#... => #
.#.#. => #
.#.## => #
.##.. => #
.#### => #
#.#.# => #
#.### => #
##.#. => #
##.## => #
###.. => #
###.# => #
####. => #
Anyway, as usual, let's get this input parsed. We are going to parse it into a list of 2 elements: the first stores the initial state of the pots; the second stores all the notes describing how pots change over time.  I am going to use a HASH-SET for both the initial state of the pots and the set of rules; for the initial state, I am simply going to keep track of the positions of the pots that contain a plant; for the list of notes instead, I am only going to store the ones causing a plant to grow, or to stay unchanged (i.e. I am going to skip all the notes with a `.` on the right-hand side of the `=>`...more on this later):
(defun parse-input (data)
  (list (parse-pots (first data)) (parse-notes (cddr data))))

(defun parse-pots (string)
  (let ((pots (make-hset nil)))
    (loop :for i :from 0
          :for ch :across (subseq string 15)
          :when (eq ch #\#) :do (hset-add i pots))
    pots))

(defun parse-notes (data)
  (let ((notes (make-hset nil :test 'equal)))
    (loop :for string :in data
          :for from = (subseq string 0 5) :for to = (char string 9)
          :when (eq to #\#) :do (hset-add from notes))
    notes))

(defun pots (input) (first input))
(defun notes (input) (second input))
Now, for part 1, we are asked to evolve our array of pots over time, and see how things look after 20 generations:
After one generation, only seven plants remain. The one in pot 0 matched the rule looking for ..#.., the one in pot 4 matched the rule looking for .#.#., pot 9 matched .##.., and so on.
In this example, after 20 generations, the pots shown as # contain plants, the furthest left of which is pot -2, and the furthest right of which is pot 34. Adding up all the numbers of plant-containing pots after the 20th generation produces 325.
After 20 generations, what is the sum of the numbers of all pots which contain a plant?
As with most Game of Life problems, we are going to _evolve_ our state a fixed number of times (i.e. `iterations`), and then inspect the final state to come up with the challenge's answer:

- We always start anew inside POTS-NEXT, as if all the pots were empty; this is why we don't care about the notes resulting in the central pot becoming empty, but only about the ones resulting in a pot with a plant in it
- As we keep track of plants using a HASH-SET, and not an array, we need to find the min / max positions first, and then iterate for all the values in between
- The `(- ... 4)` and `(+ ... 4)` is to make sure we catch changes happening on the _edge_
- SURROUNDING-POTS returns the state of the pots around a certain given position (this mostly makes up for the fact that we are using a HASH-SET instead of an array)
- SUM-PLANT-POSITIONS simply sums the positions of the pots with a plant (i.e. the answer to our problem)
(defun part1 (input iterations)
  (destructuring-bind (pots notes) input
    (dotimes (i iterations)
      (setf pots (pots-next pots notes)))
    (sum-plant-positions pots)))

(defun pots-next (pots notes)
  (let ((next (make-hset nil))
        (min (find-min (hset-values pots)))
        (max (find-max (hset-values pots))))
    (loop :for i :from (- min 4) :to (+ max 4)
          :for key = (surrounding-pots pots i)
          :when (hset-contains-p key notes) :do (hset-add i next))
    next))

(defun surrounding-pots (pots i)
  (let ((string (make-string 5 :initial-element #\.)))
    (loop :for j :from (- i 2) :to (+ i 2)
          :for k :from 0
          :when (hset-contains-p j pots)
          :do (setf (char string k) #\#))
    string))

(defun sum-plant-positions (pots)
  (loop :for i :being :the :hash-keys :of pots :sum i))
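As a quick sanity check on the sample state (a hypothetical REPL session, assuming PARSE-POTS returns the set it builds), the pots with a plant sit at positions 0, 3, 5, 8, 9, 16, 17, 18, 22, 23, and 24, which sum to 145:

```lisp
> (sum-plant-positions
    (parse-pots "initial state: #..#.#..##......###...###"))
145
```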
Let's now take a look at part 2:
After fifty billion (50000000000) generations, what is the sum of the numbers of all pots which contain a plant?
What?!  There has to be a loop involved...

I fed this to my FLOYD loop detector utility function, but unfortunately that kept on spinning forever:
(let ((input (parse-input (uiop:read-file-lines "src/2018/day12.txt"))))
  (flet ((next (pots)
           (pots-next pots (notes input))))
    (floyd #'next (pots input))
    (error "Never gonna give you up!")))
Well, as it turns out, after a few generations the array of pots assume a certain configuration; a configuration which keeps on sliding to the right hand side, over and over again:
120: #.##...#.##.##.##.##.##.##...
121: .#.##...#.##.##.##.##.##.##..
122: ..#.##...#.##.##.##.##.##.##.
123: ...#.##...#.##.##.##.##.##.##
So how can we detect that?  Well, we could change SUM-PLANT-POSITIONS to count _relative_ to the minimum position of a pot with a plant, instead of 0; and then tell FLOYD to use that to detect a loop:
(defun sum-plant-positions (pots &optional (center 0))
  (loop :for i :being :the :hash-keys :of pots :sum (- i center)))

> (let ((input (parse-input (uiop:read-file-lines "src/2018/day12.txt"))))
    (flet ((next (pots)
             (pots-next pots (notes input)))
           (key (pots)
             (let ((min (find-min (hset-values pots))))
               (loop :for i :being :the :hash-keys :of pots :sum (- i min)))))
        (floyd #'next (pots input) :key #'key)
        (list 'LOOP 'DETECTED)))
Once we have found our loop, all that's left to do is figure out the number of cycles still to be done and how much SUM-PLANT-POSITIONS increases at each cycle, and then do some math to put together our solution:
(defun part2 (input iterations)
  (flet ((next (pots)
           (pots-next pots (notes input)))
         (key (pots)
           (let ((min (find-min (hset-values pots))))
             (sum-plant-positions pots min))))
    (destructuring-bind (cycles-at cycle-size pots-new)
        (floyd #'next (pots input) :key #'key)
      (let* ((base (sum-plant-positions pots-new))
             (pots-new-next (pots-next pots-new (notes input)))
             (per-cycle-increment (- (sum-plant-positions pots-new-next) base))
             (cycles (- iterations cycles-at)))
        (assert (= cycle-size 1))
        (+ base (* per-cycle-increment cycles))))))
Final plumbing:
(define-solution (2018 12) (input parse-input)
  (values (part1 input 20) (part2 input 50000000000)))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.060 seconds of real time
  0.108866 seconds of total run time (0.107327 user, 0.001539 system)
  181.67% CPU
  139,759,679 processor cycles
  33,689,536 bytes consed

2021-11-19 (permalink)

Advent of Code: [2018/18](https://adventofcode.com/2018/day/18)

Game of life kind of problem today:
On the outskirts of the North Pole base construction project, many Elves are collecting lumber.
The lumber collection area is 50 acres by 50 acres; each acre can be either open ground (.), trees (|), or a lumberyard (#). You take a scan of the area (your puzzle input).
Strange magic is at work here: each minute, the landscape looks entirely different. In exactly one minute, an open acre can fill with trees, a wooded acre can be converted to a lumberyard, or a lumberyard can be cleared to open ground (the lumber having been sent to other projects).
Our input looks just like the example area in the problem statement; the only difference is that it's bigger, i.e. 50x50 instead of 10x10.  Let's parse it into a 2D array:
(defun parse-area (data)
  (let ((rows (length data))
        (cols (length (first data))))
    (make-array (list rows cols) :initial-contents data)))
Before taking a look at the rules governing how the area updates, minute after minute, let's see what exactly we are asked to do:
After 10 minutes, there are 37 wooded acres and 31 lumberyards. Multiplying the number of wooded acres by the number of lumberyards gives the total resource value after ten minutes: 37 * 31 = 1147.
What will the total resource value of the lumber collection area be after 10 minutes?
So we need to:

- Figure out how the area is going to look after `10` minutes
- Count the number of acres with trees
- Count the number of acres with lumber
- Multiply the two together
(defun part1 (area iterations)
  (dotimes (i iterations)
    (setf area (area-tick area)))
  (resource-value area))
To see how a given area is going to look after a minute, we will have to:

- For each row
- For each column in the current row
- Inspect the surrounding 8 acres and count which ones are open, which ones have trees, and which ones have lumber
- Update the state of the current acre accordingly

Note: all the updates happen at the same time, so we need to make sure the original area is not updated as we process it.

As for the update rules instead:
- An open acre will become filled with trees if three or more adjacent acres contained trees. Otherwise, nothing happens.
- An acre filled with trees will become a lumberyard if three or more adjacent acres were lumberyards. Otherwise, nothing happens.
- An acre containing a lumberyard will remain a lumberyard if it was adjacent to at least one other lumberyard and at least one acre containing trees. Otherwise, it becomes open.
All of the above, i.e. rules plus algorithm, nicely translates into the following, long-winded, function:
(defun area-next (area)
  (let* ((rows (array-dimension area 0))
         (cols (array-dimension area 1))
         (next (make-array (list rows cols))))
    (loop :for r :below rows :do
          (loop :for c :below cols :do
                (loop :with open = 0 :and trees = 0 :and lumby = 0
                      :for rr :from (max (1- r) 0) :upto (min (1+ r) (1- rows)) :do
                      (loop :for cc :from (max (1- c) 0) :upto (min (1+ c) (1- cols))
                            :unless (and (= rr r) (= cc c)) :do
                            (ecase (aref area rr cc)
                              (#\. (incf open))
                              (#\| (incf trees))
                              (#\# (incf lumby))))
                      :finally (setf (aref next r c)
                                     (ecase (aref area r c)
                                       (#\. (if (>= trees 3) #\| #\.))
                                       (#\| (if (>= lumby 3) #\# #\|))
                                       (#\# (if (and (>= lumby 1) (>= trees 1))
                                               #\# #\.))))))))
    next))
At this point, all that's left to do is calculate the resource value of a given area:
(defun resource-value (area)
  (loop :for i :below (array-total-size area) :for ch = (row-major-aref area i)
        :count (eq ch #\|) :into trees
        :count (eq ch #\#) :into lumby
        :finally (return (* trees lumby))))
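A tiny REPL example, using a hypothetical 2x2 area with one wooded acre and one lumberyard (1 * 1 = 1):

```lisp
> (resource-value (parse-area '("|#" "..")))
1
```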
Now, off to part 2:
What will the total resource value of the lumber collection area be after 1000000000 minutes?

Well, it turns out this _game_ is cyclic, i.e. after some time a given area configuration will keep on re-appearing over and over again; this means all we have to do is:

- Figure out when it starts cycling
- How big the cycle is
- How many iterations are left to be done if we remove the cycles

Easy peasy, especially if you already bumped into a similar problem before, and had an implementation of [Floyd's cycle detection algorithm](https://en.wikipedia.org/wiki/Cycle_detection#Floyd's_tortoise_and_hare) (i.e. the tortoise and the hare) handy:
(defun part2 (area)
  (destructuring-bind (cycles-at cycle-size area-new)
      (floyd #'area-next area :test 'equalp)
    (let ((remaining (nth-value 1 (floor (- 1000000000 cycles-at) cycle-size))))
      (part1 area-new remaining))))
Final plumbing:
(define-solution (2018 18) (area parse-area)
  (values (part1 area 10) (part2 area)))

(define-test (2018 18) (549936 206304))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.877 seconds of real time
  0.826163 seconds of total run time (0.811340 user, 0.014823 system)
  94.18% CPU
  2,018,921,962 processor cycles
  47,953,424 bytes consed

2021-11-18 (permalink)

Advent of Code: [2018/23](https://adventofcode.com/2018/day/23)

This problem's input is a list of _nanobots_, each representing a 3-d location and a radius (more on this later):
pos=<0,0,0>, r=4
pos=<1,0,0>, r=1
pos=<4,0,0>, r=3
pos=<0,2,0>, r=1
pos=<0,5,0>, r=3
pos=<0,0,3>, r=1
pos=<1,1,1>, r=1
pos=<1,1,2>, r=1
pos=<1,3,1>, r=1
We are going to put this into a structure with two slots:

- The 3-d location, stored as a list
- The communication radius
(defstruct (nanobot (:type list)
                    (:conc-name))
  pos r)
(defun x (pos) (first pos))
(defun y (pos) (second pos))
(defun z (pos) (third pos))

(defun parse-nanobots (data)
  (mapcar #'parse-nanobot data))

(defun parse-nanobot (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer x y z r))
      ("pos=<(-?\\d+),(-?\\d+),(-?\\d+)>, r=(\\d+)" string)
    (make-nanobot :pos (list x y z) :r r)))
Now, let's take a look at the ask for part 1:
Find the nanobot with the largest signal radius. How many nanobots are in range of its signals?
OK, easy enough:
(defun part1 (bots)
  (let ((strongest (find-max bots :key #'r)))
    (count-if (partial-1 #'nanobot-contains-p strongest (pos _)) bots)))

(defun nanobot-contains-p (nb pos)
  (<= (manhattan-distance (pos nb) pos) (r nb)))
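For example, the sample bot at the origin with radius 4 covers the bot at `4,0,0` (a hypothetical REPL session, assuming MANHATTAN-DISTANCE accepts the coordinate lists used here):

```lisp
> (nanobot-contains-p (make-nanobot :pos (list 0 0 0) :r 4)
                      (list 4 0 0))
T
```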
Now, things are going to get messy with part 2:
Find the coordinates that are in range of the largest number of nanobots. What is the shortest manhattan distance between any of those points and 0,0,0?
OK, let's figure out what the bounding box of all the bots is, and then look for the point with the highest number of bots in range; except, the search space might be bigger than we thought:
> (let ((bots (parse-nanobots (uiop:read-file-lines "src/2018/day23.txt"))))
    (loop :for bot :in bots :for pos = (pos bot)
          :minimize (x pos) :into x-min
          :maximize (x pos) :into x-max
          :minimize (y pos) :into y-min
          :maximize (y pos) :into y-max
          :minimize (z pos) :into z-min
          :maximize (z pos) :into z-max
          :finally (return (list x-min x-max y-min y-max z-min z-max))))
(-151734092 226831435 -103272011 144820743 -114031252 186064394)
That, and the fact that there will be a lot of overlapping (not only are the bots well spread out in 3-d space, they also have huge communication radii):
> (let ((bots (parse-nanobots (uiop:read-file-lines "src/2018/day23.txt"))))
    (loop :for bot :in bots
          :minimize (r bot) :into r-min
          :maximize (r bot) :into r-max
          :finally (return (list r-min r-max))))
(49770806 99277279)
We are going to need something more clever than this, but what exactly?  Well, I originally solved this by [random walking the search space and drilling into the areas with the highest number of bots in range](https://github.com/iamFIREcracker/adventofcode/commit/75abd836d5502df6307c5c77ccc24b7d91d679e4#diff-e25508cbdc3137033eb7cc9d1a6d6c5f0a278b331eca80162a8defb74d5379a7L1255); however, that was not always guaranteed to work (yes, I got lucky for my second star), and after taking a good look at [r/adventofcode](https://reddit.com/r/adventofcode) and even reading the AoC creator [commenting](https://www.reddit.com/r/adventofcode/comments/aa9uvg/day_23_aoc_creators_logic/) on what he thought people should be implementing to solve this, I decided to give the "Octree scan" solution a try.

The basic idea is simple, and it relies on the recursive 3-d space subdivision logic that [octrees](https://en.wikipedia.org/wiki/Octree) implement when efficiently organizing 3-d objects in the space; what follows is an overview of the algorithm that we are going to implement to solve this:

- We start with the biggest bounding box possible, and we count the number of bots in range (all)
- Then we split this box in 8 equal parts, and count the number of bots in range of each
- We keep on doing this until the cubes become 3-d points
- We are going to need a way to prune non-optimal branches as soon as possible, or chances are we are going to get stuck in this loop for quite some time (more on this later)

Let's start off by defining a BOX structure and create one instance covering for all the bots in our input:
(defstruct (box (:type list)
                (:constructor make-box%))
  range-x range-y range-z)

(defun make-box (x-min x-max y-min y-max z-min z-max)
  (let* ((box (make-box% :range-x (list x-min x-max)
                         :range-y (list y-min y-max)
                         :range-z (list z-min z-max)))
         (volume (box-volume box)))
    (assert (or (= volume 0) (= (log volume 2) (floor (log volume 2)))))
    box))

(defun bots-box (bots)
  (loop :for bot :in bots :for pos = (pos bot)
        :minimize (x pos) :into x-min
        :maximize (x pos) :into x-max
        :minimize (y pos) :into y-min
        :maximize (y pos) :into y-max
        :minimize (z pos) :into z-min
        :maximize (z pos) :into z-max
        :finally (return (let* ((largest (max (- x-max x-min)
                                              (- y-max y-min)
                                              (- z-max z-min)))
                                (side (make-pow2 largest)))
                           (make-box x-min (+ x-min side)
                                     y-min (+ y-min side)
                                     z-min (+ z-min side))))))

(defun make-pow2 (number)
  (expt 2 (ceiling (log number 2))))
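MAKE-POW2 simply rounds its argument up to the next power of 2; for example:

```lisp
> (make-pow2 5)
8
> (make-pow2 100)
128
```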
(Note: we make sure the box is actually a cube, and that each side is a power of 2, so that the splitting will always be _even_)

Now, while walking through the search space, we would like to pick our next cube to analyze based on:

- Number of nanobots in range of the box: the higher, the better, as that's what we want to maximize, eventually
- Volume of the current box: the smaller, the better, as we want to converge to a solution (optimal or not) as soon as possible so we can use that to prune branching
- Distance from origin: again, the smaller, the better

We are going to create a new structure, for this, STATE, and customize its constructor as follows:
(defstruct (state (:type list)
                  (:constructor make-state%)
                  (:conc-name))
  box num-nanobots volume distance)

(defun make-state (bots box)
  (let ((x (caar box))
        (y (caadr box))
        (z (caaddr box)))
    (make-state% :box box
                 :num-nanobots (count-in-range bots box)
                 :volume (box-volume box)
                 :distance (manhattan-distance (list x y z) (list 0 0 0)))))
There are a lot of helper functions used here, so let's take a look at their implementations; COUNT-IN-RANGE first:
(defun count-in-range (nanobots box)
  (count-if (partial-1 #'nanobot-overlaps-p _ box) nanobots))
It relies on another helper function, NANOBOT-OVERLAPS-P; all it does is calculate how far the nanobot is from the box, and return T if that distance (a Manhattan distance) is smaller than or equal to the nanobot's radius:
(defun nanobot-overlaps-p (nb box)
  (loop :with distance = 0
        :for (v-min v-max) :in box
        :for v :in (pos nb)
        :if (< v v-min) :do (incf distance (- v-min v))
        :else if (> v v-max) :do (incf distance (- v v-max))
        :never (> distance (r nb))))
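For example, a nanobot at the origin with radius 4 is 3 Manhattan units away from the cube spanning `(1 2)` on each axis, so it overlaps it; with radius 2 it would not (a hypothetical REPL session, using the list-based structures defined above):

```lisp
> (nanobot-overlaps-p (make-nanobot :pos (list 0 0 0) :r 4)
                      '((1 2) (1 2) (1 2)))
T
> (nanobot-overlaps-p (make-nanobot :pos (list 0 0 0) :r 2)
                      '((1 2) (1 2) (1 2)))
NIL
```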
Lastly, calculating one box's volume should be pretty straightforward:
(defun box-volume (box)
  (destructuring-bind ((x-min x-max) (y-min y-max) (z-min z-max)) box
    (* (- x-max x-min) (- y-max y-min) (- z-max z-min))))
Let's now take a look at the meat of this whole thing:

- We create the bounding box
- Initialize a sorted queue with it
- We start popping items from it until it's empty
- If we have not found a solution yet (it could be the optimal one, or a sub-optimal one), or if the current box still has a chance of containing a better one (i.e. it has at least as many nanobots in range as our current best solution)
- We split the box in 8, add those cubes to the priority queue, rinse and repeat
- Now, if we cannot split the cube further (i.e. it's a 3-d point), then we check for the number of nanobots covering this location, and if better than our best solution so far, we update that and carry on
(defun part2 (bots)
  (let ((box (bots-box bots)) best)
    (loop :with queue = (make-hq :predicate #'state-better-p)
          :initially (hq-insert queue (make-state bots box))
          :until (hq-empty-p queue)
          :for state = (hq-pop queue)
          :do (when (or (not best) (>= (num-nanobots state) (num-nanobots best)))
                (if (box-point-p (box state))
                  (if (or (not best) (state-better-p state best))
                    (setf best state))
                  (dolist (subbox (box-subdivisions (box state)))
                    (hq-insert queue (make-state bots subbox)))))
          :finally (return (distance best)))))
Easy uh?! Yeah...

Let's take a look at those functions that we have never seen before; let's begin with STATE-BETTER-P, and as a reminder, this is the logic that we are going to try to implement:

- Number of nanobots in range of the box: the higher, the better, as that's what we want to maximize, eventually
- Volume of the current box: the smaller, the better, as we want to converge to a solution (optimal or not) as soon as possible so we can use that to prune branching
- Distance from origin: again, the smaller, the better
(defun state-better-p (state other)
  (destructuring-bind (num-nanobots volume distance) (rest state)
    (destructuring-bind (o-num-nanobots o-volume o-distance) (rest other)
      (or (> num-nanobots o-num-nanobots)
          (and (= num-nanobots o-num-nanobots)
               (or (< volume o-volume)
                   (and (= volume o-volume)
                        (< distance o-distance))))))))
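So, for instance, a state covering more nanobots always wins, while with an equal count the smaller volume does (a hypothetical session; remember that with `(:type list)` a state is just the list `(box num-nanobots volume distance)`):

```lisp
> (state-better-p '(nil 5 64 10) '(nil 4 1 1))
T
> (state-better-p '(nil 4 64 10) '(nil 4 8 10))
NIL
```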
BOX-POINT-P instead, checks if the current box is actually a point (i.e. each range is `0` units wide):
(defun box-point-p (box)
  (loop :for (min max) :in box
        :always (= min max)))
Finally, BOX-SUBDIVISIONS takes a box / cube and generates its 8 subdivisions.  One thing to note here: we check the volume of the box first, and if it's `1` we cannot split it evenly any further, so we return 8 new boxes, each covering one vertex of the cube; otherwise, we find the mid point of each range and create the sub-cubes accordingly:
(defun box-subdivisions (box)
  (unless (box-point-p box)
    (if (= (box-volume box) 1)
      (loop :for (x y z) :in (box-vertices box)
            :collect (make-box x x y y z z))
      (destructuring-bind ((x-min x-max) (y-min y-max) (z-min z-max)) box
        (let ((x-mid (+ x-min (floor (- x-max x-min) 2)))
              (y-mid (+ y-min (floor (- y-max y-min) 2)))
              (z-mid (+ z-min (floor (- z-max z-min) 2))))
          (loop :for (x-min x-max) :on (list x-min x-mid x-max)
                :when x-max :append
                (loop :for (y-min y-max) :on (list y-min y-mid y-max)
                      :when y-max :append
                      (loop :for (z-min z-max) :on (list z-min z-mid z-max)
                            :when z-max :collect (make-box x-min x-max
                                                           y-min y-max
                                                           z-min z-max)))))))))

(defun box-vertices (box)
  (destructuring-bind (x-range y-range z-range) box
    (loop :for x :in x-range :append
          (loop :for y :in y-range :append
                (loop :for z :in z-range :collect (list x y z))))))
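BOX-VERTICES, for reference, enumerates the 8 corners of a box:

```lisp
> (box-vertices '((0 1) (0 1) (0 1)))
((0 0 0) (0 0 1) (0 1 0) (0 1 1) (1 0 0) (1 0 1) (1 1 0) (1 1 1))
```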
And that should be pretty much it.

Final plumbing:
(define-solution (2018 23) (bots parse-nanobots)
  (values (part1 bots) (part2 bots)))

(define-test (2018 23) (433 107272899))
Et voila'!
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.132 seconds of real time
  0.115931 seconds of total run time (0.105885 user, 0.010046 system)
  87.88% CPU
  305,851,279 processor cycles
  1,768,384 bytes consed

2021-11-12 (permalink)

To get started with Sketch (the Common Lisp graphic framework, inspired by the Processing language), you will have to install SDL2 and some of its add-ons.

On a Mac, the most convenient way to do this is by using Homebrew:
brew install sdl2
brew install sdl2_image
brew install sdl2_ttf
With that done, you should now be able to load Sketch's examples just fine:
(ql:quickload :sketch)
(ql:quickload :sketch-examples)
(make-instance 'sketch-examples:brownian)
Note: as mentioned in this issue, [OSX Can't run hello world example #29](https://github.com/vydd/sketch/issues/29), when trying this out on SBCL you will have to wrap the MAKE-INSTANCE call inside SDL2:MAKE-THIS-THREAD-MAIN, or you will most likely be presented with the following:
debugger invoked on a SDL2::SDL-ERROR in thread
#<THREAD "SDL2 Main Thread" RUNNING {10056843D3}>:
  SDL Error: NSWindow drag regions should only be invalidated on the Main Thread!

The current thread is not at the foreground,
SB-THREAD:RELEASE-FOREGROUND has to be called in #<SB-THREAD:THREAD "main thread" waiting on: #<WAITQUEUE Anonymous condition variable {10056847B3}> {1007C501B3}>
for this thread to enter the debugger.

debugger invoked on a SB-SYS:INTERACTIVE-INTERRUPT @7FFF5DB4C58E in thread
#<THREAD "main thread" RUNNING {1007C501B3}>:
  Interactive interrupt at #x7FFF5DB4C58E.
restarts (invokable by number or by possibly-abbreviated name):
  0: [CONTINUE] Return from SB-UNIX:SIGINT.
  1: [ABORT   ] Exit debugger, returning to top level.
Alternatively, you can always switch to [CCL](https://ccl.clozure.com/) (another Common Lisp implementation) when playing with Sketch, as that one does not seem to be suffering from the same thread-related issue that SBCL does instead.

2021-11-10 (permalink)

Advent of Code: [2018/20](https://adventofcode.com/2018/day/20)

We find ourselves in an area made up entirely of rooms and doors; the thing is, the Elves don't have a map of this area; instead, they hand us a list of instructions we can use to build the map ourselves:

- `^` is the "start-of-instructions" marker
- `$` is the "end-of-instructions" marker
- `N` means you can go north (i.e. a door exists between the _current_ room, and the one _north_ to it)
- `E` means you can go east
- `S` means you can go south
- `W` means you can go west
- `(` marks a _fork_, meaning that multiple sub-paths can be started from the current room
- `|` marks the end of a sub-path
- `)` marks the end of the fork

A couple of examples will hopefully help in understanding this; this simple list of instructions, `^WNE$`, can be used to build up the following map:
This longer instructions list instead, `^ENWWW(NEEE|SSE(EE|N))$`, can be used to create the following map:
Before figuring out how to create the map from the instructions, it's probably best to take a look at what we are asked to do for part 1, so here it goes:
What is the largest number of doors you would be required to pass through to reach a room? That is, find the room for which the shortest path from your starting location to that room would require passing through the most doors; what is the fewest doors you can pass through to reach it?
OK, it seems like we are going to have to actually navigate through this map.  Let's parse the list of instructions into a HASH-SET of locations (i.e. COMPLEX numbers) recording where the doors are; to do so, we are going to use a _stack_ of positions, so we can keep track of every fork we encounter; in particular:

- We skip the `^`
- If the current char is `$`, we stop and return the set of doors we went through
- If the current char is `(`, a new fork begins, so push the current position into the stack
- If the current char is `|`, a sub-path just ended, so let's go back to the room of the most recent fork
- If the current char is `)`, we visited all the sub-paths of a fork, so pop an item from the stack, and start from there
- Otherwise, i.e. one of `NESW`, move into the adjacent room and record the door we used to get there
(defun parse-doors (data &aux (string (first data)) (doors (make-hset nil)))
  (loop :with pos = 0 :with pos-stack = (list pos)
        :for ch :across string
        :do (ecase ch
              ((#\^) nil)
              ((#\$) (return doors))
              ((#\() (push pos pos-stack))
              ((#\)) (setf pos (pop pos-stack)))
              ((#\|) (setf pos (first pos-stack)))
              ((#\N) (hset-add (+ pos (complex 0 1)) doors) (setf pos (+ pos (complex 0 2))))
              ((#\E) (hset-add (+ pos (complex 1 0)) doors) (setf pos (+ pos (complex 2 0))))
              ((#\S) (hset-add (- pos (complex 0 1)) doors) (setf pos (- pos (complex 0 2))))
              ((#\W) (hset-add (- pos (complex 1 0)) doors) (setf pos (- pos (complex 2 0))))))
All that's left to do now is explore the complete map, figure out which room is the furthest away from us in terms of number of doors to go through, and then return that number of doors; we can do this with an unbounded BFS:

- start point: `0,0` (or `(complex 0 0)`)
- no goal state / function -- it will stop when there are no more states to process
- from a given location, you can move into the adjacent room if a door exists in between -- see NEIGHBORS
(defparameter *deltas* '(#C(0 1) #C(1 0) #C(0 -1) #C(-1 0)))

(defun neighbors (doors pos)
  (loop :for d :in *deltas*
        :for door-pos = (+ pos d)
        :when (hset-contains-p door-pos doors)
        :collect (+ pos d d)))
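For example, with a single door north of the origin, NEIGHBORS returns the room two units north (assuming MAKE-HSET accepts a list of initial items):

```lisp
> (neighbors (make-hset (list (complex 0 1))) (complex 0 0))
(#C(0 2))
```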

(let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors)))))
  (reduce #'max (hash-table-values cost-so-far)))
For part 2, instead:
Okay, so the facility is big.
How many rooms have a shortest path from your current location that pass through at least 1000 doors?
(let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors)))))
  (count-if (partial-1 #'>= _ 1000) (hash-table-values cost-so-far)))
Let's wire everything together:
(define-solution (2018 20) (doors parse-doors)
  (let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors)))))
    (values
      (reduce #'max (hash-table-values cost-so-far))
      (count-if (partial-1 #'>= _ 1000) (hash-table-values cost-so-far)))))

(define-test (2018 20) (3835 8520))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.029 seconds of real time
  0.028120 seconds of total run time (0.018474 user, 0.009646 system)
  96.55% CPU
  67,925,979 processor cycles
  6,870,096 bytes consed

2021-11-09 (permalink)

Advent of Code: [2018/06](https://adventofcode.com/2018/day/6)

Today, we are given a bunch of coordinates (i.e. X,Y locations), representing what could either be dangerous or safe places to land on -- and yes, we are falling.

This is a sample input:
1, 1
1, 6
8, 3
3, 4
5, 5
8, 9
We are going to parse this into _tagged_ coordinates, i.e. coordinates with tag information attached (more on this later); as usual, we will represent coordinates as COMPLEX numbers:
(defvar *tag* 0)

(defun parse-points (data)
  (let ((*tag* 0))
    (mapcar #'parse-point data)))

(defun parse-point (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer col row))
      ("(\\d+), (\\d+)" string)
    (make-point (incf *tag*) row col)))

(defun make-point (tag row col) (cons tag (make-coord row col)))
(defun tag (point) (car point))
(defun (setf tag) (value point) (setf (car point) value))
(defun coord (point) (cdr point))

(defun make-coord (row col) (complex col (- row)))
(defun row (coord) (- (imagpart coord)))
(defun col (coord) (realpart coord))
For part 1, we are going to assume these to be _dangerous_ coordinates, and we are going to try and find the one whose surrounding area (the set of locations closest to it, in terms of Manhattan distance) is the largest without being infinite.  How?
Using only the Manhattan distance, determine the area around each coordinate by counting the number of integer X,Y locations that are closest to that coordinate (and aren't tied in distance to any other coordinate).
Your goal is to find the size of the largest area that isn't infinite.
Here is how we are going to solve this:

- We find the bounding box, i.e. the smallest rectangle containing all the coordinates of the input -- see BOUNDING-BOX
- For each X,Y point of this bounding box
- We find the input coordinate which is _closest_ to that, being careful to ignore any point which is equally distant to 2 or more coordinates -- see CLOSEST-POINT
- Then, if we are _not_ on the border (see ON-THE-BORDER-P), we mark the location with the tag of the closest input coordinate (we are not actually marking it, but rather counting, but you get the idea)
- Finally, we find the input coordinate with the highest number of _closest_ locations

In code:
(defun part1 (points &aux (bbox (bounding-box points)))
  (destructuring-bind (row-min col-min row-max col-max) bbox
    (let ((sizes (make-hash-table)))
      (loop :for row :from row-min :to row-max :do
            (loop :for col :from col-min :to col-max
                  :for test := (make-coord row col)
                  :for closest := (closest-point test points)
                  :do (if (on-the-border-p test bbox)
                        (setf (gethash (tag closest) sizes) most-negative-fixnum)
                        (incf (gethash (tag closest) sizes 0)))))
      (loop :for size :being :the :hash-values :of sizes :using (hash-key tag)
            :when tag :maximize size))))

(defun bounding-box (points)
  (loop :for p :in points
        :for row := (row (coord p)) :for col := (col (coord p))
        :minimizing row :into row-min
        :maximizing row :into row-max
        :minimizing col :into col-min
        :maximizing col :into col-max
        :finally (return (list row-min col-min row-max col-max))))

(defun closest-point (test points)
  (let (closest closest-distance)
    (dolist (p points)
      (let ((distance (manhattan-distance test (coord p))))
        (when (or (not closest-distance) (<= distance closest-distance))
          (if (or (not closest-distance) (< distance closest-distance))
            (setf closest (list p) closest-distance distance)
            (push p closest)))))
    (when (= (length closest) 1)
      (first closest))))

(defun on-the-border-p (point bbox)
  (destructuring-bind (row-min col-min row-max col-max) bbox
    (or (= (row point) row-min) (= (row point) row-max)
        (= (col point) col-min) (= (col point) col-max))))
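The subtlety here is the tie-handling in CLOSEST-POINT: a location equidistant from two or more coordinates is counted for nobody.  The same logic, sketched in Python with hypothetical `(tag, (row, col))` tuples in place of the tagged COMPLEX points:

```python
def closest_point(test, points):
    # Returns the unique closest point (Manhattan distance), or None on ties
    best, best_dist = [], None
    for tag, (row, col) in points:
        dist = abs(test[0] - row) + abs(test[1] - col)
        if best_dist is None or dist < best_dist:
            best, best_dist = [(tag, (row, col))], dist
        elif dist == best_dist:
            best.append((tag, (row, col)))
    return best[0] if len(best) == 1 else None
```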
For part 2, we are going to assume these input locations to be _safe_ to land on instead, so we are going to try and find the region near as many coordinates as possible:
For example, suppose you want the sum of the Manhattan distance to all of the coordinates to be less than 32. For each location, add up the distances to all of the given coordinates; if the total of those distances is less than 32, that location is within the desired region.
What is the size of the region containing all locations which have a total distance to all given coordinates of less than 10000?
- For each X,Y location inside the same bounding box as part 1
- Sum all the distances with respect to all the input coordinates -- see TOTAL-DISTANCE
- Increment a counter if this sum is less than 10000
(defun part2 (points &aux (bbox (bounding-box points)))
  (destructuring-bind (row-min col-min row-max col-max) bbox
    (loop :for row :from row-min :to row-max :summing
          (loop :for col :from col-min :to col-max
                :for test := (make-coord row col)
                :for total-distance := (total-distance test points)
                :counting (< total-distance 10000)))))

(defun total-distance (test points)
  (loop :for p :in points
        :summing (manhattan-distance test (coord p))))
Final plumbing:
(define-solution (2018 6) (points parse-points)
  (values (part1 points) (part2 points)))

(define-test (2018 6) (3894 39398))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.661 seconds of real time
  0.618545 seconds of total run time (0.585166 user, 0.033379 system)
  93.65% CPU
  1,521,383,260 processor cycles
  14,120,976 bytes consed

2021-11-08 (permalink)

Advent of Code: [2018/04](https://adventofcode.com/2018/day/4)

December 1st is right around the corner, and we better get in "advent-of-code"-shape real quick; and what better way to do that than re-working some of my oldest solutions, i.e. the ones from 2018, when I was just trying to get started with Common Lisp?!  2018, day 4 it is!

For today's challenge we are tasked to sneak inside a supply closet so we can fix our suit; however, the closet is protected by a guard, so we need to figure out our way in without getting caught.  Lucky us, someone has been keeping tabs on the guards' shifts (their sleeping schedule, our input), and by the look of it, guards fall asleep quite often, so hopefully we can figure out the best time to get in.

This is the kind of input that we are given:
[1518-11-01 00:00] Guard #10 begins shift
[1518-11-01 00:05] falls asleep
[1518-11-01 00:25] wakes up
[1518-11-01 00:30] falls asleep
[1518-11-01 00:55] wakes up
[1518-11-01 23:58] Guard #99 begins shift
[1518-11-02 00:40] falls asleep
[1518-11-02 00:50] wakes up
[1518-11-03 00:05] Guard #10 begins shift
[1518-11-03 00:24] falls asleep
[1518-11-03 00:29] wakes up
[1518-11-04 00:02] Guard #99 begins shift
[1518-11-04 00:36] falls asleep
[1518-11-04 00:46] wakes up
[1518-11-05 00:03] Guard #99 begins shift
[1518-11-05 00:45] falls asleep
[1518-11-05 00:55] wakes up
We are going to parse all this into an ALIST having the guard ID as index; the value would be another ALIST having the minute as index (note: for each day, we are told to analyze the `00:00`-`00:59` time range only); the value would be the list of days in which the specific guard fell asleep during the specific minute.  The example above would be parsed into the following:
((99 (54 "1518-11-05") (53 "1518-11-05") (52 "1518-11-05") (51 "1518-11-05")
  (50 "1518-11-05") (39 "1518-11-04") (38 "1518-11-04") (37 "1518-11-04")
  (36 "1518-11-04") (49 "1518-11-05" "1518-11-02")
  (48 "1518-11-05" "1518-11-02") (47 "1518-11-05" "1518-11-02")
  (46 "1518-11-05" "1518-11-02") (45 "1518-11-05" "1518-11-04" "1518-11-02")
  (44 "1518-11-04" "1518-11-02") (43 "1518-11-04" "1518-11-02")
  (42 "1518-11-04" "1518-11-02") (41 "1518-11-04" "1518-11-02")
  (40 "1518-11-04" "1518-11-02"))
 (10 (28 "1518-11-03") (27 "1518-11-03") (26 "1518-11-03") (25 "1518-11-03")
  (54 "1518-11-01") (53 "1518-11-01") (52 "1518-11-01") (51 "1518-11-01")
  (50 "1518-11-01") (49 "1518-11-01") (48 "1518-11-01") (47 "1518-11-01")
  (46 "1518-11-01") (45 "1518-11-01") (44 "1518-11-01") (43 "1518-11-01")
  (42 "1518-11-01") (41 "1518-11-01") (40 "1518-11-01") (39 "1518-11-01")
  (38 "1518-11-01") (37 "1518-11-01") (36 "1518-11-01") (35 "1518-11-01")
  (34 "1518-11-01") (33 "1518-11-01") (32 "1518-11-01") (31 "1518-11-01")
  (30 "1518-11-01") (24 "1518-11-03" "1518-11-01") (23 "1518-11-01")
  (22 "1518-11-01") (21 "1518-11-01") (20 "1518-11-01") (19 "1518-11-01")
  (18 "1518-11-01") (17 "1518-11-01") (16 "1518-11-01") (15 "1518-11-01")
  (14 "1518-11-01") (13 "1518-11-01") (12 "1518-11-01") (11 "1518-11-01")
  (10 "1518-11-01") (9 "1518-11-01") (8 "1518-11-01") (7 "1518-11-01")
  (6 "1518-11-01") (5 "1518-11-01")))
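That alist-of-alists is just the lisp spelling of a nested map, guard -> minute -> list of days.  In Python terms the structure being built would look something like this (with a hypothetical helper named after the lisp one defined further down):

```python
from collections import defaultdict

# guard id -> minute -> list of days the guard was asleep at that minute
schedule = defaultdict(lambda: defaultdict(list))

def add_asleep_time_day(schedule, guard, minute, day):
    # PUSH-style: newest day first, matching the dump above
    schedule[guard][minute].insert(0, day)

add_asleep_time_day(schedule, 10, 24, "1518-11-01")
add_asleep_time_day(schedule, 10, 24, "1518-11-03")
```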
Doing the actual parsing turned out to be way hairier than I originally anticipated, but whatever; anyways, at a high level:

- Sort entries lexicographically (yes, the input is out of order)
- Recursively
- Parse guard ID -- PARSE
- Parse "falls-asleep" time and day -- FILL-IN-SCHEDULE
- Parse "wakes-up" time and day -- FILL-IN-SCHEDULE
- For each minute in the time range
- Update the schedule accordingly -- ADD-ASLEEP-TIME-DAY
(defun parse-schedule (data)
  (let (schedule)
    (labels ((parse (remaining)
               (let ((guard (parse-schedule-guard (pop remaining))))
                 (when guard (fill-in-schedule guard remaining))))
             (fill-in-schedule (guard remaining)
               (if (not (parse-schedule-timeday (first remaining)))
                 (parse remaining)
                 (let ((falls-asleep (parse-schedule-timeday (pop remaining)))
                       (wakes-up (parse-schedule-timeday (pop remaining))))
                   (loop :with day = (cdr falls-asleep)
                         :for time :from (car falls-asleep) :below (car wakes-up)
                         :do (add-asleep-time-day guard time day))
                   (fill-in-schedule guard remaining))))
             (add-asleep-time-day (guard time day)
               (let ((entry (assoc guard schedule)))
                 (unless entry
                   (setf entry (list guard))
                   (push entry schedule))
                 (let ((st-entry (assoc time (sleeptable entry))))
                   (unless st-entry
                     (setf st-entry (list time))
                     (push st-entry (sleeptable entry)))
                   (push day (days st-entry))))))
      (parse (sort (copy-seq data) #'string<))
      schedule)))

(defun parse-schedule-guard (string)
  (cl-ppcre:register-groups-bind ((#'parse-integer guard))
      ("Guard #(\\d+) begins shift" string)
    guard))

(defun parse-schedule-timeday (string)
  (cl-ppcre:register-groups-bind (date (#'parse-integer time))
      ("(\\d{4}-\\d{2}-\\d{2}) 00:(\\d\\d)] (falls asleep|wakes up)" string)
    (cons time date)))
The rest is a bunch of selectors for writing / pulling data into / out of the schedule:
(defun guard (schedule-entry) (car schedule-entry))
(defun sleeptable (schedule-entry) (cdr schedule-entry))
(defun (setf sleeptable) (value schedule-entry) (setf (cdr schedule-entry) value))

(defun minute (sleeptable-entry) (car sleeptable-entry))
(defun days (sleeptable-entry) (cdr sleeptable-entry))
(defun (setf days) (value sleeptable-entry) (setf (cdr sleeptable-entry) value))
Now, let's get to the juicy part:
Strategy 1: Find the guard that has the most minutes asleep. What minute does that guard spend asleep the most?
We find the schedule entry with the highest count of minutes asleep first; then, in that entry, we look for the _minute_ entry in the sleep table with the highest count of days; finally do the multiplication:
(defun part1 (schedule)
  (let ((entry (find-max schedule :key #'minutes-asleep)))
    (* (guard entry) (sleepiest-minute entry))))

(defun minutes-asleep (entry)
  (reduce #'+ (sleeptable entry)
          :key #'(lambda (e) (length (cdr e)))))

(defun sleepiest-minute (entry)
  (multiple-value-bind (st num-days-asleep)
      (find-max (sleeptable entry)
                :key #'(lambda (st)
                        (length (days st))))
    (values (minute st) num-days-asleep)))
For part 2 instead:
Strategy 2: Of all guards, which guard is most frequently asleep on the same minute?
We find the entry with the highest number of days associated with any given minute (note: we don't want the minute itself but the number of days the guard fell asleep for that given minute, that's why we are using NTH-VALUE here); then we call SLEEPIEST-MINUTE on that; we do the multiplication last:
(defun part2 (schedule)
  (let ((entry (find-max schedule
                         :key #'(lambda (entry)
                                 (nth-value 1 (sleepiest-minute entry))))))
    (* (guard entry) (sleepiest-minute entry))))
Final plumbing:
(define-solution (2018 4) (schedule parse-schedule)
  (values (part1 schedule) (part2 schedule)))
And that's it:
> (time (test-run))
Success: 1 test, 2 checks.
Evaluation took:
  0.005 seconds of real time
  0.005837 seconds of total run time (0.004618 user, 0.001219 system)
  120.00% CPU
  13,727,746 processor cycles
  720,224 bytes consed

2021-11-07 (permalink)

Abusing Common Lisp feature expressions to comment things out! (another follow up) @vim @commonlisp

The simple syntax rules that I mentioned inside one of my previous [.plan entry](https://matteolandi.net/plan.html#day-2021-10-29) turned out to break quite easily in case of nested parentheses; so I decided to try a different approach instead, and after playing with Vim's syntax regions a bit, I came up with the following:
syntax region lispExcludedFormWithNil
            \ matchgroup=lispExcludedContentStart
            \ start="#+nil ("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedContentStop
            \ end=")"
            \ contains=@lispBaseListCluster
highlight link lispExcludedContentStart lispComment
highlight link lispExcludedContentStop lispComment
With this I was able to deal with nested parentheses just fine; however, if there were `highlight` commands defined for any of the elements contained inside the `@lispBaseListCluster` cluster (and of course there are!), then Vim would end up coloring those elements; what I wanted instead, was the whole region to look like a `lispComment`.  This is where I got stuck; this is where I started questioning the whole approach altogether, and where I ended up asking for [help](https://www.reddit.com/r/vim/comments/qmn052/override_colors_based_on_the_surrounding_syntax/) on the /r/vim sub-reddit.

Anyways, long story short, one reply made me realize that I did not actually need all the definitions included inside the `@lispBaseListCluster` cluster, especially if then I would need to figure out a way to mute their colors; so what I did instead, was creating a new, simpler but recursive syntax region, to cater for my _excluded_ lists:
syntax region lispExcludedList
            \ contained
            \ matchgroup=lispExcludedListOpenParen
            \ start="("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedListCloseParen
            \ end=")"
            \ contains=lispExcludedList
highlight link lispExcludedList lispComment
highlight link lispExcludedListOpenParen lispComment
highlight link lispExcludedListCloseParen lispComment
I then updated my original `lispExcludedFormWithNil` definition to _contain_ `lispExcludedList` instead of the `@lispBaseListCluster` cluster:
syntax region lispExcludedFormWithNil
            \ matchgroup=lispExcludedContentStart
            \ start="#+nil ("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedContentStop
            \ end=")"
            \ contains=lispExcludedList
highlight link lispExcludedFormWithNil lispComment
highlight link lispExcludedContentStart lispComment
highlight link lispExcludedContentStop lispComment
And that's it; with this, it all started to work as expected:
#+nil (list 1 (list 2 3) 4)
                          `- `lispComment` syntax up until this position

#+nil (list 1 (list 2 (list 3 4) 5))
                                   `- `lispComment` syntax up until this position
If I then were to add `lispExcludedFormWithNil` to the `@lispBaseListCluster`:
syn cluster lispBaseListCluster add=lispExcludedFormWithNil
Then this would work even in case the _excluded_ expression was found nested inside another expression / list:
(list #+nil (list 1 (list 2 (list 3 4) 5)) 6)
                                         ^ ^
                                         | |
                                         | `- this is highlighted as `lispNumber`
                                         `- `lispComment` syntax up until this position

So how is my Lisp syntax file looking?  There you go:

" Excluded forms {{{

syntax region lispExcludedList
            \ contained
            \ matchgroup=lispExcludedListOpenParen
            \ start="("
            \ skip="|.\{-}|"
            \ matchgroup=lispExcludedListCloseParen
            \ end=")"
            \ contains=lispExcludedList

highlight link lispExcludedList lispComment
highlight link lispExcludedListOpenParen lispComment
highlight link lispExcludedListCloseParen lispComment
highlight link lispExcludedContentStart lispComment
highlight link lispExcludedContentStop lispComment

function! s:createLispExcludedSyntaxRegion(regionName, regionPrefix) abort "{{{
    execute 'syntax region' a:regionName
                \ 'matchgroup=lispExcludedContentStart'
                \ 'start="' . a:regionPrefix . '("'
                \ 'skip="|.\{-}|"'
                \ 'matchgroup=lispExcludedContentStop'
                \ 'end=")"'
                \ 'contains=lispExcludedList'
    execute 'highlight link' a:regionName 'lispComment'
    execute 'syntax cluster lispBaseListCluster add=' . a:regionName
endfunction " }}}

" ... with NIL {{{

syntax match lispExcludedElementWithNil /\v\#\+nil [^(`][^ ]+/
highlight link lispExcludedElementWithNil lispComment
syn cluster lispBaseListCluster add=lispExcludedElementWithNil

call s:createLispExcludedSyntaxRegion("lispExcludedFormWithNil", "#+nil ")
call s:createLispExcludedSyntaxRegion("lispExcludedQuotedFormWithNil", "#+nil '")
call s:createLispExcludedSyntaxRegion("lispExcludedQuasiQuotedFormWithNil", "#+nil `")
call s:createLispExcludedSyntaxRegion("lispExcludedSharpsignedDotFormWithNil", "#+nil #\.")

" }}}
" ... with #:_description_ {{{

syntax match lispExcludedElementWithDescription /\v\#\+\#:[^ ]+ [^(`][^ ]+/
highlight link lispExcludedElementWithDescription lispComment
syn cluster lispBaseListCluster add=lispExcludedElementWithDescription

call s:createLispExcludedSyntaxRegion("lispExcludedFormWithDescription", "#+#:[^ ]\\+ ")
call s:createLispExcludedSyntaxRegion("lispExcludedQuotedFormWithDescription", "#+#:[^ ]\\+ '")
call s:createLispExcludedSyntaxRegion("lispExcludedQuasiQuotedFormWithDescription", "#+#:[^ ]\\+ `")
call s:createLispExcludedSyntaxRegion("lispExcludedSharpsignedDotFormWithDescription", "#+#:[^ ]\\+ #\.")

" }}}

" }}}
PS. Yes, I created a script function, `createLispExcludedSyntaxRegion`, to make the process of creating new syntax regions a tiny bit less hairy.

2021-11-05 (permalink)

(Trying to) Speed up Common Lisp Replit REPLs startup time -- or how to both install Quicklisp and cache *.fasl files in the current directory @commonlisp @replit

All of my Common Lisp Replit REPLs have the following defined inside '.replit-files/init.lisp' (that's the script I am loading when starting up SBCL):
(let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
                                       (user-homedir-pathname))))
  (if (probe-file quicklisp-init)
      (load quicklisp-init)
      (progn
        (load (merge-pathnames "quicklisp.lisp" *load-truename*))
        (funcall (find-symbol "INSTALL" (find-package "QUICKLISP-QUICKSTART"))))))
This pretty much translates to checking if '~/quicklisp/setup.lisp' already exists; if it does, it means we already got Quicklisp set up, so we go ahead and load that file up; otherwise, we load 'quicklisp.lisp' from the local directory (it's a copy of the file available on [quicklisp.org](https://www.quicklisp.org/beta/)), and then install Quicklisp altogether.

After this, with Quicklisp installed, all is left to do is loading up our app (the following is an example from a [recent REPL of mine](https://github.com/iamFIREcracker/cl-face-mask-ce-verifier/blob/master/build/replit.lisp)):
(ql:quickload '("face-mask-ce-verifier-web" "swank-tunnel"))
(fmcv-web:start :web-interface "")
Now, as you can imagine, the following two main activities will be slowing down the REPL startup time:

1. Bootstrapping Quicklisp, i.e. downloading it, installing it, fetching the latest dist
2. Loading / compiling the application system, and all its dependencies

Now, doing this once is perfectly fine, but can we configure things so that we don't have to do this over and over again, when Replit decides your REPL needs to be put to sleep or moved off to a new instance?

The answer to that question is of course: "yes!"; basically all we have to do is make sure that all the artifacts which were dynamically generated and which were needed by the REPL, are saved somewhere inside the REPL workspace (yes, your REPL workspace will pretty much always be kept around).

Alright; let's start off by changing our Quicklisp install script as follows:
;;; Quicklisp (installed in the current directory)
(let ((quicklisp-init (merge-pathnames ".quicklisp/setup.lisp" *default-pathname-defaults*)))
  (if (probe-file quicklisp-init)
      (load quicklisp-init)
      (progn
        (load (merge-pathnames "quicklisp.lisp" *load-truename*))
        (funcall (find-symbol "INSTALL" (find-package "QUICKLISP-QUICKSTART"))
                 :path (merge-pathnames ".quicklisp/" *default-pathname-defaults*)))))
Here, we look for Quicklisp's setup script inside '.quicklisp/setup.lisp' instead of the canonical '~/quicklisp/setup.lisp' (note how we are using *DEFAULT-PATHNAME-DEFAULTS* instead of `(user-homedir-pathname)`); we also pass `:path (merge-pathnames ".quicklisp/" *default-pathname-defaults*)` to `QUICKLISP-QUICKSTART:INSTALL`, effectively telling it to install everything inside the local directory, '.quicklisp'.  Nice!

Now, to change the location where ASDF stores compiled files to make sure it's in the current workspace, we will have to play with ASDF's [Configuration DSL](https://common-lisp.net/project/asdf/asdf/Output-Configuration-DSL.html) a bit; it took me some Googling around ([1](https://gitlab.common-lisp.net/asdf/asdf/-/issues/22), [2](https://www.mail-archive.com/asdf-devel@common-lisp.net/msg06372.html), [3](https://stackoverflow.com/questions/25323394/asdf-output-redirection), and [4](https://www.reddit.com/r/lisp/comments/23azqf/need_help_setting_asdf_cache_output_directory/)) and quite a few tries before I got this to work; but ultimately, this is what I came up with:
;;; Locally cached FASLs
(let ((cache-dir (merge-pathnames ".common-lisp/" *default-pathname-defaults*)))
  (asdf:initialize-output-translations
   `(:output-translations (t (,cache-dir :implementation))
                          :disable-cache
                          :ignore-inherited-configuration)))
Again, you can read more about the DSL in the link above, but in general:

1. Using `:implementation` forces ASDF to use separate directories, inside '.common-lisp', based on the current Lisp implementation; this way, should we change the REPL to use CCL instead of SBCL, it would compile everything from scratch, instead of trying to load the artifacts generated while we were using the other implementation
2. Using `:disable-cache` tells ASDF to stop caching compiled files under the default location (i.e. '$XDG_CACHE_HOME/common-lisp'); this is the behavior that the previous rule is meant to override, so disabling this will hopefully make sure the default caching mechanism won't get in our way
3. Using `:ignore-inherited-configuration` tells ASDF to ignore any other _inherited_ rule; now, I am not entirely sure what this is for, but ASDF was complaining that I should use either `:inherit-configuration` or `:ignore-inherited-configuration` when calling `INITIALIZE-OUTPUT-TRANSLATIONS`, so here it is

Anyways, restart the REPL, and you should now finally see the '.quicklisp' and '.common-lisp' local directories being put to good use:
~/Face-mask-CE-marking-verifier$ ls .quicklisp/
asdf.lisp     dists       quicklisp   tmp
client-info.sexp  local-projects  setup.lisp

~/Face-mask-CE-marking-verifier$ ls .common-lisp/
And that's it!  Until the next time...

2021-11-03 (permalink)

TIL: You cannot use plain HTTP with a .dev domain; Google shipped the whole TLD in Chrome's HSTS preload list, forcing an HTTPS redirect! https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/

2021-11-02 (permalink)

* Add new page for `lg` to matteolandi.net

+ Common Lisp / Replit: if we install dependencies inside ./ instead of ~/quicklisp, would Replit cache them between restarts?

? The new version of Vlime (the one supporting JSON protocol) made the REPL editable, but it's a little bit cumbersome to interact with it -- you got to press enter while in insert mode, and if you leave insert mode early, that's it... no way to undo that

Today I woke up to realize that I could not deploy anything to my VPS anymore:
task path: /Users/matteolandi/Workspace/matteolandi.net/system/ansible/roles/matteolandi-index/tasks/main.yml:4
fatal: [matteolandi.net]: FAILED! => {"changed": false, "msg": "Error pulling matteolandi.net:5000/matteolandi-index - code: None message: unable to ping re
gistry endpoint https://matteolandi.net:5000/v0/\nv2 ping attempt failed with error: Get https://matteolandi.net:5000/v2/: x509: certificate has expired or
is not yet valid\n v1 ping attempt failed with error: Get https://matteolandi.net:5000/v1/_ping: x509: certificate has expired or is not yet valid"}
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

matteolandi.net            : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
Figured it had something to do with the recent [Let's Encrypt root certificate expiration](https://techcrunch.com/2021/09/21/lets-encrypt-root-expiry), so I went on and upgraded my OS certificates (yeah, that plus all the other packages I had not upgraded in a while):
yum update -y
Then I tried to re-deploy again; it did not succeed, but at least the error message was different:
task path: /Users/matteolandi/Workspace/matteolandi.net/system/ansible/roles/docker-login/tasks/main.yml:1
fatal: [matteolandi.net]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7)
or docker-py (Python 2.6)) on centos-512mb-fra1-01.localdomain's Python /usr/bin/python. Please read module documentation and install in the appropriate loc
ation. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpret
er, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: cannot import name certs"}
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

matteolandi.net            : ok=33   changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
First I upgraded `pip`:
pip install --upgrade pip
Then uninstalled a bunch of borked packages:
pip uninstall docker requests urllib3
And re-installed them again:
pip install --upgrade urllib3 requests docker
And after this, it all went back to normal... Crisis averted!

2021-10-31 (permalink)

* While releasing `cg`, the created GH release is named `refs/tags/...` -- turns out I was setting the `name` input to `github.ref`: https://github.com/iamFIREcracker/cg/commit/c0c3ef5b17be40872c67709f445dcbc66c1936c2

+ migrate `ap` to GitHub actions -- see `cg`

+ migrate `adventofcode` to GitHub actions -- see `cg`

+ migrate `lg` to GitHub actions -- see `cg`

? migrate `plan-convert` to GitHub actions -- see `cg`

? migrate `xml-emitter` to GitHub actions -- see `cg`

2021-10-29 (permalink)

Abusing Common Lisp feature expressions to comment things out! (a follow up) @vim @commonlisp

I posted a link to my previous [.plan entry](https://matteolandi.net/plan.html#day-2021-10-23) on [/r/Common_Lisp](https://www.reddit.com/r/Common_Lisp/comments/qfgci7/abusing_common_lisp_feature_expressions_to/) the other day, and a couple of things came out.

First off, an evil person could add `:nil` to *FEATURES* and all hell would break loose:
#+nil is not the best way of doing this because, in theory, an evil person can put :nil in *features* and then all hell will break lose. Use #+(or) instead, which is immune to this sort of trickery.
Uh oh...
> (or #+nil "WTF?!" "Hello, world!")
"Hello, world!"

> (pushnew :nil *features*)

> (or #+nil "WTF?!" "Hello, world!")
"WTF?!"

> (setf *features* (delete :nil *features*))
Then someone else suggested to use [uninterned symbols](http://www.lispworks.com/documentation/HyperSpec/Body/02_dhe.htm) instead, as that will not only keep you safe from the previously mentioned problem, but also give you the opportunity to _document_ why the specific form / element got intentionally excluded:
I use #+#:buggy or any other word to give a tiny piece of documentation as to why it’s commented out. Sometimes I’ll write #+#:pedagogical-implementation to provide a pedagogical, easy-to-understand version of an expression that has been made obscure through optimizations.
Again, let's try this out
> (or #+#:excluded "WTF?!" "Hello, world!")
"Hello, world!"

> (pushnew #:excluded *features*)
;     (LET* ((#:ITEM #:EXCLUDED))
; caught WARNING:
;   undefined variable: #:EXCLUDED
; compilation unit finished
;   Undefined variable:
;     #:EXCLUDED
;   caught 1 WARNING condition

debugger invoked on a UNBOUND-VARIABLE @5351833B in thread
#<THREAD "main thread" RUNNING {1001548143}>:
  The variable #:EXCLUDED is unbound.
restarts (invokable by number or by possibly-abbreviated name):
  0: [CONTINUE   ] Retry using #:EXCLUDED.
  1: [USE-VALUE  ] Use specified value.
  2: [STORE-VALUE] Set specified value and use it.
  3: [ABORT      ] Exit debugger, returning to top level.

((LAMBDA ()))
   source: (LET* ((#:ITEM #:EXCLUDED))
0] 3

> (pushnew :excluded *features*)

> (or #+#:excluded "WTF?!" "Hello, world!")
"Hello, world!"

> (setf *features* (delete :excluded *features*))
Alright then, I think I like the _uninterned symbol_ approach better, so I am going to update my Vim files as follows:
diff --git a/.vim/vimrc b/.vim/vimrc
index 69c5db4..6ca6151 100644
--- a/.vim/vimrc
+++ b/.vim/vimrc
diff --git a/.vim/after/syntax/lisp.vim b/.vim/after/syntax/lisp.vim
index 5a2859c..cdf3c03 100644
--- a/.vim/after/syntax/lisp.vim
+++ b/.vim/after/syntax/lisp.vim
@@ -855,7 +921,7 @@ function! s:vim_sexp_mappings() " {{{
         " Check if it's a #+nil marker
         execute "normal \<Plug>(sexp_move_to_prev_element_head)"
         normal yW
-        if @@ =~? "#+nil"
+        if @@ =~? "#+#:"
             let should_add_comment = 0
         let @@ = reg_save
@@ -864,7 +930,7 @@ function! s:vim_sexp_mappings() " {{{
         " Do the needed
         if should_add_comment
             " Insert the comment marker, and move back to the element / form
-            execute "normal! i#+nil "
+            execute "normal! i#+#:excluded "
             execute "normal \<Plug>(sexp_move_to_next_element_head)"
             " Move back, and the delete forward
diff --git a/.vim/after/syntax/lisp.vim b/.vim/after/syntax/lisp.vim
index 5a2859c..cdf3c03 100644
--- a/.vim/after/syntax/lisp.vim
+++ b/.vim/after/syntax/lisp.vim
@@ -11,3 +11,11 @@ syn cluster lispBaseListCluster add=lispExcludedForm
 syntax match lispExcludedElement /\v\#\+nil [^(][^ ]+/
 highlight link lispExcludedElement lispComment
 syn cluster lispBaseListCluster add=lispExcludedElement
+syntax match lispExcludedFormWithDescription /\v\#\+\#:[^ ]+ \([^)]+\)/
+highlight link lispExcludedFormWithDescription lispComment
+syn cluster lispBaseListCluster add=lispExcludedFormWithDescription
+syntax match lispExcludedElementWithDescription /\v\#\+\#:[^ ]+ [^(][^ ]+/
+highlight link lispExcludedElementWithDescription lispComment
+syn cluster lispBaseListCluster add=lispExcludedElementWithDescription
Right on!

2021-10-27 (permalink)

? @read take a look at how mongosh implemented _awaiting_ for promises without the explicit `await` statement -- https://github.com/mongodb-js/mongosh/tree/main/packages/async-rewriter2#next-gen-async-rewriter

? CL/1AM: push tests into a custom special variable, say *MY-TESTS*, and bind 1AM:*TESTS* to *MY-TESTS* when actually running the tests (this way only the current project's tests will be executed)

2021-10-23 (permalink)

Abusing Common Lisp feature expressions to comment things out! @vim @commonlisp

Today I realized that I can easily comment out a form by placing a `#+nil` in front of it, and this turned out to be quite useful and productive, especially when trying things out at the REPL.

Let's take a look at an example (this was taken from a [recent](https://github.com/iamFIREcracker/cl-face-mask-ce-verifier) small project of mine):
(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q (fmcv:nb-search q))))
    (with-page (:title "Face Mask CE Verifier")
The bit to focus your attention on is the second line, which, in words, pretty much translates to:
when `q`, the request query string, is non-NIL, call FMCV:NB-SEARCH with it and assign the result to `bodies`; otherwise, initialize `bodies` to NIL
Now, while working on this, I realized FMCV:NB-SEARCH could sometimes take up to 3 seconds to return, and since there wasn't much I could do about it (i.e. it was the external service that the function was calling behind the scenes which was taking so long to reply), I decided to make the whole development experience a tiny bit more enjoyable by introducing a little bit of caching:

- call the API once
- save the response somewhere
- change the logic to read from the cached response until I was done with everything else
(defvar *cached-bodies* (fmcv:nb-search "00"))

(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q *cached-bodies*)))
    (with-page (:title "Face Mask CE Verifier")
The above just works, 100%; however, it also makes it difficult to quickly switch between implementations.  For example, should I want to hit the external API again, I would have to put the FMCV:NB-SEARCH function call back in, and that would be a lot of typing /s

This is where [feature expressions](http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec/Body/sec_24-1-2-1.html) could come to the rescue:

- Keep both *cached-bodies* and the FMCV:NB-SEARCH call in place
- Want to use the cached response?  Place a `#+nil` in front of the FMCV:NB-SEARCH call
(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q *cached-bodies* #+nil (fmcv:nb-search q))))
    (with-page (:title "Face Mask CE Verifier")
- Want to hit the external service instead?  Place a `#+nil` in front of *cached-bodies*
(hunchentoot:define-easy-handler (index :uri "/") (q)
  (let ((bodies (and q #+nil *cached-bodies* (fmcv:nb-search q))))
    (with-page (:title "Face Mask CE Verifier")
Note: `#+` is a reader macro that operates on the next two expressions found in the stream: the first is a _feature expression_, the second an ordinary form.  If the feature expression is satisfied (roughly: the named feature is present in *FEATURES*), the reader reads the second expression normally; otherwise it skips it entirely.  Since the NIL feature is (normally) never present in *FEATURES*, we are basically telling the reader to always swallow the expression that follows!

Pretty cool uh?!

_All the Vim haters should stop reading here_

I was so excited by this that I ended up creating two Vim mappings to automate this:

- `gce` to add (or remove) a `#+nil` in front of the current element (_element_ in the [vim-sexp](https://github.com/guns/vim-sexp) sense)
- `gcf` to add (or remove) a `#+nil` in front of the current form
" Comment out the current form / element
function! ToggleCommonLispComment(toggle_form) abort "{{{
    " Move at the start of the current element / form; if on an open paren
    " already, do nothing
    if getline('.')[col('.') - 1] != "("
        let motion = ""
        if a:toggle_form
            let motion = "\<Plug>(sexp_move_to_prev_bracket)"
        else
            let motion = "\<Plug>(sexp_move_to_prev_element_head)"
        endif
        execute "normal " . motion
    endif

    " Figure out if we need to add a comment, or remove the existing one
    let should_add_comment = 1
    let save_cursor = getcurpos()
    let reg_save = @@
    " Move back to the previous element
    " Yank it
    " Check if it's a #+nil marker
    execute "normal \<Plug>(sexp_move_to_prev_element_head)"
    normal yW
    if @@ =~? "#+nil"
        let should_add_comment = 0
    endif
    let @@ = reg_save
    call setpos('.', save_cursor)

    " Do the needed
    if should_add_comment
        " Insert the comment marker, and move back to the element / form
        execute "normal! i#+nil "
        execute "normal \<Plug>(sexp_move_to_next_element_head)"
    else
        " Move back, and then delete forward
        execute "normal \<Plug>(sexp_move_to_prev_element_head)"
        normal dW
    endif
endfunction
" }}}
nmap <silent><buffer>   gce  :call ToggleCommonLispComment(0)<CR>
nmap <silent><buffer>   gcf  :call ToggleCommonLispComment(1)<CR>
It's a bit hack-y to be honest, but whatever!

Also, you can even change the relevant syntax file (e.g. ~/.vim/after/syntax/lisp.vim) and make these regions look like comments:
syntax match lispExcludedForm /\v\#\+nil \([^)]+\)/
highlight link lispExcludedForm lispComment
syn cluster lispBaseListCluster add=lispExcludedForm

syntax match lispExcludedElement /\v\#\+nil [^(][^ ]+/
highlight link lispExcludedElement lispComment
syn cluster lispBaseListCluster add=lispExcludedElement
What a joy!

Until next time...

2021-10-22 (permalink)

? Can I get a Replit instance to open an SSH connection to my VPS, so that I can then use my VPS as a jump server to connect back into the REPL?

2021-10-20 (permalink)

How to verify the CE marking on your face mask?!

First off, what does the CE marking mean? From [Wikipedia](https://en.wikipedia.org/wiki/CE_marking):
The CE mark on a product indicates that the manufacturer or importer of that product affirms its compliance with the relevant EU legislation and the product may be sold anywhere in the European Economic Area (EEA). It is a criminal offence to affix a CE mark to a product that is not compliant or offer it for sale.
It continues:
The marking does not indicate EEA manufacture or that the EU or another authority has approved a product as safe or conformant. The EU requirements may include safety, health, and environmental protection. If stipulated in any EU product legislation, assessment by a Notified Body or manufacture according to a certified production quality system may be required. Where relevant, the CE mark is followed by the registration number of the notified body involved in conformity assessment.
So, for example, when you read `CE-2233` on a product, `CE` represents the CE marking (duh), while `2233` refers to the ID of the notified body involved in conformity assessment.
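To make the format explicit, splitting a marking like `CE-2233` into its two parts could be sketched as follows (a hypothetical helper, not part of any real library):

```javascript
// Split a CE marking such as "CE-2233" into its two parts:
// the marking itself, and the notified body ID.
// (Hypothetical helper, only meant to make the format explicit.)
function parseCEMarking(marking) {
  const match = marking.match(/^CE-(\d+)$/);
  if (!match) return null;
  return { marking: 'CE', notifiedBodyId: match[1] };
}

parseCEMarking('CE-2233'); // → { marking: 'CE', notifiedBodyId: '2233' }
```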

Now off to the verification part.

Note: the steps listed below will help you verify that the notified body involved with the assessment is..._legit_, i.e. that it can actually issue the specific conformity assessment certificate; however, they won't help with verifying that the specified notified body had indeed run a conformity assessment on the given product (i.e. the product seller could lie about this; but as the Wikipedia article states, affixing a CE mark to a non-compliant product is a criminal offence, and as such punishable by law).

- To sell face masks in the EU, you need the CE marking
- For this matter, notified bodies need to be competent with the '2016/425
  Personal protective equipment' legislation
- They also need to be entitled to issue specific conformity assessment modules for 'Equipment providing respiratory system protection' products (e.g. face masks)
- Luckily for us, an online database exists,
  [NANDO](https://ec.europa.eu/growth/tools-databases/nando/index.cfm), storing
  all this information about all the registered bodies

So, without further ado, navigate to: [Nando](https://ec.europa.eu/growth/tools-databases/nando/index.cfm) > [Free search](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=search.main).

Take the CE numerical value (i.e. the `2233` part of `CE-2233`), populate the `Keyword On Notified body number` field with it, and press the `Search` button.

Now `Ctrl+F` for the CE numerical value (i.e. the `2233` part of `CE-2233`), and if you are lucky enough, you should find a match inside the `Body type` column:
NB 2233 [GÉPTESZT Termelőeszközöket Felülvizsgáló és Karbantartó Kft.](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=directive.nb&body_type=NB&refe_cd=NANDO_INPUT_166341) Hungary
Click on the link in the second column to open up the page listing addresses, contact details, and all the legislations that the notified body has scoped to assess and certify against.
Body :
GÉPTESZT Termelőeszközöket Felülvizsgáló és Karbantartó Kft.
Jablonka u. 79.
1037 Budapest
Country : Hungary
Phone : +36 1 250 3531
Fax : +36 1 250 3531
Email : gepteszt@gepteszt.hu
Website : www.gepteszt.hu
Notified Body number : 2233
Last approval date : 13/04/2010
- Regulation (EU) 2016/425 Personal protective equipment [HTML](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=notification.html&ntf_id=309049&version_no=1) [PDF](https://ec.europa.eu/growth/tools-databases/nando/index.cfm?fuseaction=notification.pdf&dir_id=155501&ntf_id=309049)
Search for the `Personal protective equipment` entry inside the Legislations table, and click on the link labeled with `HTML`.  On this page you will find all the information the notified body has been certified for.

You should first confirm that there is an entry for `Equipment providing respiratory system protection`; without this, even a notified body certified for PPE cannot engage in CE certification activities for protective masks.  Then, you will have to confirm the "conformity assessment" modules that the notified body is entitled to issue; in particular, we are looking for:

- [Module B: EU type-examination](https://support.ce-check.eu/hc/en-us/articles/360019298431-Conformity-Assessment-Module-B)
- [Module C2: Supervised product checks at random intervals](https://support.ce-check.eu/hc/en-us/articles/360019507611-Conformity-Assessment-Module-C2-)
- [Module D: Quality assurance of the production process](https://support.ce-check.eu/hc/en-us/articles/360019191712-Conformity-Assessment-Modules-D-D1)

Note: manufacturers can only legally enter the EU market after obtaining a Module B + Module C2 certificate, or a Module B + Module D certificate
- Equipment providing respiratory system protection
- Protective Equipment against falls from heights
- EU type-examination
- Quality assurance of the production process
- Supervised product checks at random intervals
Articles / Annexes
- Annex V
- Annex VIII
- Annex VII
By the looks of the information shown above, it appears that this notified body has all it takes to properly run conformity assessments against face masks...Hurray!


- [Guide to using NANDO website to identify Notified Bodies PPE](https://yewtu.be/watch?v=ceCAceuDOhE)
- [Mascherine Ffp2, ecco i codici certificati CE non a norma e lo strumento di verifica Ue](https://quifinanza.it/info-utili/video/mascherine-ffp2-covid-codici-certificati-ce-non-a-norma/471686/)
- [How to Verify CE Certification?](https://verifyfull.com/how-to-verify-ce-certification%EF%BC%9F/)

2021-10-15 (permalink)

Today I fucking figured out how to make Vim's * / # not suck! @vim
" Star search plugin {{{
" Few things that this plugin does, and that others might not:
" 1. Works in both normal, and visual mode
" 2. Plays nice with smartcase search -- vim's builtin * / # mappings do not
"    support that
" 3. Does not automatically jump to the _first_ next occurrence; instead, moves
"    the cursor to the beginning of the current occurrence so that if for
"    whatever reason you decided to swap search direction with `N`, you would
"    be taken to the first previous occurrence (and not to the beginning of the
"    current word)
" Normal mode:
" - Saves the word under the cursor -- running `expand('<cword>')` while
"   searching actually causes the cursor to move around one char forward, and
"   if you happened to start from the end of the word, that
"   'move-one-char-ahead' behavior would actually move the cursor over to the
"   next word, causing `expand('<cword>')` to return something unexpected
"   instead
" - Saves the current view (so we can restore it later)
" - Triggers the word-boundary-d search
" - Restores the previous view -- effectively undoing the first jump
" - Checks if the cursor wasn't already at the beginning of a match, and if
"   not, runs `b` to move to the beginning of the current word
nnoremap <Plug>NormalStarSearchForward
            \ :let star_search_cword = expand('<cword>')<CR>
            \ :let star_search_view = winsaveview()<CR>
            \ /\<<C-R>=escape(star_search_cword, '*\')<CR>\><CR>
            \ :call winrestview(star_search_view)<CR>
            \ :execute "normal! " . (getline('.')[col('.') - 1:] !~# '^' . star_search_cword ? "b" : "")<CR>
nnoremap <Plug>NormalStarSearchBackward
            \ :let star_search_cword = expand('<cword>')<CR>
            \ :let star_search_view = winsaveview()<CR>
            \ ?\<<C-R>=escape(star_search_cword, '*\')<CR>\><CR>
            \ :call winrestview(star_search_view)<CR>
            \ :execute "normal! " . (getline('.')[col('.') - 1:] !~# '^' . star_search_cword ? "b" : "")<CR>
nmap * <Plug>NormalStarSearchForward
nmap # <Plug>NormalStarSearchBackward


" Visual mode:
" - Saves the current view (so we can restore it later)
" - Triggers the search -- without word-boundaries; you can use the normal
"   mode alternative for that
" - Restores the previous view -- effectively undoing the first jump
xnoremap <Plug>VisualStarSearchForward
            \ :<C-U>let star_search_view = winsaveview()<CR>
            \ /<C-R>=escape(GetCurrentSelection(visualmode()), '*\')<CR><CR>
            \ :call winrestview(star_search_view)<CR>
xnoremap <Plug>VisualStarSearchBackward
            \ :<C-U>let star_search_view = winsaveview()<CR>
            \ ?<C-R>=escape(GetCurrentSelection(visualmode()), '*\')<CR><CR>
            \ :call winrestview(star_search_view)<CR>
xmap * <Plug>VisualStarSearchForward
xmap # <Plug>VisualStarSearchBackward

" }}}
There is a lot of copy-pasta in there, but it gets the job done and given the amount of time I have already spent on this today, or in the past even, I think it's time to move on!

? git: switch back to per-branch stashes, but register a pre-push hook to abort if any unpushed commit is tagged as SAVEPOINT -- https://gist.github.com/GUI/676bcb25389cd01d47828fddcf37e1f2

+ change Vim/lisp setup so that the REPL is actually started in a different terminal (via Dispatch maybe) and does not take real estate

2021-10-07 (permalink)

? @A @read [Making a Game in Janet, Part 2: Judging Janet](https://ianthehenry.com/posts/janet-game/judging-janet/) -- self-modifying tests, [unified tests](https://www.mercurial-scm.org/wiki/WritingTests), [expect tests](https://blog.janestreet.com/testing-with-expectations/)

? @A @read [Clojure unique way of debugging](https://www.reddit.com/r/Clojure/comments/q31wpo/clojure_unique_way_of_debugging/) -- environment, capture, replay

2021-10-06 (permalink)

* Added GitHub Actions to `cg` -- bye bye TravisCI -- [cg/pull/8](https://github.com/iamFIREcracker/cg/pull/8)

+ See if it's possible to enhance [40ants/setup-lisp](https://github.com/40ants/setup-lisp) to support Windows

2021-09-30 (permalink)

`ScratchThis`, a Vim command to make the current buffer a `scratch` one (see `:h special-buffers` for additional info):
function! s:ScratchThis() abort
    setlocal buftype=nowrite bufhidden=delete noswapfile
endfunction

command! ScratchThis call s:ScratchThis()

2021-09-15 (permalink)

Nice little vim/fugitive gem: use `!` to open a command prompt with the file under the cursor.
nnoremap ! :! <C-R><C-F><Home><Right>
It's very handy if you want to delete an untracked file (see https://github.com/tpope/vim-fugitive/issues/23).

netrw comes with an identical mapping too!

2021-09-13 (permalink)

* finished reading [Tools for Thought: The History and Future of Mind-Expanding Technology](https://www.amazon.com/Tools-Thought-History-Mind-Expanding-Technology/dp/0262681153/ref=sr_1_1?dchild=1&keywords=tools+for+thought&qid=1631599675&s=books&sr=1-1)

2021-09-10 (permalink)

? book: The Dream Machine https://www.amazon.com/Dream-Machine-M-Mitchell-Waldrop/dp/1732265119

2021-09-08 (permalink)

Notes about reverse engineering Basecamp's Recording pattern

Before we begin: I never used RoR nor programmed in Ruby, so apologies if some of the things mentioned are inaccurate or completely wrong; I simply got interested in the pattern, and tried my best to understand how it could have been implemented.  So bear with me, and of course feel free to suggest edits.

Main concepts:

- Buckets are collections of records, i.e. _recordings_, having different types e.g. `Todo`, `Todoset`
- Recordings are just pointers, pointing to the actual data record, i.e. the _recordables_
- Recordables are immutable records, i.e. every time something changes, a new _recordable_ record is created and the relevant recording is updated to point to it
- Events are created to link together 2 recordables (the current version and the previous one), the recording, the bucket, and the author of the change
- This enables data versioning
- This enables data auditing (at the recording level, at the bucket level, and at the user level)

It all started with this [tweet](https://twitter.com/dhh/status/962156053461843968?lang=en):
Probably the single most powerful pattern in BC3 ❤️. Really should write it up at one point.
This is what a Basecamp URL looks like: `/1234567890/projects/12345/todos/67890`

In it:

- `1234567890` is the account ID -- it identifies a company's instance, it represents a tenant
- `12345` is the bucket ID, pointing to a record whose `bucketable_type` is `Project`
- `67890` is the recording ID, pointing to a record whose `recordable_type` is `Todo`

Bucket is not a project, but links to one (project is the _bucketable_ entity):
class Bucket < ActiveRecord::Base
  belongs_to :account
  delegated_type :bucketable, types: %w[ Project ] # this adds bucketable_id, bucketable_type to the schema ... or `belongs_to :bucketable`
  has_many :recordings, dependent: :destroy
end
Recording is not a todo, but links to one (todo is the _recordable_ entity):
class Recording < ActiveRecord::Base
  belongs_to :bucket, touch: true
  belongs_to :creator, class_name: 'Person', default: -> { Current.person }
  delegated_type :recordable, types: %w[ Todo Todolist Todoset Dock ] # this adds recordable_id and recordable_type to the schema

  delegate :account, to: :bucket
end

module Recordable
  extend ActiveSupport::Concern
  included do
    has_many :recordings, as: :recordable # same as specified inside `Recording` as `delegated_type`
  end
end
Recordables are immutable, and do not usually have all the created-at, updated-at metadata...recordings do!

Recordings have parents, and children; it almost becomes a graph database, with a tree structure:

- recording.recordable_type: Todo
- recording.parent.recordable_type: Todolist
- recording.parent.parent.recordable_type: Todoset
- recording.parent.parent.parent.recordable_type: Dock
- recording.parent.parent.parent.children.map(&:recordable_type): Chat::Transcript, Message::Board, Todoset, Schedule, Questionnaire, Vault

The above, i.e. the tree structure, is most likely implemented via the `Tree` concern:
module Recording::Tree
  extend ActiveSupport::Concern

  included do
    belongs_to :parent, class_name: 'Recording', optional: true
    has_many :children, class_name: 'Recording', foreign_key: :parent_id
  end
end
Note: I don't claim the above to be correct, but it should get the job done; I question whether it would make sense to add a _type_ field to this parent/child relation, but maybe it's not strictly required (i.e. you always know that a todo's parent is a todolist).

As stated above, recordings are pointers, almost like symlinks: when you update a todo, you don't actually update the record; instead, you create a new one, and update the recording to point to it.  Since URLs are all keyed accordingly (i.e. they use Recording IDs, and not Recordable ones), the URL does not change, but the actual todo the recording points to does.
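These pointer mechanics can be sketched with a few plain objects (a toy JavaScript model; all names are made up, and this is of course nothing like Basecamp's actual implementation):

```javascript
// Toy sketch of the Recording pattern: recordables are immutable,
// recordings are mutable pointers to them. All names are made up.
let nextId = 1;
const makeRecordable = (data) => Object.freeze({ id: nextId++, ...data });
const makeRecording = (recordable) => ({ id: nextId++, recordable });

const v1 = makeRecordable({ type: 'Todo', content: 'Buy milk' });
const recording = makeRecording(v1);

// "Updating" the todo: create a new immutable recordable and repoint
// the recording -- the recording's ID (and the URL keyed on it) never changes.
const v2 = makeRecordable({ type: 'Todo', content: 'Buy oat milk' });
recording.recordable = v2;

// "Copying" to another project: just a second recording pointing to the
// very same recordable -- no data is duplicated.
const copy = makeRecording(recording.recordable);
console.log(copy.recordable === recording.recordable); // true
```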

Controllers then look something like this:
class TodosController < ApplicationController
  def create
    @recording = @bucket.record new_todo
  end

  def update
    @recording.update! recordable: new_todo
  end

  private

    def new_todo
      Todo.new params.require(:todo).permit(:content, :description, :starts_on, :due_on)
    end
end
And this is what `Bucket.record` looks like:
class Bucket < ActiveRecord::Base
  def record(recordable, children: nil, parent: nil, status: :active, creator: Current.person, **options)
    transaction do
      options.merge!(recordable: recordable, parent: parent, status: status, creator: creator)

      recordings.create!(options).tap do |recording|
        Array(children).each do |child|
          record child, parent: recording, status: status, creator: creator
        end
      end
    end
  end
end
This makes things like "Copy message to a new project" way more efficient: previously you had to set up a job that literally copied the message, the comments, and the events onto the new project; now, all you have to do is set up a new recording pointing to the existing _immutable_ recordable.

For example:

- Project has messages -- messages are recordables
- Messages have comments -- comments are recordables
- When you copy a message from one project to another, you will have to create a recording not just for the message to be copied, but for all its existing comments as well (that's what `children` inside `Bucket.record` is for)

Note: `children` inside `Bucket.record` is expected to be a recordable, and not a recording; this means we will have to pass in all the child recordables, e.g. `message.recording.children.map(&:recordable)`.

Note: `Bucket.record` (at least the version above, taken from [the presentation](https://www.youtube.com/watch?v=tmWlm0M6JSk&t=3060s)) does not seem to recurse, which means you can only create recordings for the parent entity and its direct children, i.e. messages and comments, but not `Todoset`s, `Todolist`s _and_ `Todo`s.

Note: is this more efficient, really?  Even though you will not have to copy the immutable records (i.e. the recordables), you will still have to recreate the same recordings tree, right?

After you copied a recording, you should end up with something like this:
>> Recording.find(111111111).recordable.id
=> 33333333333333

>> Recording.find(222222222).recordable.id
=> 33333333333333
Again, two pointers, i.e. recordings, pointing to the same immutable record, i.e. the recordable.

Note: a recordable has many recordings, not one; this is because the same recordable object can be pointed to by two different recordings.

This Recording pattern works in tandem with `Event`s:

- They link to the newer version of the recordable, and the previous one
- They belong to a `Recording`: this way you can easily see all the times a recording was updated
- They belong to a `Bucket`: this way you can easily see all the times a bucket was updated
- They belong to a `Person`: this way you can easily see all the changes done by one user
- They have one `Detail`, i.e. a hash containing some additional metadata linked to the current event
class Event < ActiveRecord::Base
  belongs_to :recording, required: false
  belongs_to :bucket
  belongs_to :creator, class_name: 'Person'

  has_one :detail, dependent: :delete, required: true

  include Notifiable...Requested

and with the `Eventable` concern:
module Recording::Eventable
  extend ActiveSupport::Concern

  included do
    has_many :events, dependent: :destroy

    after_create :track_created
    after_update :track_updated
    after_update_commit :forget_adoption_tracking, :forget_events
  end

  def track_event(action, recordable_previous: nil, **particulars)
    Event.create! \
      recording: self, recordable: recordable, recordable_previous: recordable_previous,
      bucket: bucket, creator: Current.person, action: action,
      detail: Event::Detail.new(particulars)
  end
end
`track_event` here is the generic method, but more specific ones exist as well for common use cases, so we don't have to manually plumb arguments.

We track an event every single time an eventable is created, or updated:
    after_create :track_created
    after_update :track_updated
The private method `track_created` simply delegates to `track_event`:
    def track_created
      track_event :created, status_was: status
    end
A recordable has many events; again, the same recordable could be referenced by multiple recordings, and since events are created when recordings are created, that's how you end up with multiple events.

OK, great, nice, but what schema would enable such a pattern?  Again, not 100% accurate, but hopefully close enough:

- Buckets (id, bucketable_type, bucketable_id, ...)
- Recordings (id, bucket_id, recordable_type, recordable_id, parent_id, ...)
- Events (id, bucket_id, recording_id, recordable_id, previous_recordable_id, user_id, ...)

Note: recordables are _untyped_ here, but the type information is actually one join-table away, i.e. it's available inside the linked recording

- Projects (id, ...)
- Messages (id, ...)

And this seems to be it really.

Note: this pattern heavily relies on Rails delegated types, i.e. sugar on top of polymorphic associations (see [rails/rails#39341](https://github.com/rails/rails/pull/39341)), to automatically traverse the recording table to get to the recordable entry.  My first thought was: how will this perform?  How is the extra join operation required to fetch any linked entity going to affect performance?  If it's working for Basecamp, hopefully it's working decently enough for other applications as well, but I guess it also depends on how much the different entities are linked together.

Note: in the end, this is not that different from 'time-based versioning' (read: [Keeping track of graph changes using temporal versioning](https://medium.com/neo4j/keeping-track-of-graph-changes-using-temporal-versioning-3b0f854536fa)), with recordings being the _identity_ nodes, and recordables being the _state_ objects.

Some references:

- [Basecamp 3 Unraveled](https://www.youtube.com/watch?v=tmWlm0M6JSk&t=3060s) -- where an overview of the Bucket/Recording pattern is given
- [On Writing Software Well #3: Using globals when the price is right](https://www.youtube.com/watch?v=D7zUOtlpUPw&t=653s) -- where `Eventable` concern, and the `Event` model are shown
- [On Writing Software Well #5: Testing without test damage or excessive isolation](https://www.youtube.com/watch?v=5hN6OZDyQtk&t=419s) -- where the `Recording` model is shown
- [GoRails: Recording pattern (Basecamp 3)](https://gorails.com/forum/recording-pattern-basecamp-3)

2021-09-06 (permalink)

* `gem install`-d all the things again. I guess the latest `brew upgrade` that I ran last week might have broken a few things (and while at it, I also `rvm install --default ruby-3.0.0`)

? implement rainbow-blocks for Vim (see: https://twitter.com/lisperati/status/1434114082903367681). A simple hack would be to define 5/6 syntax rules for `(...)` blocks and configure them so that they can only exist within an outer block, i.e. block1, block2 only inside block1, block3 only inside block2.

2021-08-30 (permalink)

* finished reading [Grokking Simplicity](https://www.amazon.it/Grokking-Simplicity-Software-Functional-Thinking/dp/1617296201/ref=sr_1_1?__mk_it_IT=%C3%85M%C3%85%C5%BD%C3%95%C3%91&dchild=1&keywords=grokking+simplicity&qid=1630331790&sr=8-1)

Grokking Simplicity notes

This page of the book says it all:
We have learned the skills of professionals
Since we're at the end of the book, let's look back at how far we've come and do a high-level listing of the skills we've learned. These are the skills of professional functional programmers. They have been chosen for their power and depth.
Part 1: Actions, Calculations, and Data:
- Identifying the most problematic parts of your code by distinguishing actions, calculations, and data
- Making your code more reusable and testable by extracting calculations from actions
- Improving the design of actions by replacing implicit inputs and outputs with explicit ones
- Implementing immutability to make reading data into a calculation
- Organizing and improving code with stratified design
Part 2: First-class abstractions
- Making syntactic operations first-class so that they can be abstracted in code
- Reasoning at a higher level using functional iteration and other functional tools
- Chaining functional tools into data transformation pipelines
- Understanding distributed and concurrent systems by timeline diagrams
- Manipulating timelines to eliminate bugs
- Mutating state safely with higher-order functions
- Using reactive architecture to reduce coupling between causes and effects
- Applying the onion architecture (Interaction, Domain, Language) to design services that interact with the world
Here is the list of primitives presented in the book for working with timelines.

Serialize tasks with `Queue`:
function Queue(worker) {
  var queue_items = [];
  var working = false;

  function runNext() {
    if(working)
      return;
    if(queue_items.length === 0)
      return;
    working = true;
    var item = queue_items.shift();
    worker(item.data, function(val) {
      working = false;
      setTimeout(item.callback, 0, val);
      runNext();
    });
  }

  return function(data, callback) {
    queue_items.push({
      data: data,
      callback: callback || function(){}
    });
    setTimeout(runNext, 0);
  };
}

function calc_cart_worker(cart, done) {
  calc_cart_total(cart, function(total) {
    update_total_dom(total);
    done(total);
  });
}

var update_total_queue = Queue(calc_cart_worker);
Serialize tasks and skip duplicate work with `DroppingQueue`:
function DroppingQueue(max, worker) {
  var queue_items = [];
  var working = false;

  function runNext() {
    if(working)
      return;
    if(queue_items.length === 0)
      return;
    working = true;
    var item = queue_items.shift();
    worker(item.data, function(val) {
      working = false;
      setTimeout(item.callback, 0, val);
      runNext();
    });
  }

  return function(data, callback) {
    queue_items.push({
      data: data,
      callback: callback || function(){}
    });
    while(queue_items.length > max)
      queue_items.shift();
    setTimeout(runNext, 0);
  };
}

function calc_cart_worker(cart, done) {
  calc_cart_total(cart, function(total) {
    update_total_dom(total);
    done(total);
  });
}

var update_total_queue = DroppingQueue(1, calc_cart_worker);
Synchronize (i.e. cut) multiple timelines with `Cut`:
function Cut(num, callback) {
  var num_finished = 0;
  return function() {
    num_finished += 1;
    if(num_finished === num)
      callback();
  };
}

var done = Cut(3, function() {
  console.log("3 timelines are finished");
});

done();
done();
done(); // only this one logs
Make sure a function is called only once, with `JustOnce`:
function JustOnce(action) {
  var alreadyCalled = false;
  return function(a, b, c) {
    if(alreadyCalled) return;
    alreadyCalled = true;
    return action(a, b, c);
  };
}

function sendAddToCartText(number) {
  sendTextAjax(number, "Thanks for adding something to your cart. Reply if you have any questions!");
}

var sendAddToCartTextOnce = JustOnce(sendAddToCartText);

sendAddToCartTextOnce("555-555-5555-55"); // only this one logs
sendAddToCartTextOnce("555-555-5555-55");
sendAddToCartTextOnce("555-555-5555-55");
Encapsulate state inside _reactive_ `ValueCell`s:
function ValueCell(initialValue) {
  var currentValue = initialValue;
  var watchers = [];
  return {
    val: function() {
      return currentValue;
    },
    update: function(f) {
      var oldValue = currentValue;
      var newValue = f(oldValue);
      if(oldValue !== newValue) {
        currentValue = newValue;
        forEach(watchers, function(watcher) {
          watcher(newValue);
        });
      }
    },
    addWatcher: function(f) {
      watchers.push(f);
    }
  };
}

var shopping_cart = ValueCell({});

function add_item_to_cart(name, price) {
  var item = make_cart_item(name, price);
  shopping_cart.update(function(cart) {
    return add_item(cart, item);
  });
  var total = calc_total(shopping_cart.val());
  set_cart_total_dom(total);
}

Derived values can instead be implemented with `FormulaCell`s:
function FormulaCell(upstreamCell, f) {
  var myCell = ValueCell(f(upstreamCell.val()));
  upstreamCell.addWatcher(function(newUpstreamValue) {
    myCell.update(function(currentValue) {
      return f(newUpstreamValue);
    });
  });
  return {
    val: myCell.val,
    addWatcher: myCell.addWatcher
  };
}

var shopping_cart = ValueCell({});

var cart_total = FormulaCell(shopping_cart, calc_total);

function add_item_to_cart(name, price) {
  var item = make_cart_item(name, price);
  shopping_cart.update(function(cart) {
    return add_item(cart, item);
  });
}
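To see the derived cell recompute on its own, here is a self-contained run with a sum-of-prices function standing in for `calc_total` (the cart items and the sum are my own illustration):

```javascript
function ValueCell(initialValue) {
  let currentValue = initialValue;
  const watchers = [];
  return {
    val: function () { return currentValue; },
    update: function (f) {
      const oldValue = currentValue;
      const newValue = f(oldValue);
      if (oldValue !== newValue) {
        currentValue = newValue;
        watchers.forEach(function (w) { w(newValue); });
      }
    },
    addWatcher: function (f) { watchers.push(f); }
  };
}

function FormulaCell(upstreamCell, f) {
  const myCell = ValueCell(f(upstreamCell.val()));
  upstreamCell.addWatcher(function (newUpstreamValue) {
    myCell.update(function () { return f(newUpstreamValue); });
  });
  return { val: myCell.val, addWatcher: myCell.addWatcher };
}

// calc_total stand-in: sum the item prices
const cart = ValueCell([]);
const cartTotal = FormulaCell(cart, function (items) {
  return items.reduce(function (sum, item) { return sum + item.price; }, 0);
});

const totals = [];
cartTotal.addWatcher(function (t) { totals.push(t); });

cart.update(function (items) { return items.concat({ name: "shoes", price: 30 }); });
cart.update(function (items) { return items.concat({ name: "socks", price: 5 }); });
// cartTotal.val() === 35, and totals is [30, 35]
```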


Location sharing alternative to the Google built-in one @idea

- It would be fun
- You would not have to share data with Google (why would anyone share it with you instead? My friends and family probably would!)

Streamlining answering the donors questionnaire @idea

"Which cities have you visited during the last 4 weeks?"
"Which countries have you visited over the last 6 months?"

These are some of the questions donors get asked over and over again before they get cleared for the donation, and they always catch me off guard.  Can this information be automatically pulled from Google?

What about my wife's data instead: would I have access to it?  During COVID, most of the time it's not just about the places you have visited, but also the places the people you live with have visited...

2021-08-27 (permalink)

The power of Lisp is its own worst enemy.
[The Lisp Curse](http://www.winestockwebdesign.com/Essays/Lisp_Curse.html)

2021-08-23 (permalink)

* finished reading [The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture](https://www.amazon.com/Friendly-Orange-Glow-Untold-Cyberculture/dp/1101871555/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1629734304&sr=1-1)

2021-06-28 (permalink)

* finished reading [10 PRINT CHR$(205.5+RND(1)); : GOTO 10](https://www.amazon.it/dp/0262526743?tag=duc01-21&linkCode=osi&th=1&psc=1)

2021-06-19 (permalink)

Remote live coding a Clack application with Swank, and Ngrok

Today I would like to show you how to create a very simple CL Web application with Clack, and how to use Swank and [Ngrok](https://ngrok.com/) to enable remote [live coding](https://en.wikipedia.org/wiki/Live_coding).

Here is what we are going to wind up with:

- Clack application running on `localhost:5000` -- i.e. the Web application
- Swank server running on `localhost:4006` -- i.e. the "live coding" enabler
- Ngrok tunnel to remotely expose `localhost:4006` (just in case you did not have SSH access into the server the app is running on)

As usual, let's start off by naming all the systems we are depending on:

- `clack`, Web application environment for CL
- `ngrok`, CL wrapper for installing and running Ngrok
- `swank`, what enables remote "live coding"
- `uiop`, utilities that abstract over discrepancies between implementations

Now, `ngrok` has not been published to Quicklisp yet, so you will have to manually download it first:
$ git clone https://github.com/40ants/ngrok.git
$ sbcl --noinform
> ...
...and then make sure it is QL:QUICKLOAD-able:
(pushnew (merge-pathnames (parse-namestring "ngrok/"))
         asdf:*central-registry*
         :test #'equal)
Now you can go ahead and load all the required dependencies:
(ql:quickload '("clack" "ngrok" "swank" "uiop"))
Let's take a look at the MAIN function. It starts the Web application, the Swank server, and Ngrok:
(defun main ()
  (clack)
  (write-slime-secret)
  (swank)
  (ngrok))
For the Web application:

- We define a special variable, *HANDLER*, to hold the Clack Web application handler
- We start the app, bind it on `localhost:5000`, and upon receiving an incoming request we delegate to SRV
- Inside SRV, we delegate to APP (more to this later)
- Inside APP, we finally process the request and return a dummy message back to the client
(defvar *handler* nil)

(defun clack () (setf *handler* (clack:clackup #'srv :address "localhost" :port 5000)))

(defun srv (env) (app env))

(defun app (env)
  (declare (ignore env))
  '(200 (:content-type "text/plain") ("Hello, Clack!")))
You might be wondering: "why not invoke CLACK:CLACKUP with #'APP directly? Why the middle-man, SRV?"  Well, that's because Clack dispatches to whichever function object was passed in at the time CLACK:CLACKUP was called, so any subsequent re-definition of that function would not be picked up by the framework.  The solution? Add a level of indirection: so long as you never change SRV, only APP, you should be all right!
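The same late-binding trick can be sketched in JavaScript terms (the `clackup` stub below is a hypothetical stand-in for the framework, which simply captures whatever function it is handed):

```javascript
// What the framework does: capture the function object passed in at startup.
let registered;
function clackup(handler) { registered = handler; }

// Direct registration: the framework keeps the original function object,
// so redefining `app` afterwards changes nothing for it.
let app = function (env) { return "v1"; };
clackup(app);
app = function (env) { return "v2"; };
const direct = registered({}); // "v1" - the redefinition was NOT picked up

// With a srv middle-man that looks `app` up at call time, it is:
const srv = function (env) { return app(env); };
clackup(srv);
app = function (env) { return "v3"; };
const indirect = registered({}); // "v3"
```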

Let's now take a look at how to setup a Swank server.  First we ask the user for a secret with which to [secure](https://github.com/slime/slime/issues/286) the server (i.e. it will accept incoming connections only from those clients that do know such secret), then we write the secret onto ~/.slime-secret (yes, it's going to overwrite the existing file!), and last we actually start the server:
(defun getenv-or-readline (name)
  "Gets the value of the environment variable, or asks the user to provide
   a value for it."
  (or (uiop:getenv name)
      (progn
        (format *query-io* "~a=" name)
        (force-output *query-io*)
        (read-line *query-io*))))

(defvar *slime-secret* (getenv-or-readline "SLIME_SECRET"))
(defparameter *swank-port* 4006)

(defun swank ()
  (swank:create-server :port *swank-port* :dont-close t))

(defun write-slime-secret ()
  (with-open-file (stream "~/.slime-secret" :direction :output :if-exists :supersede)
    (write-string *slime-secret* stream)))
Last but not least, Ngrok: we read the authentication token, so Ngrok knows the account the tunnel needs to be associated with, and then we finally start the daemon by specifying the port the Swank server is listening to (as that's what we want Ngrok to create a tunnel for) and the authentication token:
(defvar *ngrok-auth-token* (getenv-or-readline "NGROK_AUTH_TOKEN"))

(defun ngrok () (ngrok:start *swank-port* :auth-token *ngrok-auth-token*))
Give the MAIN function a go, and you should be presented with a similar output:
> (main)
Hunchentoot server is started.
Listening on localhost:5000.
;; Swank started at port: 4006.
 <INFO> [10:13:43] ngrok setup.lisp (install-ngrok) -
  Ngrok already installed, changing authtoken
 <INFO> [10:13:44] ngrok setup.lisp (start) - Starting Ngrok on TCP port 4006
 <INFO> [10:13:44] ngrok setup.lisp (start) -
  Tunnel established! Connect to tcp://6.tcp.eu.ngrok.io:12321
Let's test the Web server:
$ curl http://localhost:5000
Hello, Clack!
It's working.  Now open Vim/Emacs, connect to the Swank server (i.e. host: `6.tcp.eu.ngrok.io`, port: `12321`), and change the Web handler to return something different:
(defun app (env)
  (declare (ignore env))
  '(200 (:content-type "text/plain") ("Hello, Matteo!")))
Hit the Web server again, and this time it should return a different message:
$ curl http://localhost:5000
Hello, Matteo!
Note how we did not have to bounce the application; all we had to do was re-define the request handler...that's it!

Happy remote live coding!

PS. The above is also available on [GitHub](https://github.com/iamFIREcracker/cl-clack-swank-ngrok), and on [Replit](https://replit.com/@iamFIREcracker/cl-clack-swank-ngrok).

2021-06-17 (permalink)

* Opened [cl-cookbook/pull/385](https://github.com/LispCookbook/cl-cookbook/pull/385): Don't wrap SWANK:CREATE-SERVER inside BT:MAKE-THREAD

2021-06-11 (permalink)

? That will never work: Netflix https://www.amazon.ca/That-Will-Never-Work-Netflix/dp/0316530204

? No Rules Rules: Netflix and the Culture of Reinvention https://www.amazon.ca/No-Rules-Netflix-Culture-Reinvention/dp/1984877860

? book: Designing Data-intensive applications https://www.amazon.ca/dp/1449373321/ref=cm_sw_r_wa_awdb_imm_CPN7ADPYR3CCM6EDZFMC

? book: Microservices Patterns https://www.amazon.ca/dp/1617294543?ref=ppx_pop_mob_ap_share

? book: Grokking algorithms https://www.amazon.ca/dp/1617292230/ref=cm_sw_r_wa_api_glt_i_FCXTQ5MH3YVCRVKBMHYR

2021-06-07 (permalink)

* finished reading [Object-Oriented Programming in COMMON LISP: A Programmer's Guide to CLOS](https://www.amazon.com/Object-Oriented-Programming-COMMON-LISP-Programmers/dp/0201175894)

2021-06-02 (permalink)

Web scraping for fun and profit^W for a registration to get my COVID-19 jab

Rumor says that in a few days everyone in Tuscany, irrespective of their age, could finally register for their first shot of the vaccine; that means "click day", i.e. continuously polling the registration website until your registration category is open.  I am sure a bird will eventually announce when the service will be open, i.e. the time of the day, so no need to stay on the lookout for the whole day, but I figured I could use this as an excuse to do a little bit of Web scraping in Common Lisp, so here it goes.

Dependencies first:
(ql:quickload "drakma")     ; to fire HTTP requests
(ql:quickload "local-time") ; to track of when the scraping last happened
(ql:quickload "st-json")    ; well...you guessed it
(ql:quickload "uiop")       ; read env variables
The next step is writing our MAIN loop:

- Wait for categories to be activated: new categories can be added, or existing ones can be activated
- Send an SMS to notify me about newly activated categories (so I can pop the website open, and try to register)
- Repeat
(defun main ()
  (loop
    (with-simple-restart (continue "Ignore the error")
      (wait-for-active-categories)
      (send-sms))))
WAIT-FOR-ACTIVE-CATEGORIES is implemented as loop form that:

- Checks if categories have been activated, in which case the list of recently activated ones is returned
- Otherwise it saves the current timestamp inside *LAST-RUN-AT* and then goes to sleep for a little while
(defvar *last-run-at* nil "When the categories scraper last run")

(defun wait-for-active-categories ()
  (loop :when (categories-activated-p) :return it
        :do (progn
              (setf *last-run-at* (local-time:now))
              (random-sleep))))
Let's now dig into the category processing part.  First we define a special variable, *CATEGORIES-SNAPSHOTS*, to keep track of all the past snapshots of categories we scraped so far; next we fetch the categories from the remote website, see if any of them got activated since the last scraping, and last we push the latest snapshot into *CATEGORIES-SNAPSHOTS* and do some _harvesting_ to make sure we don't run out of memory because of all the snapshots accumulated so far:
(defvar *categories-snapshots* nil
  "List of categories snapshots -- handy to see what changed over time.")

(defun categories-activated-p ()
  (let ((categories (fetch-categories)))
    (unless *categories-snapshots*
      (push categories *categories-snapshots*))
    (prog1 (find-recently-activated categories)
      (push categories *categories-snapshots*)
      (harvest-categories-snapshots))))
Before fetching the list of categories we are going to define a few utilities: a condition signaling when HTML is returned instead of JSON (usually a sign that the current session expired):
(define-condition html-not-json-content-type () ())
The URLs for the registration page (this will be used later, when generating the SMS) and for the endpoint returning the list of categories:
(defparameter *prenotavaccino-home-url* "https://prenotavaccino.sanita.toscana.it")
(defparameter *prenotavaccino-categories-url* "https://prenotavaccino.sanita.toscana.it/api/index")
A cookie jar to create a _persistent_ connection with the website under scraping:
(defvar *cookie-jar* (make-instance 'drakma:cookie-jar)
  "Cookie jar to hold session cookies when interacting with Prenotavaccino")
With all this defined, fetching the list of categories becomes as easy as firing a request to the endpoint and confirming that the response actually contains JSON and not HTML: if JSON, parse it and return it; otherwise, simply try again.  The idea behind "trying again" is that when the response contains HTML, it will most likely contain a redirect to the home page with a newly created session cookie, and since the same cookie jar is getting used, the simple fact that we processed that response should be enough to make the next API call succeed:
(defun parse-json (response)
  (let* ((string (flexi-streams:octets-to-string response)))
    (st-json:read-json string)))

(defun fetch-categories ()
  (flet ((fetch-it ()
           (multiple-value-bind (response status headers)
               (drakma:http-request *prenotavaccino-categories-url* :cookie-jar *cookie-jar*)
             (declare (ignore status))
             (let ((content-type (cdr (assoc :content-type headers))))
               (if (equal content-type "text/html")
                   (error 'html-not-json-content-type)
                   response))))
         (parse-it (response)
           (st-json:getjso "categories" (parse-json response))))
    (handler-case (parse-it (fetch-it))
      (html-not-json-content-type (c)
        (declare (ignore c))
        (format t "~&Received HTML instead of JSON. Retrying assuming the previous session expired...")
        (parse-it (fetch-it))))))
Note: we retry only once, so if two consecutive responses contain HTML instead of JSON, that would result in the debugger popping open.
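Language aside, the retry-once shape is easy to isolate (the error name and the stub fetcher below are illustrative, not from the scraper):

```javascript
// Retry-once sketch: a first failure (HTML instead of JSON, i.e. an expired
// session) triggers exactly one more attempt; any second failure propagates.
function fetchWithRetry(fetchIt) {
  try {
    return fetchIt();
  } catch (err) {
    if (err.message !== "html-not-json") throw err;
    // the failed attempt refreshed the session cookie, so try again
    return fetchIt();
  }
}

let calls = 0;
const result = fetchWithRetry(function () {
  calls += 1;
  if (calls === 1) throw new Error("html-not-json"); // first response was HTML
  return { categories: [] };
});
// calls === 2 and result.categories is []
```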

Finding all the recently activated categories consists of the following steps:

- for each category just scraped
- compare it with the same in the last snapshot of categories
- a category is considered _recently_ activated if it is active now and either it was not present in the second to last snapshot, or it was present but inactive, or its title has changed (yes, sometimes the title of a category changes to advertise that people of a different age bracket can now sign up)

Anyways, all the above translates to the following:
(defun category-name (cat) (st-json:getjso "idCategory" cat))
(defun category-title (cat) (st-json:getjso "title" cat))
(defun category-active-p (cat)
  (and (eql (st-json:getjso "active" cat) :true)
       (eql (st-json:getjso "forceDisabled" cat) :false)))

(defun find-recently-activated (categories)
  (flet ((category-by-name (name categories)
           (find name categories :key #'category-name :test #'equal)))
    (let ((prev-categories (first *categories-snapshots*)))
      (loop :for cat :in categories
            :for cat-prev = (category-by-name (category-name cat) prev-categories)
            :when (and (category-active-p cat)
                       (or (not cat-prev)
                           (not (category-active-p cat-prev))
                           (not (equal (category-title cat-prev) (category-title cat)))))
            :collect cat))))
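For readers more fluent in JavaScript, the same comparison can be sketched like so (the category objects mirror the JSON fields the scraper reads; the sample data is made up):

```javascript
// A category counts as newly activated if it is active now and, in the
// previous snapshot, it was absent, inactive, or titled differently.
function isActive(cat) { return cat.active && !cat.forceDisabled; }

function findRecentlyActivated(categories, prevCategories) {
  const prevById = {};
  (prevCategories || []).forEach(function (cat) { prevById[cat.idCategory] = cat; });
  return categories.filter(function (cat) {
    const prev = prevById[cat.idCategory];
    return isActive(cat) &&
      (!prev || !isActive(prev) || prev.title !== cat.title);
  });
}

const prev = [
  { idCategory: "Over40",     title: "Over 40",     active: true,  forceDisabled: false },
  { idCategory: "LastMinute", title: "Last Minute", active: false, forceDisabled: false }
];
const curr = [
  { idCategory: "Over40",     title: "Over 40",     active: true,  forceDisabled: false },
  { idCategory: "LastMinute", title: "Last Minute", active: true,  forceDisabled: false }
];
const activated = findRecentlyActivated(curr, prev);
// only "LastMinute" qualifies: it just flipped from inactive to active
```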
Harvesting is pretty easy: define a maximum length threshold for the list of snapshots and when the list grows bigger than that, start popping items from the back of the list as new values are pushed to the front:
(defparameter *categories-snapshots-max-length* 50
  "Maximum numbers of categories snapshots to keep around")

(defun harvest-categories-snapshots ()
  (when (> (length *categories-snapshots*) *categories-snapshots-max-length*)
    (setf *categories-snapshots*
          (subseq *categories-snapshots* 0 (1+ *categories-snapshots-max-length*)))))
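The bounded-history idea in a few lines of JavaScript (`MAX`, `record`, and the sample loop are illustrative):

```javascript
// Keep at most MAX snapshots, newest first; trim the back as new ones arrive.
const MAX = 50;
const snapshots = [];
function record(snapshot) {
  snapshots.unshift(snapshot);
  if (snapshots.length > MAX) snapshots.length = MAX; // drop the oldest entries
}

for (let i = 0; i < 60; i++) record({ seq: i });
// snapshots.length === 50 and snapshots[0].seq === 59 (the newest)
```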
Note: I could have removed "consecutive duplicates" from the list, or prevented these from getting stored in the list to begin with, but I am going to leave this as an exercise for the reader ;-)

Two pieces of the puzzle are still missing: RANDOM-SLEEP, and SEND-SMS.  For RANDOM-SLEEP we decide the minimum number of seconds that the scraper should sleep for, and then add some _randomness_ to it (as if the remote site cared whether we acted like a _human_, but let's do it anyway):
(defparameter *sleep-seconds-min* 60
  "Minimum number of seconds the scraper will sleep for")
(defparameter *sleep-seconds-jitter* 5
  "Adds a pinch of randomness -- see RANDOM-SLEEP")

(defun random-sleep ()
  (sleep (+ *sleep-seconds-min* (random *sleep-seconds-jitter*))))
For the SMS part instead, we are going to use Twilio; first we define all the parameters required to send SMSs:

- Account SID
- Auth token
- From number
- To numbers (space separated values)
(defun getenv-or-readline (name)
  "Gets the value of the environment variable, or asks the user to provide
  a value for it."
  (or (uiop:getenv name)
      (progn
        (format *query-io* "~a=" name)
        (force-output *query-io*)
        (read-line *query-io*))))

(defvar *twilio-account-sid* (getenv-or-readline "TWILIO_ACCOUNT_SID"))
(defvar *twilio-auth-token* (getenv-or-readline "TWILIO_AUTH_TOKEN"))
(defvar *twilio-messages-api-url*
  (format nil "https://api.twilio.com/2010-04-01/Accounts/~a/Messages.json" *twilio-account-sid*))
(defvar *twilio-from* (getenv-or-readline "TWILIO_FROM_NUMBER"))
(defvar *twilio-to-list*
  (split-sequence:split-sequence #\Space (getenv-or-readline "TWILIO_TO_NUMBERS")))
Next we assemble the HTTP request, fire it, and do some error checking to signal an error in case it failed to deliver the message to any of the _to_ numbers specified inside *TWILIO-TO-LIST*:
(defun send-sms (&optional (body (sms-body)))
  (flet ((send-it (to)
           (drakma:http-request *twilio-messages-api-url*
                                :method :post
                                :basic-authorization `(,*twilio-account-sid* ,*twilio-auth-token*)
                                :parameters `(("Body" . ,body)
                                              ("From" . ,*twilio-from*)
                                              ("To" . ,to)))))
    (let (failed)
      (dolist (to *twilio-to-list*)
        (let* ((jso (parse-json (send-it to)))
               (error-code (st-json:getjso "error_code" jso)))
          (unless (eql error-code :null)
            (push jso failed))))
      (if failed
          (error "Failed to deliver **all** SMSs: ~a" failed)
          t))))
Last but not least, the SMS body; we want to include in the message which services got activated since the last time the scraper ran, and to do so we:

- Take the latest snapshot from *CATEGORIES-SNAPSHOTS*
- Temporarily set *CATEGORIES-SNAPSHOTS* to its CDR, like the latest snapshot wasn't recorded yet
- Call FIND-RECENTLY-ACTIVATED effectively pretending like we were inside CATEGORIES-ACTIVATED-P and were trying to understand if anything got recently activated (dynamic variables are fun, aren't they?!)
(defun sms-body ()
  (let* ((categories (first *categories-snapshots*))
         (*categories-snapshots* (cdr *categories-snapshots*)))
    (format nil "Ora attivi: ~{~A~^, ~} -- ~a"
            (mapcar #'category-title
                    (find-recently-activated categories))
            *prenotavaccino-home-url*)))
Note: "Ora attivi:" is the Italian for "Now active:"

And that's it: give `(main)` a go in the REPL, wait for a while, and eventually you should receive an SMS informing you which category got activated!
> (find "LastMinute" (first *categories-snapshots*)
        :key #'category-name
        :test #'string=)
#S(ST-JSON:JSO
   :ALIST (("idCategory" . "LastMinute") ("active" . :TRUE)
           ("forceDisabled" . :FALSE) ("start" . "2021-06-02")
           ("end" . "2021-07-31") ("title" . "Last<br/>Minute")
            . "<b>ATTENZIONE.</b> Prenotazione degli slot rimasti disponibili nelle prossime 24H.")
           ("message" . "Il servizio aprirà alle ore 19:00 di ogni giorno")
           ("updateMaxMinYear" . :TRUE)))

> (sms-body)
"Ora attivi: Last<br/>Minute -- https://prenotavaccino.sanita.toscana.it"
Happy click day, Italians!

2021-05-31 (permalink)

TIL: Untracked files are stored in the third parent of a stash commit.
Untracked files are stored in the third parent of a stash commit. (This isn't actually documented, but is pretty obvious from The commit which introduced the -u feature, 787513..., and the way the rest of the documentation for git-stash phrases things... or just by doing git log --graph stash@{0})
You can view just the "untracked" portion of the stash via:
git show stash@{0}^3
or, just the "untracked" tree itself, via:
git show stash@{0}^3:
or, a particular "untracked" file in the tree, via:
git show stash@{0}^3:<path/to/file>
Source: [stackoverflow](https://stackoverflow.com/questions/12681529/in-git-is-there-a-way-to-show-untracked-stashed-files-without-applying-the-stas#:~:text=Stash%20entries%20can%20be%20made,as%20part%20of%20the%20diff)

2021-05-27 (permalink)

A few days ago Replit announced support for [every programming language](https://twitter.com/Replit/status/1396915485325860868)!

I had never heard of [NixOS](https://nixos.org/) (that's what enables them to support "every programming language"), but I decided to give it a go anyway, and after a little bit of struggle I was able to put together the following Repls:

- [Common Lisp](https://replit.com/@iamFIREcracker/Common-Lisp) -- of course the first one had to be one about CL
- [Common Lisp w/ SSL](https://replit.com/@iamFIREcracker/Common-Lisp-with-SSL) -- it turns out you have to mess with `LD_LIBRARY_PATH` to get SSL to work with `:cl+ssl`
- [Game of Life in Common Lisp](https://replit.com/@iamFIREcracker/Common-Lisp-greater-Game-of-Life) -- why not...
- [10 PRINT.BAS](https://replit.com/@iamFIREcracker/10PRINTBAS) -- ever heard of the famous one-line Commodore 64 BASIC program to generate mazes?  They even wrote a whole [book](https://10print.org/) about it!

This is pretty cool, well done Replit!

This is the famous one-line Commodore 64 BASIC program to generate random mazes (the Commodore 64 uses the [PETSCII](https://www.c64-wiki.com/wiki/PETSCII) character set, not ASCII):
10 PRINT CHR$(205.5+RND(1)); : GOTO 10
While this is a Common Lisp equivalent:
(loop (princ (aref "/\\" (random 2))))
If the above runs a little bit too fast in the REPL (it most likely will, these days), then you can try this other alternative:
(loop
  (princ (aref "/\\" (random 2)))
  (sleep (/ 1 30)))
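And, for good measure, a JavaScript take on the same one-liner. This version builds a single finite row so it terminates; swap the loop for `while (true)` plus a delay to get the endless scrolling original:

```javascript
// 10 PRINT, JavaScript edition: one row of 40 random maze characters.
let maze = "";
for (let i = 0; i < 40; i++) {
  maze += Math.random() < 0.5 ? "/" : "\\";
}
console.log(maze);
```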
Happy retro-hacking!

2021-05-21 (permalink)

Example of how to use [AbortController](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) to cancel an action when input changes:
let currentJob = Promise.resolve();
let currentController;

function showSearchResults(input) {
  if (currentController) currentController.abort();
  currentController = new AbortController();
  const { signal } = currentController;

  currentJob = currentJob
    .finally(async () => {
      try {
        const response = await fetch(getSearchUrl(input), { signal });
        await displayResults(response, { signal });
      } catch (err) {
        if (err.name === 'AbortError') return;
      } finally {
        // any cleanup that should run whether or not the job was aborted
      }
    });
}
Source: [twitter](https://twitter.com/jaffathecake/status/1395682545090633729)

2021-05-16 (permalink)

Configuring lightline.vim to display the quickfix's title

In case anyone was interested in displaying the quickfix title while on quickfix buffers (it works with location-list buffers too), all you have to do is load the following into the runtime:
let g:lightline = {
    \ 'component': {
    \   'filename': '%t%{exists("w:quickfix_title")? " ".w:quickfix_title : ""}'
    \   },
    \ }
Before:
NORMAL  [Quickfix List] | -
And after:
NORMAL  [Quickfix List] :ag --vimgrep --hidden --smart-case --nogroup --nocolor --column stream-name | -
Related link: https://www.reddit.com/r/vim/comments/ej2bvx/change_the_quickfix_title/

Lack's Redis session store does not appear to be thread-safe

Steps to reproduce:

- Load the necessary dependencies:
(ql:quickload '(:clack :lack :lack-session-store-redis :cl-redis))
- Setup a dummy "Hello, World" application:
(defparameter *app*
  (lambda (env)
    (declare (ignore env))
    '(200 (:content-type "text/plain") ("Hello, World"))))
- Setup the middleware chain, one that uses Redis as session storage:
(setf *app*
      (lack:builder
       (:session
        :store (lack.session.store.redis:make-redis-store
                :connection (make-instance 'redis:redis-connection
                                           :host #(0 0 0 0)
                                           :port 6379
                                           :auth "auth")))
       *app*))
- Start the app:
> (defvar *handler* (clack:clackup *app*))
Hunchentoot server is started.
Listening on
- Run `wrk` against it:
$ wrk -c 10 -t 4 -d 10
Running 10s test @
  4 threads and 10 connections
Expected behavior:

- The test runs successfully, and the number of requests per second is reported

Actual behavior:

- Lots of errors are signaled; from unexpected errors:
Thread: 7; Level: 1

Redis error: NIL

ERR Protocol error: invalid multibulk length
   [Condition of type REDIS:REDIS-ERROR-REPLY]

  R 0. ABORT - abort thread (#<THREAD "hunchentoot-worker-" RUNNING {100154F693}>)

  F 0.  ((:METHOD REDIS:EXPECT ((EQL :STATUS))) #<unused argument>) [fast-method]
  F 1.  (REDIS::SET "session:fdf15f46896842bcd84ef2e87d2321e97419f70e" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
  F 4.  ((LAMBDA (LACK.MIDDLEWARE.BACKTRACE::ENV) :IN "/Users/matteolandi/Workspace/lack/src/middleware/backtrace.lisp") (:REQUEST-METHOD :GET :SCRIPT-NAME "" :PATH-INFO "/" ...))
... to unexpected end of file:
Thread: 8; Level: 1

Redis error: end of file on #<FLEXI-STREAMS:FLEXI-IO-STREAM {100AF94323}>

  R 0. RECONNECT - Try to reconnect and repeat action.
  R 1. ABORT     - abort thread (#<THREAD "hunchentoot-worker-" RUNNING {1008949253}>)

  F 0.  (REDIS::SET "session:5b391bd4699e86f5b6e757451572c0a3ba825557" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
  F 3.  ((LAMBDA (LACK.MIDDLEWARE.BACKTRACE::ENV) :IN "/Users/matteolandi/Workspace/lack/src/middleware/backtrace.lisp") (:REQUEST-METHOD :GET :SCRIPT-NAME "" :PATH-INFO "/" ...))
A similar error was mentioned on `cl-redis` GitHub's space ([cl-redis/issues#19](https://github.com/vseloved/cl-redis/issues/19)), to which the author replied that one of the reasons why this might happen is trying to share the same connection between multiple threads.  So I took a look at the implementation of the Redis store, and it looks like it is indeed sharing the same Redis connection between different HTTP requests (i.e. multiple threads).

The following seems to be fixing the problem though I would like to hear @fukamachi's view on this, before submitting a pull-request:
diff --git a/src/middleware/session/store/redis.lisp b/src/middleware/session/store/redis.lisp
index 9e489d9..3ccfaec 100644
--- a/src/middleware/session/store/redis.lisp
+++ b/src/middleware/session/store/redis.lisp
@@ -54,9 +54,8 @@
 (defun redis-connection (store)
   (check-type store redis-store)
   (with-slots (host port auth connection) store
-    (unless (redis::connection-open-p connection)
-      (setf connection
-            (open-connection :host host :port port :auth auth)))
+    (setf connection
+          (open-connection :host host :port port :auth auth))
     connection))
 
 (defmacro with-connection (store &body body)

2021-05-06 (permalink)

Is it possible to imagine a future where “concert programmers” are as common a fixture in the worlds auditoriums as concert pianists? In this presentation Andrew will be live-coding the generative algorithms that will be producing the music that the audience will be listening too. As Andrew is typing he will also attempt to narrate the journey, discussing the various computational and musical choices made along the way. A must see for anyone interested in creative computing.
Andrew Sorensen Keynote: "The Concert Programmer" - OSCON 2014 ([link to video](https://www.youtube.com/watch?v=yY1FSsUV-8c))

2021-05-03 (permalink)

Calling MAKE-INSTANCE with a class name which is only known at run-time

In CL, how can you instantiate a class (i.e. call MAKE-INSTANCE) if you had a string representing its name (and the package it was defined in)?
> (intern "chat.channels.turbo::turbo-channel")
|chat.channels.turbo::turbo-channel|
NIL
> (intern (string-upcase "chat.channels.turbo::turbo-channel"))
|CHAT.CHANNELS.TURBO::TURBO-CHANNEL|
NIL
> (read-from-string "chat.channels.turbo::turbo-channel")
CHAT.CHANNELS.TURBO::TURBO-CHANNEL
34
At first I gave INTERN a go (I am not exactly sure why, but that's where my mind immediately went), but irrespective of the case of the string I fed into it, it would end up using quote characters (i.e. `|`s).  READ-FROM-STRING on the other hand seemed to get the job done, though I would like to avoid it as much as possible, especially if the string I am feeding it is generated _externally_ (e.g. sent over an HTTP request).

Well, as it turns out, if you have the strings in `package:symbol` format, then you can parse out the package and symbol names and use FIND-PACKAGE and FIND-SYMBOL to get the symbol you are looking for:
> (find-symbol (string-upcase "turbo-channel") (find-package (string-upcase "chat.channels.turbo")))

> (make-instance *)
PS. @zulu.inuoe on Discord was kind enough to post this utility function that he uses:
this is a thing I use but uhh no guarantees. it also doesn't handle escape characters like | and \
(defun parse-entry-name (name)
  "Parses out `name' into a package and symbol name.

  symbol          => nil, symbol
  package:symbol  => package, symbol
  package::symbol => package, symbol"
  (let* ((colon (position #\: name))
         (package-name (when colon (subseq name 0 colon)))
         (name-start (if colon
                         (let ((second-colon (position #\: name :start (1+ colon))))
                           (if second-colon
                               (prog1 (1+ second-colon)
                                 (when (position #\: name :start (1+ second-colon))
                                   (error "invalid entry point name - too many colons: ~A" name)))
                               (1+ colon)))
                         0))
         (symbol-name (subseq name name-start)))
    (values package-name symbol-name)))

> (multiple-value-bind (package-name symbol-name)
      (parse-entry-name "chat.channels.turbo::turbo-channel")
    (find-symbol (string-upcase symbol-name)
                 (if package-name (find-package (string-upcase package-name)) *package*)))

2021-05-02 (permalink)

Splitting routes into packages, with Caveman

If you use [caveman](https://github.com/fukamachi/caveman) to build your Web applications, sooner or later you will realize that it does not let you create an <APP> instance in one package and use DEFROUTE in another.  Why? Because the library internally keeps a HASH-TABLE mapping <APP> instances to the package that was active at the time these instances were created; since DEFROUTE looks up the <APP> instance in this map using *PACKAGE* as the key, if the current package is different from the one which was active when <APP> was created then no <APP> instance will be found, an error will be signalled, and you will be prompted to do something about it:
(defmethod initialize-instance :after ((app <app>) &key)
  (setf (gethash *package* *package-app-map*) app))

(defun find-package-app (package)
  (gethash package *package-app-map*))
Of course I was not the first one bumping into this (see [caveman/issues/112](https://github.com/fukamachi/caveman/issues/112)), but unfortunately the one workaround listed in the thread did not seem to work: it's not a matter of carrying the <APP> instance around (which might or might not be annoying), but rather of _changing_ *PACKAGE* before invoking DEFROUTE so that it can successfully look up the <APP> instance inside *PACKAGE-APP-MAP* at macro expansion time.

But clearly you cannot easily change *PACKAGE* before executing DEFROUTE, or otherwise all the symbols used from within the route definition will most likely be unbound at runtime (unless of course such symbols are also accessible from the package in which the <APP> instance was created).  So where do we go from here?  Well, if we cannot _override_ *PACKAGE* while calling DEFROUTE from within a different package, then maybe we can _add_ *PACKAGE* to *PACKAGE-APP-MAP* and make sure it maps to the <APP> instance we already created!
(defun register-routes-package (package)
  (setf (gethash package caveman2.app::*package-app-map*) *web*))
Add a call to REGISTER-ROUTES-PACKAGE right before your first call to DEFROUTE and you should be good to go:
(in-package :cl-user)
(defpackage chat.routes.sessions
  (:use :cl :caveman2 :chat.models :chat.views :chat.routes))
(in-package :chat.routes.sessions)
(register-routes-package *package*)

(defroute "/" ()
  (if (current-user)
    (redirect-to (default-chatroom))
    (render :new :user)))

2021-04-26 (permalink)

On adding Webpack assets to a server-side rendered view (in Common Lisp)

First you need to instruct `webpack` to save compilation metadata such as the paths of the assets being generated, into a file in a well-known location:
// webpack.config.js
const fs = require('fs');

class MetaInfoPlugin {
  constructor(options) {
    this.options = { filename: 'meta.json', ...options };
  }

  apply(compiler) {
    compiler.hooks.done.tapPromise(this.constructor.name, stats => {
      const files = {};
      for (let filename of Object.keys(stats.compilation.assets)) {
        const parts = filename.split(/\./);
        const extension = parts[parts.length - 1];
        if (!files[extension]) {
          files[extension] = [ filename ];
        } else {
          files[extension].push(filename);
        }
      }
      const metaInfo = { files };
      const json = JSON.stringify(metaInfo);
      return new Promise((resolve, reject) => {
        fs.writeFile(this.options.filename, json, 'utf8', error => {
          if (error) {
            reject(error);
            return;
          }
          resolve();
        });
      });
    });
  }
}

module.exports = {
  // ...
  plugins: [
    new MetaInfoPlugin({ filename: 'static/dist/meta.json' }),
  ],
};
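The grouping logic inside the plugin is easy to sanity-check in isolation; here is a standalone sketch (the `groupByExtension` helper is made up, extracted purely for illustration):

```javascript
// Group asset filenames by extension, the same way MetaInfoPlugin does.
// (groupByExtension is a hypothetical helper, not part of the plugin itself.)
function groupByExtension(filenames) {
  const files = {};
  for (const filename of filenames) {
    const parts = filename.split(/\./);
    const extension = parts[parts.length - 1];
    if (!files[extension]) {
      files[extension] = [filename];
    } else {
      files[extension].push(filename);
    }
  }
  return { files };
}

const meta = groupByExtension(["app.js", "app.js.map", "main.css"]);
// meta.files → { js: ["app.js"], map: ["app.js.map"], css: ["main.css"] }
```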
Run `webpack` and confirm the file is getting generated successfully:
$ cat static/dist/meta.json | json_pp
{
   "files" : {
      "css" : [...],
      "js" : [...],
      "map" : [...]
   }
}
Next, somewhere inside your server application, parse the metadata file:
(defparameter *webpack-meta* (merge-pathnames #P"dist/meta.json" *static-directory*))
(defparameter *webpack-assets* (st-json:read-json (uiop:read-file-string *webpack-meta*)))
...define a function to generate proper HTML to include the assets:
(defun webpack-assets ()
  (with-html ; assuming Spinneret's WITH-HTML (or similar) is in scope
    (loop for css in (st-json:getjso* "files.css" *webpack-assets*)
          do (:link :rel "stylesheet" :href (format nil "/dist/~A" css) :data-turbo-track "reload"))
    (loop for js in (st-json:getjso* "files.js" *webpack-assets*)
          do (:script :src (format nil "/dist/~A" js) :data-turbo-track "reload"))))
...and use it:
(defmacro with-page ((&key (title "Chat!")) &body body)
  `(with-html-string ; again, assuming Spinneret (or similar)
     (:doctype)
     (:html
      (:head
       (:title ,title)
       (webpack-assets))
      (:body ,@body))))
That's it!
<!DOCTYPE html>
<html lang=en>
  <meta charset=UTF-8>
  <link rel=stylesheet
  <script src=/dist/app-8f5ff9499dfe4b9fa887.js

2021-04-16 (permalink)

* finished reading [Smalltalk Best Practice Patterns](https://www.amazon.it/Smalltalk-Best-Practice-Patterns-Kent-dp-013476904X/dp/013476904X)

2021-04-07 (permalink)

Skip "symbol" regions when matching parentheses

Lisp symbols can contain parentheses when surrounded by vertical bars (e.g. `|Symbol-Name|`), and in case of _unbalanced_ parentheses matchparen.vim would fail to highlight the "right" pair:
(foo |)-close|)
|     `-- highlighted
`-- cursor position
After "symbol" is added to the list of regions to test against, matchparen.vim will highlight the _expected_ closing pair:
(foo |)-close|)
|             `-- highlighted
`-- cursor position
I know it's very unlikely for one to define a symbol with such a name, but should that ever happen, this fix will make matchparen.vim behave properly.

Anyways, I opened [PR#8079](https://github.com/vim/vim/pull/8079) for this...let's see what Bram and the others think about it.

Looking into a 3 months old PR that I opened to vlime to get it to properly handle char literals like `#\(` or `#\)`

First things first: _disable_ vim-lispindent, or otherwise you are going to start doubting yourself, unable to reproduce the issue.  Open .vim/pack/iamFIREcracker/start/vim-lispindent/plugin/lispindent.vim, and replace the following lines:
if !empty(g:lispindent_filetypes)
  augroup lispindent_filetypes
    autocmd!
    execute 'autocmd FileType ' . g:lispindent_filetypes . ' call lispindent#init()'
  augroup END
endif
with:
if !empty(g:lispindent_filetypes)
  " augroup lispindent_filetypes
  "   autocmd!
  "   execute 'autocmd FileType ' . g:lispindent_filetypes . ' call lispindent#init()'
  " augroup END
endif
Restart vim, have vlime take over lisp files indentation, and you should be good to go (i.e. reproduce the issue the MR is trying to fix).

Now, let's have a look at one of the comments which were left on the PR:
There are some more searchpairpos uses; shouldn't they all use this (or a similar) skip argument? Perhaps we should create one or two helper functions?
These are the functions which make use of `searchpairpos()`:

- `vlime#ui#CurTopExprPos()`
- `vlime#ui#CurOperator()`
- `vlime#ui#SurroundingOperator()`
- `vlime#ui#ParseOuterOperators(3)`
- `vlime#ui#CurArgPos()`

Let's load the following into a buffer and see how each of the above behave:
(defun hello-world()
  (format t "Hello, world!"))

(defun open-parenthesis-p (ch)
  "Some documentation containing unbalanced ("
  (char= ch #\())

(defun close-parenthesis-p (ch)
  "Some documentation containing unbalanced )"
  (char= ch #\)))
Placing the cursor onto the `#\(` character literal, and running `echo vlime#ui#CurTopExprPos('begin')` correctly logs: `[4, 1]`; however, if we pass in `'end'` we get `[10, 17]` instead of the expected `[6, 17]`.  Similarly, if we place the cursor onto the `#\)` character literal and run `echo vlime#ui#CurTopExprPos('end')` we get: `[10, 17]`, but if we use `'begin'` we get `[4, 1]` instead of `[8, 1]`.  Switching to our `searchpairpos()` wrapper seems to be fixing the problem (see [6fe69f3](https://github.com/vlime/vlime/commit/6fe69f330999e392493a3d7164a3b93ca614bb58)).

Placing the cursor onto the `#\(` character literal and running `echo vlime#ui#CurOperator()` returns an empty string, and interestingly enough, adding `skip` to `searchpairpos()` does not seem to make much of a difference.  Why?  Because the function calls `vlime#ui#CurExpr()`, which returns `()` instead of the expected `(char= ch #\())`.  Why is `vlime#ui#CurExpr()` returning `()`?  Because we are re-using the same logic as matchparen.vim, which intentionally does _not_ try to skip over certain syntax regions if the position the cursor was in at the time the search operation was issued happened to be within one of those syntax regions already (i.e. it will highlight parentheses inside comments).

Question: is this the expected behavior, or should `vlime#ui#CurExpr()` return the actual surrounding expression instead?

In any case, since we have to deal with incomplete expressions anyway, I am proposing that we stop calling `vlime#ui#CurExpr()` here, and simply use the `searchpairpos()` wrapper instead (see [debdd92](https://github.com/vlime/vlime/commit/debdd924d8af2112f3d8a7afad0fe96a0e39f6f9)).

Off to `vlime#ui#SurroundingOperator()`; placing the cursor onto the closing parenthesis right after the `#\(` character literal and calling this function outputs an empty string instead of `char=`.  The same happens if we place the cursor at the closing parenthesis right after the `#\)` character literal of the second function definition.  Using our `searchpairpos()` wrapper here, seems to be fixing the problem (see [dac541a](https://github.com/vlime/vlime/commit/dac541a49414152353516e3c10d7f604479f6aea)).

The next one is `vlime#ui#ParseOuterOperators()`; placing the cursor onto the closing parenthesis right after the `#\(` character literal and calling `vlime#ui#ParseOuterOperators(3)` returns `[['', -1, [6, 15]], ['char=', 2, [6, 3]], ['', 1, [5, 45]]]` instead of the expected `[['char=', 3, [6, 3]], ['defun', 7, [4, 1]]]`.  Again, using our `searchpairpos()` wrapper fixes the problem (see [f8a8777](https://github.com/vlime/vlime/commit/f8a8777eafb3a0cd0a47eeec50dddea2d7278cd7)).

Lastly, for `vlime#ui#CurArgPos()`, let's use these function definitions instead:
(defun open-parenthesis-yoda-p (ch)
  "Some documentation containing unbalanced ("
  (char= #\( ch))

(defun close-parenthesis-yoda-p (ch)
  "Some documentation containing unbalanced )"
  (char= #\) ch))
Placing the cursor onto the `ch` symbol right after the `#\(` character literal and calling `vlime#ui#CurArgPos()` outputs `0` instead of the expected `2`.  Similarly, doing the same from the `ch` symbol right after the `#\)` character literal outputs `5` instead of `2`.  To fix this, not only did we have to use our `searchpairpos()` wrapper, but we also had to change the _parser_ to try to deal with character literals (see [5ea8f32](https://github.com/vlime/vlime/commit/5ea8f32fe74591c2cf42c09adf5629e440ce0027)).

2021-04-06 (permalink)

Smalltalk is one of the simplest, smallest programming languages in the world. Thus, it is supremely easy to learn. Its syntax can fit on a post card!
[Syntax on a Post Card](https://richardeng.medium.com/syntax-on-a-post-card-cb6d85fabf88).

PS. What follows is the transcription of that postcard:
exampleWithNumber: x

"A method that illustrates every part of Smalltalk method syntax
except primitives. It has unary, binary, and keyword messages,
declares arguments and temporaries, accesses a global variable
(but not an instance variable), uses literals (array, character,
symbol, string, integer, float), uses the pseudo variables
true, false, nil, self, and super, and has sequence, assignment,
return and cascade. It has both zero argument and one argument blocks."

    | y |
    true & false not & (nil isNil) ifFalse: [self halt].
    y := self size + super size.
    #($a #a "a" 1 1.0)
        do: [:each | Transcript show: (each class name);
                                 show: ' '].
    ^ x < y
PPS. Yes, lately I got curious about Smalltalk, mostly because of its highly interactive development experience; having said that, I am not really sold on the idea of being forced into using an IDE to fully _experience_ the language, but who knows, maybe I am going to give it a try...

a system is something you compile and load into the runtime, while a package is something that you access to get a hold of the functions and variables and classes named by symbols inside that package
QL doesn't install packages, it downloads ASDF systems
ASDF then loads these systems, which might create packages in the Lisp image
These were taken from Discord, a while back.  Surely packages and systems are among the most confusing things new Common Lisp programmers need to get accustomed to.

Casually chatting on Discord (Lisp/#web) on the reasons why one would try to use React even for a 100% no-JavaScript site:
Yeah, exactly. You can be as dynamic or static as you want, but what I was specifically referring to was to use webpack or whatever else to render the React pages into Plain Old HTML, you can put that on a dumb static file server and it'd Just Work, with no JS required on the client.
You could, on top of that, or instead of it, do as you hinted at before, and have a React engine running server-side which renders React->HTML on-the-fly for a client request. You might choose to do this if you need dynamic (at the time of request) data, but don't want to offload the rendering & IO (to fetch the data) calls to the client
Finally, on top of these options, you can of course serve 'Normal' React (well, compiled into Plain Old Javascript) which is rendered client-side
and if the question is "Why overcomplicate this", it's because you have a lot of freedom of where to put the work, while keeping the same rendering code regardless
It kind of makes sense if you think about it:

- You use React because it will hopefully make it easier to maintain the different pages of the site, even if it's going to be 100% JavaScript free
- Meanwhile, you/the team get acquainted with the piece of technology that you will most likely end up using when adding some sprinkles of JavaScript to the client -- because it will happen...sooner or later you will want to add that to your "app"

A few pointers:

- [Server side rendering with React and NodeJs](https://aparnajoshi.netlify.app/server-side-rendering-with-react-and-nodejs)
- [How to Generate Static Websites with React](https://www.cloudsavvyit.com/5418/how-to-generate-static-websites-with-react/)

? read book: Gödel, Escher, Bach

? read book: Liminal Thinking

2021-04-05 (permalink)

* finished reading [Structure and Interpretation of Computer Programs](https://mitpress.mit.edu/sites/default/files/sicp/index.html)

2021-03-25 (permalink)

About playing with remote Node.js REPLs; I [recently](https://matteolandi.net/plan.html#day-2021-03-24) said:
I am sure there might be better and more idiomatic ways of dealing with this, but at least it unblocks me for now.
Well, it turns out _there_ is a better way to make sure socket messages are communicated within a given asynchronous context; all you need is a combination of [`Stream.Transform`](https://nodejs.org/api/stream.html#stream_class_stream_transform) and [`Readable.prototype.pipe`](https://nodejs.org/api/stream.html#stream_readable_pipe_destination_options).  From [GitHub](https://github.com/nodejs/node/issues/37866#issuecomment-805578727):
The socket is created before outside of your async scope therefore it's clearly not bound to it.
The repl instance is created within the async scope but it seems it's not modeled as async resource. It operates in caller context which are synchronous calls via socket stream (which is in turn an EventEmitter). So this ends up in the Async context of the socket...
You could try to use a `Transform` stream which passes the data through but switches the async context:
The following snippets come next; first, the custom `Transform` class:
const asyncHooks = require("async_hooks");
const { Transform } = require("stream");

class MyTransformStream extends Transform {
  constructor(options) {
    super(options);
    this._res = new asyncHooks.AsyncResource("MyTransformStream");
  }

  _transform(chunk, encoding, cb) {
    this._res.runInAsyncScope(cb, null, null, chunk);
  }
}
and how to use it to "properly" set up the remote REPL:
const myStream = new MyTransformStream();

var repl = require("repl").start({
  input: socket.pipe(myStream), // created within your async scope so ALS propagation works
  output: socket,
});
Amazing, I can now carry on with my REPL experiments.

2021-03-24 (permalink)

Alright, I think I figured out a way to make [this](https://matteolandi.net/plan.html#day-2021-03-23) work by using a combination of `AsyncResource.bind` and `Function.prototype.bind`.  First I use `AsyncResource.bind` to force the `socket.on` callback to run inside the "right" asynchronous context, and then I bind the first argument of that _hooked_ callback (i.e. `thisArg`); lastly, I call the _original_ `socket.on` implementation, passing the _hooked_ callback into it.
require("net")
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      const onOriginal = socket.on;
      socket.on = function onHooked(data, callback) {
        let callbackHooked = asyncHooks.AsyncResource.bind(callback, "REPL");
        // The result of `AsyncResource.bind` is a function expecting
        // its first argument to be `thisArg`, so to properly finish up
        // the wiring we need to bind it before actually using the transformed
        // callback
        callbackHooked = callbackHooked.bind(this, this);
        return onOriginal.apply(socket, [data, callbackHooked]);
      };

      const repl = require("repl").start({ input: socket, output: socket });
      repl.context.ctx = ctx;
      debug("Created repl", ctx.getStore());
    });
  })
  .listen(8081);
debug("Listening on port 8081");
And with this, `AsyncLocalStorage` is no longer losing its context.

I am sure there might be better and more idiomatic ways of dealing with this, but at least it unblocks me for now.

2021-03-23 (permalink)

I tried to update to the latest version of Node, but no luck: it's still _broken_:
$ node --version
$ node async_bug.js
Listening on port 8081
For the first snippet, the REPL one, from the remote client's terminal:
$ nc localhost 8081
> ctx.getStore()
For the second snippet, the one without the REPL, the one with `socket.on(...)` only:
$ node async_bug.js
Listening on port 8081
Connected 1
[undefined] on-data from-1

Connected 2
[undefined] on-data from-2
I read through the whole [#33723](https://github.com/nodejs/node/issues/33723) thread, and realized that if I create an instance of `AsyncResource` when the connection is established, and run the `socket.on` handler within the context of that resource, then the asynchronous state does not get lost any longer:
require("net")
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      const res = new asyncHooks.AsyncResource("REPL");
      debug("Connected", ctx.getStore());
      socket.on("data", (data) => {
        res.runInAsyncScope(
          () => {
            debug(`[${ctx.getStore()}]`, "on-data", data.toString());
          }
        );
      });
    });
  })
  .listen(8081);
debug("Listening on port 8081");
on the server's stdout:
$ node async_bug.js
Listening on port 8081
Connected 1
[1] on-data from-1

Connected 2
[2] on-data from-2
I quickly hacked the first snippet again, the REPL one, and implemented a man-in-the-middle solution where the socket's input is pumped into the REPL's stdin from inside an `AsyncResource` context, and it's kinda working (i.e. the asynchronous context is _not_ lost, but of course all the evaluation results are dumped on the server's terminal):
require("net")
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      const res = new asyncHooks.AsyncResource("REPL");
      const repl = require("repl").start({ input: null, output: null });
      socket.on("data", (data) => {
        res.runInAsyncScope(
          () => {
            repl.input.emit("data", data);
          }
        );
      });
      repl.context.ctx = ctx;
      debug("Created repl", ctx.getStore());
    });
  })
  .listen(8081);
debug("Listening on port 8081");
If I run this I get:
$ node async_bug.js
Listening on port 8081
> Created repl 1
> Created repl 2
Clearly the _wiring_ left something to be desired, but at least the context was not lost.

I guess I can work-around that ([eventemitter-asyncresource](https://www.npmjs.com/package/eventemitter-asyncresource) maybe?!), but I wonder if it wasn't somehow expected for the created REPL to be bound to the async context of the request.

2021-03-22 (permalink)

`AsyncLocalStorage` losing state when used with `net.Server` and `repl.REPLServer`

I am poking around a tiny wrapper around `repl` to make it possible for someone to interact with a running Node.js instance, remotely; but when I tried to use `AsyncLocalStorage` to attach some state to the active session, such state gets lost as soon as the `REPLServer` takes over.

Save the following into a file:
var asyncHooks = require("async_hooks");
var fs = require("fs");
var util = require("util");
var net = require("net");

function debug(...args) {
  fs.writeFileSync(1, `${util.format(...args)}\n`, { flag: "a" });
}

var ctx = new asyncHooks.AsyncLocalStorage();
var reqId = 0;

net
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      var repl = require("repl").start({
        input: socket,
        output: socket,
      });
      repl.context.ctx = ctx;
      debug("Created repl", ctx.getStore());
    });
  })
  .listen(8081);
debug("Listening on port 8081");
and run it:
$ node remote-repl.js
Listening on port 8081
Now from a different terminal, connect to port `8081` to get your remote REPL:
$ nc localhost 8081
Now, accessing `ctx` (well, its content) from the REPL should return the ID of the REPL itself (i.e. an increasing number), but `undefined` is returned instead:
> ctx
AsyncLocalStorage {
  kResourceStore: Symbol(kResourceStore),
  enabled: true
}
> ctx.getStore()
undefined
I suspect it might have something to do with the underlying socket or `EventEmitter`, because when I try to access the content of `ctx` from within a `socket.on('data'...)` block, I still get `undefined`:
net
  .createServer(function (socket) {
    ctx.run((reqId += 1), () => {
      debug("Connected", ctx.getStore());
      socket.on("data", (data) => {
        debug(`[${ctx.getStore()}]`, "on-data", data.toString());
      });
    });
  })
  .listen(8081);
I filed a bug for this, [node/#37866](https://github.com/nodejs/node/issues/37866), so let's wait to hear back from the core developers.

2021-03-19 (permalink)

for each desired change, make the change easy (warning: this may be hard), then make the easy change
Kent Beck, on [Twitter](https://twitter.com/KentBeck/status/250733358307500032)

2021-03-14 (permalink)

? add page on Netmap to matteolandi.net

2021-03-12 (permalink)

Dynamic variables in Node.js (part 2)

The Node.js implementation of dynamic variables that I presented a few weeks ago was a bit...lacking, as it did not allow an _external_ actor to interact with (and change) the dynamic environment of a _running_ application.

Imagine if your application exposed a REPL you could [remotely connect to](https://nodejs.org/en/docs/guides/debugging-getting-started/#enabling-remote-debugging-scenarios); now, if you had all the application settings loaded into the dynamic environment, wouldn't it be amazing if you could change those while the application is running and immediately see the effects of such a change?  So you do all that: connect to the REPL, run `var { env } = require('dynamic_variables.js')`, only to discover that the actual environment is empty...

That's a bit of a bummer, but it also kind of makes sense if you think about it: all the dynamic bindings are done asynchronously, and when you connect to the REPL you are starting off a completely unrelated execution context, and because of that: a) the dynamic environment turns out to be empty, b) there is no way for you to change it and give the application access to the newly created bindings.

To _fix_ this, we will have to change the internal representation of a dynamic environment, and define it in terms of a stack of _frames_, each containing a set of bindings: a _global_ frame, containing all the _top-level_ bindings (i.e. the ones happening outside of any asynchronous context, when the application boots up); and then a list of _dynamic_ frames to keep track of all the asynchronous bindings.

If we want an _external_ actor to interact with the running application and change its dynamic environment, we will also need a way of _mutating_ existing bindings, living in one of the existing frames, without having to create a new one.  In particular, when run by the external actor this new bind operation is going to affect the global frame; but when run inside an asynchronous context it's going to affect the frames in which those bindings were most recently defined / re-defined.  Hopefully the following Common Lisp snippet will help you understand what I am talking about:
(defparameter *x* 5)

(defun rebind ()
  (setf *x* 15))

(defun test1 ()
  "We create a new _frame_ for *x*, so all the SETF operations affect that"
  (let ((*x* 10))
    (rebind)
    (assert (= *x* 15)))
  (assert (= *x* 5)))

(test1)
(assert (= *x* 5))

(defun test2 ()
  "LET here does not create a new _frame_ for *x*, so SETF ends up actually
  changing the _global_ value"
  (let ((whatever 10))
    (rebind)
    (assert (= *x* 15)))
  (assert (= *x* 15)))

(test2)
(assert (= *x* 15))

(defun test3 ()
  "We don't even create a new _frame_ for *x*...so this behaves like TEST2"
  (rebind)
  (assert (= *x* 15)))

(setf *x* 5)
(test3)
(assert (= *x* 15))
Alright then, let's get to it.  We said the new `DynamicEnvironment` was going to need to keep track of _global_ bindings, as well as _dynamic_ ones; for the former we are going to be using an instance of `Bindings` (i.e. a wrapper around `Map`), while for the dynamic ones we will be using an array of `Bindings` instances, all wrapped up inside an instance of `AsyncLocalStorage`:
var DynamicEnvironment = function (...flatBindings) {
  this.globalFrame = new Bindings(parseKVPairs(flatBindings));
  this.dynamicFrames = new AsyncLocalStorage();
};
Note 1: we are not passing `[]` to the `AsyncLocalStorage` constructor because reading it back from an _external_ process will result in an _empty_ context, and if we have to deal with `undefined` values anyway, then why bother initializing it in the first place?!
Note 2: the `Bindings` type used here is a little bit different from the one I showed [last time](https://matteolandi.net/plan.html#day-2021-02-28): it has a new `Bindings.prototype.has` method to check whether a binding for a given variable exists; its `Bindings.prototype.set` method is now a _destructive_ one (i.e. it mutates the current object); and it implements the [iteration protocol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols) (which is going to come in handy later on):
var assert = require("assert");

var Bindings = function (kvpairs) {
  this.data = new Map(kvpairs);
};
Bindings.prototype.get = function (name) {
  if (!this.has(name)) {
    throw new Error(`Dynamic variable, unbound: '${name.toString()}'`);
  }
  return this.data.get(name);
};
Bindings.prototype.has = function (name) {
  assert.ok(name, `Dynamic variable name, invalid: ${name.toString()}`);
  return this.data.has(name);
};
Bindings.prototype.set = function (key, value) {
  return this.data.set(key, value);
};
Bindings.prototype[Symbol.iterator] = function () {
  return this.data[Symbol.iterator]();
};
Looking up a binding inside this new environment now translates to first finding the frame, dynamic or global, that most recently defined a binding for that variable, and then pulling the binding's value out of it.
DynamicEnvironment.prototype.get = function (name) {
  return this.findBindingFrameOrGlobal(name).get(name);
};
DynamicEnvironment.prototype.findBindingFrameOrGlobal = function (name) {
  let frame;
  // Frames are stored newest-first, so the first match wins
  for (let each of this.dynamicFrames.getStore() || []) {
    if (each.has(name)) {
      frame = each;
      break;
    }
  }
  return frame || this.globalFrame;
};
On the topic of setting a new binding: we previously said we wanted to enable users to _create_ new bindings (i.e. create a new frame), as well as _update_ existing ones, as that would enable an external actor to interact with a running application.  Instead of creating two separate methods, we will have `DynamicEnvironment.prototype.set` inspect the arguments it receives and figure out what to do based on their number: if it receives an even number of arguments, the user specified new bindings only, with no body function, in which case the method will go and update existing frames; if on the other hand the method is given a body function (i.e. an odd number of arguments), then a new frame with the specified bindings is pushed onto the stack of frames, and the body function is executed inside that new dynamic context:
DynamicEnvironment.prototype.set = function (...args) {
  const updatesExistingFrames = args.length % 2 === 0;
  if (updatesExistingFrames) {
    for (let [key, value] of parseKVPairs(args)) {
      this.findBindingFrameOrGlobal(key).set(key, value);
    }
  } else {
    const kvpairs = parseKVPairs(args.slice(0, args.length - 1));
    const body = args[args.length - 1];
    const bindings = new Bindings(kvpairs);
    return this.dynamicFrames.run(
      [bindings, ...(this.dynamicFrames.getStore() || [])],
      body
    );
  }
};
Lastly I figured it would be nice to make it possible to iterate over all the bindings of a dynamic environment, so the following implements that:
DynamicEnvironment.prototype[Symbol.iterator] = function () {
  const bindings = new Map(this.globalFrame);
  // Copy before reversing, so we don't mutate the stored array of frames
  for (let frame of (this.dynamicFrames.getStore() || []).slice().reverse()) {
    for (let [key, value] of frame) {
      bindings.set(key, value);
    }
  }
  return bindings[Symbol.iterator]();
};
Alright, let's test this out and see if it works.  We are going to create a timer, and inside of it we are going to be logging the content of the dynamic environment:
var env = new DynamicEnvironment('debug', false);

function timer() {
  console.log(env.get('debug'));
}

var tid = setInterval((...args) => timer(...args), 2000);
(You should now start seeing a bunch of `false` messages getting logged in the REPL, every 2 seconds)

Next we are going to listen for incoming connections on port `5001`, and every time we receive one, set up a _remote_ REPL to let the external actor play with the running environment:
var net = require("net");
var repl = require("repl");

function createREPL(socket) {
  console.log("Connection received; creating a REPL!");
  const remote = repl.start({
    prompt: "remote> ",
    input: socket,
    output: socket,
    useGlobal: true,
  });
}

var server = net.createServer((...args) => createREPL(...args));
server.listen(5001, "localhost");
console.log("Remote REPL started on port 5001");
Let's poke around and see what we can do with it:
$ nc localhost 5001
remote> env
DynamicEnvironment {
  globalFrame: Bindings { data: Map(1) { 'debug' => false } },
  dynamicFrames: AsyncLocalStorage {
    kResourceStore: Symbol(kResourceStore),
    enabled: false
  }
}
It looks like we can get hold of the dynamic environment (well, its global frame at least); let's try and change it:
remote> env.set('debug', true)
If all worked as expected, you should now be seeing a series of `true` messages getting logged in the REPL; _that_ means we were successfully able to poke at one application's dynamic environment, from the outside.

"You could have easily done that with global variables" one could argue, and they would be right, we could have.  Let's mix in some more dynamic bindings, to show what the system is capable of.

First we reset the environment and add an additional binding, then we change our timer to create some dynamic bindings and finally print the whole dynamic environment:
env.set('logfile', '/tmp/server.log', 'debug', false);

function timer() {
  env.set("logfile", "/tmp/different.log", "date", new Date(), () =>
    setTimeout(() => {
      console.log(env[Symbol.iterator]());
    }, 500)
  );
}
The messages logged in the REPL should now look like the following:
[Map Entries] {
  [ 'debug', false ],
  [ 'logfile', '/tmp/different.log' ],
  [ 'date', 2021-03-12T19:37:59.963Z ]
}
Let's go back to the remote REPL again, and change the value of the `debug` binding again:
remote> env.set('debug', true)
Wait a couple of seconds, and the messages logged on the REPL should now look like this:
[Map Entries] {
  [ 'debug', true ],
  [ 'logfile', '/tmp/different.log' ],
  [ 'date', 2021-03-12T19:40:54.317Z ]
}
As we can see, the dynamic environment seen from within the `setTimeout` handler properly reflects the changes done from the _remote_ REPL, as well as the ones introduced by the dynamic bind operation.

And that's it!  I have pushed all the above to [GitHub](https://github.com/iamFIREcracker/dynamic_variables.js), and in there I also added an [example](https://github.com/iamFIREcracker/dynamic_variables.js/tree/master/examples/express) showing how dynamic variables can be used within an Express.js application.

Till next time.

2021-03-11 (permalink)

During World War II, the Navy realized it was losing a lot of aircraft and could better armor its planes to increase their survival. After analyzing where its planes had suffered the most damage, it determined that it needed to reinforce the planes’ wingtips, central body and elevators.
But a statistician named Abraham Wald argued otherwise. He thought the Navy should reinforce the armor of the planes’ nose, engines and mid-body. But why would he suggest that when the planes were taking more damage to the wingtips, central body and elevators? Because in reality, they weren’t. The planes getting shot in the nose area, engines and mid-body were being destroyed from the damage and weren’t making it back to be analyzed.
The Navy thought it had discovered where its planes were suffering the most damage. Instead, it had discovered where its planes could take damage without being destroyed. It wasn’t looking at the whole sample set.
From [Finding the Missing Bullet Holes](https://onebiteblog.com/finding-the-missing-bullet-holes/)

2021-03-05 (permalink)

Am I the only one having a hard time understanding the implementation of Data-Directed Programming, presented in SICP?

A little bit of background first; we are shown how to _tag_ data ([2.4.2 Tagged data](https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-17.html#%_sec_2.4.2)):
(define (attach-tag type-tag contents)
  (cons type-tag contents))
(define (type-tag datum)
  (if (pair? datum)
      (car datum)
      (error "Bad tagged datum -- TYPE-TAG" datum)))
(define (contents datum)
  (if (pair? datum)
      (cdr datum)
      (error "Bad tagged datum -- CONTENTS" datum)))
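(As a quick sanity check, here is the same tagged-data idea transliterated to JavaScript; the representation choice below, a two-element array, is mine:)

```javascript
// Tagged data, transliterated from the Scheme definitions above:
// a datum is represented as a [typeTag, contents] pair.
function attachTag(typeTag, contents) {
  return [typeTag, contents];
}

function typeTag(datum) {
  if (Array.isArray(datum)) return datum[0];
  throw new Error(`Bad tagged datum -- TYPE-TAG: ${datum}`);
}

function contents(datum) {
  if (Array.isArray(datum)) return datum[1];
  throw new Error(`Bad tagged datum -- CONTENTS: ${datum}`);
}

const z = attachTag("rectangular", [3, 4]);
console.log(typeTag(z));  // rectangular
console.log(contents(z)); // [ 3, 4 ]
```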
...how to implement type predicates:
(define (rectangular? z)
  (eq? (type-tag z) 'rectangular))
(define (polar? z)
  (eq? (type-tag z) 'polar))
...how to implement _packages_:
(define (real-part-rectangular z) (car z))
(define (imag-part-rectangular z) (cdr z))
(define (magnitude-rectangular z)
  (sqrt (+ (square (real-part-rectangular z))
           (square (imag-part-rectangular z)))))
(define (angle-rectangular z)
  (atan (imag-part-rectangular z)
        (real-part-rectangular z)))
(define (make-from-real-imag-rectangular x y)
  (attach-tag 'rectangular (cons x y)))
(define (make-from-mag-ang-rectangular r a)
  (attach-tag 'rectangular
              (cons (* r (cos a)) (* r (sin a)))))

;; similar definitions for the "polar" representation
...and how to implement _generic_ selectors:
(define (real-part z)
  (cond ((rectangular? z)
         (real-part-rectangular (contents z)))
        ((polar? z)
         (real-part-polar (contents z)))
        (else (error "Unknown type -- REAL-PART" z))))
(define (imag-part z)
  (cond ((rectangular? z)
         (imag-part-rectangular (contents z)))
        ((polar? z)
         (imag-part-polar (contents z)))
        (else (error "Unknown type -- IMAG-PART" z))))
(define (magnitude z)
  (cond ((rectangular? z)
         (magnitude-rectangular (contents z)))
        ((polar? z)
         (magnitude-polar (contents z)))
        (else (error "Unknown type -- MAGNITUDE" z))))
(define (angle z)
  (cond ((rectangular? z)
         (angle-rectangular (contents z)))
        ((polar? z)
         (angle-polar (contents z)))
        (else (error "Unknown type -- ANGLE" z))))
Clearly it's not OK for REAL-PART, or IMAG-PART, or any other _generic_ selector, to require changes every time a new implementation is added to the system, and this is where "Data-directed programming" is going to come to the rescue.

The idea ([2.4.3 Data-Directed Programming and Additivity](https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-17.html#%_sec_2.4.3)) is to construct a table mapping operations (e.g. REAL-PART, IMAG-PART) and types (e.g. POLAR, RECTANGULAR) to the actual function in charge of fulfilling the user need, and of course to provide a way to pluck the specific implementation out of the table.  In particular:

- `(put <op> <type> <item>)` will install `<item>` in the table, indexed by `<op>` and `<type>`
- `(get <op> <type>)` will look up the `<op>,<type>` entry in the table

Note: we are not given the actual implementations of PUT and GET.
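Since the book leaves PUT and GET unspecified, here is a minimal sketch of what such an operation-and-type table could look like.  This is purely a mental model written in JavaScript, not SICP's code; the `Map`-with-compound-key trick is just one possible representation:

```javascript
// Hypothetical sketch of the op/type dispatch table behind PUT and GET.
// Each entry is keyed by the operation name plus the type (or list of types).
const table = new Map();

function put(op, type, item) {
  table.set(JSON.stringify([op, type]), item);
}

function get(op, type) {
  // Returns undefined when no procedure was installed for this op/type pair
  return table.get(JSON.stringify([op, type]));
}

// Registering and looking up a selector for the rectangular representation,
// with complex numbers sketched as plain two-element arrays:
put("real-part", ["rectangular"], (z) => z[0]);
const proc = get("real-part", ["rectangular"]);
```

`JSON.stringify` is only a convenient way to get a single compound key; a two-level table (type within op) would work just as well.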

With this in mind, we can then re-define the rectangular package as follows:
(define (install-rectangular-package)
  ;; internal procedures
  (define (real-part z) (car z))
  (define (imag-part z) (cdr z))
  (define (make-from-real-imag x y) (cons x y))
  (define (magnitude z)
    (sqrt (+ (square (real-part z))
             (square (imag-part z)))))
  (define (angle z)
    (atan (imag-part z) (real-part z)))
  (define (make-from-mag-ang r a)
    (cons (* r (cos a)) (* r (sin a))))
  ;; interface to the rest of the system
  (define (tag x) (attach-tag 'rectangular x))
  (put 'real-part '(rectangular) real-part)
  (put 'imag-part '(rectangular) imag-part)
  (put 'magnitude '(rectangular) magnitude)
  (put 'angle '(rectangular) angle)
  (put 'make-from-real-imag 'rectangular
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'rectangular
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
Here is where things get a bit confusing: why are we _registering_ REAL-PART under the `(rectangular)` type, and not simply under `'rectangular`, like we are doing for MAKE-FROM-REAL-IMAG?

The only explanation I could think of is that we are giving the `<type>` argument of the PUT call different meanings:

- For REAL-PART, `<type>` represents the list of the types of the arguments expected by the registered procedure (i.e. one argument, of type RECTANGULAR)
- For MAKE-FROM-REAL-IMAG instead, `<type>` represents the type of the _returned_ instance

And why would we do that?  Because otherwise it would not be possible to dispatch to the "right" implementation in case of generic functions with the same argument types (the implementations of MAKE-FROM-REAL-IMAG inside both the rectangular and polar packages expect 2 arguments of type NUMBER).
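To see the collision concretely, here is a hypothetical JavaScript sketch (a Map keyed by operation plus type, standing in for the book's table): if both constructors were registered under their _argument_ types, the second registration would silently clobber the first, whereas keying by the _returned_ type keeps them distinct:

```javascript
// Hypothetical sketch; values are just labels standing in for procedures.
const table = new Map();
const key = (op, type) => JSON.stringify([op, type]);

// Keying by argument types: both constructors take two numbers...
table.set(key("make-from-real-imag", ["number", "number"]), "rectangular ctor");
table.set(key("make-from-real-imag", ["number", "number"]), "polar ctor"); // clobbers!

// Keying by the *returned* type keeps the two entries distinct:
table.set(key("make-from-real-imag", "rectangular"), "rectangular ctor");
table.set(key("make-from-real-imag", "polar"), "polar ctor");
```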

Anyways, a similar package for the polar representation is presented, and then finally we are shown the implementation of APPLY-GENERIC, the procedure responsible for invoking the "Right" procedure based on the _types_ of the arguments of the dispatched action:
(define (apply-generic op . args)
  (let ((type-tags (map type-tag args)))
    (let ((proc (get op type-tags)))
      (if proc
          (apply proc (map contents args))
          (error
            "No method for these types -- APPLY-GENERIC"
            (list op type-tags))))))
Here more doubts come to my mind: how can we use this with MAKE-FROM-REAL-IMAG?

Clearly we cannot simply run `(apply-generic 'make-from-real-imag 4 2)`, as that would fail when trying to apply TYPE-TAG to `(4 2)`.  I thought, maybe we pass `(attach-tag 'rectangular (list 4 2))` to APPLY-GENERIC, but then `(map contents args)` would evaluate to `((4 2))` and that is incompatible with the registered procedure, which expects two numbers and not a list of 2 numbers, right?

So where do we go from here?  There _has_ to be a way, but I just cannot find it.

Posted this on [/r/sicp](https://www.reddit.com/r/sicp/comments/ly7wj3/datadirected_programming_and_how_to_invoke/), let's see if anyone can help me out.

2021-02-28 (permalink)

Dynamic variables in Node.js

For those of you unfamiliar with it, Common Lisp has this concept of "dynamic variables", which are nothing more than _global_ variables with _dynamic_ scope; but what does it mean for a variable to have dynamic scope?  It means that you can bind a new value to it, and all the _subsequent_ accesses to that variable (within the scope of the binding operation) will return _that_ new value instead of the _previous_ one.  Let's take a look at an example, as I hope it will make things a little easier to understand (the asterisks around a variable name indeed mark the variable as dynamic / special):
(defvar *x* 5)

(defun foo ()
  *x*)

(defun bar ()
  (let ((*x* 42))
    (foo)))

> *x*
5

> (foo)
5

> (bar)
42

> *x*
5
Step by step explanation:

- `*x*` was initially bound to `5`, so when we access it we get `5` back -- all good
- We call FOO, which all it does is returning the value of `*x*`, and without much surprise we get `5` back -- so all good
- We call BAR, which binds `*x*` to `42` before calling FOO and returning its return value, and interestingly enough we now get back `42` instead of `5` -- there it is the _dynamic_ scope I was talking about earlier
- Lastly, we access `*x*` and we get `5` back -- we are outside BAR's new binding scope, so the value of `*x*` has been restored to its previous value

Let's go back to the definition that I gave earlier: _global_ variables, with _dynamic_ scope.  I know that the first rule of global variables is: "thou shalt not use global variables", but it's just that _sometimes_ they appear to be the right tool for the job, especially in the context of Web applications; think about the following use cases:

- getting a hold of the currently logged in user
- automatically adding the content of the `X-Request-Id` header to each log trace
- querying the _right_ database, based on the logged in user's tenant

How would you implement these?  Either you shove all this information up into a context object, and pass it around, _everywhere_; or maybe you forget about all the bad things you read about global variables, and consciously and carefully agree to use them where it really matters, where it really makes a difference.

Anyways, enough talking about global variables, I do not want this to be an essay about their pros and cons (I am sure the Internet is full of material and opinions on the topic); instead, let's go back to our original goal: trying to implement dynamic variables in Node.js.

It all begins with the definition of a type representing a bag of bindings (we are going to wrap the standard `Map` type for this):
var Bindings = function (kvpairs) {
  this.data = new Map(kvpairs);
};

> new Bindings().data
Map(0) {}

> new Bindings([['a', 1]]).data
Map(1) { 'a' => 1 }

> new Bindings([['a', 1], ['b', 2]]).data
Map(2) { 'a' => 1, 'b' => 2 }
(Don't worry about that _ugly_ syntax for now, we will deal with it later)

Getting a binding (i.e. getting the value bound to a variable), should be as easy as calling `Map.prototype.get` (plus some additional logic to validate user input):
var assert = require('assert');

Bindings.prototype.get = function (name) {
  assert.ok(name, `Dynamic variable name, invalid: ${name}`);
  if (!this.data.has(name)) {
    throw new Error(`Dynamic variable, unbound: '${name}'`);
  }
  return this.data.get(name);
};
First we check that the name of the variable is indeed _valid_, then we confirm that a binding for that variable actually exists, and finally we return the value bound to that variable.  Let's play with it and confirm it's all working fine:
var bb = new Bindings([['a', 1], ['b', 2]])

> bb.get('a')
1

> bb.get('b')
2

> bb.get('c')
Uncaught Error: Dynamic variable, unbound: 'c'
    at getBinding (repl:4:11)

> bb.get()
Uncaught AssertionError [ERR_ASSERTION]: Dynamic variable name, invalid: undefined
    at getBinding (repl:2:10)
    at repl:1:1
    at Script.runInThisContext (vm.js:131:18)
    at REPLServer.defaultEval (repl.js:472:29)
    at bound (domain.js:430:14)
    at REPLServer.runBound [as eval] (domain.js:443:12)
    at REPLServer.onLine (repl.js:794:10)
    at REPLServer.emit (events.js:326:22)
    at REPLServer.EventEmitter.emit (domain.js:486:12)
    at REPLServer.Interface._onLine (readline.js:337:10) {
  generatedMessage: false,
  code: 'ERR_ASSERTION',
  actual: undefined,
  expected: true,
  operator: '=='
Lastly, to set new bindings, we will go ahead and create a new `Bindings` object and initialize it with the existing bindings and the newly defined ones..._merged_ together:
Bindings.prototype.set = function (kvpairs) {
  return new Bindings([...this.data, ...kvpairs]);
};
Again, let's play with this to confirm that it's all working as expected:
var bb = new Bindings([['a', 1], ['b', 2]])

> bb.set([['c', 3]]).data
Map(3) { 'a' => 1, 'b' => 2, 'c' => 3 }

> bb.set([['a', 3]]).data
Map(2) { 'a' => 3, 'b' => 2 }

> bb.data
Map(2) { 'a' => 1, 'b' => 2 }
All good, great!  Before we dive into the details of the implementation of dynamic variables, let's first implement a couple of functions which will come in handy down the line.  The first one is to simplify the _syntax_ for creating new bindings; we don't want users to specify new bindings via nested lists (i.e. `[['a', 1], ['b', 2]]`); we would like them to use a _flattened_ list instead (i.e. `['a', 1, 'b', 2]`):
function parseKVPairs(flatBindings) {
  assert.ok(
    flatBindings.length % 2 === 0,
    `Bindings arguments, expected even number of elements, but got: ${flatBindings.length}`
  );

  const kvpairs = [];
  for (var i = 0; i < flatBindings.length; i += 2) {
    kvpairs.push(flatBindings.slice(i, i + 2));
  }
  return kvpairs;
}
Nothing crazy about this: we first confirm that the number of bindings is indeed even, and then wrap every pair of adjacent elements into a nested list.  Let's give it a go:
> parseKVPairs([])
[]

> parseKVPairs(['a', 1])
[ [ 'a', 1 ] ]

> parseKVPairs(['a', 1, 'b', 2])
[ [ 'a', 1 ], [ 'b', 2 ] ]
Perfect!  The second utility function might look a bit cryptic at first, simply because I am yet to show you the problem it tries to solve, but hopefully it will all make more sense soon.  We want to feed it a list of elements representing a flattened list of bindings followed by a callback function, and we expect it to return a list whose first element is a list of key-value pairs created from the flattened bindings, and whose second element is the given callback function:
function parseDynamicEnvironmentSetArguments(args) {
  assert.ok(args, `Function arguments, invalid: ${args}`);
  assert.ok(
    args.length % 2 === 1,
    `Function arguments, expected odd number of elements, but got: ${args.length}`
  );

  const kvpairs = parseKVPairs(args.slice(0, args.length - 1));
  const body = args[args.length - 1];

  return [kvpairs, body];
}
Let's test it out:
> parseDynamicEnvironmentSetArguments([() => {}])
[ [], [Function (anonymous)] ]

> parseDynamicEnvironmentSetArguments(['a', 1, () => {}])
[ [ [ 'a', 1 ] ], [Function (anonymous)] ]

> parseDynamicEnvironmentSetArguments(['a', 1, 'b', 2, () => {}])
[ [ [ 'a', 1 ], [ 'b', 2 ] ], [Function (anonymous)] ]
Alright, it's all working as expected, and with all of this defined and taken care of, it's time we took a look at a possible implementation for a _dynamic environment_, i.e. an environment holding a bunch of dynamic variables.

The biggest challenge in implementing dynamic variables in Node.js is figuring out a way to persist _state_ across changes of asynchronous context: you set `*x*` to `5`, invoke `setTimeout`, and when the callback is invoked you expect `*x*` to still be bound to `5`.  Similarly, if two asynchronous operations happen to re-bind the same dynamic variable, you don't want any of them to step on each other's toes.
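To make the problem concrete, here is a hypothetical sketch of what goes wrong when a plain global is rebound with a naive save/restore around overlapping asynchronous operations (names, values and timings are made up for illustration):

```javascript
let x = 5;

// Naive "dynamic binding": save the global, overwrite it, restore it when
// the body completes.  This is only safe for purely synchronous bodies.
async function withX(value, body) {
  const saved = x;
  x = value;
  try {
    return await body();
  } finally {
    x = saved;
  }
}

async function demo() {
  return Promise.all([
    withX(42, async () => {
      await new Promise((r) => setTimeout(r, 5));
      return x; // we would expect 42 here...
    }),
    withX(72, async () => {
      await new Promise((r) => setTimeout(r, 20));
      return x; // ...and 72 here
    }),
  ]);
}

// demo() resolves to [72, 5] instead of [42, 72], and afterwards the global
// is left at 42 instead of 5: the two rebinds overlap in time, so each one
// saves and restores the other's value instead of its own.
```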

Luckily for us, the Node.js core team has been working on this _problem_ for quite some time now, and you can see the result of their effort in the [`async_hooks`](https://nodejs.org/api/async_hooks.html) module.  I am not going to bore you with its implementation details (mostly because I am not familiar with it myself), but for what we are trying to achieve here, all we need to know is that:

- Each piece of running code (_user_ code), can have an ID attached, identifying its asynchronous execution context
- Each piece of running code (_user_ code), can have another ID attached, identifying the asynchronous context that _triggered_, directly or indirectly, the current one (i.e. if you create three nested promises, each callback, when executed, will probably have a different `asyncId` value but same `triggerAsyncId` one)
- There is a low-level API, [`createHooks`](https://nodejs.org/api/async_hooks.html#async_hooks_async_hooks_createhook_callbacks), that can be used to get notified when an asynchronous execution context is created or destroyed; with it, one could think of attaching some _payload_ to the current execution context, and then expose another API for user code to access it
- There is a high-level API, [`AsyncLocalStorage`](https://nodejs.org/api/async_hooks.html#async_hooks_class_asynclocalstorage), that shields the user from all the above complexity, and offers a simple way of running user code with a given piece of _payload_ attached to the current execution context

It goes without saying that `AsyncLocalStorage` is what we will use to implement our dynamic environment:

- Getting a binding translates to getting a hold of the current execution context's payload (i.e. the bindings), and returning whichever value is currently bound to the given variable name
- Setting a binding translates to creating a new set of bindings, attaching it to the current execution context, and running user code within it -- old bindings will be automatically restored after user code (synchronous or asynchronous) has finished running

Alright, let's get our hands dirty.  Let's start by creating a new type for the dynamic environment:
var { AsyncLocalStorage } = require("async_hooks");

var DynamicEnvironment = function (...flatBindings) {
  this.ctx = new AsyncLocalStorage();
  this.ctx.enterWith(new Bindings(parseKVPairs(flatBindings)));
};
Here, all we do is create the asynchronous context object (i.e. an instance of `AsyncLocalStorage`) and initialize it with some user-defined bindings (e.g. `'a', 1, 'b', 2`).  Let's give it a go and see what happens when we call the constructor (note: `ctx.getStore()` is how you access the _payload_ of the current asynchronous context):
> new DynamicEnvironment().ctx.getStore()
Bindings { data: Map(0) {} }

> new DynamicEnvironment('a', 1).ctx.getStore()
Bindings { data: Map(1) { 'a' => 1 } }

> new DynamicEnvironment('a', 1, 'b', 2).ctx.getStore()
Bindings { data: Map(2) { 'a' => 1, 'b' => 2 } }
Let's now define a method to get the value of a specific binding (note how `Bindings`, our previously defined type, is doing all the heavy lifting here):
DynamicEnvironment.prototype.get = function (name) {
  return this.ctx.getStore().get(name);
};

var env = new DynamicEnvironment('a', 1, 'b', 2)

> env.get('a')
1

> env.get('b')
2

> env.get('c')
Uncaught Error: Dynamic variable, unbound: 'c'
    at Bindings.get (repl:4:11)
    at DynamicEnvironment.get (repl:2:30)

> env.get()
Uncaught AssertionError [ERR_ASSERTION]: Dynamic variable name, invalid: undefined
    at Bindings.get (repl:2:3)
    at DynamicEnvironment.get (repl:2:30)
    at repl:1:5
    at Script.runInThisContext (vm.js:131:18)
    at REPLServer.defaultEval (repl.js:472:29)
    at bound (domain.js:430:14)
    at REPLServer.runBound [as eval] (domain.js:443:12)
    at REPLServer.onLine (repl.js:794:10)
    at REPLServer.emit (events.js:326:22)
    at REPLServer.EventEmitter.emit (domain.js:486:12) {
  generatedMessage: false,
  code: 'ERR_ASSERTION',
  actual: undefined,
  expected: true,
  operator: '=='
The last piece of the puzzle is a means of setting a new binding (or bindings) and running some user code within the scope of these new bindings; thanks to `Bindings`, `AsyncLocalStorage`, and the cryptic `parseDynamicEnvironmentSetArguments` I showed you before, this could not have been any easier to implement:
DynamicEnvironment.prototype.set = function (...args) {
  const [kvpairs, body] = parseDynamicEnvironmentSetArguments(args);
  const bindings = this.ctx.getStore().set(kvpairs);
  return this.ctx.run(bindings, body);
};
First we parse function arguments into key-value pairs and the callback inside of which the new bindings will be active; then we create a new `Bindings` object merging new bindings with any existing ones; lastly we tell `AsyncLocalStorage` to _do its magic_ (i.e. attach new bindings to the execution context, and run user code).  Let's try this out, and see if it works or not:
async function test(body) {
  try {
    await body();
  } catch (err) {
    console.error(err);
  }
}

> test(async () => {
  var env = new DynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => foo());
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
})
Promise { <pending> }
It seems like it _is_ indeed working; but what if we added some asynchronous operations within the scope of the `set` call?
> test(async () => {
  var env = new DynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return new Promise((resolve) => {
        setTimeout(() => resolve(foo()), 2000);
      });
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
})
Promise { <pending> }
Still working, great!  What about multiple asynchronous operations at the same time?
> test(async () => {
  var env = new DynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return Promise.all([
        foo(),
        env.set("x", 52,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 1000);
            })
        ),
        env.set("x", 72,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 2000);
            })
        ),
      ]);
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.deepEqual(await bar(), [42, 52, 72]);
  assert.equal(env.get("x"), 5);
})
Promise { <pending> }
It works, and by the look of it, it appears we were indeed able to implement "dynamic variables" in Node.js.

I added all the above into a new repository, [`dynamic_variables.js`](https://github.com/iamFIREcracker/dynamic_variables.js), so feel free to play with it in your REPLs and do let me know if anything breaks for you.

Also, it's worth remembering that `async_hooks` is still considered _experimental_, so its API might suddenly change with a new release of Node.js; that, and the fact that the current implementation might still contain some nasty bugs causing your dynamic bindings to get lost across switches of execution context.  This might not be a big deal if you were just messing around with this, or if you were just planning to use it to enhance your logging capabilities; but if instead you were planning anything more _serious_, like selecting the "right" database connection based on the logged-in user's tenant, then I would strongly recommend testing as many execution paths as possible and confirming that no binding gets lost in the process.  You know, it works...until it doesn't!

PS. For educational purposes, here I am going to show you a different implementation of a dynamic environment, one that does _not_ use `AsyncLocalStorage` to keep track of re-binds (it does that with a stack of _active_ bindings) and because of that, one that most surely is going to fail the expectations in case of multiple nested, asynchronous, re-binds:
var UnsafeDynamicEnvironment = function (...flatBindings) {
  this.snapshots = [new Bindings(parseKVPairs(flatBindings))];
};
UnsafeDynamicEnvironment.prototype.get = function (name) {
  return this._getActiveSnapshot().get(name);
};
UnsafeDynamicEnvironment.prototype.set = function (...args) {
  const [kvpairs, body] = parseDynamicEnvironmentSetArguments(args);
  const bindings = this._getActiveSnapshot().set(kvpairs);
  return this._runWithBindings(bindings, body);
};
UnsafeDynamicEnvironment.prototype._getActiveSnapshot = function () {
  return this.snapshots[this.snapshots.length - 1];
};
UnsafeDynamicEnvironment.prototype._runWithBindings = async function (bindings, body) {
  this.snapshots.push(bindings);
  try {
    return await body();
  } finally {
    this.snapshots.pop();
  }
};

> test(async () => {
  var env = new UnsafeDynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => foo());
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
})
Promise { <pending> }

> test(async () => {
  var env = new UnsafeDynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return new Promise((resolve) => {
        setTimeout(() => resolve(foo()), 2000);
      });
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.equal(await bar(), 42);
  assert.equal(env.get("x"), 5);
})
Promise { <pending> }

> test(async () => {
  var env = new UnsafeDynamicEnvironment("x", 5);

  var foo = function () {
    return env.get("x");
  };

  var bar = function () {
    return env.set("x", 42, () => {
      return Promise.all([
        foo(),
        env.set("x", 52,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 1000);
            })
        ),
        env.set("x", 72,
          () =>
            new Promise((resolve) => {
              setTimeout(() => resolve(foo()), 2000);
            })
        ),
      ]);
    });
  };

  assert.equal(env.get("x"), 5);
  assert.equal(foo(), 5);
  assert.deepEqual(await bar(), [42, 52, 72]);
  assert.equal(env.get("x"), 5);
})
Promise { <pending> }
AssertionError [ERR_ASSERTION]: Expected values to be loosely deep-equal:


should loosely deep-equal

    at repl:34:14
    at async test (repl:3:9) {
  generatedMessage: true,
  code: 'ERR_ASSERTION',
  actual: [Array],
  expected: [Array],
  operator: 'deepEqual'

2021-02-26 (permalink)

? book: [The Web Application Hacker's Handbook: Finding and Exploiting Security Flaws](https://www.amazon.it/Web-Application-Hackers-Handbook-Exploiting/dp/1118026470)

2021-02-19 (permalink)

+ book: [Smalltalk Best Practice Patterns](https://www.amazon.it/Smalltalk-Best-Practice-Patterns-Kent-dp-013476904X/dp/013476904X)

? book: [Refactoring](https://martinfowler.com/books/refactoring.html)

If your Mac refuses to go to sleep, use `pmset -g assertions` to see who's preventing this from happening...and kill it -- the other day it was one of my browser tabs, and only God knows what it was doing in the background...welcome to 2021!

Relevant read: [Sleepless Mac](https://stuff-things.net/2017/05/10/sleepless-mac/)

2021-02-14 (permalink)

* finished reading [The Common Lisp Condition System](https://www.apress.com/gp/book/9781484261330)

The "earmuff notation" (extracted from the Common Lisp Condition System)

The "earmuff notation" in CL is the conventional notation for dynamic variables.

Here is a very poor and concise _refresher_ (I should have used _reminder_ there) about how dynamic variables work in CL:
(defvar *x* 5)

(defun bar ()
  *x*)

(defun foo ()
  (let ((*x* 42))
    (bar)))

> (foo)
42

> (bar)
5

On reporting custom conditions (extracted from The Common Lisp Condition System)

Condition objects are printed in their unreadable form if the dynamic variable `*print-escape*` is bound to T.  However, when that same variable is bound to NIL, printing a condition object causes its _report function_ to be invoked.  That report function accepts a pair of arguments: the condition object itself and the stream that the report should be written to.  This means that the report function is capable of querying the condition object for all the information that is needed for generating a proper report; it is good style for condition reporting functions to be self-contained, which means not depending on any Lisp data other than the condition object itself.

On CERROR, and its signature (extracted from The Common Lisp Condition System)

An interesting feature of CERROR is that the optional arguments passed to the function are used _both_ for constructing the condition object _and_ as format arguments for the passed format string.  Let us analyze the following form:
(cerror "Continue after signaling a SIMPLE-ERROR ~
         with arguments: ~S ~S ~S ~S"
        'simple-error
        :format-control "A simple error signaled with ~A."
        :format-arguments '(42))
Executing this form will cause the system to signal a SIMPLE-ERROR; we expect the report of that condition object to be `A simple error signaled with 42.`.  In addition, we expect a new restart to be bound around the error site; the restart will have a longer report form, reading `Continue after signaling a SIMPLE-ERROR with arguments: ` and then listing all the optional arguments that CERROR was called with.

On the condition-restart association (extracted from The Common Lisp Condition System)

Let's take a look at the following code snippet:
Debug> (let ((*print-escape* nil))
         (format t "~&~{;; ~W~%~}" (compute-restarts)))

;; Return to level 2 of the debugger.
;; Retry using SETG.
;; Use specified function
;; Return to level 1 of the debugger.
;; Supply a new value for *X*.
;; Retry evaluating the form.
;; Return to the top level.

Debug> (let ((*print-escape* nil))
         (format t "~&~{;; ~W~%~}" (compute-restarts *debugger-condition*)))

;; Return to level 2 of the debugger.
;; Retry using SETG.
;; Use specified function
;; Return to level 1 of the debugger.
;; Retry evaluating the form.
;; Return to the top level.
Let us differentiate these two lists of restarts via SET-DIFFERENCE to find restarts which appear in the first list but not in the other:
Debug> (let ((*print-escape* nil))
         (format t "~&~{;; ~W~%~}"
                 (set-difference (compute-restarts)
                                 (compute-restarts *debugger-condition*))))
;; Supply a new value for *X*.
We can see that calling COMPUTE-RESTARTS _without_ passing it the UNDEFINED-FUNCTION condition has caused one more restart to appear -- the missing STORE-VALUE restart, with a report `Supply a new value for *X*.`, which was bound by the STORE-VALUE form which we originally evaluated.  From this observation, we may induce that there must be some sort of relationship between condition objects and restart objects that is checked by COMPUTE-RESTARTS and FIND-RESTART, among other functions.  This relationship in CL is named _condition-restart association_.

On programmatically calling MACROEXPAND (extracted from The Common Lisp Condition System)

We can see that the macro definition has been expanded to include a special _environment_ object inside its lambda list; this object is not directly passed to it by the programmer, but is usually provided by the Lisp implementation as part of calling its macroexpander.  While we will not describe environment objects in detail, we will note their traits which are most relevant to us for the matter at hand.  This environment argument is available only in macros; it holds, among other things, information about local macro definitions -- information which is important for passing into the MACROEXPAND calls of our new functions.  Since these calls must perform macroexpansion of their own, it is important that they receive this environment argument to be able to expand macros correctly.  That is why RESTART-CASE passed the argument to EXPAND-RESTART-CASE, which -- in turn -- passes it into our yet-undefined functions RESTART-CASE-SIGNALING-FORM-P and RESTART-CASE-EXPAND-SIGNALING-FORM.

Refresher on generalized booleans (extracted from the Common Lisp Condition System)

In the second test, the function returned a non-NIL value, which in CL is "as good as" returning true.  CL has generalized booleans, which means that anything that is not NIL counts as a true value, even though the symbol T is designated to be the "canonical" representation of truth -- the standard type BOOLEAN, after all, is defined as `(member nil t)`.

2021-02-12 (permalink)

? Look at OSC52 and see if I can make `cb` use it instead of piping data through the SSH tunnel. Some useful links: https://medium.com/free-code-camp/tmux-in-practice-integration-with-system-clipboard-bcd72c62ff7b, https://github.com/fcpg/vim-osc52/blob/master/plugin/osc52.vim

? [The UNIX-Haters Handbook](https://www.amazon.com/UNIX-Haters-Handbook-UNIX-Haters-line/dp/1568842031) @book

? [ASK YOUR DEVELOPER - how to harness the power of software developers and win in the 21st century](https://www.askyourdeveloper.com/) @book

From Hacker News: [Is Y Combinator worth it?](https://news.ycombinator.com/item?id=24616649)
Would you disqualify companies that had bootstrapped successfully, were profitable, and did not take any outside money?
Not a judging question - I understand the value of a good investor's due diligence and backing in improving the odds of sane leadership. But it's interesting to think how that can impact a successful, self-funded company's access to talent.
Someone replied:
As a founder, self funding makes a lot of sense, as you avoid dilution and aren’t beholden to a group of people expecting 10x growth every year.
But as a candidate I’d be cautious about taking a job at a self funded startup. They’re probably cash constrained so the base salary might not be as much. And if there is equity, I would want management beholden to people expecting 10x growth.
Is the second paragraph true though?
As an engineer I would love to work for a company that makes money and pays me from that money.
Exactly, freedom from investors, chained to your customers.  The conversation goes on:
Curious, why does it matter if your paycheck comes from VC money or revenue money? In the end the same amount comes to your bank account.
And eventually someone steps in and explains how things work:
Because VCs work by pushing for high returns on some of their investments, not low returns on all of their investments. They therefore push for business decisions to force high growth, when there can be a better chance of success with a slow burn in some situations. So you end up taking business risks to chase high growth, instead of simply focusing on a niche market to stay profitable, and having a flexible timeframe for growth.
As an engineer, this does hit the paycheck directly - if they go hire more devs to speed product features to market while at the same time increasing sales and marketing... it can change from a bootstrapped model with no end to the runway into a situation where you do have a runway, and it is counted in months, not years. Instead of steady pay with plenty of time to deliver new product, I'm being pushed to deliver quickly, and if we fail, we're unemployed.
And yes, we probably get lower salaries for that model because we are taking less risk. But steady, lower stress work is desired by some of us, and a small bootstrapped group delivers it better than a VC.
The above finds me in absolute agreement -- and I could not have explained it in simpler terms.

2021-02-10 (permalink)

While this was happening, the big sharks sensed an opportunity; larger trading professionals joined the fight but only for their own profit. BlackRock and eight other Wallstreet titans made a combined 16 billion dollars on GameStop in a matter of days. You see? It wasn't just the redditors that made money, and that's a key point in the story: Wallstreet always finds a way to win.
[Reddit vs Wallstreet - GameStop, The Movie](https://www.youtube.com/watch?v=YFQ-v1jCpF0)

2021-02-05 (permalink)

I wanted to fix some of the compilation warnings in my AdventOfCode-solutions-in-Common-Lisp [project](https://iamFIREcracker.github.com/adventofcode), and this one in particular caught my attention.

Given this function:
(defun count-hits (map delta-col delta-row)
  (loop with (rows cols) = (array-dimensions map)
        for row below rows by delta-row
        for col = 0 then (mod (+ col delta-col) cols)
        count (char= (aref map row col) #\#)))
Try and evaluate it in the REPL (SBCL 2.0.11 in my case) and you should get the following style warnings back:
;     (LOOP AOC/2020/03::WITH (AOC/2020/03::ROWS
;                              AOC/2020/03::COLS) = AOC/2020/03::DIMS
;           AOC/2020/03::FOR AOC/2020/03::ROW AOC/2020/03::BELOW AOC/2020/03::ROWS AOC/2020/03::BY AOC/2020/03::DELTA-ROW
;           AOC/2020/03::FOR ...)
; ==>
;   NIL
;   The binding of #:LOOP-LIMIT-1 is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
;   The binding of SB-C::DIVISOR is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
; compilation unit finished
;   caught 2 STYLE-WARNING conditions
WARNING: redefining AOC/2020/03::COUNT-HITS in DEFUN
I can avoid the implicit call to DESTRUCTURING-BIND (LOOP-DESTRUCTURING-BIND to be precise, as that is what the `with (rows cols) = ...` expression gets expanded into) by replacing ARRAY-DIMENSIONS with two calls to ARRAY-DIMENSION:
(defun count-hits (map delta-col delta-row)
  (loop with rows = (array-dimension map 0)
        with cols = (array-dimension map 1)
        for row below rows by delta-row
        for col = 0 then (mod (+ col delta-col) cols)
        count (char= (aref map row col) #\#)))
Now _that_ seems to have solved the problem (i.e. no style warning anymore).

What if I wrapped the original LOOP expression into a DESTRUCTURING-BIND form that binds `rows` and `cols`?
(defun count-hits (map delta-col delta-row)
  (destructuring-bind (rows cols) (array-dimensions map)
    (loop for row below rows by delta-row
          for col = 0 then (mod (+ col delta-col) cols)
          count (char= (aref map row col) #\#))))
This version, too, does **not** cause the compiler to emit any style warnings.  What exactly is going on?

Let's try and find the minimal snippet to reproduce it:
(defvar arr (make-array '(2 5) :initial-contents '((1 2 3 4 5)
                                                   (6 7 8 9 10))))
(defun test-1 (arr)
  (loop with (rows cols) = (array-dimensions arr)
        for r below rows do
        (loop for c below cols do
              (print (aref arr r c)))))
When I send the above to the REPL I get:
; in: DEFUN TEST-1
;     (LOOP AOC/2020/03::WITH (AOC/2020/03::ROWS
;                              AOC/2020/03::COLS) = (ARRAY-DIMENSIONS
;                                                    AOC/2020/03::ARR)
;           AOC/2020/03::FOR AOC/2020/03::R AOC/2020/03::BELOW AOC/2020/03::ROWS
;           DO (LOOP AOC/2020/03::FOR AOC/2020/03::C AOC/2020/03::BELOW AOC/2020/03::COLS
;                    DO (PRINT
;                        (AREF AOC/2020/03::ARR AOC/2020/03::R AOC/2020/03::C))))
; ==>
;   NIL
;   The binding of SB-C::Y is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
;   The binding of SB-C::Y is not a REAL:
;    NIL
;   See also:
;     The SBCL Manual, Node "Handling of Types"
; compilation unit finished
;   caught 2 STYLE-WARNING conditions
Perfect.  Let's try and expand that LOOP expression, _recursively_, and see if anything stands out:
    (LET* ((#:G692
             (... (ARRAY-DIMENSIONS ARR)
                  '(&OPTIONAL ROWS COLS
                    . #:LOOP-IGNORED-691)))
           (ROWS
             (IF #:G692
                 (LET ((#:G693 (CAR #:G692)))
                   (PROGN (SETQ #:G692 (CDR #:G692)) #:G693))))
           (COLS
             (IF #:G692
                 (LET ((#:G694 (CAR #:G692)))
                   (PROGN (SETQ #:G692 (CDR #:G692)) #:G694))))
           (#:LOOP-IGNORED-691 #:G692))
      (LET ((#:LOOP-LIMIT-690 ROWS) (R 0))
        (DECLARE (IGNORABLE R)
                 (TYPE (AND NUMBER REAL) R)
                 (IGNORABLE #:LOOP-LIMIT-690)
                 (TYPE (AND NUMBER REAL) #:LOOP-LIMIT-690))
        (TAGBODY
         SB-LOOP::NEXT-LOOP
          (IF (>= R #:LOOP-LIMIT-690)
              (GO SB-LOOP::END-LOOP))
          (BLOCK NIL
            (LET ((#:LOOP-LIMIT-695 COLS) (C 0))
              (DECLARE (IGNORABLE C)
                       (TYPE (AND NUMBER REAL) C)
                       (IGNORABLE #:LOOP-LIMIT-695)
                       (TYPE (AND NUMBER REAL) #:LOOP-LIMIT-695))
              (TAGBODY
               SB-LOOP::NEXT-LOOP
                (IF (>= C #:LOOP-LIMIT-695)
                    (GO SB-LOOP::END-LOOP))
                (PRINT (AREF ARR R C))
                (SETQ C (1+ C))
                (GO SB-LOOP::NEXT-LOOP)
               SB-LOOP::END-LOOP)))
          (SETQ R (1+ R))
          (GO SB-LOOP::NEXT-LOOP)
         SB-LOOP::END-LOOP)))
I am afraid the compiler is right:

- LOOP's destructuring bind does not signal an error if the list you are destructuring against does not have enough elements; it simply binds all the _extra_ variables to NIL
- This means that both `rows` and `cols` can be NIL
- `rows`, however, is used after the LOOP keyword `below`, and because of that it is expected to be a NUMBER
- But NIL is not a valid NUMBER, hence the warning
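The first bullet is easy to verify at the REPL (a quick sketch of my own, not from the compiler output above):

```lisp
;; LOOP destructuring pads missing values with NIL instead of erroring:
(loop with (a b c) = '(1 2)
      return (list a b c))
;; => (1 2 NIL)
```

Since `c` could well end up NIL here, the compiler cannot prove any of the destructured variables is a REAL.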

Technically the compiler is right, but what a pain...

Anyway, I reported this as a bug to sbcl-bugs@lists.sourceforge.net; let's see whether they consider it a bug or not.

2021-02-02 (permalink)

"Great flight up... Gotta work on that landing"
SpaceX commentator, at the end of the [Starship SN9 flight test](https://mobile.twitter.com/SpaceX/status/1356699321840721920)

2021-01-20 (permalink)

any legit use case for PROG2 though? It seems very ad-hoc-y
Here is a _legit_ use case for PROG2 (sketched with placeholder functions -- `prologue`, `do-stuff`, and `epilogue` stand for whatever setup, main work, and teardown you need):
(prog2 (prologue)
       (do-stuff)
  (epilogue))
It turns out there is a library, `:cl-advice`, that exploits PROG2 in an almost identical way (check this out on [GitHub](https://github.com/szos/cl-advice/blob/3d8653da763ab67b8abda1dba8c7783806a10e64/cl-advice.lisp#L137-L142)):
(prog2 (when ,before
         (apply ,before ,dispatch-args))
       (if ,around
         (apply ,around ,dispatch-args)
         (apply ,main ,dispatch-args))
       (when ,after (apply ,after ,dispatch-args)))))
When perceived in this way, PROG2 is a non-object-oriented `:before`/`:after` method combination.
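A REPL-sized demonstration of why PROG2 fits this pattern -- it evaluates all of its forms in order, but returns the value of the _second_ one (my own example, not from the library):

```lisp
(let ((log '()))
  (values
   (prog2 (push :before log) ; runs first
          (* 6 7)            ; runs second -- this is the return value
     (push :after log))      ; runs last, result discarded
   (reverse log)))
;; => 42, (:BEFORE :AFTER)
```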

2021-01-17 (permalink)

Advent of Code: [2015/25](https://adventofcode.com/2015/day/25)

The task for today, i.e. Christmas, is to boot up the weather machine -- no idea what that is -- and all we have to do is read a code from the instruction manual and enter it on the console.  Except we don't know where the manual is, so we need to generate the sequence of codes first.

We are given the rules of how this sequence has to be constructed:
   | 1   2   3   4   5   6
 1 |  1   3   6  10  15  21
 2 |  2   5   9  14  20
 3 |  4   8  13  19
 4 |  7  12  18
 5 | 11  17
 6 | 16

   |    1         2         3         4         5         6
 1 | 20151125  18749137  17289845  30943339  10071777  33511524
 2 | 31916031  21629792  16929656   7726640  15514188   4041754
 3 | 16080970   8057251   1601130   7981243  11661866  16474243
 4 | 24592653  32451966  21345942   9380097  10600672  31527494
 5 |    77061  17552253  28094349   6899651   9250759  31663883
 6 | 33071741   6796745  25397450  24659492   1534922  27995004
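For reference, the rule behind the second table (taken from the puzzle statement): each code is the previous one, in diagonal order, multiplied by `252533` and reduced modulo `33554393`, starting from `20151125`.  A quick sanity check:

```lisp
(defun next-code (prev)
  (mod (* prev 252533) 33554393))

(next-code 20151125) ; => 31916031, i.e. the code at row 2, column 1
```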
First, let's begin by parsing the input to know the position in this _diagonal_ sequence where we will find the code to boot up the weather machine.
(defun parse-target-position (lines)
  (cl-ppcre:register-groups-bind ((#'parse-integer row col))
      ("row (\\d+), column (\\d+)" (first lines))
    (list row col)))
Next, let's implement a generator of all the positions of this _diagonal_ sequence:

- Return `(1 1)` the first time
- Then move `1` up, and `1` right
- If the current row is `0`, we need to wrap around; otherwise, we already have the next position of the sequence
- Repeat
(defun next-grid-position (&optional prev)
  (if (not prev)
    (list 1 1)
    (let ((next (mapcar #'