This is my log ... When I accomplish something, I write a * line that day.
Whenever a bug / missing feature / idea is mentioned during the day and I
don't fix it, I make a note of it and mark it with ? (some things get noted
many times before they get fixed).  Occasionally I go back through the old
notes and mark with a + the things I have since fixed, and with a ~ the
things I have since lost interest in.

--- Matteo Landi

# 2011-01-29

* finished reading [Rework](

# 2011-03-14

* finished reading [Learning the vi and Vim Editors: Text Processing at Maximum Speed and Power](

# 2011-04-02

* finished reading [Getting Real: The Smarter, Faster, Easier Way to Build a Successful Web Application](

# 2011-06-15

* finished reading [Growing Object-Oriented Software, Guided by Tests](

# 2012-08-10

* finished reading [Effective Java (2nd Edition)](

# 2012-08-17

* finished reading [Functional Programming for Java Developers: Tools for Better Concurrency, Abstraction, and Agility](

# 2012-08-23

* finished reading [Vim and Vi Tips: Essential Vim and Vi Editor Skills](

# 2012-09-12

* finished reading [Design Patterns: Elements of Reusable Object-Oriented Software](

# 2012-10-24

* finished reading [Simplify](

# 2012-11-03

* finished reading [Effective Programming: More Than Writing Code](

# 2013-02-01

* finished reading [Python Cookbook](

# 2013-10-01

* finished reading [Java Concurrency in Practice](

# 2015-08-01

* finished reading [The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses](

# 2015-09-03

* finished reading [Zero to One: Notes on Start Ups, or How to Build the Future Kindle Edition](

# 2015-09-07

* finished reading [The Pragmatic Programmer: From Journeyman to Master](

# 2015-11-05

* finished reading [Clean Code: A Handbook of Agile Software Craftsmanship](

# 2016-08-21

is a domain that resolves to localhost, and it works with subdomains too (
would still resolve to localhost).  Why is this helpful?  Because you could
use it when developing applications that need subdomain configurations.

# 2016-09-12

* finished reading [The Startup Playbook: Secrets of the Fastest-Growing Startups from their Founding Entrepreneurs](

# 2016-09-13

Analyze angular performance:

    angular.element(document.querySelector('[ng-app]')).injector().invoke(['$rootScope', function($rootScope) {
        var a =;
        $rootScope.$apply();
        console.log( - a);
    }])

or:

    angular.element(document.querySelector('[ng-app]')).injector().invoke(['$rootScope', function($rootScope) {
        var i;
        var results = [];
        for (i = 0; i < 100; i++) {
            var a =;
            $rootScope.$apply();
            results.push( - a);
        }
        console.log(JSON.stringify(results));
    }])

# 2016-09-14

Move a pushed commit to a different branch:

    git checkout $destination-branch
    git cherry-pick $change-done-on-the-wrong-branch
    git push
    git checkout $origin-branch
    git revert $change-done-on-the-wrong-branch

Related reads:

-
-

# 2016-09-16

Run the last history command (like `!!`):

    fc -s

Run the last command without appending it to the history:

    HISTFILE=/dev/null fc -s

# 2016-12-26

* successfully built vim, from sources

    sudo apt-get install -y \
        liblua5.1-0-dev \
        libluajit-5.1-dev \
        lua5.1 \
        luajit \
        python-dev \
        ruby-dev
    sudo ln -s /usr/include/lua5.1 /usr/include/lua5.1/include
    cd ~/opt
    git clone
    cd vim
    ./configure \
        --with-features=huge \
        --enable-multibyte \
        --enable-largefile \
        --disable-netbeans \
        --enable-rubyinterp=yes \
        --enable-pythoninterp=yes \
        --enable-luainterp=yes \
        --with-lua-prefix=/usr/include/lua5.1 \
        --with-luajit \
        --enable-cscope \
        --enable-fail-if-missing \
        --prefix=/usr
    make

# 2016-12-28

* found a way to debug why a node app refuses to gracefully shut down

    // as early as possible
    const wtf = require('wtfnode');

    setInterval(function testTimer() {
        wtf.dump();
    }, 5000);

# 2016-12-31

* figured out a way to always update vbguest add-ons: `vagrant plugin install
  vagrant-vbguest` (

Also, you could instruct Vagrant to automatically install required
dependencies with the following:

    required_plugins = %w( vagrant-hostmanager vagrant-someotherplugin )
    required_plugins.each do |plugin|
      system "vagrant plugin install #{plugin}" unless Vagrant.has_plugin? plugin
    end

Source:

# 2017-01-03

Found this nice hack to quickly get the average color of an image:

    img2 = img.resize((1, 1))
    color = img2.getpixel((0, 0))

# 2017-01-06

* downloaded all my instagram photos

- Load your profile page on the browser, and scroll to the bottom until you
  reach the end of your stream.
- Open the developer console, right click on the element, copy it, and dump
  it somewhere
- Run the following command:

    lynx -dump -listonly -image_links ../foo.html | grep cdninstagram | grep 640x640 | awk '{print $2}' | xargs -n 1 -P8 wget

# 2017-01-08

* finished reading [The Everything Store: Jeff Bezos and the Age of Amazon](

# 2017-01-18

* finished reading [Drive: The Surprising Truth About What Motivates Us](

# 2017-02-19

* finished reading [Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines](

# 2017-04-26

* finished reading [To Sell is Human: The Surprising Truth About Persuading, Convincing, and Influencing Others](

# 2017-07-25

Me of the future, do enjoy this!

    " Dear /bin/bash: fuck you and your bullshit, arcane command-line behaviour.
    "
    " Basically, I want to set this to a non-login, non-interactive bash shell.
    " Using a login/interactive bash as Vim's 'shell' breaks subtle things, like
    " ack.vim's command-line argument parsing. However, I *do* want bash to load
    " ~/.bash_profile so my aliases get loaded and such.
    "
    " You might think you could do this with the --init-file command line option,
    " which is used to specify an init file. Or with --rcfile. But no, those only
    " get loaded for interactive/login shells.
    "
    " So how do we tell bash to source a goddamned file when it loads? With an
    " *environment variable*. Jesus, you have multiple command line options for
    " specifying files to load and none of them work?
    "
    " Computers are bullshit.
    let $BASH_ENV = "$HOME/.bashrc"

# 2017-08-08

* web resource caching

Manually adding headers via express or anything:

    'Cache-Control': 'public, max-age=31536000' // allow files to be cached for a year

Nginx configuration:

    location /static {
        alias /site;

        location ~* \.(html)$ {
            add_header Last-Modified $date_gmt;
            add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
            if_modified_since off;
            expires off;
            etag off;
        }
    }

# 2017-08-27

* edit markdown files with live-reload

Install `markserv`:

    npm install markserv -g

Move into the folder containing any markdown file you would like to preview,
and run:

    markserv

# 2017-09-26

* finished reading [Practical Vim: Edit Text at the Speed of Thought](

# 2017-10-13

* finished reading [Programming JavaScript Applications: Robust Web Architecture with Node, HTML5, and Modern JS Libraries](

# 2017-10-21

* Installed Common Lisp on Mac OS

    brew install roswell
    ros install sbcl/1.3.13

# 2017-11-26

* finished reading [Don't Make Me Think, Revisited: A Common Sense Approach to Web Usability (Voices That Matter)](

# 2017-12-12

* fixed slow bash login shells on Mac OS

Unlink old (and slow) bash-completion scripts:

    brew unlink bash-completion

Install bash-completion for bash4:

    brew install bash-completion@2

Change `.bash_profile` to source the new bash_completion files:

    if [ -f $(brew --prefix)/share/bash-completion/bash_completion ]; then
        . $(brew --prefix)/share/bash-completion/bash_completion
    fi

Finally, make sure your default `bash` shell is the +4 one:

    sudo bash -c 'echo /usr/local/bin/bash >> /etc/shells'
    chsh -s /usr/local/bin/bash

# 2017-12-15

* fixed key-repeat timings on Mac OS

We used to do this with Karabiner/Karabiner-Elements, but that option is long
gone, so we need to do it using `defaults`:

    defaults write NSGlobalDomain InitialKeyRepeat -int 11
    defaults write NSGlobalDomain KeyRepeat -float 1.3

In particular, the above sets:

- Delay until repeat: 183 ms
- Key repeat: 21 ms

# 2017-12-20

* learned how to clone repo subdirectories

This can be done by using *sparse checkouts* (available in Git since 1.7.0):

    mkdir
    cd
    git init
    git remote add -f origin

This creates an empty repository with your remote, and fetches all objects
but doesn't check them out.  Then do:

    git config core.sparseCheckout true

Now you need to define which files/folders you want to actually check out.
This is done by listing them in `.git/info/sparse-checkout`, eg:

    echo "some/dir/" >> .git/info/sparse-checkout
    echo "another/sub/tree" >> .git/info/sparse-checkout

Last but not least, update your empty repo with the state from the remote:

    git pull origin master

To undo this instead:

    echo "/*" > .git/info/sparse-checkout
    git read-tree -mu HEAD
    git config core.sparseCheckout false

# 2017-12-21

* fix symbolic links on Windows/cygwin

Define the following environment variable:

    CYGWIN=winsymlinks:nativestrict

# 2017-12-27

* fucking fixed MacOS not going to sleep!

As of OS X 10.12, the nfsd process now holds a persistent assertion
preventing sleep.  This is mostly a good thing, as it lets NFS work on
networks where wake-on-lan packets don't work.  However, this means that when
plugged in, OS X will no longer automatically sleep.  If you unload the
daemon, sleep works fine, and I noticed that vagrant starts nfsd if required
on vagrant up.  This is now worse with 10.12.4, as a call to `nfsd stop` is
no longer supported when system integrity protection is turned on.  Thus the
only way to let the machine sleep is to prune /etc/exports and restart nfsd.
It would be nice to at least add a cli command to have vagrant prune
/etc/exports (so we can script in when we want nfs pruned), or even better
have it automatically prune exports on suspend and halt, and re-add on wake
and up.

For this I created the following, and added it to my .bashrc:

    function fucking-kill-nfsd() {
        # sudo sh -c "> /etc/exports"
        sudo nfsd restart
    }

[Source](

# 2018-01-08

* finished reading [Never Split the Difference: Negotiating As If Your Life Depended On It](

# 2018-06-03

* learned how to take a peek at a repo's remote

Fetch incoming changes:

    git fetch

Diff `HEAD` with `FETCH_HEAD`:

    git diff HEAD..FETCH_HEAD

If the project has a changelog, you could just diff that:

    git diff HEAD..FETCH_HEAD -- some/path/CHANGELOG

Bring changes in:

    git pull --ff

# 2018-06-05

* finished reading [The Senior Software Engineer](
* make `appcelerator` work again

Unset `_JAVA_OPTS`, as the scripts are too fragile and the `javac` command is
not properly parsed:

    ~/Library/Application Support/Titanium/mobilesdk/osx/7.2.0.GA/node_modules/node-appc/lib/jdk.js

Added log messages because of uncaught exceptions

# 2018-07-05

* finished reading [Modern Vim: Craft Your Development Environment with Vim 8 and Neovim](

# 2018-07-12

* fucking fixed weird Cygwin/SSH username issues

Symptoms: cygwin `USER` and `USERNAME` env variables are different[0]:

    $ set | egrep "USER=|USERNAME="
    USER=IONTRADING+User(1234)
    USERNAME=mlandi

Where does the value of USER come from?  Take a look in '/etc/profile'; you
will find the following:

    # Set the user id
    USER="$(/usr/bin/id -un)"

`id` gets its data from /etc/passwd (which, if not there, you might have to
create with the following command[1]):

    mkpasswd -c > /etc/passwd

You will have to kill all the cygwin processes to see this working.
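As an aside, the uid-to-name lookup that `id -un` performs can be sketched
with Python's stdlib `pwd` module -- a rough illustration of the same
passwd-database lookup, not the actual Cygwin code path:

```python
import os
import pwd


def current_user_name():
    """Resolve the current uid to a login name, like `id -un` does."""
    return pwd.getpwuid(os.getuid()).pw_name


print(current_user_name())
```

If `pwd.getpwuid` raises `KeyError`, the uid has no entry in the passwd
database -- which is exactly the broken state `mkpasswd -c > /etc/passwd`
repairs on Cygwin.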
# 2018-08-01

* init'd new work windows box -- and added stuff to my-env's README

# 2018-08-19

* finished reading [Cypherpunks: Freedom and the Future of the Internet](

# 2018-08-20

* re-learned how to release on pypi

Make sure your `~/.pypirc` file is current:

    [distutils]
    index-servers:
        testpypi
        pypi

    [testpypi]
    repository:
    username: landimatte

    [pypi]
    repository:
    username: landimatte

Clean up the `dist/` directory:

    rm -rf dist/*

Build the source distribution package:

    python sdist

Release to the *test* instance of Pypi, and check everything is in place:

    twine upload dist/* -r testpypi

Release to the *public* instance of Pypi:

    twine upload dist/* -r pypi

# 2018-08-25

* force merge to keep feature branches in the history

This [post]( does a good job at explaining how to keep feature branches in
the git history (i.e. forcing Git to always create a merge commit).  In a
nutshell, to disable _fast-forward_ merges completely:

    git config --global --add merge.ff false

While to disable them just for `master`:

    git config --global branch.master.mergeoptions "--no-ff"

# 2018-08-26

* finished reading [No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State](
* setting up Mutt on Windows

Cannot find mutt-patched, so let's go with the default:

    apt-cyg mutt

It turns out, default mutt now ships with the sidebar patch -- yay!!!

Manually installed `offlineimap` using Cygwin:

    apt-cyg install offlineimap

+ build offlineimap from sources

SSL certificates on Cygwin are put inside a different directory, so I had to
customize files a little to point to the right path

+ create a script to inspect the disk and symlink certificates in a known
  location (e.g. ~/.certs)

Used python's [keyring]( library to handle secrets (it works on MacOS too!)

urlview is not available on Cygwin; I tried with [urlscan]( for a couple of
days, but ended up manually building urlview from sources:

    git clone ...
    apt-cyg install \
        autoconf \
        automake \
        gettext \
        libcurses-devel
    autoreconf -vfi
    ./configure
    make

Created `~/bin/urlview` script delegating to `~/opt/urlview/urlview`
(location of my local urlview repo); created `~/.urlview` containing:

    COMMAND br

Luckily for me, msmtp is available on Cygwin, so a simple `apt-cyg install
...` did the trick

# 2018-08-29

A tale of Sinon, Chai, and stubs

Imports:

    const chai = require('chai');
    const expect = chai.expect;
    const sinon = require('sinon');
    const sinonChai = require('sinon-chai');

    chai.should();
    chai.use(sinonChai);

Then inside the tests:

    expect(;
    expect(;

# 2018-09-07

* reinstalled docker-ce on Ubuntu

# 2018-09-08

* first aadbook release!
* fork vim-goobook and add support for specifying a custom program (e.g. aadbook)

# 2018-09-09

* finished reading [The Art of Invisibility: The World's Most Famous Hacker Teaches You How to Be Safe in the Age of Big Brother and Big Data](

# 2018-09-14

* finished reading [When Google Met WikiLeaks](

# 2018-09-16

* aadbook: always regenerate tokens when invoking 'authenticate'
* aadbook: don't refresh auth token, unless expired

# 2018-09-21

* reimplement vim-goobook filtering logic in pure vimscript

# 2018-09-24

* finished reading [Ghost in the Wires: My Adventures as the World's Most Wanted Hacker](

# 2018-09-26

* fix certbot on

# 2018-09-28

* make vim-goobook platform independent

# 2018-09-29

* vim-goobook completion works even on lines starting with spaces/tabs

# 2018-10-01

* aadbook: fix soft-authentication issue

# 2018-10-02

* add aadbook page to

# 2018-10-03

* make mobile friendly

# 2018-10-05

* set up a .plan file, some vim scripts, and a bash script to manage it
+ add a `logfilter` page on
? fix mutt 'indicator' colors taking over 'index' ones!  I want index rules
  to be more important than indicator ones:
? pin specific gmail/microsoft certificates
+ PGP mutt
? live notification fetcher plugin on

# 2018-10-06

* cleaned up mail related scripts (personal/work)
* added .plan page to
* images on are no longer cropped on mobile

# 2018-10-07

* yet another restyle -- ripped off css rules from:
* automatic generation of sitemap.xml

# 2018-10-08

* fix s format on sitemap

# 2018-10-09

* added personal GPG key to
* added fuzzy-finding support to aadbook, by replacing each whitespace ` ` with `.*`
* added fuzzy-finding support to goobook

# 2018-10-10

* finished reading [Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture](

# 2018-10-11

* disabled caching of html|css files on

# 2018-10-14

* finished setting up GPG w/ Mutt!!

It's been a long journey, but I finally did it.  At first I tried setting up
all the pgp_* mutt commands to verify/sign/encrypt/decrypt, but while
encrypting seemed to work, when I tried to decrypt my own messages (i.e.
messages I had sent to my own address) Mutt errored out complaining that GPG
was not able to decrypt the message -- note that manually running the `gpg`
command on a copy of the message worked without any issues.  Anyway, I
finally gave up, and went on and added `set crypt_use_gpgme` to my .muttrc
(had to re-install mutt, with the `--with-gpgme` flag).

Few related (and interesting) articles:

- Gnu PGP Intro:
- Getting Started with PGP and GPG:
- Simpler GnuPG & Mutt configuration with GPGME:
- GPG / Mutt / Gmail:

# 2018-10-20

* improved vim-syntax for .plan files
* fixed `aadbook authenticate` bug where the command could fail when
  initializing the Auth object, leaving the user with nothing to do but wipe
  everything out
? don't advance mutt cursor when deleting/archiving threads

# 2018-10-22

* created MR for goobook, fixing pip installation -- is trying to
  read a file not included in the manifest
* fixed goobook crash happening when the user's contacts list contained some
  contacts with neither a name nor an email address
* added support for local `b:goobookprg` to vim-goobook
* added editorconfig integration to dotfiles/vim

# 2018-10-23

* expensio: created skeleton Node.js app -- logger, module loader, express, openrecord
* expensio: created skeleton Angular app -- boilerplate, Login/Register pages
? expensio: send auth tokens via email

# 2018-10-25

* expensio: added back/front end support for editing paymentmethods
* expensio: created passport/bearer based auth layer

# 2018-10-26

* expensio: almost complete auth workflow -- missing: emails
* expensio: authenticate paymentmethods-related workflows
? expensio: dynamically make 'code' required
? expensio: intercept UNIQUE CONSTRAINT and return 409
? expensio: stop reloading pm/expenses -- especially when going back after pm/expense edit

# 2018-10-27

* expensio: initial support (server/client) for groups: list, create
* expensio: initial support (server/client) for group-members: list, create
* expensio: initial support (server/client) for groupcategories: list, create
? expensio: confirm group membership
? expensio: hierarchical categories

# 2018-10-28

* expensio: add logging of HTTP requests using morgan
? expensio: fix logging of warnings/errors with winston

# 2018-10-29

* expensio: initial support (server/client) for expenses: list, create
* expensio: edit of group categories
* expensio: fixed exception logging when `json: true`

# 2018-10-31

* vim-lsp: use loclist instead of quickfix for document diagnostics

# 2018-11-01

* aadbook: 0.1.2 -- fix potential security vulnerability related to `requests==2.19.1`

# 2018-11-03

Finally found some time to play with Microsoft's [Language Server Protocol](

* js-langserver: submitted PR for loading ESLint from the project directory
  -- instead of the language server one
* js-langserver: updated the LS manifest based on implemented features
* js-langserver: fixed 'rename' -- the protocol must have changed since the
  first implementation, and 'rename' wasn't working anymore

# 2018-11-04

+ tern.js does not recognize `async function foo() ..` as definition

# 2018-11-06

* submitted PR to rainbow_parentheses.vim for configuring the maximum number
  of levels to color

# 2018-11-10

* finished reading [Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots](

Played around with Tern.js internals to understand: 1) why `findDef` of an
`async function` is not returning anything, 2) how to enhance js-langserver
to support pretty much all the supported properties one can configure inside
.tern-config/.tern-project

Regarding point 1 above, I tried to factor out a bootstrap.js file inside
tern.js/lib, containing all the code necessary to start up a Tern server
(code gently borrowed from tern.js/bin/tern) -- let's see if tern.js
maintainers are interested in merging my PR

# 2018-11-11

* managed to get tern.js to reference `async function` definitions -- I did
  not realize I had `"ecmaVersion": 6` in my .tern-project file
* created a PR to tern.js to factor out server-bootstrapping logic into a
  different file people can use to easily start a Tern server without the
  HTTP wrapper

Read a few articles about [Initiative Q]( and, and I feel like it's just a
big scam: they are giving away free money in exchange for... your data, and
information about your inner circles (you only have 5 invites, so you need to
be careful whom you send them to; and when you receive one, since you know
the sender very well and you know they only had 5 invites, you might as well
simply accept it and join the community, because you really trust the sender
of the invite, and they wouldn't have sent it to you if the thing was a
scam).  Anyway, let's see how it pans out.

# 2018-11-13

* finished reading [Ripensare la smart city](

# 2018-11-25

* fixed editorconfig configuration for .vimrc --

# 2018-12-03

* completed advent of code day1-1

# 2018-12-05

* completed advent of code day1-2
* cleared Spotify cache

Couple of interesting articles on how to clear/configure Spotify's cache:

-
-

# 2018-12-06

* completed advent of code day2-1
* completed advent of code day2-2

# 2018-12-08

* completed advent of code day3-1
* completed advent of code day3-2

# 2018-12-09

* completed advent of code day4-1
* completed advent of code day4-2
* completed advent of code day5-1 -- it takes around 2 minutes to run... we can do better
* completed advent of code day5-2 -- left running at night
+ more efficient solution for day5-1 and day5-2

# 2018-12-10

* completed advent of code day6-1
* completed advent of code day6-2

# 2018-12-11

* completed advent of code day7-1 -- it did not seem that complex... yet it
  took me forever to implement the dependency-tree traversal

# 2018-12-12

* completed advent of code day8-1
* completed advent of code day8-2

# 2018-12-13

* completed advent of code day9-1
* completed advent of code day9-2 -- slow as balls, had to leave it going during the night
+ more efficient solution for day9-2

# 2018-12-14

* completed advent of code day10-1
* completed advent of code day10-2
* completed advent of code day11-1

# 2018-12-15

* completed advent of code day12-1
* completed advent of code day12-2
? understand why day12-2's first set of loops needs to be skipped (> cur-gen 10000)

# 2018-12-16

* finished reading [Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy](
* completed advent of code day13-1
* completed advent of code day13-2
* completed advent of code day14-1
* completed advent of code day14-2 -- slow-ish
? make solution for day14-2 more efficient

# 2018-12-20

* completed advent of code day16-1
? figure out how to parse day16-2 input as a single file, instead of 2

# 2018-12-21

* completed advent of code day16-2
+ learn about lispwords -- vim

# 2018-12-24

* completed advent of code day17-1 -- it took me forever to get this right...
  I am dumber than I thought I was
* completed advent of code day17-2 -- once you get part 1 right...
* completed advent of code day18-1
* completed advent of code day18-2
* completed advent of code day19-1

# 2018-12-25

* completed advent of code day19-2

I started analyzing all the comparison instructions, and the status of the
registers before and after these were executed.  Then, tweaking the initial
state of the registers to fast forward the program execution, I managed to
understand that the program was trying to calculate the sum of the divisors
of a given number (where _given_ means some funky formula I have no idea
of... I simply looked at the registers to see what this number was).

# 2018-12-26

* completed advent of code day21-1

I looked for all the comparison instructions where R0 (the only one we could
mutate) appeared as one of the arguments; put a breakpoint on that
instruction, inspected what the expected value was, used that as R0, and
that's it.
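The day19-2 program described above boils down to a divisor sum; a minimal
Python sketch of that computation (the actual target number comes from the
puzzle input and is not shown here, so `12` below is just a stand-in):

```python
def divisor_sum(n):
    """Sum of all divisors of n, pairing each d with n // d up to sqrt(n)."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting a perfect-square divisor
                total += n // d
        d += 1
    return total


print(divisor_sum(12))  # 1 + 2 + 3 + 4 + 6 + 12 = 28
```

Pairing divisors up to the square root keeps this O(sqrt(n)), which matters
once the register-fiddling reveals a target in the tens of millions.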
# 2018-12-30

* completed advent of code day23-1 -- it took me forever to realize I was
  looping over the sequence I had passed to `sort` -- which is destructive --
  and that some elements were gone (I should have either `copy-seq`d it, or
  looped over the sorted list)
* completed advent of code day22-1
+ memoize decorator in common lisp

# 2018-12-31

? parse strings using regexps in common lisp

# 2019-01-03

* completed advent of code day22-2 -- the solution (a-star) runs in 48
  seconds... common lisp doesn't ship with an implementation of heapq, so
  adding+sorting on every step is probably killing the performance
+ make day22-2 run faster -- find a good implementation of heapq

Simple Dijkstra/path-finding Python implementations:

# 2019-01-04

* completed advent of code day20-1 -- Dijkstra
* completed advent of code day20-2

# 2019-01-05

* completed advent of code day24-1 -- it took me quite some time to get all
  the rules implemented, but it worked eventually
* completed advent of code day24-2 -- starting from a boost of 2, I kept
  doubling it until I found **a** winning boost; then ran a binary search
  between the winning one and its half

# 2019-01-06

* completed advent of code day25-1 -- disjointset
* completed advent of code day7-2
? write a popn macro, popping n elements from place

# 2019-01-07

* completed advent of code day11-2 -- before, I was generating all the
  squares, calculating the energy of each square, and then finding the max;
  now, I am doing all this using for loops, without allocating much stuff.
  It's slow as balls, but after only 19 minutes it spit out the right
  solution :P
+ make solution for day11-2 more efficient

# 2019-01-08

* completed advent of code day21-2

After spending almost an hour trying to understand what the problem was
doing, I gave up and peeked into Reddit to see how other people had solved it
and... it turned out I had got the problem all wrong!  The program loops
forever, and the only way to break out of it is if (in my case) R0 is loaded
with some random values, say 10 234 55 1 12 and 10 again.  Now, if you load
R0 with 10, it will break out immediately; if you load it with 234, it will
check against 10 and then 234 (at which point it will exit); the problem asks
for the number for which the program will run the maximum number of
instructions, and the answer is the latest value of the loop: 12.  You load
R0 with 12, and the program will run and check against 10, 234, 55, 1, and
finally 12!

The program ran in 511 seconds, but I don't want to try to understand what
the program does and simplify it (or transpile it into C or something).  I
can try and make opcodes run faster, but that's it.

? Make Elf opcodes run faster -- use arrays instead of consing

# 2019-01-10

* completed advent of code day23-2

I spent a few hours trying to come up with a proper solution for this, but I
gave up eventually; I got so clouded by the size of the input (not just the
number of nanobots, but also their coordinates and their radius sizes), that
rather than immediately searching for the correct solution, I should have
just tried things in a trial and error fashion.

The solution I happened to copy from Reddit goes as follows (I am still not
100% sure it is generic, but it worked for my input so...): we sample the
three dimensional space looking for a local maximum; once we have one, we
zoom in around that maximum, and keep on sampling, until the area is a point.
Of course we have to take into account not only the number of nanobots
covering the sampled point, but also its distance to the origin.

# 2019-01-13

* completed advent of code day15-1
* completed advent of code day15-2 -- it's official, I don't know how to code
  a-star.  Also, I am unlucky: my solution was working with pretty much all
  the inputs I found online, but still it wasn't accepting mine.  Anyway,
  there was a bug in my a-star algorithm, and once I fixed it, it finally
  accepted my answer (funny how it was working OK for all the other inputs).
* completed advent of code 2018 -- now time to look at the solutions again,
  and see what can be improved, factored out
* fixed my laptop -- `brewski` fucked up Vim (it would not start up), and it
  turned out there were some issues with [python2 vs python3](

# 2019-01-17

* finished reading [Common LISP: A Gentle Introduction to Symbolic Computation](

# 2019-01-19

* configured my inputrc to popup an editor (rlwrap-editor) with C-xC-e, when
  playing with Node, Python, or SBCL -- which are rlwrap-wrapped
* uninstalled `roswell` and installed plain `sbcl` instead; had to install
  quicklisp though, to make it work with vlime, but that was [easy](

1. Download quicklisp: `curl -O`
2. Execute the following forms from a sbcl REPL:

    (load "quicklisp.lisp")
    (quicklisp-quickstart:install)

3. Finally, instruct SBCL to always load quicklisp -- add the following to
   your '.sbclrc': `(load "~/quicklisp/setup.lisp")`

# 2019-01-20

* got more familiar with CL `loop`'s syntax

The `loop` macro supports iterating over a list of elements and exposing not
just the current element, but also the remaining ones; it's implemented using
the `:on` phrases (this will make `var` step over the cons cells that make up
the list):

    (let ((data '(1 2 3 4 5 6 6)))
      (loop :for (a . remaining) :on data
            :for b = (find a remaining)
            :do (when b (return a))))

Note that you cannot simply `:for b :in remaining`, because `remaining` will
be `NIL` when accessed, forcing an early exit from `loop` (this does not
happen when we use [Equals-Then Iteration](

The `loop` macro supports `count` (and `counting`) too, so expressions like:

    :summing (if (hash-table-find 3 freqs) 1 0) :into threes

can be simplified into:

    :counting (hash-table-find 3 freqs) :into threes

# 2019-01-24

Trying to make sense of lisp indentation with Vim and Vlime, and it turns out
you have to `set nolisp` (or simply not `set lisp`, if you were explicitly
setting it):

+ I still don't get why I should use `lispwords` though (I thought Vlime used
  a different set of rules), but maybe I just don't know what I am doing...

# 2019-01-26

* replaced TSlime with a custom-made mini-plugin that ships with a couple of
  `` mappings: one to list all the buffers and pick a terminal you want to
  send data to, and another to send the current selection to the selected
  terminal

# 2019-01-27

* converted my advent-of-code solutions repository to [quickproject](

It's important to symlink your project, or its .asd file won't properly load
up:

    mkdir -p ~/quicklisp/local-projects/
    ln -s $(pwd) ~/quicklisp/local-projects/$(dirname)

Once you do this, and you open a REPL inside a _quickproject_, a simple
`(ql:quickload ...)` should work just fine

# 2019-01-28

Need to concatenate multiple images into a single PDF?  [Easy](

    convert "*.{png,jpeg}" -quality 100 outfile.pdf

# 2019-02-03

+ `recursively` macro -- and use it for day08
? is it possible to do `:collecting` with a specific type?
? parsing input is still annoying...
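For comparison, the 2019-01-20 `loop ... :on` example above (return the first
element that occurs again somewhere in the rest of the list) can be sketched
in Python like so -- `first_repeated` is just an illustrative name:

```python
def first_repeated(items):
    """Return the first element that appears again later in the list,
    mirroring the `(loop :for (a . remaining) :on data ...)` idiom."""
    for i, a in enumerate(items):
        if a in items[i + 1:]:  # `(find a remaining)`
            return a
    return None


print(first_repeated([1, 2, 3, 4, 5, 6, 6]))  # 6, same as the Lisp snippet
```

Like the Lisp version, this is quadratic in the worst case; a `set` of seen
elements would make it linear, at the cost of losing the direct parallel with
`:on`.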
# 2019-02-05 * implement `RECURSIVELY` macro # 2019-02-06 * refactored day9 using doubly linked list and ring structures `ARRAY-IN-BOUNDS-P` can be used to figure out if an index (list of indexes) is a valid array (multi-dimensional array) position You loop over _lazily_ generated numbers by using `LOOP`: (loop :for var = initial-value-form [ :then step-form ] ...) For example, to indefinitely loop over a _range_ of values: (loop :for player = 0 :then (mod (1+ player) players) # 2019-02-10 Lightroom shortcuts: - R: grid overlay (press O to change the grid type; shift-O to rotate the grid) - L: lights out (dim background, isolating the photo you are working on); press it again, all the controls will now be black - F: fullscreen - Y: before & after - \: compare with original - J: show where the image is clipping (red: overexposed, blue: underexposed) Lightroom quick tips: - Exposure > Tone Auto - Reset button (bottom right corner) / or double click on the effect - Grid > Angle: you can now draw a line, and the photo will be rotated based on that - Radio brush adjustment # 2019-02-11 [04 – Focal Length]( Ultra-wide angle – 14-24mm Wide angle – 24-35mm Normal – 40-75mm Mild tele – 85-105mm Medium tele – 120-300mm Long and exotic tele – 300-800mm > This was a very interested exercise for me. I used a kit Canon 18-55mm lens with the shots at 18/35/55. I then approached the hydrant and shot it at 18mm. The most noticeable impact that I saw was the loss of depth you have when you are zooming in versus physically approaching the subject. If you look at the dirt pile just over the top left of the hydrant in the third image versus the fourth, the depth of field and the scope are significantly different versus trying to shoot the same shot just physically closer without any significant zoom. Interesting exercise. 
# 2019-02-16

fixed DNS problems after upgrading Ubuntu from 16.04 to 18.04

> After not understanding why this wouldn't work I figured out that what was also needed was to switch /etc/resolv.conf to the one provided by systemd. This isn't the case in an out-of-a-box install (for reasons unknown to me).

    sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

Source:

Also, recently W10 started pushing me to use a new Snip tool, so I had to figure out how to change my AutoHotKey files to use that:

> I figured it out! Create a shortcut as described here: and then incorporate that shortcut into the following code:

    +#s::
    IfWinExist Snip & Sketch
    {
        WinActivate
        WinWait, Snip & Sketch,,2
        Send ^n
    }
    else
    {
        Run "F:\Snip & Sketch.lnk"
        sleep, 500
        send, ^n
    }
    return

Source:

# 2019-02-19

* completed advent of code 2017 day1-1
* completed advent of code 2017 day1-2

# 2019-02-20

* completed advent of code 2017 day2-1
* completed advent of code 2017 day2-2

TIL, `MULTIPLE-VALUE-LIST`:

    (loop :for values :in data
          :for (divisor number) = (multiple-value-list (evenly-divisible-values values))
          :summing (/ number divisor))))

# 2019-02-21

* completed advent of code 2017 day3-1
* completed advent of code 2017 day3-2

+ is there anything similar to `WITH-SLOTS`, but for structures?

# 2019-02-23

* completed advent of code 2017 day4-1
* completed advent of code 2017 day4-2

In CL, you can use `:thereis` to break out of a `LOOP` when a given condition is `T`:

    (loop :for (a . rest) :on pass-phrase
          :thereis (member a rest :test 'equal)))

There are `:never` and `:always` too!
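A minimal, self-contained illustration of the three (with a stock predicate, not the pass-phrase code above):

```lisp
;; :THEREIS stops at the first non-NIL value and returns it;
;; :NEVER and :ALWAYS return T or NIL.
(loop :for x :in '(1 2 3) :thereis (evenp x))  ; => T
(loop :for x :in '(1 3 5) :never  (evenp x))   ; => T
(loop :for x :in '(2 4 6) :always (evenp x))   ; => T
```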
# 2019-02-24

* completed advent of code 2017 day5-1
* completed advent of code 2017 day5-2
* completed advent of code 2017 day6-1
* completed advent of code 2017 day6-2

# 2019-02-25

* completed advent of code 2017 day7-1
* completed advent of code 2017 day7-2

My solution for 2017 day 07 is a little messy -- gotta find someone else's solution and compare it with mine to see what can be improved

# 2019-02-26

* completed advent of code 2017 day8-1
* completed advent of code 2017 day8-2
* completed advent of code 2017 day9-1
* completed advent of code 2017 day9-2

It would be nice if there were a macro similar to `WITH-SLOTS`, but for structures, so I wouldn't have to do the following:

    (loop :with registers = (make-hash-table :test 'equal)
          :for instruction :in data
          ; XXX isn't there a better way to handle this?
          :for reg-inc = (reg-inc instruction)
          :for inc-delta = (inc-delta instruction)
          :for op = (comp-op instruction)
          :for reg-comp-1 = (reg-comp-1 instruction)
          :for value-comp = (value-comp instruction)

# 2019-02-27

* completed advent of code 2017 day10-1
* completed advent of code 2017 day10-2

A couple of interesting `FORMAT` recipes:

- `~x` to print a hexadecimal (`~2,'0x` to pad it with `0`s)
- `~(string~)` to lowercase a string

# 2019-03-01

* completed advent of code 2017 day11-1
* completed advent of code 2017 day11-2
* completed advent of code 2017 day12-1
* completed advent of code 2017 day12-2

Day11 was all about hex grids; found this interesting [article]( about how to represent hex grids, and how to calculate the distance between tiles -- I opted for the _cube_ representation. Day12 instead, disjoint sets!

# 2019-03-02

* completed advent of code 2017 day13-1
* completed advent of code 2017 day13-2
* completed advent of code 2017 day14-1
* completed advent of code 2017 day14-2

Day13 part 2 kept me busy more than expected, but eventually I figured out what I was doing wrong: severity == 0 isn't the same as not getting caught.
Also, at first I implemented this by simulating the delay, and the packets passing through the different firewall layers; it worked, but was slow as balls (around 13 secs), so I decided to do some math to figure out collisions, without having to simulate anything. It paid off, as now it takes around 0.4 secs to find the solution.

Day14 part 1 was all about re-using Knot Hashes from day 10, and figuring out a hexadecimal to binary conversion. Part 2 instead, disjoint sets, again!

+ implement a proper hex to binary macro or something -- `HEXADECIMAL-BINARY`

# 2019-03-03

* completed advent of code 2017 day15-1
* completed advent of code 2017 day15-2

Upgraded vim-fugitive, but since then I cannot `:Gcommit` anymore. Even when I do it from the command line, not only do I see the following error message, but when I try to save, Vim complains with: `E13: File exists (add ! to override)`

    Error detected while processing function 41_config[4]..119_buffer_repo:
    line    1:
    E605: Exception not caught: fugitive: A third-party plugin or vimrc is calling fugitive#buffer().repo() which has been removed.
    Replace it with fugitive#repo()
    Error detected while processing function FugitiveDetect:
    line   11:
    E171: Missing :endif

+ Need to figure this out, as it's super annoying
+ implement something similar to `SUMMATION`, but for `MAX` and `MIN` -- `MAXIMIZATION` and `MINIMIZATION`

Also, TIL (or better, remembered) that you can `(loop :repeat N ...)` to execute a loop `N` times

# 2019-03-04

* completed advent of code 2017 day16-1
* completed advent of code 2017 day16-2
* completed advent of code 2017 day17-1
* completed advent of code 2017 day17-2

+ Make day17-2 run faster -- it takes minutes for those 50_000_000 iterations on that ring buffer to complete

# 2019-03-05

* completed advent of code 2017 day18-1
* completed advent of code 2017 day18-2
* completed advent of code 2017 day19-1
* completed advent of code 2017 day19-2

Day 18: Another "here is your instruction set, create a compiler for it" kind of exercise. Part 1 was OK, it did not take me long to get it right; part 2 instead got messier than expected, but overall it doesn't look that _bad_ and it runs pretty quickly too, so...

# 2019-03-06

* completed advent of code 2017 day20-1
* completed advent of code 2017 day20-2

Really sloppy solution, mine. The problem statement says:

    Which particle will stay closest to position <0,0,0> in the long term?

So I said, why don't I let the simulation run for some time until the particle closest to <0,0,0> doesn't change (part 1), or no more particles collide with one another (part 2)? Well, it turned out 500 iterations were enough to converge to the right answers!
~ find a clever (and more generic) way to solve 2017 day20 -- implement a function to check when all the particles are moving away from <0,0,0>

# 2019-03-07

* optimized 2017 day 17

We are interested in the value next to 0 after 50000000 iterations, so we don't have to keep the ring buffer up to date; we can simply simulate insertions, and keep track of all the values inserted after value-0

# 2019-03-08

* optimized solution for 2017 day20 -- added GROUP-BY, refactored UNIQUE-ONLY to use GROUP-BY, and that's it, from O(N²) to O(N)!

? Is it guaranteed that iterating over hash-table keys would return them in insertion order? It seems to be the case for SBCL at least, but is it implementation dependent?

# 2019-03-09

* completed advent of code 2017 day21-1
* completed advent of code 2017 day21-2

I decided to represent the pixels grid as a string, so I had to implement logic for:

- accessing specific sub-squares of such a grid -- `(+ (* i size) j)`
- combining sub-squares together --

It takes less than 4 seconds to complete the 18th iteration, so I did not bother optimizing further; I have however read something interesting on Reddit:

> After 3 iterations a 3x3 block will have transformed into 9 more 3x3 blocks, the futures of which can all be calculated independently. Using this fact I just keep track of how many of each "type" of 3x3 block I have at each stage, and can thus easily calculate the number of each type of 3x3 block I'll have 3 iterations later.

So yeah, this can be optimized further but you know, I am an engineer..
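The sub-square access mentioned above boils down to row-major index arithmetic; a minimal sketch with a made-up 3x3 grid:

```lisp
;; Accessing cell (i, j) of a SIZE x SIZE grid stored as a flat string:
;; the cell lives at index (+ (* i size) j).
(let* ((size 3)
       (grid "#.#..##.."))          ; 3x3 grid, rows concatenated
  (char grid (+ (* 1 size) 2)))     ; row 1, column 2 => #\#
```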
# 2019-03-10

* finally solved all the vim-fugitive issues I was facing since the last upgrade; it turns out some plugin functions were deprecated, and other plugins (e.g. fugitive-gitlab) were still using them -- after I upgraded those plugins as well, everything started to work just fine
* completed advent of code 2017 day22-1
* completed advent of code 2017 day22-2
* completed advent of code 2017 day23-1 -- reused code from day18 and managed to quickly get a star for part 1; for part 2 it seems like I need to reverse engineer that code, which I don't want to do right now

TIL:

- I can use `:when .. :return ..` instead of `:do (when .. (return ..))`
- I can use `:with .. :initially ..` to create and initialize `LOOP`'s state instead of `LET` + initialization logic *outside* of `LOOP`

? implement MULF to multiply a place --

# 2019-03-11

* completed advent of code 2017 day24-1
* completed advent of code 2017 day24-2

+ `goobook-add` is not working anymore -- it ends up creating an account having name and email both set to the email address: `goobook-add Matteo Landi` creates a new account with name:, and address:
~ in CL, what's the idiomatic way of filtering a list, keeping the elements matching a specific value? `(remove value list :test-not 'eql)`? `(remove-if-not #'= value list)`

# 2019-03-12

* completed advent of code 2017 day25-1 -- it was painful to parse the input, and as many did in the megathread, I should have manually parsed it, hopefully got some scores, and then implemented the input parser, but you know, I am late for the leaderboard anyway, so...

# 2019-03-13

* completed advent of code 2017 day23-2 -- and with it, AoC 2017!!!

Oh boy, I am done reverse-engineering assembly code -- probably because I am bad at it.
A few lessons learned though:

- Use TAGBODY to create CL code matching your input
- When iterating over big ranges of numbers (like here, where we were working in the range 108100 - 125000), one could try to make such a range smaller, hopefully making it easier to figure out what the code is actually doing

Enjoy my abomination -- it's ugly, still it helped me realize we were counting non-prime numbers

    (defun solve-part-2 ()
      (let ((b 0) (c 0) (d 0) (e 0) (f 0) (g 0) (h 0))
        (tagbody
          (setf b 81 c b)
          part-a
          (prl 'part-a)
          (setf b (* b 100) b (- b -100000) c b c (- c -17000))
          part-b
          (prl 'part-b b c (break))
          (setf f 1 d 2)
          part-e
          (prl 'part-e)
          (setf e 2)
          part-d
          (prl 'part-d d e b)
          (setf g (- (* d e) b))
          (if (not (zerop g)) (go part-c))
          ; (prl 'resetting-f (break))
          (prl 'resetting-f)
          (setf f 0)
          part-c
          (prl 'part-c d e b)
          (setf e (- e -1) g e g (- g b))
          (if (not (zerop g)) (go part-d))
          (setf d (- d -1) g d g (- g b))
          (if (not (zerop g)) (go part-e))
          (if (not (zerop f)) (go part-f))
          ; (prl 'setting-h (break))
          (prl 'setting-h)
          (setf h (- h -1))
          part-f
          (prl 'part-f b c)
          (setf g b g (- g c))
          (if (not (zerop g)) (go part-g))
          (return-from solve-part-2 (prl b c d e f g h))
          part-g
          (prl 'part-g)
          (setf b (- b -17))
          (go part-b))))

Also, this! from [reddit](

> The key to understanding what this code does is starting from the end and working backwards:
>
> - If the program has exited, g had a value of 0 at line 29.
> - g==0 at line 29 when b==c.
> - If g!=0 at line 29, b increments by 17.
> - b increments no other times on the program.
> - Thus, lines 25 through 31 will run 1000 times, on values of b increasing by 17, before the program finishes.
>
> So, given that there is no jnz statement between lines 25 and 28 that could affect things:
>
> - If f==0 at line 25, h will increment by 1.
> - This can happen once and only once for any given value of b.
> - f==0 if g==0 at line 15.
> - g==0 at line 15 if d*e==b.
> - Since both d and e increment by 1 each in a loop, this will check every possible value of d and e less than b.
> - Therefore, if b has any prime factors other than itself, f will be set to 1 at line 25.
>
> Looking at this, then h is the number of composite numbers between the lower limit and the upper limit, counting by 17.

# 2019-03-14

? read about Y-combinator: -- I am pretty sure there are better resources out there

# 2019-03-16

~ review vim-yankring config -- I have a custom branch to override some mappings which hopefully could go; also, it stomps on vim-surround mappings too

Extracts from

Everything is an expression. [Lisps] Most programming languages are a combination of two syntactically distinct ingredients: expressions (things that are evaluated to produce a value) and statements (things that express an action). For instance, in Python, `x = 1` is a statement, and `(x + 1)` is an expression.

Every expression is either a single value or a list. [Lisps] Single values are things like numbers and strings and hash tables. (In Lisps, they’re sometimes called atoms.) That part is no big deal.

Macros. [Racket] Also known in Racket as syntax transformations. Serious Racketeers sometimes prefer that term because a Racket macro can be far more sophisticated than the usual Common Lisp macro.

# 2019-03-17

? change RING related methods to use DEFINE-MODIFY-MACRO --, and

# 2019-03-19

* refactored 2018 22

- Use COMPLEX instead of (LIST X Y)
- Use CASE / ECASE instead of ASSOC
- Add poor man's heap-queue implementation to utils
- Review a-star implementation -- soon I will factor out a new function for it

The execution time dropped from ~40 seconds to ~5. In the process I have also added HASH-TABLE-INSERT to utils, and used that wherever I saw fit

# 2019-03-20

+ gotta give vim-sexp a try -- and tpope's [mappings]( in particular

# 2019-03-21

* factored out A-STAR from 2018 22

? DESTRUCTURING-BIND for COMPLEX
? how to destructuring-bind an expression, ignoring specific elements?
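The COMPLEX-instead-of-(LIST X Y) trick above works because a complex number packs two coordinates into a single EQL-comparable value; a minimal sketch:

```lisp
;; A COMPLEX packs (x, y) into one value that hashes and compares cheaply;
;; adding complexes moves the point.
(let ((pos (complex 3 4)))
  (values (realpart pos)       ; x => 3
          (imagpart pos)       ; y => 4
          (+ pos #C(1 0))))    ; one step right => #C(4 4)
```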
# 2019-03-23

* changed PARTIAL-1 to support nested substitution via SUBST
* added PARTIAL-2 (similar to PARTIAL-1, but with the returned function accepting 2 arguments instead of 1)

I am pretty excited about what I came up with. At first I got stuck when trying to use SUBST because I couldn't get nested backticks/commas to work as expected; then I realized I could do all the transformation inside the macro, and only put inside backticks the transformed argument list

    (defmacro partial-1 (fn &rest args)
      (with-gensyms (more-arg)
        (let ((actual-args (subst more-arg '_ args)))
          `(lambda (,more-arg)
             (funcall (function ,fn) ,@actual-args)))))

# 2019-03-24

* implemented DEFUN/MEMO macro to define a memoized function

Tried to see if I could somehow expose the cache used (i.e. replacing `LET` with `DEFPARAMETER`) but could not find a way to programmatically generate a name as the concatenation of the function name and a `/memo` literal. Anyway, I am happy with it already..

    (defmacro defun/memo (name args &body body)
      (with-gensyms (memo result result-exists-p)
        `(let ((,memo (make-hash-table :test 'equalp)))
           (defun ,name ,args
             (multiple-value-bind (,result ,result-exists-p)
                 (gethash ,(first args) ,memo)
               (if ,result-exists-p
                   ,result
                   (setf (gethash ,(first args) ,memo)
                         (progn ,@body))))))))

# 2019-04-04

* implemented Summed-area Table data-structure, in Common Lisp --

# 2019-04-06

* implemented Clojure's threading macro, in Common Lisp --
* refactored 2018 day 11 to use Summed-area table!

# 2019-04-13

* finally got gcj-qualifications-3-criptopangrams accepted by the grader -- it's possible it was blowing up because of memory issues before, and I did fix that by reading the ciphertext one piece at a time (i.e.
multiple `(read)`) instead of read-line + split + parse

If a codejam solution is RE-ing out, but you cannot find any apparent flaw in the algorithm, try to invoke the GC at the end of each case -- `(sb-ext:gc)`

# 2019-04-14

Had some *fun* with [sic](, and implemented a wrapper to:

- create a SSL proxy using `socat` --
- keep track of input history using `rlwrap`
- store IRC logs using `tee`
- highlight my name with `gsed`

# 2019-04-16

* fucking fixed iterm2/tmux/term-mode weirdness -- vitality.vim lacked support for [terminal-mode](

? vitality for mintty -- mintality
+ read lispwords from project file

# 2019-04-21

Use `hexdump(1)` to view text as binary data (use `-C` to display original strings alongside their binary representation).

And why would I care so much about viewing text as binary data? Because I am trying to understand bracketed paste mode, and figure out if the extra new-line when pasting text in the terminal is intentional or a bug (readline? iTerm? iTerminal?)

# 2019-04-25

Fucking figured out why [bracketed paste mode]( had stopped working sometime during the last 3/4 months... finally!

Long story short, readline 8.0 was released this January, and with it shipped a fix for, a fix which I think broke something somewhere else. Anyway, if your bash setup includes a custom prompt command - set via PROMPT_COMMAND - that outputs some text with newline characters in it, when bracketed paste mode is on and you try to run a pasted command, chances are you would end up running not just the command you pasted but also one or more empty commands as well (depending on the number of newline chars that are generated inside the custom prompt command).
    matteolandi at hairstyle.local in /Users/matteolandi
    $ bind 'set enable-bracketed-paste on'
    matteolandi at hairstyle.local in /Users/matteolandi
    $ pwd | pbcopy
    matteolandi at hairstyle.local in /Users/matteolandi
    $ echo "CMD+V + Enter"
    CMD+V + Enter
    matteolandi at hairstyle.local in /Users/matteolandi
    $ /Users/matteolandi
    -bash: /Users/matteolandi: Is a directory
    matteolandi at hairstyle.local in /Users/matteolandi
    matteolandi at hairstyle.local in /Users/matteolandi
    126 >

Luckily for me I was able to fix my bashrc setup and move my custom prompt command `echo` calls into `$PS1` (see:, but I wonder if this wasn't a regression actually -- to be honest I don't know if it's expected to generate output from custom prompt commands or not.

* added initial support for mintty terminal to vitality.vim

# 2019-05-01

* bumped aadbook dependencies -- github sent me an alert that one of them contained a vulnerability
* set up keyring to play well with `twice` --
* finished my urlview vim wrapper --

# 2019-05-04

Participated in Google Code Jam Round 1C but did not make it to the second round:

- 1st problem: misread the description and took a pass on it -- even though, in the end, it turns out I should have started with that
- 2nd problem: the interactive one; since I had never done an interactive problem before, I took a pass on it too
- 3rd problem: it did not seem too complicated at first (especially if you were aiming for completing the first part), but somehow I bumped into recursion errors and could not submit a working solution on time

Lessons learned:

- Google Code Jam is tough, and you better prepare for it... in advance
- You gotta read carefully, man!

# 2019-05-05

* finally got around to implementing [cg](, a command line utility (written in Common Lisp) that tries to extract commands to run from its input -- e.g. you pipe `git log -n ..` into it, and it will recommend you to `git show` a specific commit.
It parses definitions from an RC file (err, a lisp file), and ships with FZF and TMUX plugins

# 2019-05-06

* more `cg` guessers: 'git branch -D', pruning '[gone]' branches, `sudo apt install`

? cg: nvm use --delete-prefix v8.16.0

# 2019-05-08

* changed `cg`'s Makefile not to specify `sbcl`'s full path -- let the SHELL figure it out, as the path might be different based on the OS

# 2019-05-09

* cg: fzf script to enable multi-select

# 2019-05-10

* cg: documentation and refactoring

I feel like `cg` is ready for people to give it a try, so maybe I will post it on Reddit... maybe..

# 2019-05-12

* cg: documentation

# 2019-05-13

All right, I think I figured out a way to avoid asking users to symlink/clone my repo inside ~/quicklisp/local-projects; putting the following at the top of my build.lisp file, right above (ql:quickload :cg), should do the trick:

    ;; By adding the current directory to ql:*local-project-directories*, we can
    ;; QL:QUICKLOAD this without asking users to symlink this repo inside
    ;; ~/quicklisp/local-projects, or clone it right there in the first place.
    (push #P"." ql:*local-project-directories*)

Also, about getting "deploy" automatically installed: I guess I can throw in there a (ql:quickload :deploy) and get the job done, right? Still need to figure out how easy it would be to get QUICKLISP automatically installed; well, maybe I found one:

# 2019-05-15

I managed to get QUICKLISP automatically installed as part of the build process; however, I am not 100% sure that's the way to go: some users on Reddit pointed out that it might not be ideal for a build script to automatically install dependencies under someone's HOME.
Next experiments:

+ Try to get `cg` to build using travis-ci
+ Add a `./download` script to fetch the latest release

# 2019-05-16

* added command line opts to cg: -h/--help, and -v/--version
* added a `download` script to `cg`, to download the latest binary available (Linux, Mac OS, Windows)

+ cg: git branch --set-upstream-to=origin/ opts

# 2019-05-18

* got `cg` to build automatically (for Linux, Mac OS) using travis-ci
* calculate version at build time, and use `git rev-list %s..HEAD --count` to append a build number to the base version

Windows builds are kind of working too; list of caveats:

- set `language: shell` or it will hang
- created a custom roswell install script to fetch binaries somewhere, add them to $PATH, and then invoke the official roswell ci script -- by then, ros is found available, so it will set it up and not compile it:

# 2019-05-19

* implemented some cg guessers
* fixed a bug with cg build script where, when bumping a version, `git rev-list` would blow up because it was unable to find the base version
* released cg 0.1.0

# 2019-05-20

* cg: fix bug with downloader -- it would blow up in case ./bin was missing
* don't output duplicate suggestions
* add page for `cg` on

# 2019-05-21

* cg: stop `fzf` from sorting `cg`'s output
* released cg 0.2.0

# 2019-06-01

* published new repo / page for: [tmux-externalpipe]( -- with it I could deprecate tmux-urlview / tmux-fpp, and hopefully simplify `cg` too

# 2019-06-05

+ ?? .file -> rm....
+ rm "is directory" -> rm -rf...

# 2019-06-17

* cg: added a few guessers (git-st -> git-add/reset/rm, rm -> rmr)
* cg: redirect stderr as well 2>&1
* cg: keep on processing guessers, even after a match -- this makes it possible for users to define multiple suggestions for the same input line
* cg: add support for guessers which return multiple suggestions

?
tmux-externalpipe: spawn a new pane for the selection

# 2019-06-19

* cleaning up my vimrc -- from

`nomodeline` -- it's dangerous
No more `nocompatible` -- it's useless
Don't use shortnames

    /\v<\w\w\=     " should match ts=
    /\v<\w\w\w\=   " should match sts=

Allow your functions to `abort` upon encountering an error -- this uses a zero-width negative look-behind assertion...

    /\v\s+function.*(abort.*)@

    sort!
    " Change § back to \r
    :%s/§/\r

# 2019-08-21

* Updated async.vim MR to wait for transmit buffer to be empty before closing stdin
* Asked on vim_dev about a weird ch_sendraw behavior -- not sure if it's expected behavior, or a bug

Hello everyone,

ch_sendraw seems to fail when sending large chunks of data to a job started in non-blocking mode. Is this expected?

Steps to reproduce:

- Copy the following into a file, and source it from vim

      let args = ['cat']
      let opts = {
            \ 'noblock': 1
            \}
      let job = job_start(args, opts)
      let data = repeat('X', 65537)
      call ch_sendraw(job, data)
      call ch_close_in(job)

Expected behavior:

- Nothing much

Actual behavior:

- E630: channel_write_input(): write while not connected

If, on the other hand, I initialized data with 65536 bytes instead of 65537, then no error would be thrown.

In case you needed it, here is the output of `vim --version`

    $ vim --version
    VIM - Vi IMproved 8.1 (2018 May 18, compiled Aug 16 2019 02:18:16)
    macOS version
    Included patches: 1-1850
    Compiled by Homebrew
    Huge version without GUI.
    Features included (+) or not (-):
    +acl             -farsi           -mouse_sysmouse  -tag_any_white
    +arabic          +file_in_path    +mouse_urxvt     -tcl
    +autocmd         +find_in_path    +mouse_xterm     +termguicolors
    +autochdir       +float           +multi_byte      +terminal
    -autoservername  +folding         +multi_lang      +terminfo
    -balloon_eval    -footer          -mzscheme        +termresponse
    +balloon_eval_term +fork()        +netbeans_intg   +textobjects
    -browse          +gettext         +num64           +textprop
    ++builtin_terms  -hangul_input    +packages        +timers
    +byte_offset     +iconv           +path_extra      +title
    +channel         +insert_expand   +perl            -toolbar
    +cindent         +job             +persistent_undo +user_commands
    -clientserver    +jumplist        +postscript      +vartabs
    +clipboard       +keymap          +printer         +vertsplit
    +cmdline_compl   +lambda          +profile         +virtualedit
    +cmdline_hist    +langmap         -python          +visual
    +cmdline_info    +libcall         +python3         +visualextra
    +comments        +linebreak       +quickfix        +viminfo
    +conceal         +lispindent      +reltime         +vreplace
    +cryptv          +listcmds        +rightleft       +wildignore
    +cscope          +localmap        +ruby            +wildmenu
    +cursorbind      +lua             +scrollbind      +windows
    +cursorshape     +menu            +signs           +writebackup
    +dialog_con      +mksession       +smartindent     -X11
    +diff            +modify_fname    -sound           -xfontset
    +digraphs        +mouse           +spell           -xim
    -dnd             -mouseshape      +startuptime     -xpm
    -ebcdic          +mouse_dec       +statusline      -xsmp
    +emacs_tags      -mouse_gpm       -sun_workshop    -xterm_clipboard
    +eval            -mouse_jsbterm   +syntax          -xterm_save
    +ex_extra        +mouse_netterm   +tag_binary
    +extra_search    +mouse_sgr       -tag_old_static
    system vimrc file: "$VIM/vimrc"
    user vimrc file: "$HOME/.vimrc"
    2nd user vimrc file: "~/.vim/vimrc"
    user exrc file: "$HOME/.exrc"
    defaults file: "$VIMRUNTIME/defaults.vim"
    fall-back for $VIM: "/usr/local/share/vim"
    Compilation: clang -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X -DMACOS_X_DARWIN -g -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
    Linking: clang -L.
    -fstack-protector-strong -L/usr/local/lib -L/usr/local/opt/libyaml/lib -L/usr/local/opt/openssl/lib -L/usr/local/opt/readline/lib -L/usr/local/lib -o vim -lncurses -liconv -lintl -framework AppKit -L/usr/local/opt/lua/lib -llua5.3 -mmacosx-version-min=10.14 -fstack-protector-strong -L/usr/local/lib -L/usr/local/Cellar/perl/5.30.0/lib/perl5/5.30.0/darwin-thread-multi-2level/CORE -lperl -lm -lutil -lc -L/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/config-3.7m-darwin -lpython3.7m -framework CoreFoundation -lruby.2.6

Thanks,
Matteo

* Created new async.vim MR to work around ch_sendraw erroring out when data is bigger than 65536 bytes -- in this case, we would fall back to manual data chunking

# 2019-08-22

Still fighting with non-blocking vim channels, and how to close stdin making sure that all the pending write operations are flushed first

# 2019-08-24

* Cleaned up configs for mutt and friends (offlineimap, msmtp)
* Restored Ctrl+Enter behavior on
* Changed some terminal mappings:
  - Ctrl-Enter key from ✠ to ◊
  - Shift+Enter key from ✝ to Ø

Why? Because I recently switched from iTerm2 to, and I could not find a way to create custom shortcuts (e.g. to generate the maltese cross on Ctrl+Enter). I then tried using Karabiner-Elements, but it doesn't seem to support Unicode, so the only solution I found was:

- Figure out a new char to use (the lozenge), whose Option+key shortcut was known (Option+Shift+v)
- Tell Karabiner-Elements to generate that sequence when Ctrl+Enter is pressed
- Change all the other settings to use the lozenge instead of the maltese cross

Too bad I could not use a more meaningful character, like the carriage return one.

PS.
Now that these key mappings have been restored too, is pretty much usable; I do not see any performance improvements (when scrolling through Tmux + Vim panes), and true color support is not there, but that's OK, I will survive I guess # 2019-08-25 * Suppressed CR/LF warnings raised when working on Unix Git repos from Windows -- git config core.autocrlf input * Figured out a way to prevent `git up` from updating the dotfiles submodule while on the parent repo, my-env: add `update = none` to the sub-module entry inside .gitmodules [submodule "dotfiles"] path = dotfiles url = ignore = dirty update = none Why would I want that you might ask? Well, I usually keep the dotfiles submodule up to date already (e.g. `cd dotfiles; git up`), so when I decide to sync my-env, I don't want `git up` to accidentally checkout an older version of dotfiles. * Created tmux binding to spawn a command/application in a split pane -- similar to how `!' on mutt lets you spawn an external command * finished reading [Black Box Thinking: Why Most People Never Learn from Their Mistakes--But Some Do]( # 2019-08-27 * Added new reading.html page to, listing all the books I remember having read * Changed ansible scripts to re-use letsencrypt certificates for the docker registry -- so I don't have to re-generate self signed certificates every year # 2019-08-28 * Created [cb.vim]( -- still a WIP, but it's working, with my set up at least # 2019-08-30 * added Makefile and support for br * added Makefile and support for cb * removed a bunch of wrappers from ~/bin (br, cb, urlview, mutt-notmuch-py) + fzf ain't working on cygwin: and + try to ditch the following wrappers: fzf-tmux -- find a way to install them locally (make install PREFIX=..., or the same for python crap) + avoid to update PATH for tmux, and winpty # 2019-08-31 * added Makefile and support for cb * removed tmux,winpty from PATH -- now installing it into ~/local/bin * fucking managed to run fzf on Cygwin! 
Created `fzf` wrapper that would start `fzf` via cmd.exe on Windows:

    #!/usr/bin/env bash

    EXE=$(realpath ~/.vim/pack/bundle/start/fzf/bin/fzf)

    if ! hash cygpath 2>/dev/null; then
        "$EXE" "$@"
    else
        stdin=$(tempfile)
        stdout=$(tempfile)
        wrapper=$(tempfile)
        trap "rm -f $wrapper $stdout $stdin" EXIT

        # save stdin into a temporary file
        cat - > "$stdin"

        # create wrapper
        cat > "$wrapper" <<EOF
    "$EXE" $@ < "$stdin" > $stdout
    EOF

        # run fzf through cmd.exe -- stolen from:
        cmd.exe /C 'set "TERM=" & start /WAIT sh -c '"$wrapper"

        cat "$stdout"
    fi

And updated `fzf-tmux` as follows -- no need to run tmux if we would be spawning a `cmd.exe` anyway

    EXE=~/.vim/pack/bundle/start/fzf/bin/fzf-tmux

    if [ -f ~/.vim/pack/bundle/start/fzf/bin/fzf.exe ]; then
        # While on Cygwin, fzf-tmux would normally open a tmux pane, which would open
        # a cmd.exe (yes, because on cygwin fzf has to be run through a cmd.exe).
        #
        # That's ridiculous, let's skip the tmux part completely
        EXE=fzf
    fi

    $EXE "$@"

* fucking figured out (well, partially) how to make tmux preserve the current path when splitting panes: it appears I have to export CHERE_INVOKING=1. That also fixes a problem I have always had when running tmuxinator on Windows, where it would not always move to the expected directory

+ what the hell is CHERE_INVOKING, and why do I need to set it on Windows, but not on MacOS/Linux
+ backup files

Mutt was behaving weirdly when run inside tmux, and googling around I discovered that it's not recommended to set TERM to anything different from screen/screen-256color while inside tmux:

So I tried to disable the lines where I override TERM in my .tmux.conf; let's see what happens.

# 2019-09-01

* Started moving my old journal entries over to my file

Regarding CHERE_INVOKING, on Cygwin: it turns out that the cygwin default /etc/profile checks for an environment variable named CHERE_INVOKING and inhibits the change of directory if set.
So by default, when sourcing /etc/profile, cygwin would take you to your home directory, unless CHERE_INVOKING=1 is set --

* Implemented plan-rss script -- in bash for now

And here there was me thinking it would be easy to manually generate RSS:

- XML escape VS CDATA: turns out you need both -- XML escape because the output has to be valid HTML, CDATA if you want the parser to ignore non-existing HTML tags
- Channel's description has to be plain text -- item's description has to be HTML compatible

+ plan-rss: new-fucking-lines
+ plan-rss: publish latest 5 entries only
+ plan-rss: rewrite in CL

# 2019-09-02

* Backed up all the into Dropbox
* plan-rss: populate
* plan-rss: fucking fixed (I guess) the new-line madness
* plan-rss: pubDate matching plan entry's date

- XML-escape all your content
- Wrap it into `<pre>` ... `</pre>`
- Wrap it into `<![CDATA[` ... `]]>`
- Voila'! No need to append `<br/>`s to each line -- `<pre>` will do the trick!
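The escaping half of that dance can be sketched in a few lines of CL (plan-rss was still bash at this point; `xml-escape` is a hypothetical helper name, not the actual implementation):

```lisp
;; Minimal XML escaping: replace the three characters that can break markup.
(defun xml-escape (string)
  (with-output-to-string (out)
    (loop :for ch :across string
          :do (case ch
                (#\& (write-string "&amp;" out))
                (#\< (write-string "&lt;" out))
                (#\> (write-string "&gt;" out))
                (t   (write-char ch out))))))

;; (xml-escape "<pre>a & b</pre>") => "&lt;pre&gt;a &amp; b&lt;/pre&gt;"
```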

~ plan-rss: populated `<image>` but feedly still does not show it
? Cygwin + Tmux weirdnesses: if I don't force tmux to set TERM=xterm-256color, everything will go black and white -- it turns out TERM is being set to screen, and not screen-256color
? Tmux weirdnesses: if I don't force tmux to set TERM=xterm-256color, tmux will set it to screen-256color which doesn't support italics well

# 2019-09-03
~ plan-rss: remove old pub-date logic

Regarding the image not showing on Feedly: could it be that we are not actually pointing to a proper image resource, but rather to an endpoint that will generate it?!

~ tmuxinator on windows:

# 2019-09-04
* Started caching gravatar locally, on my box; I hoped this could fix the problem with my RSS channel image not being rendered, but it did not work -- still no image on Feedly, nor in the other RSS client I am using on Mac OS
* Fixed my rubygems cygwin env

When trying to install any gem, I would be presented with the following error:

    $ gem install tmuxinator
    ERROR:  While executing gem ... (NameError)
        uninitialized constant Gem::RDoc

After some Googling around, I landed on this GitHub [page]( where they suggest manually editing the 'rubygems/rdoc.rb' file, and moving the RDoc logic inside the begin/rescue block -- where it should be

    vim /usr/share/rubygems/rubygems/rdoc.rb
    # move the last line inside begin/rescue block

Everything started working again since then

+ Add step to to guard against this rubygems/rdoc weirdness
+ plan-rss: plan.xml did not update overnight -- manually running update-plan-xml did the trick though

I cannot get any RSS client I found to parse the image that I added to my plan.xml file, but I checked the specification, and it should be correct:

- Someone on SO pointing at the `<image>` tag in the specification:
- Codinghorror's feed: -- it seems to be using `<image>`, but somehow his image is popping up, mine not
- Another feed (taken from NetNewsWire) where image is being used:
- Some guy on SO suggesting that we use favicon.ico:
- GitHub's repo feed: -- it's ATOM, and when adding this to NetNewsWire, it seems to be using the favicon.ico

* Opened a ticket to NetNewsWire to see if RSS2.0 channel's image is supported or not --

# 2019-09-05
* enhance to install the various rubygems I am using
* enhance to abort when running bogus version of rubygems
* plan-rss: better item pubdate management: 1) new item: use date of the plan entry, 2) content differs: use $NOW, 3) re-use last pubdate

+ .plan (and plan.xml) did not update overnight.. again!
+ plan-rss: publish last 10 recently updated entries
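The three pubDate rules from the entry above can be sketched as follows.  This is only a hedged illustration (plan-rss itself is a bash script at this point, not Python); `pub_date` and its argument shapes are made up for the example.

```python
# Hypothetical sketch of plan-rss' pubDate rules; names are made up.
def pub_date(entry_date, content, previous, now="NOW"):
    """previous is an (old_content, old_pub_date) tuple, or None for new items."""
    if previous is None:
        return entry_date          # 1) new item: use the plan entry's date
    old_content, old_pub_date = previous
    if content != old_content:
        return now                 # 2) content differs: use $NOW
    return old_pub_date            # 3) unchanged: re-use the last pubDate

print(pub_date("2019-09-05", "did stuff", None))                          # 2019-09-05
print(pub_date("2019-09-05", "did stuff", ("did things", "2019-09-04")))  # NOW
print(pub_date("2019-09-05", "did stuff", ("did stuff", "2019-09-04")))   # 2019-09-04
```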

# 2019-09-06
* Copying more entries from the old journal into this file
~ .plan does not automatically sync on my phone, unless I open Dropbox first

# 2019-09-07
* Figured out why my .plan was not getting updated anymore -- it turns out the two cron jobs I had created (one for .plan, the other for plan.xml) were sharing the same name, so the second one (for plan.xml) would override the first one
* Changed some of my ansible scripts to run specific commands (i.e. init of avatar or init download of .plan file) only if the target file does not exist already -- if it does, the cron job would take care of updating it in due time:
* Copying more entries from the old journal into this file

~ I am still having second thoughts on whether updating old .plan entries should force an update on the relevant plan.xml ones (i.e. republish them with new content).

# 2019-09-08
* finished reading [Hackers & Painters: Big Ideas from the Computer Age](

+ aadbook does not work with python3
? offlineimap (the version I am manually building) does not work with python3
~ cg: add an option to use a single guesser only -- that would help me replace urlview completely

TIL, you can stop Vim from incrementing numbers in octal notation (i.e. with C-A) with the following

    :set nrformats-=octal

# 2019-09-09
Started looking into rewriting plan-rss into CL.  I found xml-emitter, but I am afraid I will have to submit a couple PRs for it to be able to generate valid RSS2.0

+ xml-emitter: missing support for guid isPermaLink
+ xml-emitter: missing CDATA support
+ xml-emitter: Missing atom:link with rel="self"

# 2019-09-18
+ cg rm -> rmdir

    rm '.tmux-plugins/tmux-fpp/'
    rm: cannot remove '.tmux-plugins/tmux-fpp/': Is a directory

~ git co bin/fzf-tmux # XXX why on earth would the install script delete fzf-tmux

# 2019-09-22
* finished reading [The History of the Future: Oculus, Facebook, and the Revolution That Swept Virtual Reality](
* cleaned/validated pages -- broken elements inside  in particular
* cg: better rm-rmr support for mac and linux outputs: it's funny how little the output of the same command on different OSes could change...I mean, what's the point, really?!
* aadbook: added py3 support! yay
~ fzf's install script seems to delete bin/fzf-tmux on Windows/Cygwin only -- on Mac, I ran it, and fzf-tmux was still there

Spent some time trying to get offlineimap to work with Python3, but no luck:


This is going to be annoying, especially when python2 sunsets in January next year...

Started reading John Carmack .plan archive:

I have also started learning more about multi-tenant applications, so I am going to start dropping notes here.

A multi-tenant application needs to be carefully designed in order to avoid major problems.  It:

- needs to detect the client (or tenant) that any individual web request is for;
- must separate persistent data for each tenant and its users;
- can not allow cached data (e.g. views) to leak between tenants;
- requires separate background tasks for each tenant;
- must identify to which tenant each line of log output belongs.

What’s the definition? (of a multi-tenant application)

We don’t think there is a clear, concise definition.  In broad strokes, y'know it when y'see it.  You might say that the application is multi-tenant if the architects ever had to decide between deploying a number of separate (not pooled) instances of the app, or deploying a single instance.

Or you might say it has to do with data segregation: the idea that there are some datasets that simply shouldn't touch, shouldn't interact, and shouldn't be viewable together in any combination.  This seems to be the “definition” that most closely fits the examples we’ve come up with, and the spirit of the term (as we understand it).

Set up different domains per client:

- ...

# 2019-09-23
+ Change autohotkey files' comment string -- `/* .. */` is being used

In a multi-tenant app, each request that comes in can be for a separate tenant.  We need two things from the outset:

- a way to determine what tenant a request is for
- and a way to process the request in the context of that tenant.
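The first of the two can be sketched very simply (Python here, just for brevity): map the request's Host header to a tenant record.  `TENANT_STORE` and the domain names below are made up for illustration.

```python
# Hypothetical tenant lookup keyed on the request's Host header.
TENANT_STORE = {
    "acme.example.com": {"name": "Acme", "db": "acme_db"},
    "globex.example.com": {"name": "Globex", "db": "globex_db"},
}

def tenant_for_host(host):
    domain = host.split(":")[0].lower()  # drop an optional port
    tenant = TENANT_STORE.get(domain)
    if tenant is None:
        raise LookupError("unknown tenant for host: " + host)
    return tenant

print(tenant_for_host("ACME.example.com:443")["db"])  # acme_db
```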

Single login leading to tenants

    def current_tenant

Don't change your public API as follows:


Users will never go off wandering to another tenant, and since a user is linked to a single tenant, all the magic can happen behind the scenes without the user knowing anything about it.

Another approach is to implement a custom piece of middleware that would take care of figuring out the tenant, and change configs (e.g. DB user) accordingly:

    module Rack
      class MultiTenant
        def initialize(app)
          @app = app
        end

        def call(env)
          request = Rack::Request.new(env)
          # CHOOSE ONE:
          domain =               # switch on domain
          subdomain = request.subdomain       # switch on subdomain

          @tenant = TENANT_STORE.fetch(domain)

          # Do some configuration switching stuff here ...

          @app.call(env)
        end
      end
    end

And the list of tenants could be configured as:

- in memory hash (not dynamic, will require a bounce of the server)
- database (dynamic, but you would still want to cache some data, or it would slow down every request)

How to provide the *right* data for each user, based on their tenant?

- have a single database, and use associations to control the siloing.  Pros: single database (less maintenance); Cons: you forget about that node-tenant relation once, and all of a sudden users are able to see data from other tenants
- multiple independent copies of the application's database.  Pros: once the connection is switched, siloing is complete and guaranteed; Cons: configuration changes at run-time, possibly invalidating some of the db connection pool techniques implemented by drivers
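The "forgotten relation" failure mode of the first option is easy to demonstrate (a toy sketch, Python instead of Rails, with made-up data): the scoping filter *is* the siloing, so omitting it once leaks data across tenants.

```python
# Single-database approach: every row carries a tenant_id, and every
# query must be scoped by it.  Forget the filter once, and users see
# other tenants' data.
NOTES = [
    {"tenant_id": 1, "text": "acme secret"},
    {"tenant_id": 2, "text": "globex secret"},
]

def notes_for(tenant_id):
    # This filter is the whole siloing guarantee.
    return [n["text"] for n in NOTES if n["tenant_id"] == tenant_id]

print(notes_for(1))  # ['acme secret']
```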

Also, the database is not the only "feature" you might want to swap out based on the user tenant; for example, if you were using Redis, you might want to try and implement (and then switch) [Redis namespace](

    var Redis = require('ioredis');

    var fooRedis = new Redis({ keyPrefix: 'foo:' });
    fooRedis.set('bar', 'baz');  // Actually sends SET foo:bar baz
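The same keyPrefix trick can be shown without a running Redis (a toy sketch in Python, with a plain dict standing in for the Redis client): every key is silently prefixed with the tenant's namespace before it reaches the store.

```python
# Toy namespaced store: a dict plays the role of the Redis connection.
class NamespacedStore:
    def __init__(self, backend, prefix):
        self.backend = backend  # dict here; a Redis client in real life
        self.prefix = prefix

    def set(self, key, value):
        self.backend[self.prefix + key] = value

    def get(self, key):
        return self.backend.get(self.prefix + key)

backend = {}
foo = NamespacedStore(backend, "foo:")
foo.set("bar", "baz")        # actually stores under "foo:bar"
print(backend)               # {'foo:bar': 'baz'}
```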

# 2019-09-26
Started doing burpees -- used this timer web-app to set the pace:

Rounds: 8
Time on: 0:30
Time off: 1:00

For those unfamiliar with what a burpee is:

1) Begin in a standing position.
2) Move into a squat position with your hands on the ground. (count 1)
3) Kick your feet back into a plank position, while keeping your arms extended. (count 2)
4) Immediately return your feet into squat position. (count 3)
5) Stand up from the squat position (count 4)

Well, I initially tried to jump instead of standing up at step 5), but then I quickly realized I would not have made it out alive that way, so I opted for something lighter.

Anyway, it was tough, and sometimes I even stopped before the time, but I guess I shouldn't beat myself up too much about it: in the end it's the first session.  Let's see if tomorrow I can get out of my bed.

# 2019-09-27
* fixed commentstring for autohotkey files

TIL: Vim will try and syn-color a maximum of `synmaxcol` characters.  -- `:help 'synmaxcol'`

To separate various tenants' data in Redis, we have – like SQL – two options. The first is to maintain independent Redis server instances, and switch out the connection per request. Of course, this has the disadvantage of reducing or eliminating reuse of the connection pool, as the Rails server must open a new set of connections to the Redis server on each web request.

Instead, we recommend using a simple "namespace" concept to identify each key with its tenant.

What's a Redis namespace?  When working with Redis, it's customary to namespace keys by simply prepending their name with the namespace, i.e. namespace:key_name. We often do this manually, and write classes to encapsulate this process, so that other parts of our applications needn't be aware of it.

# 2019-09-28
I have been trying to create a private mailing list on for the last couple of days, but they seem to have put the 'Request a service' page in maintenance mode, and by the look of their tech blog it seems like they are undergoing a big infrastructure update.  Anyway, shot them an email to understand if there is something we can do about it (e.g. sign up, and register a service).

Follows another dump of notes re multitenancy...

Benefits of multitenancy:

- Segregation of data across the clients
- Manage and customize the app according to the client's needs (e.g. separate branding, separate feature flags)
- Maximize efficiency while reducing the cost needed for centralized updates and maintenance

Ways of implementing this:

- Logical separation of data
- physical separation of data

# 2019-09-30
* enhanced vitality.vim to support escapes -- for changing the cursor shape

I have been thinking about this lately, and I don't think relying on env variables to figure out which terminal Vim or other programs are being run in is The Right Thing.  I mean, it works pretty decently for local sessions, but the moment you ssh into a remote box and create a Tmux session for some long-running activity, you're screwed.

For example, say that I am at work (Windows, Mintty), and I log into my remote Workstation.  When you do so -- ssh into a remote box -- based on your settings, some/all of your env variables on your local host will be made available in your remote session; this could seem sexy at first, because you might have Vim-specific settings based on the running terminal emulator, but what if you create a Tmux session (which saves env variables), start some work, and decide to carry it on from home, where you have a different terminal?  You won't be destroying your tmux session, but since it was created earlier, with a different terminal emulator, some of the already pre-configured variables might not apply to the *new* client.

I believe the Right Thing to do here is to use escape sequences, when available, to query the terminal emulator capabilities -- which is what the Mintty maintainer is suggesting people do:

Anyway, I guess for the time being I will kill myself and set/unset env variables based on the client I am connecting from, and based on the tmux environment. <3

I am going to dump here the steps to reprovision my Linux box:

    cd /some/dir
    curl -O
    vagrant up
    vagrant ssh
    cd /data
    mkdir Workspace
    ln -s $(pwd)/Workspace ~/Workspace
    ln -s $(pwd)/Workspace ~/workspace
    git clone --recurse-submodules
    ln -s $(pwd)/my-env ~/my-env
    cd my-env
    git clone --recurse-submodules
    bash --force --bootstrap --os-linux

# 2019-10-02
Was playing with my dev box earlier, trying to test persistent drives -- this way, once a new LTS is released I can destroy the box, recreate it, and pick up from where I left off -- and accidentally wiped out my ConnectION workspace; I did not realize `vagrant destroy` would also delete attached drives...

SO has a solution for this:

- Mark the storage drive as hot-pluggable
- Register a pre-destroy hook, and inside of it detach the drive -- so it won't be automatically deleted

Unfortunately that does not seem to work as expected: syntax errors, triggers being invoked multiple times.  I guess I will be creating a custom `./vagrantw` wrapper to take care of all this


Getting a reference to `Window` in Angular, with AOT enabled, is super painful.  `Window` is defined as an interface, and because of that you cannot simply rely on Angular DI, but instead you have to...sacrifice a few animals to the God of forsaken programmers!

First you define a named provider:

    // file: ./window-ref.service.ts

    import { InjectionToken } from '@angular/core';

    export const WindowRef = new InjectionToken('WindowRef');

    export function windowRefFactory() {
      return window;
    }

    export const WindowRefProvider = { provide: WindowRef, useFactory: windowRefFactory };

Next you register the provider into your main module:

    // file: ./app.module.ts

    import { WindowRefProvider } from './utils/window-ref.service';

    @NgModule({
      providers: [
        WindowRefProvider,
        // ...
      ],
    })
    export class AppModule { }

Finally you update your dependent services / components as follows:

    import { Injectable, Inject } from '@angular/core';
    import { WindowRef } from '@connection-ui/utils/window-ref.service';

    @Injectable()
    export class HelloWorldService {
      $window: Window;

      constructor(@Inject(WindowRef) $window) {
        this.$window = $window as Window;
      }
    }
Yeah, you need to use the named injector but without any type (i.e. `any`), and then inside the constructor you cast it to `Window`; it's retarded, but that's the only solution I could come up with.


Played with mocking classes in Typescript, and stumbled upon something neat:

    type PublicInterfaceOf<Class> = {
        [Member in keyof Class]: Class[Member];
    };

    class MockHelloWorld implements PublicInterfaceOf<HelloWorldService> {
      hello(name) {
        return "Mocked!";
      }
    }

? create vagrantw to properly deal with persistent drives, and prevent them from being removed when `vagrant destroy`-ing

# 2019-10-03
Time for another burpees training session.  Last time it took me almost 3 days to fully *recover* so I figured I should try with something lighter this time:

Rounds: 6
Time on: 0:30
Time off: 1:00

6 rounds instead of 8, and no jumping from the start (last time I had to stop in the middle of the second round).  Let's see what my body thinks about this tomorrow or the day after.

# 2019-10-04
* finished reading John Carmack's collection of .plan files

I am taking some time off from work next week, and I plan to do some research on Natural Language Processing (NLP); in particular I would like to experiment with Named Entity Recognition (NER) in the context of search engines (populate search filters based on users' free text input).

Also, I will be in London for the next couple of days to meet some friends, so don't expect much done (other than reading, I guess).

# 2019-10-07
Reading about Natural Language Processing (NLP), and Named Entity Recognition (NER) in particular, to be able to populate search engine filters, based on user's free text input

    mkdir ~/tmp/ner
    cd ~/tmp/ner
    virtualenv -p python3 venv
    # activate the virtualenv, so pip installs locally
    . venv/bin/activate
    pip install nltk
    pip install numpy


I was reading this book about NLP, and there was this exercise about text tokenization, which the author solved using simulated annealing.  Now, I have always found simulated annealing fascinating, and because of that I decided to dump the exercise (and solution) here for future reference.

Let's first define a function, `segment()`, that given some text (i.e. a string, like 'Hello world!'), and another string representing where such text should be broken into chunks, would split the text into chunks:

    def segment(text, segs):
      # text = 'helloworld'
      # segs = '000010000'
      # => ['hello', 'world']
      words = []
      last = 0
      for i in range(len(segs)):
        if segs[i] == '1':
          words.append(text[last:i + 1])
          last = i + 1
      words.append(text[last:])  # don't forget the trailing chunk
      return words

`evaluate()` instead is a function that evaluates the _quality_ of a *segmentation*.

    def evaluate(text, segs):
      words = segment(text, segs)
      text_size = len(words)
      lexicon_size = len(''.join(list(set(words))))
      return text_size + lexicon_size
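To make the scoring concrete, here is a self-contained run of the two functions (restated from above, with the trailing-chunk append included so the example works): note how the lexicon term rewards segmentations that re-use the same word.

```python
# Restated from the entry above, so this snippet runs on its own.
def segment(text, segs):
    words = []
    last = 0
    for i in range(len(segs)):
        if segs[i] == '1':
            words.append(text[last:i + 1])
            last = i + 1
    words.append(text[last:])  # trailing chunk
    return words

def evaluate(text, segs):
    words = segment(text, segs)
    text_size = len(words)
    lexicon_size = len(''.join(list(set(words))))
    return text_size + lexicon_size

print(segment('helloworld', '000010000'))   # ['hello', 'world']
print(evaluate('helloworld', '000010000'))  # 2 words + 10 lexicon chars = 12
print(evaluate('dodo', '010'))              # 2 words + 2 lexicon chars = 4
```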

Now the simulated annealing: basically we start with a *random* segmentation kernel, and start randomly flipping *bits* on/off, evaluating how the new segmentation compares with the best one found so far; repeat this multiple times, always reducing the number of *bits* to flip (e.g. temperature, decreasing over time)

    from random import randint

    def flip(segs, pos):
      return segs[:pos] + str(1 - int(segs[pos])) + segs[pos + 1:]

    def flip_n(segs, n):
      for i in range(n):
        segs = flip(segs, randint(0, len(segs) - 1))
      return segs

    def anneal(text, segs, iterations, cooling_rate):
      temperature = float(len(segs))
      while temperature > 0.5:
        best_segs, best = segs, evaluate(text, segs)
        for i in range(iterations):
          guess = flip_n(segs, int(round(temperature)))
          score = evaluate(text, guess)
          if score < best:
            best, best_segs = score, guess
        score, segs = best, best_segs
        temperature = temperature / cooling_rate
        print(evaluate(text, segs), segment(text, segs))
      return segs

# 2019-10-08
* finished reading [Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit](
+ add dark mode support to, just for fun -- Dark mode in CSS with `prefers-color-scheme` -

# 2019-10-09
Started reading The Mythical Man-Month (TMMM)


Finally got around to creating a SPID ID, so I can - hopefully - easily access all my health records and anything else PA related, without having to use obsolete smart card readers or remember crazy PINs.

You have to request a SPID ID from one of the entity providers out there, and they might ask you for proof of identity; lucky me, Hello Bank is partnering with Poste Italiane (an entity provider), so I could easily request an ID without much of a hassle: the bank transferred all the required documents over, and in 15 mins I had my SPID created.

I then installed PosteID on my mobile, signed in with my SPID ID, created a new code (and there was silly me thinking the SPID ID would be the last username/password to remember), and that's it: every time I try to access PA websites and choose SPID as the login workflow, I receive a notification on my mobile asking me to confirm or deny the access -- living in the future!

Few handy PA links:

- Fascicolo Sanitario Elettronico (FSE):
- CUP online:


I have decided to list all the applications I have on my mobile, so I can 1) monitor which I have been constantly using vs the ones I throw away after a bit, and 2) easily re-setup my mobile in case Google refused to re-install my previously installed apps.

Here we go:

- (notes)
- Authenticator (2FA codes)
- British Airways
- Calendar (Google)
- Chrome
- Dropbox
- E-mail (non G-mail accounts)
- Feedly
- File Manager+
- Fineco
- Firefox Nightly
- Fogli (Google spreadsheets)
- Gmail
- Hello bank!
- Hoplite (roguelike game)
- Instagram
- Maps
- Materialistic (hacker news)
- My3
- Netflix
- Odyssey (endless runner snowboarding game)
- Orario Treni (train timetables)
- PosteID (SPID identity)
- Reddit
- Ryanair
- SAP concur (tracking expenses when traveling)
- Satispay
- Settle Up
- Shazam (the future!)
- Signal
- Snapseed
- Spotify
- Stremio
- Tasker
- Teams (Microsoft)
- TV Time
- Twitter
- Waze
- WhatsApp
- Youtube

Damn, I was not expecting this to be that long...


Few "interesting" links:

- koa.js, from the creators of express.js:
- Apex ping, easy and sexy monitoring for websites, applications, and APIs:



- Train the NLP manager, then save it on disk so you don't have to train it again:
- NER Manager:


My Node.js application kept failing to start up from my new Virtualbox box, blowing up with an ENOSPC error.

    echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p


? setting up a vpn on my DO instance (for mobile, and laptop)

# 2019-10-10
I was reading this hackernews post ( about adding support for a dark-theme, and even though the overall activity took me less than one minute to complete (well, my website is very *simple*; I am sure adding dark-theme support to a more complex website / web app would usually take longer), one comment made me wonder: "What if someone was interested in *muting* the browser colors, the container's, but not the content?  Should we add some Js to let people toggle between light and dark mode?"

Few examples:

- Google, does not seem to support a dark theme
- Wikipedia, does not seem to support a dark theme
- Facebook, does not seem to support this either

Again, is nothing compared to Google and Wikipedia, so honestly...I am not that surprised they haven't added support for dark-mode yet; but still, is it possible they haven't done it yet because, as things stand, it's not yet a "problem" for them?  Somewhere on the Web I read that one should tweak CSS only to solve a specific user problem, so maybe adding a dark-theme is not going to solve any of their problems, who knows...

Anyway, if you have dark-mode enabled at OS/browser level, going to you should be presented with a dark version of the website -- and here is what I added to my style.css to make this happen:

    @media (prefers-color-scheme: dark) {
      body {
        background-color: #444;
        color: #e4e4e4;
      }
      a {
        color: #e39777;
      }
      img {
        filter: grayscale(30%);
      }
    }

* added dark-theme to

# 2019-10-17


Rounds: 6
Time on: 0:30
Time off: 1:00

Still no jumping.

# 2019-10-20
Disabled dark-theme on

# 2019-10-27
Few bits I found around, about generating html emails, starting from markdown, with Mutt.

Mutt's editor first:

    set editor = "nvim -c 'set ft=mdmail' -c 'normal! }' -c 'redraw'"

Then a macro, activated with `H` or whatever, to pipe the body of the message to `pandoc` to generate the html version of the message, and attach it to the email:

    macro compose H "| pandoc -r markdown -w html -o ~/.mutt/temp/neomutt-alternative.html<enter><attach-file>~/.mutt/temp/neomutt-alternative.html<enter>"

And the '.vim/syntax/mdmail.vim' syntax file:

    if exists("b:current_syntax")
      finish
    endif

    let s:cpo_save = &cpo
    set cpo&vim

    " Start with the mail syntax as a base.
    runtime! syntax/mail.vim
    unlet b:current_syntax

    " Headers are up to the first blank line.  Everything after that up to my
    " signature is the body, which we'll highlight as Markdown.
    syn include @markdownBody syntax/markdown.vim
    syn region markdownMailBody start="^$" end="\(^-- $\)\@=" contains=@markdownBody

    " The signature starts at the magic line and ends at the end of the file.
    syn region mailSignature start="^-- $" end="\%$"
    hi def link mailSignature Comment

    let b:current_syntax = "mdmail"
    let &cpo = s:cpo_save
    unlet s:cpo_save


Niceties to cat / edit the content of an executable (`which`-able program); the first one will open the specified command/executable in your configured editor:

    function ew() { $EDITOR $(which "$1"); }
    complete -c ew -w which

While this other one will simply `cat` its content:

    function cw() { cat $(which "$1"); }
    complete -c cw -w which


Found this interesting thread started by Aaron Swartz, from 2008, on HTTP sessions:!topic/webpy/KHCDcwLpDfA

I was initially confused by it -- and somewhat I still am -- as I could not really understand where Aaron was going with this, but then I guess his biggest complaint was that sessions require servers to store information locally, on a per-user basis, breaking the *stateless* idea of the web.  Cookies are good (and session cookies are too, as far as I can tell), but Aaron did not seem to like the use of cookies for session management; for that, one should use Digest access authentication (

Cookies for session management, I think, are good, especially because they prevent username/password from flying around -- even though hashed -- between the clients and the server; better to generate a temporary session ID and have clients pass that to the server.

Does this break the Web's stateless-ness?  Yes of course, but is that really a problem?  I mean, if the session is stored on disk, then yes, it's a problem, because the client will have to hit the same server to restore its session (or at least it would be uselessly complicated to manage); but if one instead used a different session storage system, like Redis, then client requests could be handled by any available server in the pool, and each would be able to restore the specific client session just fine.
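That shared-store idea fits in a few lines (a hedged sketch, with a dict standing in for Redis and made-up function names): the cookie only carries an opaque session id, and any server sharing the store can resolve it.

```python
# Sessions kept in a shared store, so no server-local state is involved.
import secrets

SESSIONS = {}  # shared store; in production this would live in Redis

def create_session(user_id):
    sid = secrets.token_hex(16)     # temporary id, sent back as a cookie
    SESSIONS[sid] = {"user_id": user_id}
    return sid

def restore_session(sid):
    return SESSIONS.get(sid)        # any server in the pool can do this
```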

Anyway, the sad thing is, we are never going to get to know what Aaron had in mind when he started that thread.


Changed my household booklet spreadsheet to:

- Handle cash withdrawals by entering two expenses: one negative on my bank account, and one positive on the cash bucket
- Changed the payment type sheet to now read the starting balance by summing up all the "positive" expenses

Sooner or later this will have to become an app...

# 2019-10-30
A/I finally came back online, and I was able to create a request for a mailing list (to use with the other college friends).  Anyway, the request has been created, so hopefully over the following days we will hear back from them...stay tuned!

# 2019-11-01
* xml-emitter: Add support for guid isPermaLink=false (
* xml-emitter: Add support for atom:link with rel="self" (

# 2019-11-02

Update on my journey to a Common Lisp version of plan-rss

Figured out a way to wrap "description" values inside CDATA/pre elements, but I don't think I am 100% satisfied with the solution (reason why I haven't submitted a MR yet).

So basically I added a new keyword argument to RSS-ITEM, `descriptionWrapIn`, a 2-element LIST which the function uses to wrap the content of "description" (the first element before, and the second after).  Here is what I came up with:

    (xml-emitter:rss-item date
                          :link *link*
                          :description (plan-day-content day)
                          :descriptionWrapIn '("<![CDATA[<pre>" "</pre>]]>"))

Don't take me wrong, it works, but I just don't think it's a good way of implementing it.

Another approach would be to convert RSS-ITEM into a macro and let it accept a `BODY` that users can specify to further customize the content of the item.  Something along the lines of:

    (xml-emitter:with-rss-item (date :link *link*)
      (xml-emitter::with-simple-tag ("description")
        (xml-emitter::xml-as-is "<![CDATA[<pre>")
        (xml-emitter::xml-out (plan-day-content day))
        (xml-emitter::xml-as-is "</pre>]]>")))

More verbose, but more flexible too; in fact, yesterday's MRs to add `isPermaLink` to `<guid>`, and to support Atom's link with rel=self, would not be required anymore if RSS-ITEM and RSS-CHANNEL-HEADER were macros.

Channel with Atom link:

    (xml-emitter::with-rss-channel-header (*title* *link*
                                           :description (read-channel-description)
                                           :generator *generator*
                                           :image *image*)
      (xml-emitter::empty-tag "atom:link"
                              `(("href" ,*atom-link-self*)
                                ("rel" "self")
                                ("type" "application/rss+xml"))))

Item with guid (isPermaLink=false) and description wrapped inside CDATA/pre blocks:

    :do (xml-emitter:with-rss-item (date :link *link* :pubDate "XXX")
          (xml-emitter::simple-tag "guid"
                                   (format NIL "~a#~a" *link* date)
                                   '(("isPermaLink" "false")))
          (xml-emitter::with-simple-tag ("description")
            (xml-emitter::xml-as-is "<![CDATA[<pre>")
            (xml-emitter::xml-out (plan-day-content day))
            (xml-emitter::xml-as-is "</pre>]]>")))

---

TIL: you can pretty print a Common Lisp condition with: `(format t "~a" cond)`

---

CL: unix-opts

Required options seem to conflict with typical use of `--help` or `--version`:

Here is a possible workaround (adapted from another one found in the thread there):

    (defun parse-opts (&optional (argv (opts:argv)))
      (multiple-value-bind (options)
          (handler-case
              (handler-bind ((opts:missing-required-option
                               (lambda (condition)
                                 (if (or (member "-h" argv :test #'equal)
                                         (member "--help" argv :test #'equal)
                                         (member "-v" argv :test #'equal)
                                         (member "--version" argv :test #'equal))
                                     (invoke-restart 'opts:skip-option)
                                     (progn
                                       (format t "~a~%" condition)
                                       (opts:exit 1))))))
                (opts:get-opts argv))
            (opts:unknown-option (condition)
              (format t "~a~%" condition)
              (opts:exit 1))
            (opts:missing-arg (condition)
              (format t "~a~%" condition)
              (opts:exit 1)))
        (if (getf options :help)
            ...

Important bits:

- We do want to invoke the `OPTS:SKIP-OPTION` restart when any of the special options is set -- this way the library will use `NIL` and move on
- HANDLER-BIND -- and not HANDLER-CASE -- needs to be used to handle `OPTS:MISSING-REQUIRED-OPTION`, especially if we want to call INVOKE-RESTART (with HANDLER-CASE the stack is already unwound by the time the handler runs, so the restart established in the function is gone)

---

* xml-emitter: RSS macros ( (also cancelled the other MRs, as this one would now give more control to the user)
* plan-rss: add pubDates (rfc2822)

# 2019-11-03
Mac OS builds on Travis-CI were not working lately, and after some quick Googling around I stumbled upon this:

The following indeed seems to fix the problem:

    before_install:
      # 2019-10-30: Homebrew/brew.rb:23:in `require_relative': ...
      # ... unexpected keyword_rescue, expecting keyword_end
      #
      - if [ "$TRAVIS_OS_NAME" = osx ]; then set -x && brew update-reset && if ! brew install gsl; then cd "$(brew --repo)" && git add . && git fetch && git reset --hard origin/master; fi; set +x; fi

But why the hell should I install `gsl`?  Let's try something else (`brew update-reset` only):

    before_install:
      # 2019-10-30: Homebrew/brew.rb:23:in `require_relative': ...
      # ... unexpected keyword_rescue, expecting keyword_end
      #
      - if [ "$TRAVIS_OS_NAME" = osx ]; then set -x && brew update-reset && set +x; fi

This one works too, so I guess I am going to stick with this for now!

---

Setting up Travis-CI deployment keys can be as easy as running `travis setup releases`, or way messier (that's the price you have to pay if you don't want to share your username/password with `travis`).
1) Add a new Oauth token:, and copy it -- only the `public_repo` scope seems to be required to upload assets
2) Run `cb | travis encrypt iamFIREcracker/plan-rss` (replace `iamFIREcracker/plan-rss` with your public repository)
3) Copy the secured token inside your .travis.yml file

    deploy:
      provider: releases
      api_key:
        secure: XXX_SECURE_XXX
      skip_cleanup: true
      file: bin/$DEST
      on:
        repo: iamFIREcracker/plan-rss
        tags: true

Also, your build will deploy only when *all* the conditions specified in the `on:` section are met.  Possible options:

- `repo`: in the form `owner_name/repo_name`
- `branch`: name of the branch (or `all_branches: true`)
- `tags` -- to have tag-based deploys
- other..

Useful links:

-
-
-
-

---

* plan-rss: v0.0.1
* plan-rss: fix Windows build issue
* plan-rss: remove trailing new-line from
* plan-rss: v0.0.2
* plan-rss: --version not working for dev version (i.e. not taking into account commits since last tag)
* plan-rss: travis ci binaries crashing (fucking had forgotten to check in some changes...)
* plan-rss: v0.0.3
+ plan-rss: ERROR instead of OPTS:EXIT so one can test opts parsing without accidentally closing the REPL

# 2019-11-04
* Use plan-rss (CL version) to generate

# 2019-11-05
* plan-rss: `--disable-pre-tag-wrapping`
* plan-rss: remove `-m`, `-s` options (use longer versions instead)

# 2019-11-06
* plan-rss: v0.0.4
+ travis-ci builds are failing on Windows -- ros is erroring out

# 2019-11-07
Not really sure what happened, but now CL travis-ci builds have started working again...

Anyway, I am going to add the .sh script into every CL repo I am building via travis-ci, as using a single script hosted on gist/github turned out not to be a good idea; I mean, when the script works, fine, but when it doesn't and I have to update it, then I am forced to update all the repos too, as each of them would still be pointing to the old version of the script.
Also, having the script checked into the repo sounds like the right thing to
do, especially if one wants reproducible builds.

    #!/usr/bin/env bash

    ROSWELL_VERSION=
    ROSWELL_URL="$ROSWELL_VERSION/roswell_${ROSWELL_VERSION}"

    OS_WIN=$(uname -s | grep -e MSYS_NT)
    if [ -n "$OS_WIN" ]; then
        ROSWELL_IN_PATH=$(echo $PATH | grep -F /tmp/roswell)
        if [ -z "$ROSWELL_IN_PATH" ] ; then
            echo "/tmp/roswell not found in \$PATH"
            exit 1
        fi

        echo "Downloading Roswell (v$ROSWELL_VERSION) from: $ROSWELL_URL"
        curl -L "$ROSWELL_URL" \
            --output /tmp/
        unzip -n /tmp/ -d /tmp/
    fi

    # Run roswell's CI script, and since it will find `ros` already available
    # in $PATH, it will not try to build it but instead will install the
    # specified CL implementation + quicklisp
    curl -L "$ROSWELL_VERSION/scripts/"

All good!

I also remembered I had asked roswell maintainers about adding Windows
support to -- -- and it appears they did it!  Switching from the custom .sh
(wrapping the official one) to simply using the official one was pretty easy:

1) Define the following ENV variables

    env:
      - PATH=~/.roswell/bin:$PATH
      - ROSWELL_INSTALL_DIR=$HOME/.roswell

2) Call roswell's ci script:

    install:
      - curl -L | sh

... And that's it!  Bonus: you might want to define `LISP` too, and make it
point to a specific version -- again, reproducible builds are the right thing
to do!

---

* added to plan-rss and cg
* plan-rss: use MD5 of plan entries as content of `` tags
* plan-rss: v0.0.5
+ cg: parse git push output to quickly get a reference of the last commit ID -> ' f5455f40f1..6443447668 story/US2709-Workstreams-Make-workstreams-visible-only-to-their-members -> story/US'

# 2019-11-08

* plan-rss: preserve empty lines -- we are using ~& instead of ~% (use ~^ to avoid the last newline)
* plan-rss: v0.0.6

# 2019-11-10

TypeScript interfaces are not available at runtime: they are used at
compilation time for type-checking, and then thrown away.
So, if we want to implement run-time checking and throw an error if the
decorator is being applied to a non-compatible object, we are forced to:

- Define a dummy class implementing the interface
- Create an instance of such class
- Store its `Object.keys()`

---

Downgrading an Angular attribute directive is simply **not** supported: 

People suggest that you can wrap `downgradeComponent()` calls and change
`component.restrict` from 'E' to 'A' or 'EA', but I could not make it work --
and to be honest I am kind of happy that it did not work, as otherwise
I would have probably left the hack there, and it would have broken again on
the next upgrade of the framework.

So what to do about it?

- Migrate the old component in need of the new directive (duh)
- Maintain two directives: one for Angular, and one for Angular.js

---

Fucking figured out how, with AutoHotkey, to activate a window and cycle
through all the similar windows, with the same hotkey.

At first I thought I would implement it with something like:

- if the focused window does not match the "activation criteria", then
  activate one that matches it
- otherwise, activate the next matching window

Well, it turns out AutoHotkey already supports all this, via [window
groups](; the idea is simple: first you define a group of windows (all with
a given title, class, or exe), and then you call `GroupActivate` instead of
`WinActivate`.  For example:

- Define the ChromeGroup

    GroupAdd, ChromeGroup, ahk_exe chrome.exe

- Activate the group / cycle its windows using `Hyper+k`:

    F15 & k::GroupActivate, ChromeGroup, R

(`R` is for activating the most recent window of the group: 

---

? cg: br regexp is not working under Windows

# 2019-11-11

? cb.vim is adding ^M when pasting on Windows -- :.!cb --force-paste doesn't though...
# 2019-11-12

More TypeScript niceties

Earlier I thought it wasn't possible to enforce (at compile-time) that
a decorator was applied to a class implementing a specific interface, and
because of it I wound up implementing run-time checks (check the notes of
some previous .plan file).  Well, I was wrong!

First you define an interface for the constructor:

    export interface UrlDescriptorConstructor {
        new (...args: any[]): UrlDescriptor;
    }

(for reference, here is what `UrlDescriptor` looks like)

    export interface UrlDescriptor {
        generatePathname(): string;
        generateParams(): any;
    }

Next you change the signature of your decorator to expect the target class to
be a constructor for that type:

    export function SearchHrefDirective(target: UrlDescriptorConstructor) {
    ...

# 2019-11-16

Trying to understand why copy-pasting things on Vim/Windows generates
trailing ^M...

Let's create a file with `\r\n`s (that's how the clipboard appears to be
populated when I copy things from Chrome):

    $ echo -en "OS cliboards are hart...\r\naren't they?\r\n" > clipboard.txt
    $ cat clipboard.txt
    OS cliboards are hart...
    aren't they?
    $ cat clipboard.txt -A
    OS cliboards are hart...^M$
    aren't they?^M$

Let's open up vim now (`vim -u NONE`) and do some experiments:

1) read 'clipboard.txt' under the cursor

    :read clipboard.txt

The content is loaded, no trailing ^M; at the bottom I see:

    "clipboard.txt" [dos format] 2 lines, 40 characters

2) load 'clipboard.txt' using `read!`

    :read! cat clipboard.txt

The content is loaded, no trailing ^M

3) load the content into a variable, then paste it under the cursor

    :let @@ = system('cat clipboard.txt') | exe 'normal p'

The content is loaded, trailing ^M are added

Let's now push the file into the OS clipboard:

    $ cat clipboard.txt | clip.exe

Open up vim again, and:

4) paste from the `*` register (works with `Shift+Ins` too):

    :exe 'normal "*p'

OS clipboard is read, no trailing ^M

5) load the content of the clipboard using `read!`

    :read! powershell.exe Get-Clipboard

The content is loaded, no trailing ^M

6) Read the content of the clipboard into a variable, then paste it under the
cursor

    :let @@ = system('powershell.exe Get-Clipboard') | exe 'normal p'

OS clipboard is read, trailing ^M are added

So what's the difference between *reading* a file, and calling another
program that reads it and outputs its content?

---

? add Makefile to my-env/dotfiles, so I can simply run make to rebuild what changed

# 2019-11-18

Angular, AOT, and custom decorators

Planning on using custom decorators with Angular, and AOT?  Well, don't... or
at least, beware!

We did implement a custom decorator lately, to add behavior to existing
Angular components in a mixins kind of fashion, and while everything seemed
to be working just fine on our local boxes -- where the app runs in JIT mode
-- once we switched to prod mode -- where AOT is enabled -- all components
decorated with our custom decorator stopped working, completely, like Angular
did not recognize them at all.

After some fussing around, I noticed that the moment I added a dummy
`ngOnInit` to such components, all of a sudden the `ngOnInit` defined by our
custom decorator would be run; but none of the other `ng*` methods enriched
by our decorator though, just `ngOnInit`, the one that we also re-defined
inside the Angular component.

This led me to this Angular ticket:, where someone seemed to have a pretty
good understanding of what was going on:

    Appears the reason here is that Angular places `ngComponentDef` generated
    code before all `__decorate` calls for class. `ɵɵdefineComponent`
    function takes `onInit` as a reference to `Class.prototype.ngOnInit` and
    overridding it later has no any effect.

    ...
Here's a pseudo-code of what happens:

```
class Test {
  ngOnInit() { console.log('original') }
}

const hook = Test.prototype.ngOnInit;
Test.prototype.ngOnInit = () => { console.log(1); }

hook(); // prints 'original'
```

In case of JIT compilation `componentDef` is created lazily as soon as
`getComponentDef` gets called. It happens later.

Someone at the end even tried to suggest a possible solution,, but when
I noticed that the author suggested that we added `ngOnInit` to the final
component, I hoped there would be a better way to work around this
(especially because we would be forced to redefine `ngOnInit`, `ngOnChanges`,
`ngAfterViewInit` and `ngOnDestroy`).

Further digging took me to this other Angular ticket:, where I found a couple
of not-so-promising comments, left almost at the start of the thread:

    This is a general "problem" with AOT: It relies on being able to
    statically analyze things like lifecycle hooks, properties with @Input on
    them, constructor arguments, ... As soon as there is dynamic logic that
    adds new methods / properties, our AOT compiler does not know about this.

Immediately followed by:

    As we are moving to AOT by default in the future, this won't go away.
    Sorry, but closing as infeasible.

All right, to recap:

- The TypeScript compiler, `tsc`, supports Decorators just fine
- Angular's compiler is a custom compiler, different from `tsc`
- Angular's compiler is not 100% feature-compatible with `tsc`
- Since Angular's compiler does not properly implement support for custom
  decorators, it's not possible to use custom decorators that enrich object
  prototypes (Angular would not notice any enriched method)

So it turns out the components annotated with our custom decorator do have to
implement the `ng*` methods -- we cannot rely on the decorator adding them --
otherwise the AOT compiler would not know of them, and would not call them at
the right time... Sweet!

The solution?
Create a base class implementing these `ng*` methods, and have the components
annotated with our custom decorator *extend* this base class; the only
problem is that we would also have to add `super()` to the component's
constructor -- which could be annoying -- but it's either this or no custom
decorators, so I guess this will do just fine.

```
export class ObserveInputsBase implements OnChanges, OnDestroy {
  ngOnChanges(changes: SimpleChanges): void {}
  ngOnDestroy(): void {}
}

@Component({ ... })
@CustomDecorator
export class TodoComponent extends ObserveInputsBase {
  constructor() {
    super();
  }
}
```

---

The thing is, :read! appears to be stripping ^M already, irrespective of the
use of ++opts:

    :read !powershell.exe Get-Clipboard
    :read ++ff=unix !powershell.exe Get-Clipboard
    :read ++ff=dos !powershell.exe Get-Clipboard

To be honest, I'd expect the second one, where we specify ++ff=unix, to leave
trailing ^M, but somehow that's not happening; and for your reference (after
I vim -u NONE):

    :verbose set ff

Returns 'fileformat=unix'

    :verbose set ffs

Returns 'fileformats=unix,dos'

So am I correct if I say that there is something "weird" going on with
system()?  I also found the following at the end of system()'s help page, but
somehow the experienced behavior is not the documented one:

    To make the result more system-independent, the shell output
    is filtered to replace <CR> with <NL> for Macintosh, and
    <CR><NL> with <NL> for DOS-like systems.

I even tried to give systemlist() a go, but each entry of the array still has
that trailing ^M, so it really seems like Vim cannot properly guess the
fileformat from the command output.

I am really in the dark here.

# 2019-11-25

plan-rss

Previously, testing PARSE-OPTS was a bit of a pain as your REPL could end up
getting accidentally killed if the passed-in options would cause the parsing
logic to invoke OPTS:EXIT; it goes without saying that this was very
annoying.
The fix for this was simple though:

1) Create a new condition, EXIT
2) ERROR this condition (with the expected exit code) instead of directly
   invoking OPTS:EXIT
3) Invoke PARSE-OPTS inside HANDLER-CASE, and invoke OPTS:EXIT when the EXIT
   condition is signalled

This way:

- When run from the command line, OPTS:EXIT will be called and the app will
  terminate
- When run from the REPL, an unhandled condition will be signalled and the
  debugger will be invoked

---

* plan-rss: add --max-items option

# 2019-11-26

? plan-rss: use #'string> when merging, and get rid of the call to REVERSE
+ reg-cycle: use localtime() instead of timers

# 2019-12-01

* aoc: 2019/01
+ PR, PRL defined inside .sbclrc, is not available when running the REPL via VLIME -- 
+ loading :aoc does not work cleanly anymore -- have to load RECURSIVELY first, then all the ARGS-* related functions needed for PARTIAL, then it will eventually work...

# 2019-12-02

Unfortunately I spent too much time because of a wrong use of RETURN: RETURN
(which is the same as RETURN-FROM NIL) only works from inside a block named
NIL -- one that DO, DOTIMES, and LOOP establish implicitly, but PROGN does
not; from the body of a function you have to use RETURN-FROM followed by the
name of the surrounding function.  You can also create a named block with
BLOCK, and RETURN-FROM it:

---

* aoc: 2019/02

# 2019-12-03

* aoc: 2019/03 -- needs some clean up tho
? ONCE-ONLY:

# 2019-12-04

I am trying to look into compilation errors thrown when loading my :AOC
project, but even though I shuffled functions/macros around to make sure
functions are defined before they are getting used, it still errors out ...

    The function AOC::ARGS-REPLACE-PLACEHOLDER-OR-APPEND is undefined. It is
    defined earlier in the file but is not available at compile-time.

I am afraid I need to re-study how Lisp code is being parsed/compiled, and
find out if there is anything we can do about it.  I am on the go right now,
so this will have to wait until tomorrow.

---

Why on earth would someone use rlwrap with `--pass-sigint-as-sigterm`?
I mean, all my *-rlwrap scripts had it, but it was kind of dumb if you think
about it... Most of the time rlwrap is used with REPLs, and REPLs usually
catch SIGINT to interrupt running operations (e.g. break out of an infinite
loop), so why would someone kill the REPL, instead of letting the REPL deal
with it?  I am sure there are legit use cases for this, it's just that
I cannot think of any.

---

Damn, my Lisp is really rusty!  Today I re-learned about:

- the :THEREIS and :ALWAYS keywords of LOOP

---

A few days ago I threw PR/PRL inside .sbclrc hoping I could handily use them
from any REPL I had open.  Well, I was right and wrong at the same time:
I could indeed use PR/PRL in the REPL, but only while inside the :CL-USER
package.

So I moved those macros inside a new file, made a package out of them (pmdb,
poor man's debugger, lol), and loaded it from my sbclrc file; after that
I was finally able to use PR/PRL from any package -- too bad I had to use the
package qualifier to use them (i.e. PMDB:PR).  To fix that, I went on and
added :pmdb to the :use form of the package I was working on.

Alternatively, I could have created two vim abbreviations like:

    inoreabbr pr pmdb:pr
    inoreabbr prl pmdb:prl

but for some reason, abbreviations ain't working while editing Lisp files --
I bet on parinfer getting in the way..

---

* aoc: 2019/04
? SCASE: a switch for strings -- and maybe use it on 2019 day 03
? figure out why abbreviations are not working on Lisp files

# 2019-12-05

* aoc: 2019/05 -- fucking opcodes :-O

# 2019-12-06

* aoc: 2019/06

# 2019-12-07

* aoc: 2019/07 -- part2 missing still, fucking opcodes

# 2019-12-08

* aoc: 2019/08
+ aoc: 2019/08/2 -- figure out a way to test for the output string

# 2019-12-09

* aoc: 2019/07
+ aoc: 2019/05 -- use intcode as per 2019/07
+ aoc: 2019/08 -- add test for output part2

# 2019-12-10

Wasted the night trying to get a solution for AOC 2019/10, but somehow it
ain't working.
The logic is simple:

    for each asteroid, as 'curr'
      initialize 'remaining' as **all** asteroids
      for each remaining asteroid, as 'other'
        calculate the line of sight between 'curr' and 'other'
        remove from 'remaining' any asteroid hidden **behind** 'other'
    maximize 'remaining'

So the biggest challenge here would be to calculate the sight direction
vector between 'curr' and 'other'.  A few things to take into account though:

- when dir is (0 0): return (0 0)
- when dir-x is 0: return the vertical unit vector (preserve direction)
- when dir-y is 0: return the horizontal unit vector (preserve direction)
- when dir-y divides dir-x: reduce it (so we can return (2 1) instead of (4 2))
- when dir-x divides dir-y: reduce it (so we can return (1 2) instead of (2 4))
- otherwise return dir, as-is

All good, except it doesn't work -- and I have no idea why...

# 2019-12-11

* aoc: 2019/09 -- "parameters in relative mode can be read from or written to." GODDAMMIT!
* aoc: 2019/10/a -- my direction-reduce function was able to reduce (5, 0) into (1, 0), (5, 5) into (1, 1), (2, 4) into (1, 2), but **not** (4 6) into (2 3) -,-

# 2019-12-12

* aoc: 2019/10/2 -- phases are hard
* aoc: 2019/11 -- phases are hard
* aoc: 2019/12/1
~ aoc: 2019/10 needs some serious refactoring around phases, and vector math
+ aoc: 2019/11/2 -- figure out a way to test for the output string

# 2019-12-13

My virtualbox date and time is getting out of sync when the laptop is
suspended.  A temporary fix was to 1) install ntp, and 2) restart the service
periodically:

    $ crontab -l
    28 * * * 0 sudo service ntp restart

But there has to be a better way to fix this / prevent this from happening

---

* aoc: 2019/13 -- intcode problem, with some AI (well ... a simple `(- ball-x bar-x)` seems to do just fine)

# 2019-12-14

* aoc: 2019/14/1
? refactor partial-1/partial-2 to accept a form, instead of a FN and ARGS
? also, why is the following form not compiled?
    (partial-2 #'* -1 (- _1 _2))

It spits the following error:

    The value -1 is not of type (OR (VECTOR CHARACTER) (VECTOR NIL) BASE-STRING SYMBOL CHARACTER)

# 2019-12-15

You cannot really schedule `sudo` commands with crontab; you'd better use
root's crontab instead... So, to fix datetime drifting on Virtualbox:

    # crontab -l
    28 * * * * service ntp restart

---

AOC 2019/12: That was freaking painful -- for me at least.

Part 1 was easy: parse input, apply gravity, apply velocity, repeat,
calculate energy.  Part 2?  Not quite!

At first I tried to look at the output of some of the variables to figure out
if there were any patterns, and noticed:

- the *barycenter* of the system (i.e. sum of all xs, sum of all the ys, sum
  of all the zs) was constant
- the overall velocity of the system (i.e. sum of all vxs, sum of all vys,
  sum of all the vzs) was 0

But unfortunately none of the above turned out to be useful/usable.

Something else that I played a lot with was looking at each moon separately:
I would still simulate the whole universe of course, but inspect one moon at
a time, again, hoping to find patterns to exploit.  I was hoping that each
moon had a different cycle, and that the overall system cycle could be
calculated as the LCM of each moon's cycle, but each moon was influencing the
others, so my logic was flawed.

Anyway, I put this on hold, then tried again; eventually I gave up and looked
for some help on Reddit, and this is what brought me to the solution [0]:

    Note that the different dimensions are independent of each other. X's
    don't depend on y's and z's.

The idea that the overall cycle could be calculated as the LCM of the cycles
of other *independent* systems was right; unfortunately I had failed to
understand what these *independent* systems were.

[0] 

---

* aoc: 2019/15
* aoc: 2019/12/02
+ aoc: 2019/15: figure out a proper way to explore the map (now it goes on for 50000 steps)
+ aoc: 2019/15: why my BFS costs are off by 1?
+ aoc: print-hash-table-as-grid

# 2019-12-16

* aoc: 2019/16/1
+ define CIRCULAR, and use it for 2019/16, and 2018/01:

# 2019-12-18

* aoc: 2019/17
* aoc: 2019/16/2
~ find a proper solution for 2019/17/2 -- right now I got the solution by manually looking at the map, and figuring out program inputs. Also, there is no proper solution at the moment :)
? explain the realization about 2019/16/2

# 2019-12-19

* aoc: 2019/19
~ aoc: proper solution for 2019/19/2 (figure out how to calculate the direction of the upper border of the beam -- now I manually calculated it by looking at the output)

# 2019-12-20

* aoc: 2019/14/2
+ aoc: 2019/14/2 better heuristic (binary search from cargo-limit and 0)
+ create a function to return the sortable-difference between two numbers: a < b ? -1 : a > b ? 1 : 0, and use it for 2019/14/2, phases, robots

# 2019-12-21

* aoc: 2019/20

# 2019-12-22

* aoc: 2019/21

# 2019-12-23

* aoc: 2019/23

# 2019-12-24

* aoc: 2019/24/1

# 2019-12-25

* aoc: 2019/24/2
~ aoc: 2019/24: better logic for recursive neighbors
~ aoc: 2019/24: how to efficiently add levels to the state? currently we end up with way more states, and that takes processing time

# 2019-12-26

* aoc: 2019/22/1
? aoc: 2019/22 -- explain the math behind it, the modeling

# 2019-12-27

* aoc: 2019/22/2
* aoc: 2019/25
? aoc: 2019/22/2: explain all that math stuff, and move some of it inside utils.lisp
~ aoc: 2019/25 automate it?!

# 2019-12-30

* aoc: 2019/18/1
+ aoc: 2019/18/1: 20s to complete part 1...
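Speaking of the CIRCULAR utility still sitting in the todo list above
(2019-12-16): a minimal sketch of one possible shape for it -- name and
behavior are just my assumption here, the final utility might well look
different -- would simply be to index into a sequence as if it wrapped
around:

```lisp
;; Hypothetical sketch of CIRCULAR: index into SEQ as if it wrapped around.
;; MOD takes care of indices past the end -- and of negative ones too, since
;; its result always has the sign of the (positive) divisor.
(defun circular (seq i)
  (elt seq (mod i (length seq))))

;; (circular '(1 2 3) 4) => 2
;; (circular "abc" -1)   => #\c
```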
# 2019-12-31

Migrated all my mercurial repositories from Bitbucket to GitHub:

    cd ~/tmp
    git clone 
    hg clone ssh://
    git init logfilter.git
    ../fast-export/ -r ../logfilter
    git remote add origin 
    git push origin --all
    git push origin --tags
    # repeat

---

* migrated logfilter to github -- thanks bitbucket
* migrated strappo-api to github -- thanks bitbucket
* migrated weblib to github -- thanks bitbucket
* migrated strappo-analytics to github -- thanks bitbucket
* migrated strappon to github -- thanks bitbucket
* migrated getstrappo to github -- thanks bitbucket
* migrated deploy-strappo to github -- thanks bitbucket
* migrated osaic to github -- thanks bitbucket
* migrated expensio to github -- thanks bitbucket
* migrated medicinadellosport to github -- thanks bitbucket
* migrated webpy-facebook-login to github -- thanks bitbucket
* aoc: 2019/18/2 -- this is fucking it!
+ aoc: 2019/18/2: 400s to complete part 2... I blame it on HEAPQ
+ aoc: 2019/18/2: don't manually update the input to generate the 4 sub-vaults
+ aoc: add DIGITS function to utils, and use it inside 2019/04 and 2019/16
+ aoc: 2019/17: use PRINT-HASH-MAP-GRID
+ aoc: 2019/20: figure out a way to programmatically figure out if a portal is on the outside or on the inside
~ aoc: add goap-p to search algo, and review dijkstra and bfs..
+ aoc: add FLATTEN to utils
+ common lisp use quickutils!
? update logfilter repo references to the new github location
? update osaic repo references to the new github location

# 2020-01-02

? 
? PARTIAL-1: evaluate only once, and not every time the lambda function is called

# 2020-01-03

* add quickutils to aoc
~ 

# 2020-01-04

* add ADJACENTS to aoc.utils
* change BFS to use QUEUE instead of poor man's HEAPQ

# 2020-01-05

Imported yet another trick from SJL, this time to automatically load all the
.lisp files contained in a directory -- this way I don't have to update
aoc.asd every time I add a new day.
First you define a new class, and then implement the generic method
COMPONENT-CHILDREN:

    (defclass auto-module (module) ())

    (defmethod component-children ((self auto-module))
      (mapcar (lambda (p)
                (make-instance 'cl-source-file
                               :type "lisp"
                               :pathname p
                               :name (pathname-name p)
                               :parent (component-parent self)))
              (directory-files (component-pathname self)
                               (make-pathname :directory nil
                                              :name *wild*
                                              :type "lisp"))))

Next, you use `:auto-module` in your components list as follows (2nd line):

    (:file "intcode")
    (:auto-module "2017")
    (:module "2018"
     :serial t
     :components ((:file "day01")

---

TIL, VALUES in Common Lisp, is SETF-able!

    (loop :with remainder
          :do (setf (values n remainder) (truncate n base))
          :collect remainder
          :until (zerop n))

---

* vim: add mapping to (in-package) the current file in the repl
+ quickutil: pull request to fix a problem with DIGITS: use `n` instead of `integer` when calling :until
~ quickutil: pull request to change DIGITS to optionally return digits in the reverse order (this way I can ditch DIGITS-REVERSE)

# 2020-01-11

* Fix DIGITS compilation error: `integer` is not a variable #64

# 2020-01-15

Today I installed `roswell` again, because I wanted to test `xml-emitter`
with abcl, but somehow the runtime seemed corrupted:

    $ brew install roswell
    ... boring output
    $ ros install sbcl
    sbcl-bin/1.4.0 does not exist.stop.
    $ ros help
    $ sbcl-bin/1.4.0 does not exist.stop.
    $ Making core for Roswell...
    $ sbcl-bin/1.4.0 does not exist.stop.
    $ sbcl-bin/1.4.0 does not exist.stop.

Googling for the error I stumbled upon this GH issue:, and after nuking
~/.roswell, `ros setup` successfully installed the latest version of sbcl.

---

I was erroneously trying to use `cl-travis` instead of plain `roswell`.  Why
erroneously?  `cl-travis` uses `CIM`:, which turns out to have been
deprecated in [2017](  What's the replacement?  `ros` and similar tools --
so better to use `ros` directly instead.
The command `ros` however won't be available unless you tweak a couple of ENV
vars:

    global:
      - PATH=~/.roswell/bin:$PATH
      - ROSWELL_INSTALL_DIR=$HOME/.roswell

# 2020-01-16

Couple of TILs (not all today's really, but whatever):

- `:serial T` inside ASDF systems makes sure that components are compiled in
  order; otherwise, in case of inter-file dependencies, one is forced to use
  `:depends-on`, which is annoying
- you need to run `(ql:update-dist "...")` to download an updated version of
  a dist, or `(ql:update-all-dists)` to download them all

---

Fixed xml-emitter#6, added tests support to the project

Fixing #6 was pretty easy; in fact, double quoting WITH-RSS optional
arguments was all we had to do.  Adding unit tests support to the module on
the other hand wound up being not as easy as I initially thought it would,
but still, I am quite happy with the final result.

I split the tests support in 3 different sub-activities:

1) Add a package (and a system) for the tests: `:xml-emitter/tests`
2) Add a Makefile to run the tests

   This turned out to be more annoying than expected:

   - The interactive debugger needs to be disabled when tests fail, or the CI
     job would hang forever
   - The spawned REPL needs to be shut down when tests pass, or again, it
     would cause the CI job to hang forever
   - Certain implementations did not exit with non-0 code when tests failed

   And all this needed to be done outside of the testing system (hence in the
   Makefile) because I still wanted to leave things interactive when people
   tried to run the tests with `(asdf:test-system ...)`

3) Add a .travis.yml file to test the library with multiple implementations

The result:

- Makefile

    .PHONY: test test-sbcl test-ros

    lisps := $(shell find . -type f \( -iname \*.asd -o -iname \*.lisp \))

    cl-print-version-args := --eval '\
      (progn \
        (print (lisp-implementation-version)) \
        (terpri))'

    cl-test-args := --eval '\
      (progn \
        (ql:quickload :xml-emitter/tests :verbose T) \
        (let ((exit-code 0)) \
          (handler-case (asdf:test-system :xml-emitter) \
            (error (c) \
              (format T "~&~A~%" c) \
              (setf exit-code 1))) \
          (uiop:quit exit-code)))'

    all: test

    # Tests -----------------------------------------------------------------------

    test: test-sbcl

    test-sbcl: $(lisps)
    	sbcl --noinform $(cl-test-args)

    test-ros: $(lisps)
    	ros run $(cl-print-version-args) $(cl-test-args)

- .travis.yml

    language: generic

    env:
      global:
        - PATH=~/.roswell/bin:$PATH
        - ROSWELL_INSTALL_DIR=$HOME/.roswell
        - ROSWELL_VERSION=
        - ROSWELL_URL="$ROSWELL_VERSION/scripts/"
      matrix:
        - LISP=abcl
        - LISP=allegro
        - LISP=ccl
        - LISP=clisp
        - LISP=cmucl
        - LISP=ecl
        - LISP=sbcl

    install:
      - curl -L $ROSWELL_URL | sh

    script:
      - make test-ros

This is it!

# 2020-01-17

Finally got around to refactoring 2019/15, and fixing the +1 issue

The issue was pretty dumb... when reading 2 from the robot I ended up setting
oxygen to the current position, and not to the position after the movement!

Thanks to some Reddit folk, I ended up simplifying the solution by
implementing a simple dfs to explore the map (no more 5000-tick random
walking)

It feels good!

# 2020-01-18

EVAL-WHEN is there to tell the file compiler whether it should execute code
at compile-time (which it usually does not do for function definitions) and
whether it should arrange the compiled code in the compiled file to be
executed at load time.  This only works for top-level forms.

Common Lisp runs the file compiler (remember we are talking about compiling
files, not executing in a REPL) in a full Lisp environment and can run
arbitrary code at compile time (for example as part of the development
environment's tools, to generate code, to optimize code, etc.).
If the file compiler wants to run code, then the definitions need to be known
to the file compiler.

Also remember that during macro expansion the code of the macro gets executed
to generate the expanded code.  All the functions and macros that the macro
itself calls to compute the code need to be available at compile time.  What
does not need to be available at compile time is the code the macro form
expands to.  This is sometimes a source of confusion, but it can be learned,
and then using it isn't too hard.

But the confusing part here is that the file compiler itself is programmable
and can run Lisp code at compile time.  Thus we need to understand the
concept that code might be running in different situations: in a REPL, at
load time, at compile time, during macro expansion, at runtime, etc.

Also remember that when you compile a file, you then need to load the file if
the compiler needs to call parts of it later.  If a function is just
compiled, the file compiler will not store the code in the compile-time
environment, and also not after finishing the compilation of the file.  If
you need the code to be executed, then you need to load the compiled code ->
or use EVAL-WHEN -> see below.

---

All right, replaced a bunch of `(if (< a b) -1 (if (> a b) 1 0))` expressions
with `(signum (- a b))`, but I wonder: doesn't a mathematical/CS term exist
to refer to this?  It seems too common a pattern not to have an agreed name --

Its name is: three-way comparison, a.k.a. spaceship operator

---

Figured out how to customize Clozure CL's REPL prompt... it couldn't be
easier, could it?!

    ;; What sorcery is this?
    ;; In summary:
    ;;
    ;; - Use the first argument as conditional (I believe it represents the
    ;;   number of pending stacktraces or something)
    ;; - If none, call PACKAGE-PROMPT -- we need to re-add one argument to
    ;;   the stack or invoking user defined functions would fail
    ;; - If there are pending exceptions to process, print the lot followed
    ;;   by `]`
    ;;
    ;; FORMAT directives 101
    ;;
    ;; ~[...~] Conditional expression. This is a set of control strings,
    ;;   called clauses, one of which is chosen and used. The clauses are
    ;;   separated by ~; and the construct is terminated by ~]. Also, ~:; can
    ;;   be used to mark a default clause
    ;; ~/name/ calls the user defined function, NAME
    ;; ~:* ignores backwards; that is, it backs up in the list of arguments
    ;;   so that the argument last processed will be processed again. ~n:*
    ;;   backs up n arguments.
    ;;
    ;; Source: 
    (setf ccl::*listener-prompt-format* "~[~:*~/package-prompt/~:;~:*~d]~]")

---

~ aoc: abcl: hangs on 2018/22
~ aoc: allegro: attempts to call RECURSIVELY, which is undefined....
~ aoc: ccl: hangs on 2018/22
~ aoc: clisp: =: NIL is not a number (last 1AM test appears to be 2019/18)
~ aoc: cmucl: Structure for accessor AOC/2019/24::TILES is not a AOC/2019/24::ERIS: NIL
~ aoc: ecl: hangs on 2019/20

# 2020-01-19

All right, I feel dumb... For almost one year I have been dreaming of
a `WITH-SLOTS` macro that worked with structures and not only classes, and
today I stumbled upon a piece of Common Lisp code where the author was using
`WITH-SLOTS` with a structure.  "That has to be a homemade/library macro,
right?" I thought.  Well, it turns out `WITH-SLOTS` works with slots, and
structures use slots too, so I have been waiting all this time... for
nothing.  Let's look at the bright side of this, though: now I can finally
use it!

    (defstruct foo bar baz)
    => FOO

    (let ((foo (make-foo :bar 123 :baz 'asd)))
      (with-slots ((bar1 bar) (baz1 baz)) foo
        (format T "~a ~a~&" bar1 baz1)))
    => 123 ASD
    NIL

Reference: 

---

? implement topaz's paste, in common lisp:

# 2020-01-24

Got presented with the following info messages this morning, while booting up
my dev box:

    [default] GuestAdditions seems to be installed (6.0.14) correctly, but not running.
    Got different reports about installed GuestAdditions version:
    Virtualbox on your host claims:   5.2.8
    VBoxService inside the vm claims: 6.0.14
    Going on, assuming VBoxService is correct...
    Got different reports about installed GuestAdditions version:
    Virtualbox on your host claims:   5.2.8
    VBoxService inside the vm claims: 6.0.14
    Going on, assuming VBoxService is correct...
    VirtualBox Guest Additions: Starting.
    VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.  This may take a while.
    VirtualBox Guest Additions: To build modules for other installed kernels, run
    VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup
    VirtualBox Guest Additions: or
    VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
    VirtualBox Guest Additions: Kernel headers not found for target kernel 4.15.0-72-generic. Please install them and execute /sbin/rcvboxadd setup
    VirtualBox Guest Additions: Running kernel modules will not be replaced until the system is restarted

All right then, let's install them and see what happens:

    $ sudo apt-get install linux-headers-$(uname -r)
    ...
    $ sudo /sbin/rcvboxadd setup
    VirtualBox Guest Additions: Starting.
    VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.  This may take a while.
    VirtualBox Guest Additions: To build modules for other installed kernels, run
    VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup
    VirtualBox Guest Additions: or
    VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
    VirtualBox Guest Additions: Building the modules for kernel 4.15.0-72-generic.
    update-initramfs: Generating /boot/initrd.img-4.15.0-72-generic
    VirtualBox Guest Additions: Running kernel modules will not be replaced until the system is restarted

Reload the virtual box and....
    $ vagrant reload
    ...
    [default] GuestAdditions 6.0.14 running

---

OK.

    ==> default: Checking for guest additions in VM...
    ...

# 2020-01-26

Few Lisp book recommendations (in this order):

* Common Lisp: A Gentle Introduction to Symbolic Computation
* Practical Common Lisp
? Paradigms of Artificial Intelligence Programming (PAIP)
+ Common Lisp Recipes (CLR):
+ Patterns of Software (Richard P. Gabriel)
? On Lisp (Paul Graham):
? Let Over Lambda:

---

* finished reading [Practical Common Lisp](

# 2020-01-31

* finished reading [The Mythical Man-Month](

# 2020-02-02

Finally got around to enhancing pmdb.lisp (Poor Man's debugger) to add a few macros to facilitate profiling of slow functions -- not really debugging per se, I will give you that, but since I did not have a Poor Man's profiler package yet and I wanted to get this done quickly, I figured I could shove it inside pmdb.lisp and start using it (I can always refactor it later).

Anyway, these are the macros that I added:

- DEFUN/PROFILED: a DEFUN wrapper to create functions that, when executed, would keep track of their execution times
- PROFILE: a macro (similar to TIME) to evaluate a given expression and output information about profiled functions (e.g. % of CPU time, execution time, number of calls)

Say you TIMEd an expression and decided you wanted to try and understand where the bottlenecks are; the idea is to replace the DEFUNs of all the potential bottleneck-y functions with DEFUN/PROFILED, and then replace TIME with PROFILE.
For example, say you started from something like:

    (defun fun-1 (&aux (ret 0))
      (dotimes (n 10000) (incf ret))
      ret)

    (defun fun-2 (&aux (ret 0))
      (dotimes (n 2000000) (incf ret))
      ret)

    (defun fun-3 (&aux (ret 0))
      (dotimes (n 200000000) (incf ret))
      ret)

    (defun main ()
      (fun-1)
      (fun-2)
      (fun-3))

To time the execution of MAIN, you would probably do the following:

    (time (main))
    ; Evaluation took:
    ;   0.396 seconds of real time
    ;   0.394496 seconds of total run time (0.392971 user, 0.001525 system)
    ;   99.49% CPU
    ;   910,697,157 processor cycles
    ;   0 bytes consed
    ;
    ; 200000000

Perfect! But what's the function that kept the CPU busy, the most? Replace all the DEFUNs with DEFUN/PROFILED:

    (defun/profiled fun-1 (&aux (ret 0))
      (dotimes (n 10000) (incf ret))
      ret)

    (defun/profiled fun-2 (&aux (ret 0))
      (dotimes (n 2000000) (incf ret))
      ret)

    (defun/profiled fun-3 (&aux (ret 0))
      (dotimes (n 200000000) (incf ret))
      ret)

Replace TIME with PROFILE, et voila', you now have a better idea of where your bottleneck might be:

    (profile (main))
    ; Profiled functions:
    ;   98.00% FUN-3: 449 ticks (0.449s) over 1 calls for 449 (0.449s) per.
    ;   2.00% FUN-2: 8 ticks (0.008s) over 1 calls for 8 (0.008s) per.
    ;   0.00% FUN-1: 0 ticks (0.0s) over 1 calls for 0 (0.0s) per.
    ;
    ; 200000000

It also works with nested function calls too:

    (defun/profiled fun-4 (&aux (ret 0))
      (dotimes (n 200000000) (incf ret))
      ret)

    (defun/profiled fun-5 (&aux (ret 0))
      (dotimes (n 200000000) (incf ret))
      (fun-4)
      ret)

    (defun main ()
      (fun-1)
      (fun-2)
      (fun-3)
      (fun-4)
      (fun-5))

    (profile (main))
    ; Profiled functions:
    ;   39.00% FUN-5: 880 ticks (0.88s) over 1 calls for 880 (0.88s) per.
    ;   21.00% FUN-4: 464 ticks (0.464s) over 1 calls for 464 (0.464s) per.
    ;   20.00% FUN-5/FUN-4: 451 ticks (0.451s) over 1 calls for 451 (0.451s) per.
    ;   19.00% FUN-3: 430 ticks (0.43s) over 1 calls for 430 (0.43s) per.
    ;   0.00% FUN-2: 9 ticks (0.009s) over 1 calls for 9 (0.009s) per.
    ;   0.00% FUN-1: 0 ticks (0.0s) over 1 calls for 0 (0.0s) per.
    ;
    ; 200000000

Percentages would start to lose meaning here (FUN-5/FUN-4 execution time would be included in FUN-5's), but still I am quite happy with the overall result.

PS. If you read PCL, you should notice quite a few similarities with the WITH-TIMING macro explained at the end of the book.

---

I am going to leave it here (another e-learning platform):

# 2020-02-03

Vim was segfaulting when opening lisp files, and after bisecting my ft_commonlisp `autogroup` I found the culprit:

    au syntax lisp RainbowParenthesesLoadRound

I dug deeper and found that the line in the plugin that Vim was tripping over was:

    for each in reverse(range(1, s:max_depth))

It did not take me long to get to this vim issue:, and replacing `reverse(range(...))` with a simple `range()` with negative 'stride' did solve the problem for me:

    diff --git a/autoload/rainbow_parentheses.vim b/autoload/rainbow_parentheses.vim
    index b4963af..455ad22 100644
    --- a/autoload/rainbow_parentheses.vim
    +++ b/autoload/rainbow_parentheses.vim
    @@ -106,7 +106,7 @@ func! rainbow_parentheses#load(...)
         let b:loaded = [0,0,0,0]
       endif
       let b:loaded[a:1] = s:loadtgl && b:loaded[a:1] ? 0 : 1
    -  for each in reverse(range(1, s:max_depth))
    +  for each in range(s:max_depth, 1, -1)
         let region = 'level'. each .(b:loaded[a:1] ? '' : 'none')
         let grp = b:loaded[a:1] ?
    'level'.each.'c' : 'Normal'
         let cmd = 'sy region %s matchgroup=%s start=/%s/ end=/%s/ contains=TOP,%s,NoInParens fold'

Opened an MR for this:

# 2020-02-06

I got tired of the colors of my prompt (probably because the default color palettes used by most of the terminal applications are kind of terrible, and I don't want to bother finding a new colorscheme), so here is what I am experimenting with:

- username: bold (previously pink)
- hostname: cyan / pink (previously yellow / blue)
- path: underline (previously green)

It's definitely less colorful than it used to be, and I am pretty sure I will tweak it again over the following days, but let's see if it works

---

+

# 2020-02-08

* finished reading [Remote](

? bunny1 in common lisp

# 2020-02-13

+ remap '/" to F13, and then use it as a hyper when hold-down (e.g. +S as CTRL)
+ remap '[" to F14, and then use it as a hyper when hold-down (e.g. +w to move forward one word, +b to move backward one word)

# 2020-02-14

The Old Way Using Decorate-Sort-Undecorate

This idiom is called Decorate-Sort-Undecorate after its three steps:

- First, the initial list is decorated with new values that control the sort order.
- Second, the decorated list is sorted.
- Finally, the decorations are removed, creating a list that contains only the initial values in the new order.

From:

Spent a good 30 mins trying to find this reference... somehow I remembered it was called Wrap-Sort-Unwrap, and Google wasn't returning anything meaningful. After the fact, I was able to find this in the Python Cookbook Recipes by searching for pack/unpack.

# 2020-02-15

All right, I finally got around to getting `sic(1)` up and running on my laptop, and I am quite... OK with the result. Why sic and not irssi, or weechat, you might ask; I don't know, I guess I just got bored...
Anyway, [sic]( stands for simple irc client, and it really lives up to its name:

- lets you connect to a server (plain connection, no SSL)
- supports `PASS` authentication
- outputs a single stream of messages (yes, all the messages received on the different channels you joined are just... commingled)

That's it, really.

Suckless tools might seem... lacking at first -- well, I guess they really are -- but there is a reason for that: they usually are highly composable, which means that most of the time you can combine them with other tools (suckless tools, or unix tools in general) and, say, implement a feature that was originally missing.

For example, `sic(1)` does only support plaintext connections; not a problem, create a secure TCP connection using `socat(1)` and have `sic(1)` connect to the local relay instead:

    socat tcp-listen:6697
    sic -h -p 6697 -n your-nickname

Or, `sic(1)` does not support colors, but everyone would agree that a certain level of color does not hurt; easy, pipe `sic(1)`'s output into `sed(1)`, et voila:

    sic -h -p 6697 -n your-nickname \
      | sed s/your-nickname/${RED}your-nickname${DEFAULT}/

Or, you want to store every message ever received or sent? easy, that's what `tee(1)` is for:

    sic -h -p 6697 -n your-nickname \
      | tee -a ~/irc/server.log \
      ...

Or you can add a typing history using `rlwrap(1)`, or on-connect auto-commands (e.g. join #vim) by using a combination of shell pipes, `echo(1)` and `cat(1)`.

Anyway, I guess you got the idea... and if anything, I hope I made you a bit curious about this whole _compose-all-the-things_ philosophy.

These are the scripts I ended up creating -- feel free to poke around and tell me if anything interesting comes to your mind:

- [sicw](
- [sicw-rlwrap](

---

Say you have an interactive process like a REPL or something; how can you send some data/text to it, before entering interactive mode, waiting for the user to type anything?
For example: how can you pass some _auto-commands_ like `:j #suckless` and `:j #lisp` to `sic(1)`, so that you don't have to manually type them every time you connect to the server? Note, you still want to be able to manually send messages to the server, after the auto-commands have been sent.

Well, I found the solution somewhere on StackOverflow (unfortunately I cannot find the link to the page anymore):

    #!/usr/bin/env bash
    echo -en "$@"
    cat -

What this `input` script does, is:

- `echo(1)` its input arguments

      $ echo -en 'foo\nbar\n'
      foo
      bar

- Then start reading from stdin (i.e. enter interactive mode)

Clever, isn't it?

    $ input ':j #suckless\n' ':j #vim\n' | sic

---

* cb: added support for xsel(1) -- not that I needed it, but since I stumbled upon it when looking at other people's dotfiles repos, I figured I should just steal it

+ add SelectAndSendToTerminal("^Vap") to my .vimrc -- currently there is `SendSelectionToTerminal`, but it forces me to use nmap/xmap, instead of nnoremap/xnoremap, and I have to manually save/restore the state of the window (annoying)

? add kr to the family of 2-letters-named tools, to run `keyring` remotely -- will have to be careful to make sure people on the same network cannot snoop my passwords ;-)

# 2020-02-16

+ R-type notation for documenting Javascript functions

? Auto-curry recursive magic spell (from composing software)

# 2020-03-03

When we estimate, it is worth considering worst-case scenarios as well as best-case scenarios in order to provide a more accurate estimate. A formula I often use in estimation templates is the following (derived from a book I read a long time ago):

    [Best + (Normal * 2) + Worst] / 4

Keep a risk register alongside the estimates. Review the estimates and the risk register at the end of the sprint.

Estimating work is never easy. No two tasks are the same, and no two people are the same. Even if they were, the environment (code base) would most certainly not be the same.
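For what it's worth, the weighted formula above is trivial to turn into code (a quick sketch; the function name is mine, the weights come straight from the formula):

```python
def three_point_estimate(best, normal, worst):
    """Weighted three-point estimate: [Best + (Normal * 2) + Worst] / 4."""
    return (best + normal * 2 + worst) / 4

# a task that could take 2 days, realistically 4, and 10 in the worst case
print(three_point_estimate(2, 4, 10))  # -> 5.0
```

Note how the normal case is weighted twice as much as either extreme, so outliers pull the estimate without dominating it.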
Getting to a precise estimation of work is almost impossible without doing the work itself.

Source:

# 2020-03-05

Convert markdown files to PDF on MacOS:

    $ brew install pandoc
    $ brew install homebrew/cask/basictex
    $ ln -s /Library/TeX/Root/bin/x86_64-darwin/pdflatex /usr/local/bin/pdflatex
    $ pandoc -o multi-tenancy.pdf

# 2020-03-06

TIL: when you have an unresponsive SSH connection, you can kill it with `~.`

Also, you can type `~?` and get the list of supported escape codes:

    remote> cat # then type ~?
    Supported escape sequences:
     ~.   - terminate connection (and any multiplexed sessions)
     ~B   - send a BREAK to the remote system
     ~C   - open a command line
     ~R   - request rekey
     ~V/v - decrease/increase verbosity (LogLevel)
     ~^Z  - suspend ssh
     ~#   - list forwarded connections
     ~&   - background ssh (when waiting for connections to terminate)
     ~?   - this message
     ~~   - send the escape character by typing it twice

---

* finished reading [Composing Software](

? I realized my kindle highlights for books I haven't purchased through amazon are not getting displayed here: Need to figure out a way to extract them

# 2020-03-07

Today I finally got around to toying a bit with MacOS Karabiner-Elements, and customizing my configuration even further so that:

- `'` behaves exactly as `` (i.e. ``-modifier when held down, `` when tapped)
- `[` behaves exactly as `` (i.e. _movement_-modifier when held down, `` when tapped)

What's the deal with these _movement_-modifiers?
Just to give you an idea:

- ` + h` -> ``
- ` + j` -> ``
- ` + k` -> ``
- ` + l` -> ``
- ` + p` -> _paste from clipboard_
- ` + b` -> _one word backward_
- ` + w` -> _one word forward_

For the record, these are the relevant blocks of my karabiner.json file that enabled this:

    { "description": "Tab as modifier", "manipulators": [
      { "description": "tab as modifier", "from": { "key_code": "tab" }, "to": [ { "set_variable": { "name": "tab_modifier", "value": 1 } } ], "to_after_key_up": [ { "set_variable": { "name": "tab_modifier", "value": 0 } } ], "to_if_alone": [ { "key_code": "tab" } ], "type": "basic" },
      { "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ], "description": "tab + h as left_arrow", "from": { "key_code": "h" }, "to": [ { "key_code": "left_arrow" } ], "type": "basic" },
      { "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ], "description": "tab + j as down_arrow", "from": { "key_code": "j" }, "to": [ { "key_code": "down_arrow" } ], "type": "basic" },
      { "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ], "description": "tab + k as up_arrow", "from": { "key_code": "k" }, "to": [ { "key_code": "up_arrow" } ], "type": "basic" },
      { "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ], "description": "tab + l as right_arrow", "from": { "key_code": "l" }, "to": [ { "key_code": "right_arrow" } ], "type": "basic" },
      { "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ], "description": "tab + p as paste", "from": { "key_code": "p" }, "to": [ { "key_code": "v", "modifiers": [ "left_command" ] } ], "type": "basic" }
    ] }
    { "description": "open_bracket as modifier", "manipulators": [
      { "description": "open_bracket as modifier", "from": { "key_code": "open_bracket" }, "to": [ { "set_variable": { "name": "open_bracket", "value": 1 } } ], "to_after_key_up": [ { "set_variable": { "name": "open_bracket", "value": 0 } } ],
    "to_if_alone": [ { "key_code": "open_bracket" } ], "type": "basic" },
      { "conditions": [ { "name": "open_bracket", "type": "variable_if", "value": 1 } ], "description": "open_bracket + w as left_alt + right_arrow (one word forward)", "from": { "key_code": "w" }, "to": [ { "key_code": "right_arrow", "modifiers": [ "left_alt" ] } ], "type": "basic" },
      { "conditions": [ { "name": "open_bracket", "type": "variable_if", "value": 1 } ], "description": "open_bracket + b as left_alt + left_arrow (one word backward)", "from": { "key_code": "b" }, "to": [ { "key_code": "left_arrow", "modifiers": [ "left_alt" ] } ], "type": "basic" }
    ] }

---

? implement these very same mappings on Windows (AutoHotKey + SharpKeys)

# 2020-03-08

English writing lessons:

- Use verb + adverb, not the opposite. Good: I have worked closely with... Bad: I have closely worked with...
- Use 'be experienced with' instead of 'have experience with'
- Don't use 'seem'... be assertive: either one is, or is not!
- Use 'further' instead of 'improve'

---

TIL: you can pass '60m' to `docker logs --since ...` to get the logs of the last hour... I knew how to set dates/times with `--since`, and have always been doing the math in my head to figure out the correct cut-off date-time value; but being able to `--since 60m` is just... pure gold!

# 2020-03-09

MacOS Bundle IDs:

    function mac-os-bundle-id() {
      lsappinfo info -only bundleid "$*" | cut -d '"' -f4
    }

Note: this only works if the application you are querying the bundle ID for is running.

# 2020-03-15

Asking for help, on [reddit](, on best practices on how to implement multitenant Node.js applications:

Hello Everyone, I am trying to add [multitenancy]( support to an existing Node.js/Express.js application, and while the Web is full of articles on how to properly segregate data (e.g. additional 'tenant_id' column, different databases), I am currently struggling to find best practices on how to properly implement it... let me explain.
Irrespective of your chosen data segregation model, there comes a point throughout the life-cycle of the user request where you will have to: 1) figure out what the current tenant is, 2) use that information to fetch the _right_ data (e.g. run custom _by-tenant-id_ queries, or select the proper database connection). I am particularly interested in hearing how others have implemented this, especially given that points 1 and 2 might be a few asynchronous function calls away.

There are 3 possible options I could think of to solve this problem:

1. Use libraries like [continuation-local-storage](, or its successor [cls-hooked](
   - these libraries rely on experimental APIs and require monkey-patching of other asynchronous libraries to make sure the context is indeed properly propagated
   - when users ask about [maturity](, their authors suggest to heavily test internally and make sure the context is never lost
   - other users on [reddit]( confirm they had a lot of issues with cls, and because of that only used it for logging purposes
2. Pass a context object around, **everywhere** (a la Golang's [context](
   - it's a **lot** of work, especially for an already existing application (and I am not [alone]( in thinking this)
3. Defer the initialization of the used services until the request is received, and inject this context object as a constructor argument.
   - this seems to require fewer changes compared to the previous option, but again, it forces you to defer the initialization of **all** your services until you know what the request tenant is

Now, all this journey has led me to believe that maybe this is not the _right_ way to implement multitenancy inside Node.js applications; maybe, instead of having a single Node.js application being able to serve requests coming from different tenants, we should just _multi-instantiate_ this application and have each of these instances serve one specific tenant only.
This is one realization that I had when thinking about all this:

- Service multitenancy (i.e. a single service, able to serve requests coming from different tenants) is not required, but could save you some money later on, as it would let you utilize hardware resources to their full extent
- Service multi-instantiation (i.e. different services, each one specialized to serve a specific tenant) is not required, but it's **inevitable**: there are only so many requests you will be able to serve with a single multitenant service

So probably I shouldn't worry much about how to add multitenancy support to an existing Node.js application, but rather _multi-instantiate_ it and put a broker in front of it (e.g. Nginx).

Let me know your thoughts.

---

Interesting article about [Zone.js]( the dummy implementation is ridiculously simple!

    var _currentZoneFrame = {zone: root, parent: null};

    class Zone {
      static get current() {
        return _currentZoneFrame.zone;
      }
      run(callback, applyThis, applyArgs) {
        _currentZoneFrame = {parent: _currentZoneFrame, zone: this};
        try {
          return callback.apply(applyThis, applyArgs);
        } finally {
          _currentZoneFrame = _currentZoneFrame.parent;
        }
      }
    }

This is how you are expected to use it:

    var zoneA = Zone.current.fork({name: 'zoneA'});

    function callback() {
      // _currentZoneFrame: {zone: zoneA, parent: rootZoneFrame}
      console.log('callback is invoked in context',;
    }

    // _currentZoneFrame = rootZoneFrame = {zone: root, parent: null} => {
      // _currentZoneFrame: {zone: zoneA, parent: rootZoneFrame}
      console.log('I am in context',;
      setTimeout(zoneA.wrap(callback), 100);
    });

Too bad, the [TC39 proposal]( has been withdrawn!

---

* finished reading [Graph Algorithms: Practical Examples in Apache Spark and Neo4j](

# 2020-03-16

?
ctags and javascript files:

# 2020-03-17

_Rant against software development in 2020: start_

We added `prettier` to our repo because you know, it feels good not to have to worry about indentation and style; so we linted all of our files (yes, it was a massive changeset), and then carried on.

Except... changesets pushed by different team members looked different from one another, like `prettier` settings were not properly parsed... and yet all of them had their IDEs properly configured. Well, it turns out 'prettier-vscode', the `prettier` plugin for VisualStudio Code, is not picking up '.prettierignore' unless it's in the project working directory:

And guess what? Since we have a monorepo, we have different '.prettierignore' files, one per package... that's why changesets were looking different.

Long story short:

- `prettier` does not support that behavior:
- 'prettier-vscode' does not want to support that, unless `prettier` supports it:

So yeah, users, go fuck yourselves! Well, of course there are [workarounds](, but still, `git` works that way, `npm` works that way... why not implement a similar behavior? I am pretty sure there are reasons, but still...

_Rant against software development in 2020: end_

---

Few notes on how to implement multitenancy with Elasticsearch:

[shared-index](, i.e. a single index, shared by all the tenants, in which all the documents have a tenant identifier

    PUT /forums
    {
      "settings": { "number_of_shards": 10 },
      "mappings": {
        "post": {
          "properties": {
            "forum_id": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }

    PUT /forums/post/1
    {
      "forum_id": "baking",
      "title": "Easy recipe for ginger nuts",
      ...
    }

This approach works, but all the documents for a specific tenant will be scattered across all the shards of the index. Enter [document routing](

    PUT /forums/post/1?routing=baking
    {
      "forum_id": "baking",
      "title": "Easy recipe for ginger nuts",
      ...
    }

By specifying 'baking' as routing value, we are basically telling Elasticsearch to always use a single shard...
which one? `shard = hash(routing) % number_of_primary_shards` (not that we care, as long as it's always the same).

Oh, and when searching data (with routing), don't forget to filter by the tenant identifier, or you will accidentally return other tenants' data:

    GET /forums/post/_search?routing=baking
    {
      "query": {
        "bool": {
          "must": { "match": { "title": "ginger nuts" } },
          "filter": { "term": { "forum_id": "baking" } }
        }
      }
    }

But... can we do better, and have Elasticsearch do the filtering for us? Yes, [aliases]( to the rescue! When you associate an alias with an index, you can also specify a filter and routing values:

    PUT /forums/_alias/baking
    {
      "routing": "baking",
      "filter": { "term": { "forum_id": "baking" } }
    }

Now we can use the 'baking' index when indexing, and forget about _routing_:

    PUT /baking/post/1
    {
      "forum_id": "baking",
      "title": "Easy recipe for ginger nuts",
      ...
    }

Similarly, when searching, we can omit the _routing_ parameter, and the filters too!

    GET /baking/post/_search
    {
      "query": { "match": { "title": "ginger nuts" } }
    }

Next step: see if it's possible to implement ACL so that one user can connect to Elasticsearch and only see their aliases: neither the aliases used by other tenants, nor the big, fat, shared index.

# 2020-03-21

I have got a bit serious about trying to use `w3m` for web browsing (yeah, I know, a bit ironic coming from me, given that my team is working on a SPA unable to load shit without Javascript enabled).

First things first... mappings!
    keymap r RELOAD
    keymap H PREV
    keymap L NEXT
    keymap C-U PREV_PAGE
    keymap C-D NEXT_PAGE
    keymap gg BEGIN
    keymap w NEXT_WORD
    keymap b PREV_WORD
    keymap zz CENTER_V
    keymap o COMMAND "GOTO /cgi-bin/b1.cgi; NEXT_LINK"
    keymap t COMMAND "TAB_GOTO /cgi-bin/b1.cgi; NEXT_LINK"
    keymap ( NEXT_TAB
    keymap ) PREV_TAB
    keymap gt TAB_LINK
    keymap d CLOSE_TAB
    keymap x CLOSE_TAB
    keymap :h HELP
    keymap :o OPTIONS
    keymap :q QUIT
    keymap gs SOURCE
    keymap V EXTERN
    keymap p GOTO /cgi-bin/cb.cgi
    keymap P TAB_GOTO /cgi-bin/cb.cgi
    keymap yy EXTERN "url=%s && printf %s "$url" | cb && printf %s "$url" &"

I am going to change these quite a lot over the following weeks, but these should be enough to get you started using it.

A few gems:

- You won't be able to read all the pages you visit inside `w3m`... so when you bump into one, just press `V` (or `M`) to open the current page in a configured external browser
- There is no support for a default search engine (a bit annoying), but given that you can customize the hell out of `w3m`, I was able to work around that with the following: `COMMAND "GOTO /cgi-bin/b1.cgi; NEXT_LINK"`. This renders the content of a cgi script that I created, and moves the cursor to the next link (so I can then press ``, type anything I want to search for, press `` to quit editing the text field, press `` to focus the Submit button, and press `` again to submit... easy, uh?). Anyway, this is the content of my cgi script (it's a simple wrapper for [bunny1](

      #!/usr/bin/env bash
      set -euo pipefail
      cat <<EOF
      bunny1 wrapper
      EOF

- Another interesting thing you can do by using a custom cgi script is opening within `w3m` the URL saved in the OS clipboard. How? 1) read the content of the clipboard (assuming it's a URL), 2) tell `w3m` to go to that URL (yes, your cgi script can send commands to `w3m`):

      #!/usr/bin/env bash
      set -euo pipefail
      echo "w3m-control: GOTO $(cb --force-paste)"

- This last tip will teach you how to _copy_ the current page URL into the OS clipboard. We are going to use the `EXTERN` command for this (i.e. invoke an _external_ command and pass the current URL as the first argument) and in particular we will: 1) set `url` to the input URL, 2) print `$url` (without the trailing new-line) and 3) pipe that into [cb]( to finally copy it into the OS clipboard.

      keymap yy EXTERN "url=%s && printf %s "$url" | cb && printf %s "$url" &"

Few interesting articles I found when setting this all up:

- Bunch of `w3m` related tricks:
- A vim-like firefox-like configuration for w3m:

---

Finally decided to give [Alacritty]( a go, and damn... now I wish I had tried it earlier: scrolling in `vim` inside a `tmux` session is finally fast again! It supports `truecolors` too, so what else could I ask for?

Well, one thing maybe I would ask for: [support for multiple windows]( We can _multi-instantiate_, and that's fine, but those instances would not be recognized as being part of the same group (i.e. multiple entries in the dock, no support for `cmd + backtick`)... and that would make it really hard for people to set up keystrokes to focus a specific instance of a group (currently with ``, I have a mapping to focus the group, and then automatically inject: `cmd + opt + 1` to select the first instance of the group, `cmd + opt + 2` for the second...).

There is a `--title` option, but Keyboard Maestro would not let me select an application given its title (also, I do want the title of the window to be dynamic, based on the current process...).
There is a `--class` option, but it seems to work on Linux only... so what else? Well, after a bit of fiddling I finally found a solution:

    $ cp -r /Applications/ /Applications/
    $ vim /Applications/ # And change CFBundleName CFBundleDisplayName accordingly

It's ugly, I know, it's a bit like installing the same application twice, with a different name, but who cares? I mean, scrolling is damn fast now <3

PS. remember to re-sync these '/Applications' directories the next time a new release of `alacritty` is published.

# 2020-03-22

Re-set up SSH on my W10 laptop:

1. Re-install: `cygrunsrv`, and `openssh`
2. Open a cygwin terminal, **as-administrator**, and run:

       $ ssh-host-config -y
       $ vim /etc/sshd_config # make sure 'PubkeyAuthentication yes' is there, and not commented out
       $ cygrunsrv --start cygsshd

3. Add a Firewall rule to let in all incoming connections on TCP: 22

Note: if you do this on a company laptop, `sshd` will try and contact the company's AD when authenticating connections, so make sure your laptop is connected to the Intranet.

Links:
-
-

# 2020-03-23

* finished reading [Node.js 8 the Right Way: Practical, Server-Side JavaScript That Scales](

# 2020-03-24

Today I learned you can quickly set up SSH proxies by adding the following to your ~/.ssh/config file:

    Host
      ProxyCommand ssh -q nc %h %p

With this, when we connect to the server by git push, git pull or other commands, git will first SSH to As the ssh client will check the config file, the above rule makes it set up a proxy by SSHing to and relaying the connection to %h ( with port %p (22 by default for SSH) via nc (you need to have nc installed on the proxy). This way, the git connection is forwarded to the git server.

And why would this be relevant? Think about this scenario:

1. Main laptop: decent CPU / memory, connected to the home WIFI, and company VPN
2.
Dev laptop: good CPU / memory, connected to the home WIFI, but not to the company VPN (because you know, you don't want to connect and re-enter auth tokens... **twice**)

Well, with this proxy trick set up, you would still be able to access configured internal services (e.g. git) from the dev laptop without it being connected to the company VPN. Quite handy!

---

Something is messing up my bash prompt when running it inside Alacritty:

1. Open a new instance

       $ open -n /Applications/ --args --config-file /dev/null

   The new instance is opened, and the configured prompt is printed:

       matteolandi at hairstyle.local in /Users/matteolandi
       $ █

2. Press `up-arrow` to access the last executed command:

       matteolandi at hairstyle.local in /Users/matteolandi
       $ open -n /Applications/ --args --config-file /dev/null█

3. Press `ctrl+a` to get to the beginning of the line

   Expected behavior: the cursor moves to the beginning of the line

       matteolandi at hairstyle.local in /Users/matteolandi
       $ █pen -n /Applications/ --args --config-file /dev/null

   Actual behavior: the cursor stops somewhere in the middle of the line

       matteolandi at hairstyle.local in /Users/matteolandi
       $ open -n█/Applications/ --args --config-file /dev/null

Note 1: if I now press `ctrl+e` (to get to the end of the line), the cursor gets past the current end of the line:

    matteolandi at hairstyle.local in /Users/matteolandi
    $ open -n /Applications/ --args --config-file /dev/null █

Note 2: if I hit `ctrl+c` (to reset the prompt), press `up-arrow` again, and start hammering `left-arrow`, it eventually gets to the beginning of the line:

    matteolandi at hairstyle.local in /Users/matteolandi
    $ open -n /Applications/ --args --config-file /dev/null
    ^C
    matteolandi at hairstyle.local in /Users/matteolandi
    130 > █pen -n /Applications/ --args --config-file /dev/null

And if I now press `ctrl+e` (again, to get to the end of the line), the cursor does actually move to the end of the line:

    matteolandi at hairstyle.local in /Users/matteolandi
    130 > open -n /Applications/ --args --config-file /dev/null█

I opened a new [ticket]( on the project's GitHub page -- hopefully someone will get back to me with some insights.

# 2020-03-26

My reg-cycle plugin is not quite working as expected... sometimes you yank some text... then yank something else; but when you try to paste what you yanked the first time (`p` followed by `gp`), other stuff gets pasted instead (like text previously deleted).

Few things to look at:

- `:help redo-register`
-

? fix reg-cycle

# 2020-03-27

Fucking figured out why my `PS1` was breaking input with Alacritty:

It turns out I was using non-printing characters in the prompt... without escaping them! Hence `bash` was not able to properly calculate the length of the prompt, hence `readline` blew up!

You can read more about escaping non-printing characters here:

# 2020-04-04

A new version of Alacritty has been released (0.4.2), so time to do the _dance_:

1. Update the main app:

       brew cask install --force alacritty

2. Do the dance:

       cd /Applications
       for suffix in fullscreen main scratch w3m; do
         target=Alacritty-$suffix
         echo " -> $"
         rm -rf $
         cp -rf $
         echo "Patching $"
         sed -i "s/Alacritty/$target/" $
         echo "Done"
         cat $ | grep VersionString -A1
       done
       cd -

That's it!

---

Been working on a little tool called `ap`: the activity planner.
You feed it a list of **activities** (ID, effort, and dependencies), and a list of **people** (ID, allocation, and whether they are already working on a specific activity):

    activity act-01 5
    activity act-02 3
    activity act-03 5
    activity act-04 3
    activity act-05 3
    activity act-06 3
    activity act-07 3
    activity act-08 5
    activity act-09 3
    activity act-10 5
    activity act-11 5
    activity act-12 5 act-03
    activity act-13 10
    person user-01 80 act-01
    person user-02 20 act-02
    person user-03 60 act-03
    person user-04 80 act-04
    person user-05 60 act-05
    person user-06 20 act-06
    person user-07 80

And it will output what it thinks the _best_ execution **plan** is:

    act-04 0 3.75 user-04
    act-05 0 5.0 user-05
    act-01 0 6.25 user-01
    act-03 0 8.333333 user-03
    act-13 0 12.5 user-07
    act-06 0 15.0 user-06
    act-02 0 15.0 user-02
    act-10 3.75 10.0 user-04
    act-08 5.0 13.333333 user-05
    act-07 6.25 10.0 user-01
    act-09 8.333333 13.333333 user-03
    act-11 10.0 16.25 user-04
    act-12 10.0 16.25 user-01

Where each line tells:

- The ID of the activity (column 1)
- When _someone_ will start working on it (column 2)
- When the work will be done (column 3)
- Who this _someone_ actually is (column 4)

This also tells us that all the planned activities will be completed after `16.25` days of work. Now, actual days of work can be converted into calendar days, week-ends can be excluded...

    act-04 2020-03-30 2020-04-02 user-04
    act-05 2020-03-30 2020-04-06 user-05
    act-01 2020-03-30 2020-04-07 user-01
    act-03 2020-03-30 2020-04-09 user-03
    act-13 2020-03-30 2020-04-15 user-07
    act-06 2020-03-30 2020-04-20 user-06
    act-02 2020-03-30 2020-04-20 user-02
    act-10 2020-04-02 2020-04-13 user-04
    act-08 2020-04-06 2020-04-16 user-05
    act-07 2020-04-07 2020-04-13 user-01
    act-09 2020-04-09 2020-04-16 user-03
    act-11 2020-04-13 2020-04-21 user-04
    act-12 2020-04-13 2020-04-21 user-01

...and with this we can then plot a Gantt chart and, I don't know, maybe make some decisions about it.
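The working-days-to-calendar-dates conversion above can be sketched like this (a hypothetical Python re-implementation, not `ap`'s actual code; the truncation of fractional offsets is inferred from the sample output, where offset 3.75 from Monday 2020-03-30 lands on Thursday 2020-04-02):

```python
from datetime import date, timedelta

def working_days_to_date(start, offset):
    """Map an offset expressed in working days to a calendar date,
    skipping Saturdays and Sundays. Fractional offsets are truncated
    (assumption inferred from ap's sample output)."""
    current = start
    for _ in range(int(offset)):
        current += timedelta(days=1)
        while current.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
            current += timedelta(days=1)
    return current

start = date(2020, 3, 30)  # a Monday
print(working_days_to_date(start, 3.75))   # act-04's end -> 2020-04-02
print(working_days_to_date(start, 16.25))  # whole plan   -> 2020-04-21
```

Both printed dates match the sample plan above (act-04's end date, and the 2020-04-21 end date of the last two activities).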
Finding the optimal execution plan turned out to be more complicated than anticipated; in fact, the moment I remove activity pre-allocations from the example above, `ap` blows up and crashes (apparently the search space becomes too large). _There must be a problem with your implementation_ you might argue, and I would not disagree; the reason why I am putting this into writing is so I can clarify the problem in my mind (never heard of rubber-ducking before?!).

Anyway, `ap` internally uses A*[1] to find the best execution plan; for the sake of readability, we can say that each state object is just a list of `workers`, where each worker is defined by two fields:

- `been-working-on`: the list of activities completed _so far_
- `free-next`: when the worker will next be available to pick up something else

With this, we can easily project our `target-date` (i.e. when all the in-progress activities will be completed) by simply taking the `max` of all the workers' `free-next` values.

But what's the **cost** to move from one state to another? Well, that depends on:

1. The activity itself (i.e. its `effort`)
2. Who is to claim that activity (i.e. someone with a _high_ `allocation` value will complete the activity in less time)
3. When all the activities claimed so far will be completed

For example, let's imagine:

- the current `target-date` (before the worker claims the new activity) is '13'
- the current worker's `free-next` value is '10'
- and it will take them '5' days to complete the newly claimed activity

In this case, the _new_ target date (again, _max of all the `free-next` values_) will be '15', so the cost of moving into this new state is `15 - 13 = 2`; if on the other hand `target-date` was '17', then the cost to get into this new state would have been '0' (i.e. the _projected_ target date would not have changed).

So far so good... I hope.
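The cost computation just described can be condensed into a tiny sketch (the names are mine, not `ap`'s): the cost of a worker claiming an activity is how much the projected target-date moves, which can be zero.

```python
def claim_cost(target_date, free_next, duration):
    # How much does the projected target-date move when a worker,
    # free again at `free_next`, claims an activity lasting `duration`?
    new_free_next = free_next + duration
    new_target_date = max(target_date, new_free_next)
    return new_target_date - target_date

print(claim_cost(13, 10, 5))  # -> 2: the target date moves from 13 to 15
print(claim_cost(17, 10, 5))  # -> 0: 15 still fits within the old target date
```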
How can we now _speed up_ things a bit, and tell A* which states to evaluate first, so that `ap` does not blow up when the search space increases? That's where the heuristic comes into play -- as a reminder, in order for A* to find the optimal solution, the heuristic function has to be _admissible_ [2], i.e. it should never _overestimate_ the cost required to reach the target state.

Let's look at a couple of examples:

1. Sum of all the remaining activities' `effort`. This assumes that all the remaining activities are completed one after the other (and not in parallel), so it's easy to understand how this would end up overestimating the remaining cost. This heuristic is clearly non-admissible.
2. Sum of all the remaining activities' `effort`, over the total number of `workers`. This is a bit better than the previous one, as at least it assumes that work can actually happen in parallel; however, not **all** of it can actually be done in parallel, so this too is doomed to overestimate under certain circumstances.

Please note that using a non-admissible heuristic does not necessarily imply A* will stop _working_; it will probably find a solution, sometimes even the optimal one, but it really depends on the problem, your input, and how lucky you are. For example, let's see what happens when we try and use the first heuristic:

    (defun heuristic (activities state)
      (let* ((effort-remaining (effort-remaining activities state)))
        effort-remaining))

With this, `ap` would still return '16.25' as the closest completion date; but when I try that without taking pre-allocations into account, it returns '22.5' (clearly a success, given that the planner is not blowing up anymore, but the solution found is _sub-optimal_).

Is there an **admissible** heuristic that we can use with this problem? Have I modelled the problem poorly, and painted myself into a corner?
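The two candidate heuristics above can each be written down in a couple of lines (again a sketch with made-up names, not `ap`'s code); the comments spell out why each one can overestimate:

```python
def h_sequential(remaining_efforts, n_workers):
    # Heuristic 1: pretend the remaining activities happen one after
    # the other; overestimates whenever any work happens in parallel.
    return sum(remaining_efforts)

def h_parallel(remaining_efforts, n_workers):
    # Heuristic 2: pretend the remaining work is spread perfectly
    # across all workers; can still overestimate, e.g. when the
    # remaining activities all fit before the current target-date
    # and the true remaining cost is therefore 0.
    return sum(remaining_efforts) / n_workers

remaining = [5, 3, 5]
print(h_sequential(remaining, 7))  # 13
print(h_parallel(remaining, 7))    # 13/7 ~= 1.857
```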
[1]*_search_algorithm
[2]

# 2020-04-05

Put together a little script to generate the RSS feed URL given a YouTube page. YouTube does not include such URL in the HTML page, but it's easy to guess it (thanks to [1]):

1. Go to the YouTube channel you want to track
2. View the page's source code
3. Look for the following text: channel-external-id
4. Get the value for that element (it'll look something like UCBcRF18a7Qf58cCRy5xuWwQ)
5. Replace that value into this URL:

Here you go:

    #!/usr/bin/env bash

    set -euo pipefail

    URL=${1?YouTube URL required}

    channel_id=$(curl "$URL" \
        | grep --only-matching --extended-regexp 'data-channel-external-id="[^"]*"' \
        | head -n1 \
        | cut -d\" -f2)
    echo "$channel_id"

[1]

---

Very interesting (and quite lengthy) Q&A session on remote working, held by the Basecamp guys:

# 2020-04-07

Another interesting video from the Basecamp guys, on how they use Basecamp to run Basecamp. Check it out:

Home:
- 3 sections: HQ, teams, projects
  - HQ i.e. company space
  - Teams i.e. teams, or departments (how people are organized)
  - Projects i.e. how work is organized
- each of these sections has **tools**
- each person has access to HQ, and only those teams and projects they have been invited to

Tools:
- Message board
  - project/team/company announcements: so when a new joiner is added, they can easily browse past announcements
- Todos
  - Lists (or scopes, or chunks of work)
  - Each list has items
  - An item can:
    - be assigned to someone
    - notify a list of people when done
    - have a due date assigned
    - and store additional notes
- Hill-chart
  - simple way to understand _where things stand_, at a given time (not based on number of tasks completed/remaining, as tasks are not linear)
  - two sides: left, figuring things out (lots of unknowns), right, making it happen (just need time to implement it, but no blockers in sight)
  - each dot is a scope of work, and people can drag it around the hill curve
  - scopes are dragged manually, not systematically...
we want to exercise people's gut feelings
  - you can roll back a project and see how things moved on the hill-chart, over time
- Docs & Files
  - Dropbox, Google Drive ...
  - Can create folders
  - Can _color_ documents -- should you need some clues
- Campfire
  - realtime chat, built-in
- Schedule
  - project/team/company shared calendar
- Automatic Check-ins
  - scheduled questions sent to everybody in the company
  - Click on someone's name, and it will pull all their answers to a specific automatic check-in (it works similarly for other features too)

General:
- Everything is _commentable_
- Almost everything has a deep-link associated with it, so you can share it with anyone, and if they have the right permissions they will be able to see it
- You can leave **boosts** (like reactions, or 16-character-long comments) on messages or on comments
- From each team, or each project, if you scroll down past the Tools section, you get access to the **whole** history of the team/project. Organized stuff at the top (your tools), scroll through history down the bottom
- The same **history** concept exists for the whole **company**. There is a "Latest activity" section where you can see what's happening, in realtime, across all the teams, across all the projects
- There is a "My Schedule" section, that lists all the events that you are involved with; as with the Schedule tool, you can get a link and display this information inside your personal Google Calendar, or whatever
- You can open someone's profile, and access all that is on their plate (i.e. all the to-dos currently assigned to them)
- You can open someone's profile, and access all of their activities... not just to-dos, but comments too

Basecamp for Basecamp:
- Discuss multiple things in a **team**... but very specific ones inside **projects**
- If what you want to do can be summarized in one to-do item, or a message...
then it goes into a team; if not, then it goes into a project
- Pretty much all the **teams** have everybody on them -- very open company. Only a few teams have a few people only -- mostly for executives to discuss strategy.
- Tons of projects; they usually last for 1 or 2 weeks (once completed, you can archive them). For example: every new release of the iOS Basecamp app has a new project, with to-dos, hill-chart and everything
- They have a project called OUT, with only the Schedule tool turned on, where people tell when they will **not** be working. The nice thing about the Schedule tool is that you can import that into Google Calendar, Outlook, iCal.
- Idea pitches are completely managed asynchronously on Basecamp: big writeup on the message board (usually the _deep_ thinking is done by one person), with enough details for programmers to be able to come up with high level estimates, but that's it (too many details, at this stage, are almost a waste of time, especially if the idea does not turn out to be a good one).
- How to manage contractors?
  1. create a new company (otherwise they would have access to your HQ)
  2. invite them to projects

HQ:
- Announcements
  - Everyone has to see this stuff
- All Talk
  - Chat-room, social stuff, everyone has access to it
- Dates
  - Company coming/past meetups
- Reference Materials
  - Policies
  - Logos for products
  - National Holidays
- 5x12s: every month, 5 random people, 1h to talk about stuff which is not work -- all of this is transcribed
- Fan mail
  - Forward mail into Basecamp, so it shows up there
- Scheduled check-ins:
  - Every Monday morning: what did you do this weekend?
  - Every month: what are you reading?
-- helps build a culture, especially for remote teams

Example project: Hey (a new email platform they are launching):
- 2 developers, but everyone invited to the project
- Tools: Message board, To-dos, Campfire
- All starts with a _shaping-doc_ (read more at:
  - high level what the project is about
  - edge cases maybe
  - few low-fidelity mockups
  - the team creates the tasks
- To-dos
  - Mixed to-dos for the programmer and the designer
  - Different scopes (lists of to-dos), like 'sign-up flow'

Another example project: 'What Works [2020]' project (one project per year):
- Message board
  - Heartbeats: every team lead, every 6 weeks (they run 6-week dev cycles), writes about what they have done over the last cycle
  - Kickoffs: every team lead, every 6 weeks, writes about what they will be working on
  - Just summaries for the rest of the company to read, and stay up to date
  - One heartbeat / kickoff per team, pretty much
- Automatic check-ins
  - Every Monday, 9am: what will you be working on this week? Rough estimate of what we think we will be working on
  - Every day, at 4:30pm: what did you do today? Not just a list of to-dos... as most of the time, what you are working on does not have a to-do attached. Own words, 15 minutes tops
  - Managers can stop asking people for updates, and read them instead
- Schedule
  - When 6-week cycles start and end
- To-dos

# 2020-04-08

Is it possible to implement Basecamp's automatic check-ins with Teams? I think it is, with some duct tape of course.

- Each Teams channel has an email address associated with it; you send an email to it, and a few seconds later that message pops up on Teams
- A Jenkins job can be configured to run periodically (e.g. every day @ 9am)
- You can easily send email from the command line as follows: `echo 'body' | mutt -s 'Subject'`

So...

1. Gather your Microsoft Teams' channel email
2. Set up your script:

    #!/bin/bash -ex

    /bin/mutt -s 'Automatic check-in: What will you be working on this week?'
$TO_EMAIL <

A2 (depends on)
P1: A1
P2: A2

P2 cannot start working on A2 until A1 is finished, so the best is T(A1) + T(A2), while I suspect the algo is going to spit out max(T(A1), T(A2))

# 2020-04-11

Ode to [checklists](

Good checklists are:
- Short enough to memorize
- Only include the key points

For example (quite relevant these days):
- Use soap
- Wet both hands completely
- Rub the soap until it forms a thick lather covering both hands completely
- Wash hands for at least 20 seconds [not included in this study, but we know now it takes at least 20 seconds to break down viral bugs including Coronavirus so I’m putting it here for posterity]
- Completely rinse the soap off

Other examples, closer to the software development industry:

Before asking peers to review your pull request, make sure that:
- [ ] Successfully tested on Jenkins
- [ ] Product team has signed off the implemented workflow
- [ ] Bonus: Screenshots/screencast demo included

Before approving a pull request and merging that into master, make sure that:
- [ ] Code is readable
- [ ] Code is tested
- [ ] The features are documented
- [ ] Files are located and named correctly
- [ ] Error states are properly handled
- [ ] Successfully tested on Jenkins

Before pushing a new version out, make sure that:
- [ ] Look into failing tests (if any), and confirm if intermittent
- [ ] Announce on whichever social media you are using, the main features / bugfixes / changes being released

After a successful release, make sure that:
- [ ] Do some smoke-test to confirm it's all good
- [ ] Run any post-release action (e.g.
database migration, if not automatic, support tickets updated)
- [ ] Announce on social media that the release has completed successfully
- [ ] Archive completed stories

When adding a new Defect to the backlog, make sure it includes:
- [ ] Description
- [ ] Steps to reproduce
- [ ] Expected behavior
- [ ] Actual behavior
- [ ] Bonus: Screenshot/video demonstrating the bug
- [ ] Bonus: any additional pre-analysis

I believe we can go on forever, but this seems promising.

# 2020-04-12

* finished reading [The 4-Hour Work Week: Escape the 9-5, Live Anywhere and Join the New Rich](

# 2020-04-13

Flipped the order of notes in my .plan file... again!

The main reason why I switched from _most-recent-last_ to _most-recent-first_ last August was that I thought that would make it easier for a casual reader to access the most recent entry; however, that made it really difficult for someone to dump the whole file and read it in chronological order.

Anyway, now that I can create an RSS feed off my .plan file (see:, there is no more need for adding new notes at the top of my .plan file, so I decided to flip the order of my notes back to _most-recent-last_.

This is what I used last August to reverse the order of all the notes:

    " Select all the end of lines *not* preceding a new day note
    /\v\n(# [0-9]{4})@!
    " Change all the end of lines with a § -- an unlikely-to-be-used character
    :%s//§
    " Sort all the collapsed notes (! is for reverse, see :help sort)
    :'<'>sort!
    " Change § back to \r
    :%s/§/\r

It still does not properly support the scenario where a `§` character is being used in a note, but I checked my plan file before running it, and it's going to be all right.

---

* finished reading [It Doesn't Have to Be Crazy at Work](

# 2020-04-16

Source: how to remember tar flags

    🎏
    tar -xzf 👈 eXtract Ze Files!
    tar -czf 👈 Compress Ze Files!

This never gets old!
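Back to the note-flipping above: outside of Vim, the same reversal can be sketched in a few lines of Python (`flip_notes` is a name I made up) by splitting on the `# YYYY-MM-DD` headers, which also sidesteps the `§` placeholder trick entirely:

```python
import re

def flip_notes(text):
    # Split the .plan file into day notes (each one starts with a
    # '# YYYY-MM-DD' header) and emit them in reverse order.
    notes = re.split(r'\n(?=# \d{4}-\d{2}-\d{2})', text)
    return '\n'.join(reversed(notes))

plan = "# 2020-04-12\nfirst note\n# 2020-04-13\nsecond note"
print(flip_notes(plan))  # the 2020-04-13 note now comes first
```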
# 2020-04-18

Wanted to try and set up some _automatic check-ins_ for the team; as people work remotely, I would like to experiment with reducing the number of scheduled meetings, but before implementing that and, say, killing every daily standup and having people write their daily status update, I thought we should first give people a place where they could talk about anything, and stay up to date with each other's lives (if they want).

So rather than start asking people what they will be working on during the coming week, I thought we should start with something lighter, something social, something like: how was your weekend?

The task is quite easy: every Monday morning at 8am, have Jenkins fire an email (later published on Teams) to ask people: what have you guys done over the weekend? People into sharing would then reply to the thread, or simply read what others have been up to. Nothing crazy, nothing creepy, just a bit of sharing to make the people we work with a bit more interesting.

All right, let's talk implementation:

> every Monday morning at 8am

We have people working from: Jakarta, Noida, Hyderabad, Pisa, Scotland, and New York. So what does that 8am really mean here? Well, ideally we would want people to process this as soon as they get in (i.e. skim through inbox / Teams to see if there is anything critical, then spend a few minutes _sharing_), so the notification needs to be there when Jakarta people get in.

> have Jenkins

Jenkins lets you easily schedule when jobs need to run (Build Triggers > Build periodically > Schedule), and it supports custom timezones too:

    TZ=Asia/Jakarta

    # Every Monday morning, at 8am
    # MINUTE HOUR DOM MONTH DOW
    0 8 * * 1

> fire an email ...
Nothing could have been easier: on a Jenkins job configuration page, Build > Add build step > Execute shell, then copy the following into the 'Command' field (replace `$TO_EMAIL` with the email address associated with the specific MSFT Teams channel):

    #!/bin/bash -ex

    /bin/mutt -s 'How was your weekend?' $TO_EMAIL < git

#
#

So here we manually source the file, if available:

    if [ -f /usr/share/bash-completion/completions/git ]; then
        source /usr/share/bash-completion/completions/git
    fi

    function g() { git "$@"; }
    __git_complete g __git_main

# 2020-05-22

? book: tools for thought

# 2020-05-23

Simplified my poor man's yankstack implementation to use `localtime()` instead of timers. The idea is pretty simple; every time you invoke it:

- Take a timestamp
- Check that timestamp against another one (a buffer variable) representing when the function was last executed
- If the two timestamps are more than 5 seconds apart, it means the user wants to start from register 0, otherwise 1, 2, 3...

---

Changed SendToTerminal (STT) to strip out command prompts (if configured).

Sometimes you just want to _send_ a line like the following one, without worrying about stripping the `$ ` bit out of it:

    $ fortune

So I changed STT to automatically strip it out for me!

# 2020-05-24

Random quote from a podcast I recently listened to:

> A lot of people say, just do a support model... and I think everybody in the business room looks at this and laughs... because you don't trade time for money... only poor people do that... rich people, they trade products for money.

# 2020-05-25

On indenting lisp files with Vim.

First things first, Vim has Lisp mode (`:h lisp`), and that alone should give you good enough indentation for lisp files.
    au FileType lisp setlocal lisp

However, Vim's default way of indenting s-expressions is not always perfect, so if you happen to use [Vlime]( (and if you don't, I highly recommend that you do), then you can switch to Vlime's `indentexpr` by simply disabling lisp mode (the plugin will do the rest):

    " Weirdly, you have to set nolisp for (the correctly working) indentexpr to work.
    " If lisp (the vim option) is set, then it overrules indentexpr.
    " Since an indentexpr is set by vlime (and working!) it would be great to add nolisp to the default configuration, I guess.
    au FileType lisp setlocal nolisp

Lastly, you can go back to using lisp mode, but tell Vim to invoke an external program like [scmindent]( when indenting s-expressions (make sure `lispindent` is in your `$PATH`):

    au FileType lisp setlocal lisp
    au FileType lisp setlocal equalprg=lispindent

You might prefer this last approach, especially when collaborating with others, as it does not rely on a specific combination of editor / plugin, but on an external tool that others can easily configure in their workflow.

A few final words about Vim `lispwords` (`:h lispwords`), and in particular a question that has bugged me for quite some time: do you still need to set them up when using Vlime's `indentexpr`, or an external program, to indent your lisp files? Yes, especially if you care about Vim properly indenting s-expressions as you type them: `indentexpr` is only used in conjunction with the `=` operator (i.e. when manually indenting blocks of code), while `lispwords` are used when typing in new code.

# 2020-05-29

~ small utility to create RSS feed URL given another URL (i.e. give it a YouTube URL, and it will generate a feed for it; give it a GitHub URL, and it will generate a feed for it)

# 2020-05-31

? add a Vim operator for neoformat: this requires the current formatter to support reformatting of partial areas, in which case, one could surround the area to re-format with markers (e.g.
comments, that the formatter is likely to ignore), and then use these markers to extract the formatted area of interest and stick it back into the buffer

# 2020-06-01

Been finally giving [vim-sexp]( a try, and I gotta be honest, I am quite liking it.

I have also been reading about _structural code editing_, and landed on this [SO page]( about _barfing_ and _slurping_:

> slurping and barfing are the essential operations/concepts to use one of the modern structural code editors. after getting used to them I'm completely incapable of editing code without these. Of the ~20 people that sit with me writing clojure all day all of them use these all the time. so saying that they are "helpful for lisp coders" is a very tactful and polite understatement.
>
> slurp: to include the item to one side of the expression surrounding the point into the expression
>
> barf: to exclude either the left most or right most item in the expression surrounding the point from the expression

That's an alright definition, but when would you actually use it? Another [answer]( to the same question tries to shed some light on this:

> so a session might be:
>
>     (some-fn1 (count some-map))
>     (some-fn2 (count some-map))
>
> aha, a let could come in here to refactor the (count some-map):
>
>     (let [c (count some-map)]|)
>     (some-fn1 c)
>     (some-fn2 c)
>
> But the let isn't wrapping the 2 calls, so we want to pull in (slurp) the next 2 forms inside the let s-exp, so now at the cursor position, slurp twice which will give after first:
>
>     (let [c (count some-map)]
>       (some-fn1 c)|)
>     (some-fn2 c)
>
> and then on second:
>
>     (let [c (count some-map)]
>       (some-fn1 c)
>       (some-fn2 c)|)

It all starts to make sense.

Let's now have a look at Tpope's [vim-sexp-mappings](, and in particular what he thought some sane mappings should be for slurpage and barfage:

    " Slurping/Barfing
    "
    " angle bracket indicates the direction, and the parenthesis
    " indicates which end to operate on.
    nmap <( <Plug>(sexp_capture_prev_element)
    nmap >) <Plug>(sexp_capture_next_element)
    nmap >( <Plug>(sexp_emit_head_element)
    nmap <) <Plug>(sexp_emit_tail_element)

So going back to the example above, where the cursor was on the outermost closed paren:

    (let [c (count some-map)]|)
    (some-fn1 c)
    (some-fn2 c)

To slurp the next expression in, you would end up pushing the closing paren the cursor is on to the right (past the next element), and the chosen mapping, `>)`, really tries to capture that...

...again, it all starts to make sense...

---

* Changed `lispindent` to support reading ./.lispwords:
* Changed `lispindent` to properly format FLET, MACROLET, and LABELS forms:

# 2020-06-07

* Changed my `lispindent` to read `.lispwords` from the current directory, from its parent, from its parent's parent... until the process gets past the user home directory.

* scmindent: it leaves trailing empty spaces when keywords are surrounded with empty lines
* scmindent: skip commented out lines

* Created Vim plugin, vim-lispindent, to:
  1. automatically `set equalprg` for lisp and scheme files,
  2. parse `.lispwords` and
  3. set Vim's `lispwords` accordingly

? vim-lispindent: download `lispindent` if not found on $PATH
~ vim-lispindent: support rules with LIN = 0 (we want to remove these from `lispwords`)
? vim-lispindent: README + doc/
? cg: pass the last executed command (e.g. '$!') and use that to _influence_ the order of suggested commands to execute
? scmindent: cases inside `HANDLER-CASE` are not _properly_ indented

# 2020-06-11

TIL: only the Quartz Java scheduler (i.e. the one used by Jenkins)[0] supports the H character to _evenly_ distribute jobs across a defined time range [1]

[0]
[1]

# 2020-06-20

What?!

    $ curl
    Weather report: Pisa, Italy \ / Partly cloudy _ /"".-. 20 °C \_( ).
← 6 km/h /(___(__) 10 km 0.0 mm ┌─────────────┐ ┌──────────────────────────────┬───────────────────────┤ Sat 20 Jun ├───────────────────────┬──────────────────────────────┐ │ Morning │ Noon └──────┬──────┘ Evening │ Night │ ├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤ │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ │ _ /"".-. 22 °C │ _ /"".-. 24..25 °C │ _ /"".-. 22..25 °C │ _ /"".-. 20 °C │ │ \_( ). ↑ 6-8 km/h │ \_( ). → 10-12 km/h │ \_( ). → 12-15 km/h │ \_( ). → 9-14 km/h │ │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ └──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘ ┌─────────────┐ ┌──────────────────────────────┬───────────────────────┤ Sun 21 Jun ├───────────────────────┬──────────────────────────────┐ │ Morning │ Noon └──────┬──────┘ Evening │ Night │ ├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤ │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ │ _ /"".-. 23..24 °C │ _ /"".-. 25..26 °C │ _ /"".-. 23..25 °C │ _ /"".-. 22 °C │ │ \_( ). ↑ 4-5 km/h │ \_( ). → 9-10 km/h │ \_( ). → 11-14 km/h │ \_( ). 
↘ 6-10 km/h │ │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ └──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘ ┌─────────────┐ ┌──────────────────────────────┬───────────────────────┤ Mon 22 Jun ├───────────────────────┬──────────────────────────────┐ │ Morning │ Noon └──────┬──────┘ Evening │ Night │ ├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤ │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ │ _ /"".-. 25..27 °C │ _ /"".-. 27..28 °C │ _ /"".-. 24..26 °C │ _ /"".-. 22 °C │ │ \_( ). ↑ 4-5 km/h │ \_( ). → 8-10 km/h │ \_( ). → 10-14 km/h │ \_( ). ↘ 6-11 km/h │ │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ └──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘

This is pretty dope!

# 2020-06-30

? ap: parsing errors are not logged to stdout... and there is no exit status either
? book: the medici effect
? book: the innovator's dilemma

# 2020-07-01

? cb.vim: can it work like a standard register?
+ cb.vim: vim-fugitive: ["x]y copies the current git-object, but how can I do the same with ,y and send it to cb?

# 2020-07-03

A little bit off-topic from the usual stuff I write about, but I just want to drop here some links to videos about DJ techniques (don't ask why I got into these):

* Strobing/Strobe Technique:
* Another video about Strobing/Chasing:
* Back-spinning:
* Finally the original DJ battle video that got me into this rabbit hole:

---

TIL: Ansible won't re-create a container when you remove one of its ENV variables -- and apparently, that's by design!
Apparently Ansible would not recreate the container when an ENV variable is removed, because the remaining ENV variables are still set, so its comparison still holds true. The work-around is to inject a dummy ENV variable, and keep on updating its value every time we want to force Ansible to recreate the container (for whatever reason).

# 2020-07-04

Bit of vim g* niceties:

    g8: show the UTF-8 byte values of the character under the cursor
    g<: see the output of the last command
    gF: go to file, and line number (./path/file.js:23) -- handy if you have a stack-trace in a buffer
    gv: re-select the last selection
    gi: go back to the last insert location
    gq: wrap command (e.g. break at the specified text-width)

# 2020-07-06

Rails Conf 2013 Patterns of Basecamp's Application Architecture by David Heinemeier Hansson:

HTML: document based programming, or delivery mechanism? Google Maps vs blogs (and to some extent, SPAs vs server-side rendered applications, or hybrid applications)

To make things fast:

1. Key-based cache expiration (or generational caching)
   - Model name + id / "new" + last-updated-at (if available)
2. Russian Doll nested caching
   - A comment is inside a post, a post is inside a project
   - When you change a comment, the system automatically touches the parent, then the parent touches its parent... (where "touches" really means: update the last-modified-date timestamp)
   - The MD5 of a partial is included in the cache key, which means every time you update a partial, the new view will be rendered (and cached again); if the partial is not updated, the old, cached value would still be safe to use
   - Templates are inspected, and caches are invalidated based on `render` calls
   - What if you want to show local time? You add UTC to the template, add a `data-time-ago` attribute, and inject some JavaScript on the client to convert the UTC value into a local one
   - What if you have admin links (Edit, Delete, Copy, Move) which are only visible to a subset of users?
We cannot access the logged-in user when rendering/caching the view, so a similar solution to the problem above: annotate the link with `data-visible-to`, and then some JavaScript will take care of removing the links which are not visible
3. Turbolinks process persistence
   - With Turbolinks, your app could be as fast as the backend is able to serve responses: 50/150 ms? The experience is great; higher than that, and you are not going to notice much of a difference from a complete page-refresh.
4. Polling for updates

# 2020-07-12

TIL: `git` fixups / autosquashes:

---

* finished reading [Sails.js in Action 1st Edition](

# 2020-07-14

* finished reading [Lisp for the Web](
* finished reading [Lisp Web Tales](

---

RailsConf 2016 - Turbolinks 5: I Can’t Believe It’s Not Native! by Sam Stephenson:

Basecamp 3:
- 18 months
- 200+ screens
- 5 platforms (desktop web, mobile web, android native, ios native, and email)
- 6 rails devs, 2 android, 2 ios
- built 3 open source frameworks: Trix (editor), Action Cable, and Turbolinks 5

Turbolinks for iOS (or Android):
- Intercepts link-click events, and creates a new view on the stack
- There is a single WebView only, and that's shared by all the windows on the stack (smaller memory footprint)
- You can customize the user agent, and on the backend, for example, hide certain areas which would not look good in the app
- Native (kind-of) app in 10 minutes
- You can go fully native on specific screens -- hybrid approach
- You can create native menus, by simply extracting metadata from the received page

# 2020-07-15

TIL: `cal` can be instructed to display not just the current month, but also the surrounding ones, with `-$N` (if `$N` is 3, `cal` will show the current month, the previous, and the next one):

    $ cal -3
         June 2020             July 2020            August 2020
    Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
        1  2  3  4  5  6            1  2  3  4                     1
     7  8  9 10 11 12 13   5  6  7  8  9 10 11   2  3  4  5  6  7  8
    14 15 16 17 18 19 20  12 13 14 15 16 17 18   9 10 11 12 13 14 15
    21 22 23 24 25 26 27  19 20 21 22 23 24 25  16 17 18 19 20 21 22
    28 29 30              26 27 28 29 30 31     23 24 25 26 27 28 29
                                                30 31

Source:

# 2020-07-17

* finished reading [Lisp Hackers](
* finished reading [Full Stack Lisp](

# 2020-07-18

? lispindent: backquote templates are not properly indented

This is the output:

    (defmacro with-tansaction (conn &body body)
      `(let ((*connection* ,conn))
        (cl-dbi:with-transaction *connection*
          ,@body)))

While I would expect:

    (defmacro with-tansaction (conn &body body)
      `(let ((*connection* ,conn))
         (cl-dbi:with-transaction *connection*
           ,@body)))

Note how `(cl-dbi:with-transaction ...)` and its body should be indented by an additional space

# 2020-07-28

Oh boy, time-zones are a bitch!

Spent the day looking at the different DB-related libraries I am using in my web app (MITO, CL-DBI, and CL-MYSQL) trying to chase down a problem with DATE fields; these fields, even though stored with the **right** value inside the database, once _inflated_, ended up with an unexpected time-zone offset (in MITO's terminology, _inflating_ is the action of transforming data after it's read from the database, so it's easier to work with it inside the controller).

Before we dive into the details, let's make it clear that I want my web app to store DATE/DATETIME/TIMESTAMP in UTC (everyone should, really)... and here I thought the following SETF statement would get the job done:

    (ql:quickload :local-time)
    => (:LOCAL-TIME)

    (setf local-time:*default-timezone* local-time:+utc-zone+)
    => #

It all starts with the _model_ trying to save a date object; it reads it as a string from the request, converts it to a `local-time:timestamp`, and passes it down:

    (defvar date "2020-07-28")
    => DATE

    (local-time:parse-timestring date)
    => @2020-07-28T00:00:00.000000Z

MITO will then process the value twice; it will first try to _deflate_ it inside DEFLATE-FOR-COL-TYPE (_deflate_ is the opposite of _inflate_, i.e.
make a memory value ready to be processed by the database access layer) -- this step will turn out to be a no-op for us, since we are using `local-time:timestamp` already:

    (defgeneric deflate-for-col-type (col-type value)
      (:method ((col-type (eql :date)) value)
        (etypecase value
          (local-time:timestamp value)
          (string value)
          (null nil))))

Then it will apply some MySQL-specific conversions inside CONVERT-FOR-DRIVER-TYPE:

    (defgeneric convert-for-driver-type (driver-type col-type value)
      (:method (driver-type (col-type (eql :date)) (value local-time:timestamp))
        (local-time:format-timestring nil value :format *db-date-format*)))

This basically converts the `local-time:timestamp` back into a YYYY-MM-DD string:

    (defvar *db-date-format* '((:year 4) #\- (:month 2) #\- (:day 2)))
    => *DB-DATE-FORMAT*
    (local-time:format-timestring nil ** :format *db-date-format*)
    => "2020-07-28"

And this is how the data is actually getting stored inside the database. Let's have a look now at what happens when we try to read the data back.
I am using a MySQL database, and CL-DBI internally uses CL-MYSQL to actually talk to the database; CL-MYSQL in particular tries to convert all the DATE, DATETIME, and TIMESTAMP objects into CL [universal time](, and it does it with the following function:

    (defun string-to-date (string &optional len)
      (declare (optimize (speed 3) (safety 3))
               (type (or null simple-string) string)
               (type (or null fixnum) len))
      (when (and string (> (or len (length string)) 9))
        (let ((y (parse-integer string :start 0 :end 4))
              (m (parse-integer string :start 5 :end 7))
              (d (parse-integer string :start 8 :end 10)))
          (unless (or (zerop y) (zerop m) (zerop d))
            (encode-universal-time 0 0 0 d m y)))))
    => STRING-TO-DATE

Let's apply this function to the value stored inside the database and see what it returns:

    (string-to-date **)
    => 3804876000

This is what MITO receives before _inflating_ it; so let's _inflate_ this and see if we get back to where we started:

    (local-time:universal-to-timestamp *)
    => @2020-07-27T22:00:00.000000Z

What?! That's what I thought too. The first thing I tried was to change STRING-TO-DATE, and force a UTC offset when invoking ENCODE-UNIVERSAL-TIME:

    (defun string-to-date (string &optional len)
      (declare (optimize (speed 3) (safety 3))
               (type (or null simple-string) string)
               (type (or null fixnum) len))
      (when (and string (> (or len (length string)) 9))
        (let ((y (parse-integer string :start 0 :end 4))
              (m (parse-integer string :start 5 :end 7))
              (d (parse-integer string :start 8 :end 10)))
          (unless (or (zerop y) (zerop m) (zerop d))
            (encode-universal-time 0 0 0 d m y 0)))))
    => STRING-TO-DATE
    (string-to-date "2020-07-28")
    => 3804883200
    (local-time:universal-to-timestamp *)
    => @2020-07-28T00:00:00.000000Z

This is better! What happened?
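The effect is easy to reproduce at the shell before diving into the Lisp side: the very same instant renders as two different wall-clock values depending on the `TZ` environment variable, which is the same knob the runtime's built-in date functions typically consult (this sketch assumes GNU coreutils `date`; on macOS use `date -r 0` instead of `-d @0`):

```shell
# Render the Unix epoch under two different time-zones: same instant,
# different wall-clock reading -- the very offset that snuck into
# ENCODE-UNIVERSAL-TIME's output above.
TZ=UTC date -d @0 '+%Y-%m-%dT%H:%M:%S'          # 1970-01-01T00:00:00
TZ=Europe/Rome date -d @0 '+%Y-%m-%dT%H:%M:%S'  # 1970-01-01T01:00:00
```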
Basically, SETF-ing `local-time:*default-timezone*` changes the behavior of the LOCAL-TIME library, but does not change how the built-in date-related functions operate; which means that if the runtime thinks it's being run in CEST, then CEST is the time-zone it will try to apply unless instructed not to.

Alright, it kind of makes sense, but how can we fix this? Clearly we can't change CL-MYSQL and hard-code the UTC time-zone in there, as everyone should be free to use the library under whichever time-zone they like.

Enter the `TZ=UTC` environment variable.

    $ TZ=UTC sbcl
    (ql:quickload :local-time)
    => (:LOCAL-TIME)
    (setf local-time:*default-timezone* local-time:+utc-zone+)
    => #
    (defvar date "2020-07-28")
    => DATE
    (local-time:parse-timestring date)
    => @2020-07-28T00:00:00.000000Z
    (defvar *db-date-format* '((:year 4) #\- (:month 2) #\- (:day 2)))
    => *DB-DATE-FORMAT*
    (local-time:format-timestring nil ** :format *db-date-format*)
    => "2020-07-28"
    (defun string-to-date (string &optional len)
      (declare (optimize (speed 3) (safety 3))
               (type (or null simple-string) string)
               (type (or null fixnum) len))
      (when (and string (> (or len (length string)) 9))
        (let ((y (parse-integer string :start 0 :end 4))
              (m (parse-integer string :start 5 :end 7))
              (d (parse-integer string :start 8 :end 10)))
          (unless (or (zerop y) (zerop m) (zerop d))
            (encode-universal-time 0 0 0 d m y)))))
    => STRING-TO-DATE
    (string-to-date **)
    => 3804883200
    (local-time:universal-to-timestamp *)
    => @2020-07-28T00:00:00.000000Z

_Lovely time-zones..._

# 2020-08-03

Had some trouble updating my brew formulae today:

    $ brew cask upgrade java
    ==> Upgrading 1 outdated package:
    Error: Cask 'java' definition is invalid: Token '{:v1=>"java"}' in header line does not match the file name.

It seems like my java cask is [unbelievably old](

    $ rm -rf "$(brew --prefix)"/Caskroom/java
    $ brew cask install java
    Updating Homebrew...
    ==> Auto-updated Homebrew!
    Updated Homebrew from 36c10edf2 to 865e3871b.
    No changes to formulae.
    ==> Downloading
    Already downloaded: /Users/matteolandi/Library/Caches/Homebrew/downloads/4a16d10e8fdff3dd085a21e2ddd1d0ff91dba234c8c5f6588bff9752c845bebb--openjdk-14.0.2_osx-x64_bin.tar.gz
    ==> Verifying SHA-256 checksum for Cask 'java'.
    ==> Installing Cask java
    ==> Moving Generic Artifact 'jdk-14.0.2.jdk' to '/Library/Java/JavaVirtualMachines/openjdk-14.0.2.jdk'.
    🍺  java was successfully installed!

# 2020-08-09

Today I decided to spend some time scratching one of the itches I have been having lately when working with CL (Vim + Vlime): closing all the splits except for the current one, the Lisp REPL (terminal buffer), and Vlime's REPL.

When I work on a CL project, I usually arrange my splits as follows:

    +------------------+------------------+
    | file.lisp        | CL-USER>         |
    |                  |                  |
    |                  |                  |
    |                  +------------------+
    |                  | VLIME REPL       |
    |                  |                  |
    |                  |                  |
    +------------------+------------------+

Well, that's when I start working... a few minutes/hours later, it might as well look similar to the below:

    +------------+------------+-----------+
    | file.lisp  | file2.lisp | CL-USER>  |
    |            |            |           |
    |            |            |           |
    +------------+------------+-----------+
    | file1.lisp | file3.lisp | VLIMEREPL |
    |            |            |           |
    |            |            |           |
    +------------+------------+-----------+

So, how can I easily and quickly get from this messy state back to where I started, with a single file plus the terminal and Vlime REPLs? `:only` would unfortunately close all the open splits except for the current one, which would force me to create new splits for the terminal and Vlime REPLs. I ended up creating the following VimL function (note: the expression iterated by the inner `for` was lost in my notes; `tab.windows` is what `gettabinfo()` exposes, so that is what I assume it was):

    function! LispCurrentWindowPlusVlimeOnes() abort " {{{
      let focused_win = win_getid()
      for tab in gettabinfo()
        for win in tab.windows
          if win != focused_win
            let bufname = bufname(winbufnr(win))
            if bufname =~? 'vlime'
              continue
            else
              let winnr = win_id2win(win)
              execute winnr . 'close'
            endif
          endif
        endfor
      endfor
    endfunction " }}}

Which can be summarized as follows:

- Get a hold of the focused split's window ID
- Iterate over all the splits of the current tab
- If the current split is **not** the focused one
- and the loaded buffer's name does not contain 'vlime' (that will match both my terminal REPL and Vlime's one)
- then we should close the split

And that's it! Wire those mappings in (I like to override `:only` ones) and you should be ready to roll:

    nnoremap o :call LispCurrentWindowPlusVlimeOnes()
    nnoremap O :call LispCurrentWindowPlusVlimeOnes()
    nnoremap :call LispCurrentWindowPlusVlimeOnes()

# 2020-08-10

TIL: DevOps folks have [design patterns too](, like the _ambassador_ pattern, described [here]( and [here](

# 2020-08-11

+ read Structure and Interpretation of Computer Programs (SICP)

# 2020-08-16

* finished reading [Common Lisp Recipes (CLR)](

# 2020-08-18

? book: Living with Complexity

---

The JavaScript That Makes Basecamp Tick, by Javan Makhmali, 2014:

Basecamp 2:

- 40 app servers
- 100 total servers
- 1/2 PB file storage
- 1 TB database
- 1 TB caching
- 70 million requests per day
- 50-60ms Rails response times

The Asset Pipeline:

- Tool for compiling, concatenating, and compressing asset files
- -> erb first, then the coffee compiler, then javascript...
- Bunch of _require_ comments that bring in assets from somewhere else, and add them to the pipeline
- All of this will later be replaced by webpacker
- You can also configure an `asset_host`, so the generated ` ...

# 2021-05-02

Splitting routes into packages, with Caveman

If you use [caveman]( to build your Web applications, sooner or later you will realize that it would not let you create an `<app>` instance in one package, and use DEFROUTE in another. Why?
Because the library internally keeps a HASH-TABLE mapping `<app>` instances to the package that was active at the time these instances were created, and since DEFROUTE looks up the instance in this map using *PACKAGE* as key, if the current package is different from the one which was active when the instance was created then no instance will be found, an error will be signalled, and you will be prompted to do something about it:

    (defmethod initialize-instance :after ((app <app>) &key)
      (setf (gethash *package* *package-app-map*) app))

    (defun find-package-app (package)
      (gethash package *package-app-map*))

Of course I was not the first one bumping into this (see [caveman/issues/112](, but unfortunately the one workaround listed in the thread did not seem to work: it's not a matter of carrying the `<app>` instance around (which might or might not be annoying), but rather of _changing_ *PACKAGE* before invoking DEFROUTE so that it can successfully look up the instance inside *PACKAGE-APP-MAP* at macro expansion time. But clearly you cannot easily change *PACKAGE* before executing DEFROUTE, or otherwise all the symbols used from within the route definition will most likely be unbound at runtime (unless, of course, such symbols are also accessible from the package in which the `<app>` instance was created).

So where do we go from here? Well, if we cannot _override_ *PACKAGE* while calling DEFROUTE from within a different package, then maybe what we can do is _add_ *PACKAGE* to *PACKAGE-APP-MAP* and make sure it links to the instance you already created!
    (defun register-routes-package (package)
      (setf (gethash package *package-app-map*) *web*))

Add a call to REGISTER-ROUTES-PACKAGE right before your first call to DEFROUTE and you should be good to go:

    (in-package :cl-user)
    (defpackage chat.routes.sessions
      (:use :cl
            :caveman2
            :chat.models
            :chat.views
            :chat.routes))
    (in-package :chat.routes.sessions)

    (register-routes-package *package*)

    (defroute "/" ()
      (if (current-user)
          (redirect-to (default-chatroom))
          (render :new :user)))

Ciao!

# 2021-05-03

Calling MAKE-INSTANCE with a class name which is only known at run-time

In CL, how can you instantiate a class (i.e. call MAKE-INSTANCE) if all you had was a string representing its name (and the package it was defined in)?

    > (intern "chat.channels.turbo::turbo-channel")
    |chat.channels.turbo::turbo-channel|
    > (intern (string-upcase "chat.channels.turbo::turbo-channel"))
    |CHAT.CHANNELS.TURBO::TURBO-CHANNEL|
    > (read-from-string "chat.channels.turbo::turbo-channel")
    CHAT.CHANNELS.TURBO::TURBO-CHANNEL

At first I gave INTERN a go (I am not exactly sure why, but that's where my mind immediately went), but irrespective of the case of the string I fed into it, it would end up creating symbols with quoting characters (i.e. `|`s) in their names. READ-FROM-STRING on the other hand seemed to get the job done, though I would like to avoid using it as much as possible, especially if the string I am feeding it is getting generated _externally_ (e.g. sent over a HTTP request).

Well, as it turns out, if you have the string in `package:symbol` format, then you can parse out the package and symbol names and use FIND-PACKAGE and FIND-SYMBOL to get the symbol you are looking for:

    > (find-symbol (string-upcase "turbo-channel")
                   (find-package (string-upcase "chat.channels.turbo")))
    CHAT.CHANNELS.TURBO::TURBO-CHANNEL
    :INTERNAL
    > (make-instance *)
    #

PS. @zulu.inuoe on Discord was kind enough to post this utility function that he uses:

> this is a thing I use but uhh no guarantees.
> it also doesn't handle escape characters like | and \

    (defun parse-entry-name (name)
      "Parses out `name' into a package and symbol name.
    eg.
      symbol => nil, symbol
      package:symbol => package, symbol
      package::symbol => package, symbol
    "
      (let* ((colon (position #\: name))
             (package-name (when colon (subseq name 0 colon)))
             (name-start (if colon
                             (let ((second-colon (position #\: name :start (1+ colon))))
                               (if second-colon
                                   (prog1 (1+ second-colon)
                                     (when (position #\: name :start (1+ second-colon))
                                       (error "invalid entry point name - too many colons: ~A" name)))
                                   (1+ colon)))
                             0))
             (symbol-name (subseq name name-start)))
        (values package-name symbol-name)))

    > (multiple-value-bind (package-name symbol-name)
          (parse-entry-name "chat.channels.turbo::turbo-channel")
        (find-symbol (string-upcase symbol-name)
                     (if package-name
                         (find-package (string-upcase package-name))
                         *package*)))
    CHAT.CHANNELS.TURBO::TURBO-CHANNEL
    :INTERNAL

# 2021-05-06

> Is it possible to imagine a future where “concert programmers” are as common a fixture in the world's auditoriums as concert pianists? In this presentation Andrew will be live-coding the generative algorithms that will be producing the music that the audience will be listening to. As Andrew is typing he will also attempt to narrate the journey, discussing the various computational and musical choices made along the way. A must see for anyone interested in creative computing.

Andrew Sorensen Keynote: "The Concert Programmer" - OSCON 2014 ([link to video](

# 2021-05-16

Configuring lightline.vim to display the quickfix's title

In case anyone is interested in displaying the quickfix title while on quickfix buffers (works with location-list buffers too), all you have to do is load the following into the runtime:

    let g:lightline = {
          \ 'component': {
          \   'filename': '%t%{exists("w:quickfix_title")? " ".w:quickfix_title : ""}'
          \ },
          \ }

Before:

    NORMAL  [Quickfix List]  | -

And after:

    NORMAL  [Quickfix List] :ag --vimgrep --hidden --smart-case --nogroup --nocolor --column stream-name | -

Related link:

---

Lack's Redis session store does not appear to be thread-safe

Steps to reproduce:

- Load the necessary dependencies:

      (ql:quickload '(:clack :lack :lack-session-store-redis :cl-redis))

- Setup a dummy "Hello, World" application:

      (defparameter *app*
        (lambda (env)
          (declare (ignore env))
          '(200 (:content-type "text/plain") ("Hello, World"))))

- Setup the middleware chain, one that uses Redis as session storage:

      (setf *app* (lack:builder
                    (:session :store ( :connection (make-instance 'redis:redis-connection
                                                                  :host #(0 0 0 0)
                                                                  :port 6379
                                                                  :auth "auth")))
                    *app*))

- Start the app:

      > (defvar *handler* (clack:clackup *app*))
      Hunchentoot server is started.
      Listening on

- Run `wrk` against it:

      $ wrk -c 10 -t 4 -d 10
      Running 10s test @
        4 threads and 10 connections

Expected behavior:

- The test runs successfully, and the number of requests per second is reported

Actual behavior:

- Lots of errors are signaled; from unexpected errors:

      Thread: 7; Level: 1

      Redis error: NIL ERR Protocol error: invalid multibulk length
        [Condition of type REDIS:REDIS-ERROR-REPLY]

      Restarts:
       R 0. ABORT - abort thread (#)

      Frames:
       F 0. ((:METHOD REDIS:EXPECT ((EQL :STATUS))) #) [fast-method]
       F 1. (REDIS::SET "session:fdf15f46896842bcd84ef2e87d2321e97419f70e" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
       F 2. ((:METHOD LACK.MIDDLEWARE.SESSION.STORE:STORE-SESSION (LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE T T)) #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "..
       F 3. (LACK.MIDDLEWARE.SESSION::FINALIZE #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "session" :EXPIRES NIL :SERIALIZER #
        [Condition of type REDIS:REDIS-CONNECTION-ERROR]

      Restarts:
       R 0. RECONNECT - Try to reconnect and repeat action.
       R 1. ABORT - abort thread (#)

      Frames:
       F 0. (REDIS::SET "session:5b391bd4699e86f5b6e757451572c0a3ba825557" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
       F 1. ((:METHOD LACK.MIDDLEWARE.SESSION.STORE:STORE-SESSION (LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE T T)) #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "..
       F 2. (LACK.MIDDLEWARE.SESSION::FINALIZE #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "session" :EXPIRES NIL :SERIALIZER # # #) [fast-met..

A similar error was mentioned on `cl-redis` GitHub's space ([cl-redis/issues#19](, to which the author replied that one of the reasons why this might happen is when trying to share the same connection between multiple threads.

So I took a look at the implementation of the Redis store, and it looks like it is indeed sharing the same Redis connection between different HTTP requests (i.e. multiple threads). The following seems to be fixing the problem, though I would like to hear @fukamachi's view on this before submitting a pull-request:

    diff --git a/src/middleware/session/store/redis.lisp b/src/middleware/session/store/redis.lisp
    index 9e489d9..3ccfaec 100644
    --- a/src/middleware/session/store/redis.lisp
    +++ b/src/middleware/session/store/redis.lisp
    @@ -54,9 +54,8 @@
     (defun redis-connection (store)
       (check-type store redis-store)
       (with-slots (host port auth connection) store
    -    (unless (redis::connection-open-p connection)
    -      (setf connection
    -            (open-connection :host host :port port :auth auth)))
    +    (setf connection
    +          (open-connection :host host :port port :auth auth))
         connection))

     (defmacro with-connection (store &body body)

# 2021-05-21

Example of how to use [AbortController]( to cancel an action when input changes:

    let currentJob = Promise.resolve();
    let currentController;

    function showSearchResults(input) {
      if (currentController) currentController.abort();
      currentController = new AbortController();
      const { signal } = currentController;

      currentJob = currentJob
        .finally(async () => {
          try {
            startSpinner();
            const response = await fetch(getSearchUrl(input), { signal });
            await displayResults(response, { signal });
          } catch (err) {
            if ( === 'AbortError') return;
            displayErrorUI(err);
          } finally {
            stopSpinner();
          }
        });
    }

Source: [twitter](

# 2021-05-27

A few days ago Replit announced support for [every programming language](! I had never heard of [NixOS]( (that's what enables them to support "every programming language"), but I decided to give it a go anyway, and after a little bit of struggle I was able to put together the following Repls:

- [Common Lisp]( -- of course the first one had to be about CL
- [Common Lisp w/ SSL]( -- it turns out you have to mess with `LD_LIBRARY_PATH` to get SSL to work with `:cl+ssl`
- [Game of Life in Common Lisp]( -- why not...
- [10 PRINT.BAS]( -- ever heard of the famous one-line Commodore 64 BASIC program to generate mazes? They even wrote a whole [book]( about it!

This is pretty cool, well done Replit!

---

This is the famous one-line Commodore 64 BASIC program to generate random mazes (the Commodore 64 uses the [PETSCII]( character set, not ASCII):

    10 PRINT CHR$(205.5+RND(1)); : GOTO 10

While this is a Common Lisp equivalent:

    (loop (princ (aref "/\\" (random 2))))

If the above runs a little too fast in the REPL (it most likely will, these days), then you can try this other alternative:

    (loop (finish-output)
          (princ (aref "/\\" (random 2)))
          (sleep (/ 1 30)))

Happy retro-hacking!

# 2021-05-31

TIL: Untracked files are stored in the third parent of a stash commit.

> Untracked files are stored in the third parent of a stash commit. (This isn't actually documented, but is pretty obvious from The commit which introduced the -u feature, 787513..., and the way the rest of the documentation for git-stash phrases things...
> or just by doing git log --graph stash@{0})
>
> You can view just the "untracked" portion of the stash via:
>
> git show stash@{0}^3
>
> or, just the "untracked" tree itself, via:
>
> git show stash@{0}^3:
>
> or, a particular "untracked" file in the tree, via:
>
> git show stash@{0}^3:

Source: [stackoverflow](,as%20part%20of%20the%20diff)

# 2021-06-02

Web scraping for fun and profit^W for a registration to get my COVID-19 jab

Rumor has it that in a few days everyone in Tuscany, irrespective of their age, will finally be able to register for their first shot of the vaccine; that means "click day", i.e. continuously polling the registration website until your registration category is open. I am sure a little bird will eventually announce when the service will be open, i.e. the time of the day, so no need to stay on the lookout for the whole day, but I figured I could use this as an excuse to do a little bit of Web scraping in Common Lisp, so here it goes.

Dependencies first:

    (ql:quickload "drakma")     ; to fire HTTP requests
    (ql:quickload "local-time") ; to keep track of when the scraping last happened
    (ql:quickload "st-json")    ; you guessed it
    (ql:quickload "uiop")       ; to read env variables

The next step is writing our MAIN loop:

- Wait for categories to be activated: new categories can be added, or existing ones can be activated
- Send an SMS to notify me about newly activated categories (so I can pop the website open, and try to register)
- Repeat

    (defun main ()
      (loop
        (with-simple-restart (continue "Ignore the error")
          (wait-for-active-categories)
          (send-sms))))

WAIT-FOR-ACTIVE-CATEGORIES is implemented as a loop form that:

- Checks if categories have been activated, in which case the list of recently activated ones is returned
- Otherwise it saves the current timestamp inside *LAST-RUN-AT* and then goes to sleep for a little while

    (defvar *last-run-at* nil
      "When the categories scraper last run")

    (defun wait-for-active-categories ()
      (loop
        :when (categories-activated-p) :return it
        :do (progn
              (setf *last-run-at* (local-time:now))
              (random-sleep))))

Let's now dig into the category processing part. First we define a special variable, *CATEGORIES-SNAPSHOTS*, to keep track of all the past snapshots of categories we scraped so far; next we fetch the categories from the remote website, see if any of them got activated since the last scraping, and last we push the latest snapshot into *CATEGORIES-SNAPSHOTS* and do some _harvesting_ to make sure we don't run out of memory because of all this scraping:

    (defvar *categories-snapshots* nil
      "List of categories snapshots -- handy to see what changed over time.")

    (defun categories-activated-p ()
      (let ((categories (fetch-categories)))
        (unless *categories-snapshots*
          (push categories *categories-snapshots*))
        (prog1 (find-recently-activated categories)
          (push categories *categories-snapshots*)
          (harvest-categories-snapshots))))

Before fetching the list of categories we are going to define a few utilities: a condition signaling when HTML is returned instead of JSON (usually a sign that the current session expired):

    (define-condition html-not-json-content-type () ())

The URLs for the registration page (this will be used later, when generating the SMS) and for the endpoint returning the list of categories:

    (defparameter *prenotavaccino-home-url* "")
    (defparameter *prenotavaccino-categories-url* "")

A cookie jar to create a _persistent_ connection with the website under scraping:

    (defvar *cookie-jar* (make-instance 'drakma:cookie-jar)
      "Cookie jar to hold session cookies when interacting with Prenotavaccino")

With all this defined, fetching the list of categories becomes as easy as firing a request to the endpoint and confirming the response actually contains JSON and not HTML: if JSON, parse it and return it; otherwise, simply try again.
The idea behind "trying again" is that when the response contains HTML, it will most likely contain a redirect to the home page with a newly created session cookie, and since the same cookie jar is getting used, the simple fact that we processed that response should be enough to make the next API call succeed:

    (defun parse-json (response)
      (let* ((string (flexi-streams:octets-to-string response)))
        (st-json:read-json string)))

    (defun fetch-categories ()
      (flet ((fetch-it ()
               (multiple-value-bind (response status headers)
                   (drakma:http-request *prenotavaccino-categories-url* :cookie-jar *cookie-jar*)
                 (declare (ignore status))
                 (let ((content-type (cdr (assoc :content-type headers))))
                   (if (equal content-type "text/html")
                       (error 'html-not-json-content-type)
                       response))))
             (parse-it (response)
               (st-json:getjso "categories" (parse-json response))))
        (parse-it
          (handler-case (fetch-it)
            (html-not-json-content-type (c)
              (declare (ignore c))
              (format t "~&Received HTML instead of JSON. Retrying assuming the previous session expired...")
              (fetch-it))))))

Note: we retry only once, so if two consecutive responses contain HTML instead of JSON, that will result in the debugger popping open.
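As an aside, the same retry-once-with-a-shared-cookie-jar idea can be sketched in shell with `curl`; the URL and jar path below are placeholders, not the real Prenotavaccino endpoint:

```shell
# Retry-once sketch: because both attempts share the same cookie jar, the
# first (HTML) reply primes a fresh session cookie that should make the
# single retry reach the JSON endpoint.
fetch_json() {
  url=$1 jar=$2
  body=$(curl -s -c "$jar" -b "$jar" "$url")
  case $body in
    '<'*) # HTML instead of JSON: retry once with the now-primed jar
      body=$(curl -s -c "$jar" -b "$jar" "$url") ;;
  esac
  printf '%s\n' "$body"
}
```

Like the Lisp version, a second consecutive HTML reply is not retried, so persistent failures still surface.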
Finding all the recently activated categories consists of the following steps:

- for each category just scraped
- compare it with the same category in the last snapshot
- a category is considered as _recently_ activated if it's active and it was not present inside the second to last snapshot, or if it was present but either it was inactive or its title has changed (yes, sometimes the title of a category is changed to advertise that people of a different age can now sign up)

Anyway, all the above translates to the following:

    (defun category-name (cat) (st-json:getjso "idCategory" cat))
    (defun category-title (cat) (st-json:getjso "title" cat))

    (defun category-active-p (cat)
      (and (eql (st-json:getjso "active" cat) :true)
           (eql (st-json:getjso "forceDisabled" cat) :false)))

    (defun find-recently-activated (categories)
      (flet ((category-by-name (name categories)
               (find name categories :key #'category-name :test #'equal)))
        (let ((prev-categories (first *categories-snapshots*)))
          (loop
            :for cat :in categories
            :for cat-prev = (category-by-name (category-name cat) prev-categories)
            :when (and (category-active-p cat)
                       (or (not cat-prev)
                           (not (category-active-p cat-prev))
                           (not (equal (category-title cat-prev) (category-title cat)))))
            :collect cat))))

Harvesting is pretty easy: define a maximum length threshold for the list of snapshots, and when the list grows bigger than that, start popping items from the back of the list as new values are pushed to the front:

    (defparameter *categories-snapshots-max-length* 50
      "Maximum number of categories snapshots to keep around")

    (defun harvest-categories-snapshots ()
      (when (> (length *categories-snapshots*) *categories-snapshots-max-length*)
        (setf *categories-snapshots*
              (subseq *categories-snapshots* 0 (1+ *categories-snapshots-max-length*)))))

Note: I could have removed "consecutive duplicates" from the list, or prevented them from getting stored in the list to begin with, but I am going to leave this as an exercise for the reader
;-)

Two pieces of the puzzle are still missing: RANDOM-SLEEP, and SEND-SMS. For RANDOM-SLEEP we decide the minimum number of seconds that the scraper should sleep for, and then add some _randomness_ to it (as if the remote site cared that we pretended to act like a _human_, but let's do it anyway):

    (defparameter *sleep-seconds-min* 60
      "Minimum number of seconds the scraper will sleep for")
    (defparameter *sleep-seconds-jitter* 5
      "Adds a pinch of randomness -- see RANDOM-SLEEP")

    (defun random-sleep ()
      (sleep (+ *sleep-seconds-min* (random *sleep-seconds-jitter*))))

For the SMS part instead, we are going to use Twilio; first we define all the parameters required to send SMSs:

- Account SID
- Auth token
- API URL
- From number
- To numbers (space separated values)

    (defun getenv-or-readline (name)
      "Gets the value of the environment variable, or asks the user to provide a value for it."
      (or (uiop:getenv name)
          (progn
            (format *query-io* "~a=" name)
            (force-output *query-io*)
            (read-line *query-io*))))

    (defvar *twilio-account-sid* (getenv-or-readline "TWILIO_ACCOUNT_SID"))
    (defvar *twilio-auth-token* (getenv-or-readline "TWILIO_AUTH_TOKEN"))
    (defvar *twilio-messages-api-url*
      (format nil "" *twilio-account-sid*))
    (defvar *twilio-from* (getenv-or-readline "TWILIO_FROM_NUMBER"))
    (defvar *twilio-to-list*
      (split-sequence:split-sequence #\Space (getenv-or-readline "TWILIO_TO_NUMBERS")))

Next we assemble the HTTP request, fire it, and do some error checking to signal an error in case it failed to deliver the message to any of the _to_ numbers specified inside *TWILIO-TO-LIST*:

    (defun send-sms (&optional (body (sms-body)))
      (flet ((send-it (to)
               (drakma:http-request *twilio-messages-api-url*
                                    :method :post
                                    :basic-authorization `(,*twilio-account-sid* ,*twilio-auth-token*)
                                    :parameters `(("Body" . ,body)
                                                  ("From" . ,*twilio-from*)
                                                  ("To" . ,to)))))
        (let (failed)
          (dolist (to *twilio-to-list*)
            (let* ((jso (parse-json (send-it to)))
                   (error-code (st-json:getjso "error_code" jso)))
              (unless (eql error-code :null)
                (push jso failed))))
          (if failed
              (error "Failed to deliver **all** SMSs: ~a" failed)
              t))))

Last but not least, the SMS body; we want to include in the message which categories got activated since the last time the scraper run, and to do so we:

- Take the latest snapshot from *CATEGORIES-SNAPSHOTS*
- Temporarily set *CATEGORIES-SNAPSHOTS* to its CDR, as if the latest snapshot hadn't been recorded yet
- Call FIND-RECENTLY-ACTIVATED, effectively pretending we were inside CATEGORIES-ACTIVATED-P and trying to understand if anything got recently activated (dynamic variables are fun, aren't they?!)

    (defun sms-body ()
      (let* ((categories (first *categories-snapshots*))
             (*categories-snapshots* (cdr *categories-snapshots*)))
        (format nil "Ora attivi: ~{~A~^, ~} -- ~a"
                (mapcar #'category-title (find-recently-activated categories))
                *prenotavaccino-home-url*)))

Note: "Ora attivi:" is Italian for "Now active:"

And that's it: give `(main)` a go in the REPL, wait for a while, and eventually you should receive an SMS informing you which category got activated!

    > (find "LastMinute" (first *categories-snapshots*)
            :key #'category-name :test #'string=)
    #S(ST-JSON:JSO
       :ALIST (("idCategory" . "LastMinute")
               ("active" . :TRUE)
               ("forceDisabled" . :FALSE)
               ("start" . "2021-06-02")
               ("end" . "2021-07-31")
               ("title" . "Last
    Minute")
               ("subtitle" . "ATTENZIONE. Prenotazione degli slot rimasti disponibili nelle prossime 24H.")
               ("message" . "Il servizio aprirà alle ore 19:00 di ogni giorno")
               ("updateMaxMinYear" . :TRUE)))
    > (sms-body)
    "Ora attivi: Last Minute --"

Happy click day, Italians!

# 2021-06-07

* finished reading [Object-Oriented Programming in COMMON LISP: A Programmer's Guide to CLOS](

# 2021-06-11

? That will never work: Netflix
? No rules Netflix Culture Reinvention
? book: Designing Data-Intensive Applications
? book: Microservices Patterns
? book: Grokking Algorithms

# 2021-06-17

* Opened [cl-cookbook/pull/385]( Don't wrap SWANK:CREATE-SERVER inside BT:MAKE-THREAD

# 2021-06-19

Remote live coding a Clack application with Swank, and Ngrok

Today I would like to show you how to create a very simple CL Web application with Clack, and how to use Swank and [Ngrok]( to enable remote [live coding]( Here is what we are going to wind up with:

- Clack application running on `localhost:5000` -- i.e. the Web application
- Swank server running on `localhost:4006` -- i.e. the "live coding" enabler
- Ngrok tunnel to remotely expose `localhost:4006` (just in case you did not have SSH access into the server the app is running on)

As usual, let's start off by naming all the systems we are depending on:

- `clack`, Web application environment for CL
- `ngrok`, CL wrapper for installing and running Ngrok
- `swank`, what enables remote "live coding"
- `uiop`, utilities that abstract over discrepancies between implementations

Now, `ngrok` has not been published to Quicklisp yet, so you will have to manually download it first:

    $ git clone
    $ sbcl --noinform
    > ...

...and then make sure it is QL:QUICKLOAD-able:

    (pushnew '(merge-pathnames (parse-namestring "ngrok/") *default-pathname-defaults*)
             asdf:*central-registry*)

Now you can go ahead and load all the required dependencies:

    (ql:quickload '("clack" "ngrok" "swank" "uiop"))

Let's take a look at the MAIN function.
It starts the Web application, the Swank server, and Ngrok: (defun main () (clack) (swank) (ngrok)) For the Web application: - We define a special variable, *HANDLER*, to hold the Clack Web application handler - We start the app, bind it on `localhost:5000`, and upon receiving an incoming request we delegate to SRV - Inside SRV, we delegate to APP (more to this later) - Inside APP, we finally process the request and return a dummy message back to the client (defvar *handler* nil) (defun clack () (setf *handler* (clack:clackup #'srv :address "localhost" :port 5000))) (defun srv (env) (app env)) (defun app (env) (declare (ignore env)) '(200 (:content-type "text/plain") ("Hello, Clack!"))) You might be wondering: "why not invoking CLACK:CLACKUP with #'APP? Why the middle-man, SRV?" Well, that's because Clack would dispatch to whichever function was passed in at the time CLACK:CLACKUP was called, and because of that, any subsequent re-definitions of such function would not be picked up by the framework. The solution? Add a level of indirection, and so long as you don't need to change SRV, but always APP, then you should be all right! Let's now take a look at how to setup a Swank server. First we ask the user for a secret with which to [secure]( the server (i.e. it will accept incoming connections only from those clients that do know such secret), then we write the secret onto ~/.slime-secret (yes, it's going to overwrite the existing file!), and last we actually start the server: (defun getenv-or-readline (name) "Gets the value of the environment variable, or asks the user to provide a value for it." 
(or (uiop:getenv name) (progn (format *query-io* "~a=" name) (force-output *query-io*) (read-line *query-io*)))) (defvar *slime-secret* (getenv-or-readline "SLIME_SECRET")) (defparameter *swank-port* 4006) (defun swank () (write-slime-secret) (swank:create-server :port *swank-port* :dont-close t)) (defun write-slime-secret () (with-open-file (stream "~/.slime-secret" :direction :output :if-exists :supersede) (write-string *slime-secret* stream))) Last but not least, Ngrok: we read the authentication token, so Ngrok knows the account the tunnel needs to be associated with, and then we finally start the daemon by specifying the port the Swank server is listening on (as that's what we want Ngrok to create a tunnel for) and the authentication token: (defvar *ngrok-auth-token* (getenv-or-readline "NGROK_AUTH_TOKEN")) (defun ngrok () (ngrok:start *swank-port* :auth-token *ngrok-auth-token*)) Give the MAIN function a go, and you should be presented with output similar to this: > (main) Hunchentoot server is started. Listening on localhost:5000. ;; Swank started at port: 4006. [10:13:43] ngrok setup.lisp (install-ngrok) - Ngrok already installed, changing authtoken [10:13:44] ngrok setup.lisp (start) - Starting Ngrok on TCP port 4006 [10:13:44] ngrok setup.lisp (start) - Tunnnel established! Connect to the tcp:// "tcp://" Let's test the Web server: $ curl http://localhost:5000 Hello, Clack! It's working. Now open Vim/Emacs, connect to the Swank server (i.e. host: ``, port: `12321`), and change the Web handler to return something different: (defun app (env) (declare (ignore env)) '(200 (:content-type "text/plain") ("Hello, Matteo!"))) Hit the Web server again, and this time it should return a different message: $ curl http://localhost:5000 Hello, Matteo! Note how we did not have to bounce the application; all we had to do was re-define the request handler...that's it! Happy remote live coding! PS.
The above is also available on [GitHub](, and on [Replit]( # 2021-06-28 * finished reading [10 PRINT CHR$(205.5+RND(1)); : GOTO 10]( # 2021-08-23 * finished reading [The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture]( # 2021-08-27 > The power of Lisp is its own worst enemy. [The Lisp Curse]( # 2021-08-30 * finished reading [Grokking Simplicity]( --- Grokking Simplicity notes This page of the book says it all: > We have learned the skills of professionals > > Since we're at the end of the book, let's look back at how far we've come and do a high-level listing of the skills we've learned. These are the skills of professional functional programmers. They have been chosen for their power and depth. > > Part 1: Actions, Calculations, and Data: > > - Identifying the most problematic parts of your code by distinguishing actions, calculations, and data - Making your code more reusable and testable by extracting calculations from actions - Improving the design of actions by replacing implicit inputs and outputs with explicit ones - Implementing immutability to make reading data into a calculation - Organizing and improving code with stratified design > > Part 2: First-class abstractions > > - Making syntactic operations first-class so that they can be abstracted in code - Reasoning at a higher level using functional iteration and other functional tools - Chaining functional tools into data transformation pipelines - Understanding distributed and concurrent systems by timeline diagrams - Manipulating timelines to eliminate bugs - Mutating state safely with higher-order functions - Using reactive architecture to reduce coupling between cause and effects - Applying the onion architecture (Interaction, Domain, Language) to design services that interact with the world Here is the list of primitives presented in the book to work with timelines.
Serialize tasks with `Queue`: function Queue(worker) { var queue_items = []; var working = false; function runNext() { if(working) return; if(queue_items.length === 0) return; working = true; var item = queue_items.shift(); worker(, function(val) { working = false; setTimeout(item.callback, 0, val); runNext(); }); } return function(data, callback) { queue_items.push({ data: data, callback: callback || function(){} }); setTimeout(runNext, 0); }; } Example: function calc_cart_worker(cart, done) { calc_cart_total(cart, function(total) { update_total_dom(total); done(total); }); } var update_total_queue = Queue(calc_cart_worker); Serialize tasks and skip duplicate work with `DroppingQueue`: function DroppingQueue(max, worker) { var queue_items = []; var working = false; function runNext() { if(working) return; if(queue_items.length === 0) return; working = true; var item = queue_items.shift(); worker(, function(val) { working = false; setTimeout(item.callback, 0, val); runNext(); }); } return function(data, callback) { queue_items.push({ data: data, callback: callback || function(){} }); while(queue_items.length > max) queue_items.shift(); setTimeout(runNext, 0); }; } Example: function calc_cart_worker(cart, done) { calc_cart_total(cart, function(total) { update_total_dom(total); done(total); }); } var update_total_queue = DroppingQueue(1, calc_cart_worker); Synchronize (i.e.
cut) multiple timelines with `Cut`: function Cut(num, callback) { var num_finished = 0; return function() { num_finished += 1; if(num_finished === num) callback(); }; } Example: var done = Cut(3, function() { console.log("3 timelines are finished"); }); done(); done(); done(); // only this one logs Make sure a function is called only once, with `JustOnce`: function JustOnce(action) { var alreadyCalled = false; return function(a, b, c) { if(alreadyCalled) return; alreadyCalled = true; return action(a, b, c); }; } Example: function sendAddToCartText(number) { sendTextAjax(number, "Thanks for adding something to your cart. Reply if you have any questions!"); } var sendAddToCartTextOnce = JustOnce(sendAddToCartText); sendAddToCartTextOnce("555-555-5555-55"); // only this call sends the text sendAddToCartTextOnce("555-555-5555-55"); sendAddToCartTextOnce("555-555-5555-55"); sendAddToCartTextOnce("555-555-5555-55"); Encapsulate state inside _reactive_ `ValueCell`s: function ValueCell(initialValue) { var currentValue = initialValue; var watchers = []; return { val: function() { return currentValue; }, update: function(f) { var oldValue = currentValue; var newValue = f(oldValue); if(oldValue !== newValue) { currentValue = newValue; forEach(watchers, function(watcher) { watcher(newValue); }); } }, addWatcher: function(f) { watchers.push(f); } }; } Example: var shopping_cart = ValueCell({}); function add_item_to_cart(name, price) { var item = make_cart_item(name, price); shopping_cart.update(function(cart) { return add_item(cart, item); }); var total = calc_total(shopping_cart.val()); set_cart_total_dom(total); update_tax_dom(total); } shopping_cart.addWatcher(update_shipping_icons); Derived values can instead be implemented with `FormulaCell`s: function FormulaCell(upstreamCell, f) { var myCell = ValueCell(f(upstreamCell.val())); upstreamCell.addWatcher(function(newUpstreamValue) { myCell.update(function(currentValue) { return f(newUpstreamValue); }); }); return { val: myCell.val,
addWatcher: myCell.addWatcher }; } Example: var shopping_cart = ValueCell({}); var cart_total = FormulaCell(shopping_cart, calc_total); function add_item_to_cart(name, price) { var item = make_cart_item(name, price); shopping_cart.update(function(cart) { return add_item(cart, item); }); } shopping_cart.addWatcher(update_shipping_icons); cart_total.addWatcher(set_cart_total_dom); cart_total.addWatcher(update_tax_dom); --- Location sharing alternative to the Google built-in one @idea - It would be fun - You would not have to share data with Google (why would anyone share it with you instead? My friends and family probably would!) --- Streamlining answering the donors questionnaire @idea "Which cities have you visited during the last 4 weeks?" "Which countries have you visited over the last 6 months?" These are some questions donors get asked over and over again before they are cleared for the donation, and it always catches me off guard. Can this information be automatically pulled from Google? What about my wife's data instead, would I have access to it? During COVID, most of the time it's not just about the places you have visited, but the places other people you are living with had... # 2021-09-06 * `gem install`-d all the things again. I guess the latest `brew upgrade` that I ran last week might have broken a few things (and while at it, I also `rvm install --default ruby-3.0.0`) ? implement rainbow-blocks for Vim (see: A simple hack would be to define 5/6 syntax rules for `(...)` blocks and configure them so that they can only exist within an outer block, i.e. block1, block2 only inside block1, block3 only inside block2. # 2021-09-08 Notes about reverse engineering Basecamp's Recording pattern Before we begin: I have never used RoR nor programmed in Ruby, so apologies if some of the things mentioned are inaccurate or completely wrong; I simply got interested in the pattern, and tried my best to understand how it could have been implemented.
So bear with me, and of course feel free to suggest edits. Main concepts: - Buckets are collections of records, i.e. _recordings_, having different types e.g. `Todo`, `Todoset` - Recordings are just pointers, pointing to the actual data record, i.e. the _recordables_ - Recordables are immutable records, i.e. every time something changes, a new _recordable_ record is created and the relevant recording is updated to point to it - Events are created to link together 2 recordables (the current version and the previous one), the recording, the bucket, and the author of the change - This enables data versioning - This enables data auditing (at the recording level, at the bucket level, and at the user level) It all started with this [tweet]( > Probably the single most powerful pattern in BC3 ❤️. Really should write it up at one point. This is what a Basecamp URL looks like: `/1234567890/projects/12345/todos/67890` In it: - `1234567890` is the account ID -- it identifies a company's instance, it represents a tenant - `12345` is the bucket ID, pointing to a record whose `bucketable_type` is `Project` - `67890` is the recording ID, pointing to a record whose `recordable_type` is `Todo` Bucket is not a project, but links to one (project is the _bucketable_ entity): class Bucket < ActiveRecord::Base ... belongs_to :account delegated_type :bucketable, types: %w[ Project ] # this adds bucketable_id, bucketable_type to the schema ... or `belongs_to :bucketable` has_many :recordings, dependent: :destroy ... end Recording is not a todo, but links to one (todo is the _recordable_ entity): class Recording < ActiveRecord::Base ... belongs_to :bucket, touch: true belongs_to :creator, class_name: 'Person', default: -> { Current.person } ...
delegated_type :recordable, types: %w[ Todo Todolist Todoset Dock ] # this adds recordable_id and recordable_type to the schema delegate :account, to: :bucket end module Recordable extend ActiveSupport::Concern included do has_many :recordings, as: :recordable # same as specified inside `Recording` as `delegated_type` end end Recordables are immutable, and do not usually have all the created-at, updated-at metadata...recordings do! Recordings have parents, and children; it almost becomes a graph database, with a tree structure: - recording.recordable_type: Todo - recording.parent.recordable_type: Todolist - recording.parent.parent.recordable_type: Todoset - recording.parent.parent.parent.recordable_type: Dock - Chat::Transcript, Message::Board, Todoset, Schedule, Questionnaire, Vault The above, i.e. the tree structure, is most likely implemented via the `Tree` concern: module Recording::Tree extend ActiveSupport::Concern included do belongs_to :parent, class_name: 'Recording', optional: true has_many :children, class_name: 'Recording', foreign_key: :parent_id end end Note: I don't assume the above to be correct, but it should get the job done; I question whether it would make sense to add a _type_ field on this parent/child relation, but maybe it's not strictly required (i.e. you always know that one todo's parent is a todolist). As stated above, recordings are pointers, almost like symlinks, and when you update a todo, you don't actually update the record, but create a new one, and update the recording to point to that; since URLs are all keyed-up accordingly (i.e. they use Recording IDs, and not Recordable ones), the URL does not change but the actual todo the recording points to does. Controllers then look something like this: class TodosController < ApplicationController def create @recording = @bucket.record new_todo end def update @recording.update!
recordable: new_todo end private def new_todo params.require(:todo).permit(:content, :description, :starts_on, :due_on) end end While this is what `Bucket.record` looks like: class Bucket < ActiveRecord::Base ... def record(recordable, children: nil, parent: nil, status: :active, creator: Current.person, **options) transaction do recordable.save! options.merge!(recordable: recordable, parent: parent, status: status, creator: creator) recordings.create!(options).tap do |recording| Array(children).each do |child| record child, parent: recording, status: status, creator: creator end end end end end This enables you to do things like "Copy message to a new project" way more efficiently: previously you had to set up a new job that literally copied the message, the comments, the events onto the new project; while now, all you have to do is set up a new recording pointing to the existing _immutable_ recordable. For example: - Project has messages -- messages are recordables - Messages have comments -- comments are recordables - When you copy a message from one project to another, you will have to create a recording not just for the message to be copied, but for all its existing comments as well (that's what `children` inside `Bucket.record` is for) Note: `children` inside `Bucket.record` is expected to be a recordable, and not a recording; this means we will have to pass in all the child recordables, e.g. ``. Note: `Bucket.record` (at least the version above, which was the one [shown in the presentation]( does not seem to recurse, which means you can only create recordings for the parent entity and its direct children, i.e. messages and comments, but not `Todoset`s, `Todolist`s _and_ `Todo`s. Note: is this more efficient, really? Even though you will not have to copy the immutable records (i.e. the recordables), you will still have to recreate the same recordings tree, right?
After you copied a recording, you should end up with something like this: >> Recording.find(111111111) => 33333333333333 >> Recording.find(222222222) => 33333333333333 Again, two pointers, i.e. recordings, pointing to the same immutable record, i.e. the recordable. Note: a recordable has many recordings, not one; this is because the same recordable object can be pointed to by two different recordings. This Recording pattern works in tandem with `Event`s: - They link to the newer version of the recordable, and the previous one - They belong to a `Recording`: this way you can easily see all the times a recording was updated - They belong to a `Bucket`: this way you can easily see all the times a bucket was updated - They belong to a `Person`: this way you can easily see all the changes done by one user - They have one `Detail`, i.e. a hash containing some additional metadata linked to the current event class Event < ActiveRecord::Base belongs_to :recording, required: false belongs_to :bucket belongs_to :creator, class_name: 'Person' has_one :detail, dependent: :delete, required: true include Notifiable...Requested ... end and with the `Eventable` concern: module Recording::Eventable extend ActiveSupport::Concern ... included do has_many :events, dependent: :destroy after_create :track_created after_update :track_updated after_update_commit :forget_adoption_tracking, :forget_events end def track_event(action, recordable_previous: nil, **particulars) Event.create! \ recording: self, recordable: recordable, recordable_previous: recordable_previous, bucket: bucket, creator: Current.person, action: action, detail: end ... end `track_event` here is the generic method, but more specific ones exist as well, for more common use cases so we don't have to manually plumb arguments.
We track an event every single time an eventable is created, or updated: after_create :track_created after_update :track_updated The private method `track_created` simply delegates to `track_event`: private def track_created track_event :created, status_was: status end A recordable has many events; again, the same recordable could be referenced by multiple recordings, and since events are created when recordings are created, then that's how you end up with multiple events. OK, great, nice, but what schema would enable such a pattern? Again, not 100% accurate, but hopefully close enough: - Buckets (id, bucketable_type, bucketable_id, ...) - Recordings (id, bucket_id, recordable_type, recordable_id, parent_id, ...) - Events (id, bucket_id, recording_id, recordable_id, previous_recordable_id, user_id, ...) Note: recordables are _untyped_ here, but the type information is actually one join-table away, i.e. it's available inside the linked recording - Projects (id, ...) - Messages (id, ...) And this seems to be it really. Note: this pattern heavily relies on Rails delegated types, i.e. sugar on top of polymorphic associations (see [rails/rails/pull#39341](, to automatically traverse the recordings table and get to the recordable entry. My first thought was: how will this perform? How is the extra join operation required to fetch any linked entity going to affect performance? If it's working for Basecamp, hopefully it's working decently enough for other applications as well, but I guess it also depends on how much the different entities are linked together. Note: in the end, this is not that different from 'time-based versioning' (read: [Keeping track of graph changes using temporal versioning](, with recordings being the _identity_ nodes, and recordables being the _state_ objects.
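To make the pointer mechanics described above a bit more concrete, here is a minimal plain-Ruby sketch of the pattern as I understand it: no Rails, no persistence, and all names (`Recordable`, `Recording`, `Event`) are illustrative stand-ins rather than Basecamp's actual classes.

```ruby
# Hypothetical sketch: recordables are immutable snapshots, recordings are
# mutable pointers, and every repoint is tracked with an Event linking the
# new recordable to the previous one.
Recordable = Struct.new(:content) # never updated in place

Event = Struct.new(:recording, :recordable, :recordable_previous, :action)

class Recording
  attr_reader :recordable, :events

  def initialize(recordable)
    @recordable = recordable
    @events = [Event.new(self, recordable, nil, :created)]
  end

  # "Updating" never mutates the recordable: it creates a fresh snapshot,
  # repoints the recording at it, and tracks an :updated event.
  def update(content)
    previous = @recordable
    @recordable = Recordable.new(content)
    @events << Event.new(self, @recordable, previous, :updated)
    self
  end

  # Copying is cheap: just a new pointer to the same immutable recordable.
  def copy
    Recording.new(@recordable)
  end
end

original = Recording.new(Recordable.new("Buy milk"))
copy = original.copy            # shares the same recordable
original.update("Buy oat milk") # only the original moves on

puts copy.recordable.content               # => "Buy milk"
puts original.recordable.content           # => "Buy oat milk"
puts original.events.map(&:action).inspect # => [:created, :updated]
```

Note how the copy costs one new pointer per recording rather than a deep copy of the data, which is the efficiency argument made above, and how the event trail gives you the versioning/auditing for free.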
Some references: - [Basecamp 3 Unraveled]( -- where an overview of the Bucket/Recording pattern is given - [On Writing Software Well #3: Using globals when the price is right]( -- where the `Eventable` concern, and the `Event` model are shown - [On Writing Software Well #5: Testing without test damage or excessive isolation]( -- where the `Recording` model is shown - [GoRails: Recording pattern (Basecamp 3)]( # 2021-09-10 ? book: The Dream Machine # 2021-09-13 * finished reading [Tools for Thought: The History and Future of Mind-Expanding Technology]( # 2021-09-15 Nice little vim/fugitive gem: use `!` to open a command prompt with the file under the cursor. nnoremap ! :! <C-R><C-F> It's very handy if you want to delete an untracked file; netrw comes with an identical mapping too! # 2021-09-30 `ScratchThis`, a Vim command to make the current buffer a `scratch` one (see `:h special-buffers` for additional info): function! s:ScratchThis() abort setlocal buftype=nowrite bufhidden=delete noswapfile endfunction command! ScratchThis call s:ScratchThis() # 2021-10-06 * Added GitHub Actions to `cg` -- bye bye TravisCI -- [cg/pull/8]( + See if it's possible to enhance [40ants/setup-lisp]( to support Windows # 2021-10-07 ? @A @read [Making a Game in Janet, Part 2: Judging Janet]( -- self-modifying tests, [unified tests](, [expect tests]( ? @A @read [Clojure unique way of debugging]( -- environment, capture, replay # 2021-10-15 Today I fucking figured out how to make Vim's * / # not suck! @vim " Star search plugin {{{ " " Few things that this plugin does, and that others might not: " " 1. Works in both normal, and visual mode " 2. Plays nice with smartcase search -- vim's builtin * / # mappings do not " support that " 3.
Does not automatically jump to the _first_ next occurrence; instead, moves " the cursor to the beginning of the current occurrence so that if for " whatever reason you decided to swap search direction with `N`, you would " be taken to the first previous occurrence (and not to the beginning of the " current word) " " Normal mode: " " - Saves the word under the cursor -- running `expand('<cword>')` while " searching actually causes the cursor to move around one char forward, and " if you happened to start from the end of the word, that " 'move-one-char-ahead' behavior would actually move the cursor over to the " next word, causing `expand('<cword>')` to return something unexpected " instead " - Saves the current view (so we can restore it later) " - Triggers the word-boundary-d search " - Restores the previous view -- effectively undoing the first jump " - Checks if the cursor wasn't already at the beginning of a match, and if " not, runs `b` to move at the beginning of the current word nnoremap <Plug>NormalStarSearchForward \ :let star_search_cword = expand('<cword>')<CR> \ :let star_search_view = winsaveview()<CR> \ /\<<C-R>=escape(star_search_cword, '*\')<CR>\><CR> \ :call winrestview(star_search_view)<CR> \ :execute "normal! " . (getline('.')[col('.') - 1:] !~# '^' . star_search_cword ? "b" : "")<CR> nnoremap <Plug>NormalStarSearchBackward \ :let star_search_cword = expand('<cword>')<CR> \ :let star_search_view = winsaveview()<CR> \ ?\<<C-R>=escape(star_search_cword, '*\')<CR>\><CR> \ :call winrestview(star_search_view)<CR> \ :execute "normal! " . (getline('.')[col('.') - 1:] !~# '^' . star_search_cword ? "b" : "")<CR> nmap * <Plug>NormalStarSearchForward nmap # <Plug>NormalStarSearchBackward " Visual mode: " " - Saves the current view (so we can restore it later) " - Triggers the search -- this time without word boundaries, which is why we " cannot reuse the normal mode alternative for that " - Restores the previous view -- effectively undoing the first jump xnoremap <Plug>VisualStarSearchForward \ :<C-U>let star_search_view = winsaveview()<CR> \ /<C-R>=escape(GetCurrentSelection(visualmode()), '*\')<CR><CR> \ :call winrestview(star_search_view)<CR> xnoremap <Plug>VisualStarSearchBackward \ :<C-U>let star_search_view = winsaveview()<CR> \ ?<C-R>=escape(GetCurrentSelection(visualmode()), '*\')<CR><CR> \ :call winrestview(star_search_view)<CR> xmap * <Plug>VisualStarSearchForward xmap # <Plug>VisualStarSearchBackward " }}} There is a lot of copy-pasta in there, but it gets the job done and given the amount of time I have already spent on this today, or in the past even, I think it's time to move on! --- ? git: switch back to per-branch stashes, but register a pre-push hook to abort if any unpushed commit is tagged as SAVEPOINT -- + change Vim/lisp setup so that the REPL is actually started in a different terminal (via Dispatch maybe) and does not take real estate # 2021-10-20 How to verify the CE marking on your face mask?! First off, what does the CE marking mean? From [Wikipedia]( > The CE mark on a product indicates that the manufacturer or importer of that product affirms its compliance with the relevant EU legislation and the product may be sold anywhere in the European Economic Area (EEA). It is a criminal offence to affix a CE mark to a product that is not compliant or offer it for sale. It continues: > The marking does not indicate EEA manufacture or that the EU or another authority has approved a product as safe or conformant. The EU requirements may include safety, health, and environmental protection. If stipulated in any EU product legislation, assessment by a Notified Body or manufacture according to a certified production quality system may be required.
Where relevant, the CE mark is followed by the registration number of the notified body involved in conformity assessment. So, for example, when you read `CE-2233` on a product, `CE` represents the CE marking (duh), while `2233` refers to the ID of the notified body involved in conformity assessment. Now off to the verification part. Note: the steps listed below will help you verify that the notified body involved with the assessment is..._legit_, i.e. can actually issue the specific conformity assessment certificate; however, these won't help with verifying that the specified notified body had indeed run a conformity assessment on a given product (i.e. the product seller could lie about this, but as the Wikipedia article states, this is considered a criminal offence and, by consequence, punishable by law). - To sell face masks in the EU, you need the CE marking - For this matter, notified bodies need to be competent with the '2016/425 Personal protective equipment' legislation - They also need to be entitled to issue specific conformity assessment modules for 'Equipment providing respiratory system protection' products (e.g. face masks) - Luckily for us, an online database exists, [NANDO](, storing all this information about all the registered bodies So, without further ado, navigate to: [Nando]( > [Free search]( Take the CE numerical value (i.e. the `2233` part of `CE-2233`), populate the `Keyword On Notified body number` field with it, and press the `Search` button. Now `Ctrl+F` for the CE numerical value (i.e. the `2233` part of `CE-2233`), and if you are lucky enough, you should find a match inside the `Body type` column: > NB 2233 [GÉPTESZT Termelőeszközöket Felülvizsgáló és Karbantartó Kft.]( Hungary Click on the link in the second column to open up the page listing addresses, contact details, and all the legislations that the notified body has scoped to assess and certify against. > Body : > > GÉPTESZT Termelőeszközöket Felülvizsgáló és Karbantartó Kft.
> Jablonka u. 79. > 1037 Budapest > Country : Hungary > > Phone : +36 1 250 3531 > Fax : +36 1 250 3531 > > Email : > Website : > > Notified Body number : 2233 > > Last approval date : 13/04/2010 > > -- > > Legislation > > - Regulation (EU) 2016/425 Personal protective equipment [HTML]( [PDF]( Search for the `Personal protective equipment` entry inside the Legislations table, and click on the link labeled with `HTML`. On this page you will find all the information the notified body has been certified for. You should first confirm that there is an entry for `Equipment providing respiratory system protection`; without this, even a PPE-certified notified body cannot engage in CE certification activities for protective masks. Then, you will have to confirm the "conformity assessment" modules that the notified body is entitled to issue; in particular we are looking for: - [Module B: EU type-examination]( - [Module C2: Supervised product checks at random intervals]( - [Module D: Quality assurance of the production process]( Note: manufacturers can only enter the EU market legally after obtaining a Module B + Module C2 certificate or a Module B + Module D certificate > Products: > > - Equipment providing respiratory system protection > - Protective Equipment against falls from heights > > Procedures: > > - EU type-examination > - Quality assurance of the production process > - Supervised product checks at random intervals > > Articles / Annexes > > - Annex V > - Annex VIII > - Annex VII By the looks of the information shown above, it appears that this notified body has all it takes to properly run conformity assessments against face masks...Hurray! Links: - [Guide to using NANDO website to identify Notified Bodies PPE]( - [Mascherine Ffp2, ecco i codici certificati CE non a norma e lo strumento di verifica Ue]( - [How to Verify CE Certification?]( # 2021-10-22 ?
Can I get a Replit instance to open an SSH connection to my VPS, so that I can then use my VPS as a jump server to connect back into the REPL? # 2021-10-23 Abusing Common Lisp feature expressions to comment things out! @vim @commonlisp Today I realized that I can easily comment out a form by placing a `#+nil` in front of it, and this turned out to be quite useful and productive especially when throwing things out at the REPL. Let's take a look at an example (this was taken from a [recent]( small project of mine): (hunchentoot:define-easy-handler (index :uri "/") (q) (let ((bodies (and q (fmcv:nb-search q)))) (with-page (:title "Face Mask CE Verifier") ... The bit to focus your attention on is the second line, which in words, pretty much translates to: > when `q`, the request query string, is non-NIL, call FMCV:NB-SEARCH with it, and assign the result to `bodies`; otherwise, `bodies` is initialized with NIL Now, while working on this, I realized FMCV:NB-SEARCH could sometimes take up to 3 seconds to return, and since there wasn't much I could do about it (i.e. it was the external service the function was calling behind the scenes, which was taking so long to reply), I decided to make the whole development experience a tiny bit more enjoyable by introducing a little bit of caching: - call the API once - save the response somewhere - change the logic to read from the cached response until I was done with everything else (defvar *cached-bodies* (fmcv:nb-search "00")) (hunchentoot:define-easy-handler (index :uri "/") (q) (let ((bodies (and q *cached-bodies*))) (with-page (:title "Face Mask CE Verifier") The above just works, 100%; however, it also makes it difficult to quickly switch between implementations.
For example, should I want to hit the external API again, I would have to put the FMCV:NB-SEARCH function call back in, and that would be a lot of typing /s This is where [feature expressions]( could come to the rescue: - Keep both *cached-bodies* and the FMCV:NB-SEARCH call in place - Want to use the cached response? Place a `#+nil` in front of the FMCV:NB-SEARCH call (hunchentoot:define-easy-handler (index :uri "/") (q) (let ((bodies (and q *cached-bodies* #+nil (fmcv:nb-search q)))) (with-page (:title "Face Mask CE Verifier") - Want to hit the external service instead? Place a `#+nil` in front of *cached-bodies* (hunchentoot:define-easy-handler (index :uri "/") (q) (let ((bodies (and q #+nil *cached-bodies* (fmcv:nb-search q)))) (with-page (:title "Face Mask CE Verifier") Note: `#+` is a reader macro that operates on the next two expressions found in the stream; if the first expression, a feature expression, is satisfied by the contents of *FEATURES*, then the macro emits the second expression; otherwise, it swallows it. Now, since :NIL is (normally) never found in *FEATURES*, we are basically telling the reader to swallow the second expression, always! Pretty cool uh?! _All the Vim haters should stop reading here_ I was so excited by this that I ended up creating two Vim mappings to automate this: - `gce` to add (or remove) a `#+nil` in front of the current element (_element_ in the [vim-sexp]( sense) - `gcf` to add (or remove) a `#+nil` in front of the current form " Comment out the current form / element function! ToggleCommonLispComment(toggle_form) abort "{{{ " Move at the start of the current element / form; if on an open paren " already, do nothing if getline('.')[col('.') - 1] != "(" let motion = "" if a:toggle_form let motion = "\<Plug>(sexp_move_to_prev_bracket)" else let motion = "\<Plug>(sexp_move_to_prev_element_head)" endif execute "normal " .
motion endif " Figure out if we need to add a comment, or remove the existing one let should_add_comment = 1 let save_cursor = getcurpos() let reg_save = @@ " Move back to the previous element " Yank it " Check if it's a #+nil marker execute "normal \<Plug>(sexp_move_to_prev_element_head)" normal yW if @@ =~? "#+nil" let should_add_comment = 0 endif let @@ = reg_save call setpos('.', save_cursor) " Do what's needed if should_add_comment " Insert the comment marker, and move back to the element / form execute "normal! i#+nil " execute "normal \<Plug>(sexp_move_to_next_element_head)" else " Move back, and then delete forward execute "normal \<Plug>(sexp_move_to_prev_element_head)" normal dW endif endfunction " }}} nmap gce :call ToggleCommonLispComment(0)<CR> nmap gcf :call ToggleCommonLispComment(1)<CR> It's a bit hack-y to be honest, but whatever! Also, you can even change the relevant syntax file (e.g. ~/.vim/after/syntax/lisp.vim) and make these regions look like comments: syntax match lispExcludedForm /\v\#\+nil \([^)]+\)/ highlight link lispExcludedForm lispComment syn cluster lispBaseListCluster add=lispExcludedForm syntax match lispExcludedElement /\v\#\+nil [^(][^ ]+/ highlight link lispExcludedElement lispComment syn cluster lispBaseListCluster add=lispExcludedElement What a joy! Until next time... # 2021-10-27 ? @read take a look at how mongosh implemented _awaiting_ for promises without the explicit `await` statement -- ? CL/1AM: push tests into a custom special variable, say *MY-TESTS*, and bind 1AM:*TESTS* to *MY-TESTS* when actually running the tests (this way only the current project's tests will be executed) # 2021-10-29 Abusing Common Lisp feature expressions to comment things out! (a follow up) @vim @commonlisp I posted a link to my previous [.plan entry]( on [/r/Common_Lisp]( the other day, and a couple of things came out.
First off, an evil person could add `:nil` to *FEATURES* and let all hell break loose: > #+nil is not the best way of doing this because, in theory, an evil person can put :nil in *features* and then all hell will break lose. Use #+(or) instead, which is immune to this sort of trickery. Uh oh... > (or #+nil "WTF?!" "Hello, world!") "Hello, world!" > (pushnew :nil *features*) (:NIL :QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX :OS-UNIX :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL :BSD :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS :SB-THREAD :SB-UNICODE :SBCL :UNIX) > (or #+nil "WTF?!" "Hello, world!") "WTF?!" > (setf *features* (delete :nil *features*)) (:QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX :OS-UNIX :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL :BSD :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS :SB-THREAD :SB-UNICODE :SBCL :UNIX) Then someone else suggested using [uninterned symbols]( instead, as that will not only keep you safe from the previously mentioned problem, but also give you the opportunity to _document_ why the specific form / element got intentionally excluded: > I use #+#:buggy or any other word to give a tiny piece of documentation as to why it’s commented out. Sometimes I’ll write #+#:pedagogical-implementation to provide a pedagogical, easy-to-understand version of an expression that has been made obscure through optimizations. Again, let's try this out: > (or #+#:excluded "WTF?!" "Hello, world!") "Hello, world!"
> (pushnew #:excluded *features*) ; in: PUSHNEW #:EXCLUDED ; (LET* ((#:ITEM #:EXCLUDED)) ; (SETQ *FEATURES* (ADJOIN #:ITEM *FEATURES*))) ; ; caught WARNING: ; undefined variable: #:EXCLUDED ; ; compilation unit finished ; Undefined variable: ; #:EXCLUDED ; caught 1 WARNING condition debugger invoked on a UNBOUND-VARIABLE @5351833B in thread #: The variable #:EXCLUDED is unbound. restarts (invokable by number or by possibly-abbreviated name): 0: [CONTINUE ] Retry using #:EXCLUDED. 1: [USE-VALUE ] Use specified value. 2: [STORE-VALUE] Set specified value and use it. 3: [ABORT ] Exit debugger, returning to top level. ((LAMBDA ())) source: (LET* ((#:ITEM #:EXCLUDED)) (SETQ *FEATURES* (ADJOIN #:ITEM *FEATURES*))) 0] 3 > (pushnew :excluded *features*) (:EXCLUDED :QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX :OS-UNIX :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL :BSD :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS :SB-THREAD :SB-UNICODE :SBCL :UNIX) > (or #+#:excluded "WTF?!" "Hello, world!") "Hello, world!" > (setf *features* (delete :excluded *features*)) (:QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-MACOSX :OS-UNIX :NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL :BSD :COMMON-LISP :DARWIN :IEEE-FLOATING-POINT :LITTLE-ENDIAN :MACH-O :PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS :SB-THREAD :SB-UNICODE :SBCL :UNIX) Alright then, I think I like the _uninterned symbol_ approach better, so I am going to update my Vim files as follows: diff --git a/.vim/vimrc index 69c5db4..6ca6151 100644 --- a/.vim/vimrc +++ b/.vim/vimrc @@ -855,7 +921,7 @@ function!
s:vim_sexp_mappings() " {{{ " Check if it's a #+nil marker execute "normal \<Plug>(sexp_move_to_prev_element_head)" normal yW - if @@ =~? "#+nil" + if @@ =~? "#+#:" let should_add_comment = 0 endif let @@ = reg_save @@ -864,7 +930,7 @@ function! s:vim_sexp_mappings() " {{{ " Do what's needed if should_add_comment " Insert the comment marker, and move back to the element / form - execute "normal! i#+nil " + execute "normal! i#+#:excluded " execute "normal \<Plug>(sexp_move_to_next_element_head)" else " Move back, and then delete forward diff --git a/.vim/after/syntax/lisp.vim index 5a2859c..cdf3c03 100644 --- a/.vim/after/syntax/lisp.vim +++ b/.vim/after/syntax/lisp.vim @@ -11,3 +11,11 @@ syn cluster lispBaseListCluster add=lispExcludedForm syntax match lispExcludedElement /\v\#\+nil [^(][^ ]+/ highlight link lispExcludedElement lispComment syn cluster lispBaseListCluster add=lispExcludedElement + +syntax match lispExcludedFormWithDescription /\v\#\+\#:[^ ]+ \([^)]+\)/ +highlight link lispExcludedFormWithDescription lispComment +syn cluster lispBaseListCluster add=lispExcludedFormWithDescription + +syntax match lispExcludedElementWithDescription /\v\#\+\#:[^ ]+ [^(][^ ]+/ +highlight link lispExcludedElementWithDescription lispComment +syn cluster lispBaseListCluster add=lispExcludedElementWithDescription Right on! # 2021-10-31 * While releasing `cg`, the created GH release was named `refs/tags/...` -- turns out I was setting the `name` input to `github.ref`: + migrate `ap` to GitHub actions -- see `cg` + migrate `adventofcode` to GitHub actions -- see `cg` + migrate `lg` to GitHub actions -- see `cg` ? migrate `plan-convert` to GitHub actions -- see `cg` ? migrate `xml-emitter` to GitHub actions -- see `cg` # 2021-11-02 * Add new page for `lg` to + Common Lisp / Replit: if we install dependencies inside ./ instead of ~/quicklisp, would Replit cache them between restarts? ?
The new version of Vlime (the one supporting the JSON protocol) made the REPL editable, but it's a little bit cumbersome to interact with it -- you have to press enter while in insert mode, and if you leave insert mode early, that's it... no way to undo that --- Today I woke up to realize that I could not deploy anything to my VPS anymore: task path: /Users/matteolandi/Workspace/ fatal: []: FAILED! => {"changed": false, "msg": "Error pulling - code: None message: unable to ping registry endpoint\nv2 ping attempt failed with error: Get x509: certificate has expired or is not yet valid\n v1 ping attempt failed with error: Get x509: certificate has expired or is not yet valid"} ____________ < PLAY RECAP > ------------ \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 Figured it had something to do with the recent [Let's Encrypt root certificate expiration](, so I went on and upgraded my OS certificates (yeah, that plus all the other packages I had not upgraded in a while): yum update -y Then I tried to re-deploy again; it did not succeed, but at least the error message was different: task path: /Users/matteolandi/Workspace/ fatal: []: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on centos-512mb-fra1-01.localdomain's Python /usr/bin/python. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` or `pip install docker-py` (Python 2.6).
The error was: cannot import name certs"} ____________ < PLAY RECAP > ------------ \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || : ok=33 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 First I upgraded `pip`: pip install --upgrade pip Then uninstalled a bunch of borked packages: pip uninstall docker requests urllib3 And re-installed them again: pip install --upgrade urllib3 requests docker And after this, it all went back to normal... Crisis averted! # 2021-11-03 TIL: You cannot use plain HTTP with a .dev domain; Google will force a HTTPS redirect! # 2021-11-05 (Trying to) Speed up Common Lisp Replit REPLs startup time -- or how to both install Quicklisp and cache *.fasl files in the current directory @commonlisp @replit All of my Common Lisp Replit REPLs have the following defined inside '.replit-files/init.lisp' (that's the script I am loading when starting up SBCL): #-quicklisp (let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp" (user-homedir-pathname)))) (if (probe-file quicklisp-init) (load quicklisp-init) (progn (load (merge-pathnames "quicklisp.lisp" *load-truename*)) (funcall (find-symbol "INSTALL" (find-package "QUICKLISP-QUICKSTART")))))) This pretty much translates to checking if '~/quicklisp/setup.lisp' already exists; if it does, it means we already got Quicklisp set up, so we go ahead and load that file up; otherwise, we load 'quicklisp.lisp' from the local directory (it's a copy of the file available on [](, and then install Quicklisp altogether. After this, with Quicklisp installed, all is left to do is loading up our app (the following is an example from a [recent REPL of mine]( (ql:quickload '("face-mask-ce-verifier-web" "swank-tunnel")) (fmcv-web:start :web-interface "") Now, as you can imagine, the following two main activities will be slowing down the REPL startup time: 1. Bootstrapping Quicklisp, i.e. downloading it, installing it, fetching the latest dist 2. 
Loading / compiling the application system, and all its dependencies Now, doing this once is perfectly fine, but can we configure things so that we don't have to do this over and over again, when Replit decides your REPL needs to be put to sleep or moved off to a new instance? The answer to that question is of course: "yes!"; basically all we have to do is make sure that all the artifacts which were dynamically generated and which were needed by the REPL, are saved somewhere inside the REPL workspace (yes, your REPL workspace will pretty much always be kept around). Alright; let's start off by changing our Quicklisp install script as follows: ;;; Quicklisp (installed in the current directory) #-quicklisp (let ((quicklisp-init (merge-pathnames ".quicklisp/setup.lisp" *default-pathname-defaults*))) (if (probe-file quicklisp-init) (load quicklisp-init) (progn (load (merge-pathnames "quicklisp.lisp" *load-truename*)) (funcall (find-symbol "INSTALL" (find-package "QUICKLISP-QUICKSTART")) :path (merge-pathnames ".quicklisp/" *default-pathname-defaults*))))) Here, we look for Quicklisp's setup script inside '.quicklisp/setup.lisp' instead of the canonical '~/quicklisp/setup.lisp' (note how we are using *DEFAULT-PATHNAME-DEFAULTS* instead of `(user-homedir-pathname)`); we also pass `:path (merge-pathnames ".quicklisp/" *default-pathname-defaults*)` to `QUICKLISP-QUICKSTART:INSTALL`, effectively telling it to install everything inside the local directory, '.quicklisp'. Nice! 
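As a quick sanity check (entirely optional, and assuming your REPL was started from the workspace root), you can ask the Quicklisp client where it ended up: QL:QMERGE resolves pathnames relative to the Quicklisp installation directory, so after a workspace-local install it should point somewhere inside '.quicklisp/':

```lisp
;; Optional sanity check: QL:QMERGE resolves pathnames relative to the
;; Quicklisp install directory, so with a workspace-local install both
;; of these should point somewhere inside .quicklisp/
(ql:qmerge "setup.lisp")
(ql:qmerge "local-projects/")
```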
Now, to change the location where ASDF stores compiled files to make sure it's in the current workspace, we will have to play with ASDF's [Configuration DSL]( a bit; it took me some Googling around ([1](, [2](, [3](, and [4]( and quite a few tries before I got this to work; but ultimately, this is what I came up with: ;;; Locally cached FASLs (let ((cache-dir (merge-pathnames ".common-lisp/" *default-pathname-defaults*))) (asdf:initialize-output-translations `(:output-translations (t (,cache-dir :implementation)) :disable-cache :ignore-inherited-configuration))) Again, you can read more about the DSL in the link above, but in general: 1. Using `:implementation` forces ASDF to use separate directories, inside '.common-lisp', based on the current Lisp implementation; this way, should we change the REPL to use CCL instead of SBCL, it would compile everything from scratch, instead of trying to load the artifacts generated while we were using the other implementation 2. Using `:disable-cache` tells ASDF to stop caching compiled files under the default '~/.cache/common-lisp'; this is the behavior that the previous rule is meant to override, so disabling this will hopefully make sure the default caching mechanism won't get in our way 3. Using `:ignore-inherited-configuration` tells ASDF to ignore any other _inherited_ rule; now, I am not entirely sure what this is for, but ASDF was complaining that I should use either `:inherit-configuration` or `:ignore-inherited-configuration` when calling `INITIALIZE-OUTPUT-TRANSLATIONS`, so here it is Anyways, restart the REPL, and you should now finally see the '.quicklisp' and '.common-lisp' local directories being put to good use: ~/Face-mask-CE-marking-verifier$ ls .quicklisp/ asdf.lisp dists quicklisp tmp client-info.sexp local-projects setup.lisp ~/Face-mask-CE-marking-verifier$ ls .common-lisp/ sbcl-2.1.2.nixos-linux-x64 And that's it! Until the next time... # 2021-11-07 Abusing Common Lisp feature expressions to comment things out!
(another follow up) @vim @commonlisp The simple syntax rules that I mentioned inside one of my previous [.plan entries]( turned out to break quite easily in case of nested parentheses; so I decided to try a different approach instead, and after playing with Vim's syntax regions a bit, I came up with the following: syntax region lispExcludedFormWithNil \ matchgroup=lispExcludedContentStart \ start="#+nil (" \ skip="|.\{-}|" \ matchgroup=lispExcludedContentStop \ end=")" \ contains=@lispBaseListCluster highlight link lispExcludedContentStart lispComment highlight link lispExcludedContentStop lispComment With this I was able to deal with nested parentheses just fine; however, if there were `highlight` commands defined for any of the elements contained inside the `@lispBaseListCluster` cluster (and of course there are!), then Vim would end up coloring those elements; what I wanted instead was the whole region to look like a `lispComment`. This is where I got stuck; this is where I started questioning the whole approach altogether, and where I ended up asking for [help]( on the /r/vim sub-reddit.
Anyways, long story short, one reply made me realize that I did not actually need all the definitions included inside the `@lispBaseListCluster` cluster, especially since I would then need to figure out a way to mute their colors; so what I did instead was to create a new, simpler but recursive syntax region to cater for my _excluded_ lists: syntax region lispExcludedList \ contained \ matchgroup=lispExcludedListOpenParen \ start="(" \ skip="|.\{-}|" \ matchgroup=lispExcludedListCloseParen \ end=")" \ contains=lispExcludedList highlight link lispExcludedList lispComment highlight link lispExcludedListOpenParen lispComment highlight link lispExcludedListCloseParen lispComment I then updated my original `lispExcludedFormWithNil` definition to _contain_ `lispExcludedList` instead of the `@lispBaseListCluster` cluster: syntax region lispExcludedFormWithNil \ matchgroup=lispExcludedContentStart \ start="#+nil (" \ skip="|.\{-}|" \ matchgroup=lispExcludedContentStop \ end=")" \ contains=lispExcludedList highlight link lispExcludedFormWithNil lispComment highlight link lispExcludedContentStart lispComment highlight link lispExcludedContentStop lispComment And that's it; with this, it all started to work as expected: #+nil (list 1 (list 2 3) 4) ^ | `- `lispComment` syntax up until this position #+nil (list 1 (list 2 (list 3 4) 5)) ^ | `- `lispComment` syntax up until this position If I then were to add `lispExcludedFormWithNil` to the `@lispBaseListCluster`: syn cluster lispBaseListCluster add=lispExcludedFormWithNil Then this would work even in case the _excluded_ expression was found nested inside another expression / list: (list #+nil (list 1 (list 2 (list 3 4) 5)) 6) ^ ^ | | | `- this is highlighted as `lispNumber` `- `lispComment` syntax up until this position Sweet! So how is my Lisp syntax file looking? There you go: ...
" Excluded forms {{{ syntax region lispExcludedList \ contained \ matchgroup=lispExcludedListOpenParen \ start="(" \ skip="|.\{-}|" \ matchgroup=lispExcludedListCloseParen \ end=")" \ contains=lispExcludedList highlight link lispExcludedList lispComment highlight link lispExcludedListOpenParen lispComment highlight link lispExcludedListCloseParen lispComment highlight link lispExcludedContentStart lispComment highlight link lispExcludedContentStop lispComment function! s:createLispExcludedSyntaxRegion(regionName, regionPrefix) abort "{{{ execute 'syntax region' a:regionName \ 'matchgroup=lispExcludedContentStart' \ 'start="' . a:regionPrefix . '("' \ 'skip="|.\{-}|"' \ 'matchgroup=lispExcludedContentStop' \ 'end=")"' \ 'contains=lispExcludedList' execute 'highlight link' a:regionName 'lispComment' execute 'syntax cluster lispBaseListCluster add=' . a:regionName endfunction " }}} " ... with NIL {{{ syntax match lispExcludedElementWithNil /\v\#\+nil [^(`][^ ]+/ highlight link lispExcludedElementWithNil lispComment syn cluster lispBaseListCluster add=lispExcludedElementWithNil call s:createLispExcludedSyntaxRegion("lispExcludedFormWithNil", "#+nil ") call s:createLispExcludedSyntaxRegion("lispExcludedQuotedFormWithNil", "#+nil '") call s:createLispExcludedSyntaxRegion("lispExcludedQuasiQuotedFormWithNil", "#+nil `") call s:createLispExcludedSyntaxRegion("lispExcludedSharpsignedDotFormWithNil", "#+nil #\.") " }}} " ... 
with #:_description_ {{{ syntax match lispExcludedElementWithDescription /\v\#\+\#:[^ ]+ [^(`][^ ]+/ highlight link lispExcludedElementWithDescription lispComment syn cluster lispBaseListCluster add=lispExcludedElementWithDescription call s:createLispExcludedSyntaxRegion("lispExcludedFormWithDescription", "#+#:[^ ]\\+ ") call s:createLispExcludedSyntaxRegion("lispExcludedQuotedFormWithDescription", "#+#:[^ ]\\+ '") call s:createLispExcludedSyntaxRegion("lispExcludedQuasiQuotedFormWithDescription", "#+#:[^ ]\\+ `") call s:createLispExcludedSyntaxRegion("lispExcludedSharpsignedDotFormWithDescription", "#+#:[^ ]\\+ #\.") " }}} " }}} PS. Yes, I created a script function, `createLispExcludedSyntaxRegion`, to make the process of creating new syntax regions a tiny bit less hairy. # 2021-11-08 Advent of Code: [2018/04]( December 1st is right around the corner, and we better get in "advent-of-code"-shape real quick; and what better way to do that than re-working some of my oldest solutions, i.e. the ones from 2018, when I was just trying to get started with Common Lisp?! 2018, day 4 it is! For today's challenge we are tasked to sneak inside a supply closet so we can fix our suit; however, the closet is protected by a guard, so we need to figure out our way in without getting caught. Lucky us, someone has been keeping tabs on the guards' shifts (their sleeping schedule, our input), and by the look of it, guards fall asleep quite often, so hopefully we can figure out the best time to get in.
This is the kind of input that we are given: [1518-11-01 00:00] Guard #10 begins shift [1518-11-01 00:05] falls asleep [1518-11-01 00:25] wakes up [1518-11-01 00:30] falls asleep [1518-11-01 00:55] wakes up [1518-11-01 23:58] Guard #99 begins shift [1518-11-02 00:40] falls asleep [1518-11-02 00:50] wakes up [1518-11-03 00:05] Guard #10 begins shift [1518-11-03 00:24] falls asleep [1518-11-03 00:29] wakes up [1518-11-04 00:02] Guard #99 begins shift [1518-11-04 00:36] falls asleep [1518-11-04 00:46] wakes up [1518-11-05 00:03] Guard #99 begins shift [1518-11-05 00:45] falls asleep [1518-11-05 00:55] wakes up We are going to parse all this into an ALIST having the guard ID as index; the value would be another ALIST having the minute as index (note: for each day, we are told to analyze the `00:00`-`00:59` time range only); the value, would be the list of days in which the specific guard fell asleep for the specific minute. The example above would be parsed into the following: ((99 (54 "1518-11-05") (53 "1518-11-05") (52 "1518-11-05") (51 "1518-11-05") (50 "1518-11-05") (39 "1518-11-04") (38 "1518-11-04") (37 "1518-11-04") (36 "1518-11-04") (49 "1518-11-05" "1518-11-02") (48 "1518-11-05" "1518-11-02") (47 "1518-11-05" "1518-11-02") (46 "1518-11-05" "1518-11-02") (45 "1518-11-05" "1518-11-04" "1518-11-02") (44 "1518-11-04" "1518-11-02") (43 "1518-11-04" "1518-11-02") (42 "1518-11-04" "1518-11-02") (41 "1518-11-04" "1518-11-02") (40 "1518-11-04" "1518-11-02")) (10 (28 "1518-11-03") (27 "1518-11-03") (26 "1518-11-03") (25 "1518-11-03") (54 "1518-11-01") (53 "1518-11-01") (52 "1518-11-01") (51 "1518-11-01") (50 "1518-11-01") (49 "1518-11-01") (48 "1518-11-01") (47 "1518-11-01") (46 "1518-11-01") (45 "1518-11-01") (44 "1518-11-01") (43 "1518-11-01") (42 "1518-11-01") (41 "1518-11-01") (40 "1518-11-01") (39 "1518-11-01") (38 "1518-11-01") (37 "1518-11-01") (36 "1518-11-01") (35 "1518-11-01") (34 "1518-11-01") (33 "1518-11-01") (32 "1518-11-01") (31 "1518-11-01") (30 
"1518-11-01") (24 "1518-11-03" "1518-11-01") (23 "1518-11-01") (22 "1518-11-01") (21 "1518-11-01") (20 "1518-11-01") (19 "1518-11-01") (18 "1518-11-01") (17 "1518-11-01") (16 "1518-11-01") (15 "1518-11-01") (14 "1518-11-01") (13 "1518-11-01") (12 "1518-11-01") (11 "1518-11-01") (10 "1518-11-01") (9 "1518-11-01") (8 "1518-11-01") (7 "1518-11-01") (6 "1518-11-01") (5 "1518-11-01"))) Doing the actual parsing turned out way hairier than I originally anticipated, but whatever; anyways, high level: - Sort entries lexicographically (yes, the input is out of order) - Recursively - Parse guard ID -- PARSE - Parse "falls-asleep" time and day -- FILL-IN-SCHEDULE - Parse "wakes-up" time and day -- FILL-IN-SCHEDULE - For each minute in the time range - Update the schedule accordingly -- ADD-ASLEEP-TIME-DAY (defun parse-schedule (data) (let (schedule) (labels ((parse (remaining) (let ((guard (parse-schedule-guard (pop remaining)))) (when guard (fill-in-schedule guard remaining)))) (fill-in-schedule (guard remaining) (if (not (parse-schedule-timeday (first remaining))) (parse remaining) (let ((falls-asleep (parse-schedule-timeday (pop remaining))) (wakes-up (parse-schedule-timeday (pop remaining)))) (loop :with day = (cdr falls-asleep) :for time :from (car falls-asleep) :below (car wakes-up) :do (add-asleep-time-day guard time day)) (fill-in-schedule guard remaining)))) (add-asleep-time-day (guard time day) (let ((entry (assoc guard schedule))) (unless entry (setf entry (list guard)) (push entry schedule)) (let ((st-entry (assoc time (sleeptable entry)))) (unless st-entry (setf st-entry (list time)) (push st-entry (sleeptable entry))) (push day (days st-entry)))))) (parse (sort (copy-seq data) #'string<))) schedule)) (defun parse-schedule-guard (string) (cl-ppcre:register-groups-bind ((#'parse-integer guard)) ("Guard #(\\d+) begins shift" string) guard)) (defun parse-schedule-timeday (string) (cl-ppcre:register-groups-bind (date (#'parse-integer time)) ("(\\d{4}-\\d{2}-\\d{2}) 
00:(\\d\\d)] (falls asleep|wakes up)" string) (cons time date))) The rest is a bunch of selectors for writing / pulling data into / out of the schedule: (defun guard (schedule-entry) (car schedule-entry)) (defun sleeptable (schedule-entry) (cdr schedule-entry)) (defun (setf sleeptable) (value schedule-entry) (setf (cdr schedule-entry) value)) (defun minute (sleeptable-entry) (car sleeptable-entry)) (defun days (sleeptable-entry) (cdr sleeptable-entry)) (defun (setf days) (value sleeptable-entry) (setf (cdr sleeptable-entry) value)) Now, let's get to the juicy part: > Strategy 1: Find the guard that has the most minutes asleep. What minute does that guard spend asleep the most? We find the schedule entry with the highest count of minutes asleep first; then, in that entry, we look for the _minute_ entry in the sleep table with the highest count of days; finally do the multiplication: (defun part1 (schedule) (let ((entry (find-max schedule :key #'minutes-asleep))) (* (guard entry) (sleepiest-minute entry)))) (defun minutes-asleep (entry) (reduce #'+ (sleeptable entry) :key #'(lambda (e) (length (cdr e))))) (defun sleepiest-minute (entry) (multiple-value-bind (st num-days-asleep) (find-max (sleeptable entry) :key #'(lambda (st) (length (days st)))) (values (minute st) num-days-asleep))) For part 2 instead: > Strategy 2: Of all guards, which guard is most frequently asleep on the same minute? 
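(By the way: FIND-MAX comes from my personal utilities, and its definition is not shown here; a minimal version compatible with how it is used in these solutions -- return the winning item, plus the maximized key value as a second value -- could look something like this:)

```lisp
;; Minimal FIND-MAX sketch: return the item of LIST maximizing KEY,
;; plus the maximized KEY value as a second return value
(defun find-max (list &key (key #'identity))
  (let (best best-value)
    (dolist (item list (values best best-value))
      (let ((value (funcall key item)))
        (when (or (not best-value) (> value best-value))
          (setf best item best-value value))))))
```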
We find the entry with the highest number of days associated with any given minute (note: we don't want the minute but the number of days the guard fell asleep for that given minute, that's why we are using NTH-VALUE here); then we call SLEEPIEST-MINUTE on that; we do the multiplication last: (defun part2 (schedule) (let ((entry (find-max schedule :key #'(lambda (entry) (nth-value 1 (sleepiest-minute entry)))))) (* (guard entry) (sleepiest-minute entry)))) Final plumbing: (define-solution (2018 4) (schedule parse-schedule) (values (part1 schedule) (part2 schedule))) And that's it: > (time (test-run)) TEST-2018/04.. Success: 1 test, 2 checks. Evaluation took: 0.005 seconds of real time 0.005837 seconds of total run time (0.004618 user, 0.001219 system) 120.00% CPU 13,727,746 processor cycles 720,224 bytes consed # 2021-11-09 Advent of Code: [2018/06]( Today, we are given a bunch of coordinates (i.e. X,Y locations), representing what could be either dangerous or safe places to land on -- and yes, we are falling. This is a sample input: 1, 1 1, 6 8, 3 3, 4 5, 5 8, 9 We are going to be parsing this into _tagged_ coordinates, i.e.
coordinates with a tag attached to them (more on this later); as usual, we will be representing coordinates as COMPLEX numbers: (defvar *tag* 0) (defun parse-points (data) (let ((*tag* 0)) (mapcar #'parse-point data))) (defun parse-point (string) (cl-ppcre:register-groups-bind ((#'parse-integer col row)) ("(\\d+), (\\d+)" string) (make-point (incf *tag*) row col))) (defun make-point (tag row col) (cons tag (make-coord row col))) (defun tag (coord) (car coord)) (defun (setf tag) (value coord) (setf (car coord) value)) (defun coord (point) (cdr point)) (defun make-coord (row col) (complex col (- row))) (defun row (coord) (- (imagpart coord))) (defun col (coord) (realpart coord)) For part 1, we are going to assume these to be _dangerous_ coordinates, and we are going to try and find which one is the furthest away (in terms of Manhattan distance) from all of the others. How? > Using only the Manhattan distance, determine the area around each coordinate by counting the number of integer X,Y locations that are closest to that coordinate (and aren't tied in distance to any other coordinate). > > Your goal is to find the size of the largest area that isn't infinite. Here is how we are going to solve this: - We find the bounding box, i.e.
the smallest rectangle containing all the coordinates of the input -- see BOUNDING-BOX - For each X,Y point of this bounding box - We find the input coordinate which is _closest_ to that, being careful to ignore any point which is equally distant to 2 or more coordinates -- see CLOSEST-POINT - Then, if we are _not_ on the border (see ON-THE-BORDER-P), we mark the location with the tag of the closest input coordinate (we are not actually marking it, but rather counting, but you get the idea) - Finally, we find the input coordinate with the highest number of _closest_ locations In code: (defun part1 (points &aux (bbox (bounding-box points))) (destructuring-bind (row-min col-min row-max col-max) bbox (let ((sizes (make-hash-table))) (loop :for row :from row-min :to row-max :do (loop :for col :from col-min :to col-max :for test := (make-coord row col) :for closest := (closest-point test points) :do (if (on-the-border-p test bbox) (setf (gethash (tag closest) sizes) most-negative-fixnum) (incf (gethash (tag closest) sizes 0))))) (loop :for size :being :the :hash-values :of sizes :using (hash-key tag) :when tag :maximize size)))) (defun bounding-box (points) (loop :for p :in points :for row := (row (coord p)) :for col := (col (coord p)) :minimizing row :into row-min :maximizing row :into row-max :minimizing col :into col-min :maximizing col :into col-max :finally (return (list row-min col-min row-max col-max)))) (defun closest-point (test points) (let (closest closest-distance) (dolist (p points) (let ((distance (manhattan-distance test (coord p)))) (when (or (not closest-distance) (<= distance closest-distance)) (if (or (not closest-distance) (< distance closest-distance)) (setf closest (list p) closest-distance distance) (push p closest))))) (when (= (length closest) 1) (first closest)))) (defun on-the-border-p (point bbox) (destructuring-bind (row-min col-min row-max col-max) bbox (or (= (row point) row-min) (= (row point) row-max) (= (col point) col-min) (= (col
point) col-max)))) For part 2, we are going to assume these input locations to be _safe_ to land on instead, so we are going to try and find the region near as many coordinates as possible: > For example, suppose you want the sum of the Manhattan distance to all of the coordinates to be less than 32. For each location, add up the distances to all of the given coordinates; if the total of those distances is less than 32, that location is within the desired region. > [...] > What is the size of the region containing all locations which have a total distance to all given coordinates of less than 10000? - For each X,Y location inside the same bounding box as part 1 - Sum all the distances with respect to all the input coordinates -- see TOTAL-DISTANCE - Increment a counter if this sum is less than 10000 (defun part2 (points &aux (bbox (bounding-box points))) (destructuring-bind (row-min col-min row-max col-max) bbox (loop :for row :from row-min :to row-max :summing (loop :for col :from col-min :to col-max :for test := (make-coord row col) :for total-distance := (total-distance test points) :counting (< total-distance 10000))))) (defun total-distance (test points) (loop :for p :in points :summing (manhattan-distance test (coord p)))) Final plumbing: (define-solution (2018 6) (points parse-points) (values (part1 points) (part2 points))) (define-test (2018 6) (3894 39398)) And that's it: > (time (test-run)) TEST-2018/06.. Success: 1 test, 2 checks.
Evaluation took: 0.661 seconds of real time 0.618545 seconds of total run time (0.585166 user, 0.033379 system) 93.65% CPU 1,521,383,260 processor cycles 14,120,976 bytes consed # 2021-11-10 Advent of Code: [2018/20]( We find ourselves in an area made up entirely of rooms and doors; the thing is, the Elves don't have a map for this area; instead, they hand us a list of instructions we can use to build the map ourselves: - `^` is the "start-of-instructions" marker - `$` is the "end-of-instructions" marker - `N` means you can go north (i.e. a door exists between the _current_ room, and the one _north_ to it) - `E` means you can go east - `S` means you can go south - `W` means you can go west - `(` marks a _fork_, meaning that multiple sub-paths can be started from the current room - `|` marks the end of a sub-path - `)` marks the end of the fork A couple of examples will hopefully help in understanding this; this simple list of instructions, `^WNE$`, can be used to build up the following map: ##### #.|.# #-### #.|X# ##### This longer instructions list instead, `^ENWWW(NEEE|SSE(EE|N))$`, can be used to create the following map: ######### #.|.|.|.# #-####### #.|.|.|.# #-#####-# #.#.#X|.# #-#-##### #.|.|.|.# ######### Before figuring out how to create the map from the instructions, it's probably best to take a look at what we are asked to do for part 1, so here it goes: > What is the largest number of doors you would be required to pass through to reach a room? That is, find the room for which the shortest path from your starting location to that room would require passing through the most doors; what is the fewest doors you can pass through to reach it? OK, it seems like we are going to have to actually navigate through this map. Let's parse the list of instructions into a HASH-SET of locations (i.e.
COMPLEX numbers), referring to where the doors are; to do so, we are going to use a _stack_ of current positions, so we can keep track of every fork we step foot on; in particular: - We skip the `^` - If the current char is `$`, we stop and return the set of doors we went through - If the current char is `(`, a new fork begins, so push the current position into the stack - If the current char is `|`, a sub-path just ended, so let's go back to the room of the most recent fork - If the current char is `)`, we visited all the sub-paths of a fork, so pop an item from the stack, and start from there - Otherwise, i.e. one of `NESW`, move into the adjacent room and record the door we used to get there (defun parse-doors (data &aux (string (first data)) (doors (make-hset nil))) (loop :with pos = 0 :with pos-stack = (list pos) :for ch :across string :do (ecase ch ((#\^) nil) ((#\$) (return doors)) ((#\() (push pos pos-stack)) ((#\)) (setf pos (pop pos-stack))) ((#\|) (setf pos (first pos-stack))) ((#\N) (hset-add (+ pos (complex 0 1)) doors) (setf pos (+ pos (complex 0 2)))) ((#\E) (hset-add (+ pos (complex 1 0)) doors) (setf pos (+ pos (complex 2 0)))) ((#\S) (hset-add (- pos (complex 0 1)) doors) (setf pos (- pos (complex 0 2)))) ((#\W) (hset-add (- pos (complex 1 0)) doors) (setf pos (- pos (complex 2 0)))))) doors) All that is left to do now is to explore the complete map, figure out which room is the furthest away from us, in terms of number of doors to go through, and then return that number of doors; we can do this with an unbounded BFS: - start point: `0,0` (or `(complex 0 0)`) - no goal state / function -- it will stop when there are no more states to process - from a given location, you can move into the adjacent room if a door exists in between -- see NEIGHBORS (defparameter *deltas* '(#C(0 1) #C(1 0) #C(0 -1) #C(-1 0))) (defun neighbors (doors pos) (loop :for d :in *deltas* :for door-pos = (+ pos d) :when (hset-contains-p door-pos doors) :collect (+ pos d d))) (let
((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors))))) (reduce #'max (hash-table-values cost-so-far))) For part 2, instead: > Okay, so the facility is big. > > How many rooms have a shortest path from your current location that pass through at least 1000 doors? Easy: (let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors))))) (count-if (partial-1 #'>= _ 1000) (hash-table-values cost-so-far))) Let's wire everything together: (define-solution (2018 20) (doors parse-doors) (let ((cost-so-far (nth-value 3 (bfs 0 :neighbors (partial-1 #'neighbors doors))))) (values (reduce #'max (hash-table-values cost-so-far)) (count-if (partial-1 #'>= _ 1000) (hash-table-values cost-so-far))))) (define-test (2018 20) (3835 8520)) And that's it: > (time (test-run)) TEST-2018/20.. Success: 1 test, 2 checks. Evaluation took: 0.029 seconds of real time 0.028120 seconds of total run time (0.018474 user, 0.009646 system) 96.55% CPU 67,925,979 processor cycles 6,870,096 bytes consed # 2021-11-12 To get started with Sketch (the Common Lisp graphic framework, inspired by the Processing language), you will have to install SDL2 and some of its add-ons. On a Mac, the most convenient way to do this is by using Homebrew: brew install sdl2 brew install sdl2_image brew install sdl2_ttf With that done, you should now be able to load Sketch's examples just fine: (ql:quickload :sketch) (ql:quickload :sketch-examples) (make-instance 'sketch-examples:brownian) Note: as mentioned on this issue, [OSX Can't run hello world example #29](, when trying this out on SBCL, you will have to wrap the MAKE-INSTANCE call inside SDL2:MAKE-THIS-THREAD-MAIN, or most likely you will be presented with the following: debugger invoked on a SDL2::SDL-ERROR in thread #: SDL Error: NSWindow drag regions should only be invalidated on the Main Thread! 
The current thread is not at the foreground, SB-THREAD:RELEASE-FOREGROUND has to be called in # {1007C501B3}> for this thread to enter the debugger. debugger invoked on a SB-SYS:INTERACTIVE-INTERRUPT @7FFF5DB4C58E in thread #: Interactive interrupt at #x7FFF5DB4C58E. restarts (invokable by number or by possibly-abbreviated name): 0: [CONTINUE] Return from SB-UNIX:SIGINT. 1: [ABORT ] Exit debugger, returning to top level. Alternatively, you can always switch to [CCL]( (another Common Lisp implementation) when playing with Sketch, as that one does not seem to suffer from the same thread-related issue that SBCL does. # 2021-11-18 Advent of Code: [2018/23]( This problem's input is a list of _nanobots_, each representing a 3-d location, and a radius (more on this later): pos=<0,0,0>, r=4 pos=<1,0,0>, r=1 pos=<4,0,0>, r=3 pos=<0,2,0>, r=1 pos=<0,5,0>, r=3 pos=<0,0,3>, r=1 pos=<1,1,1>, r=1 pos=<1,1,2>, r=1 pos=<1,3,1>, r=1 We are going to put this into a structure with two slots: - The 3-d location, stored as a list - The communication radius (defstruct (nanobot (:type list) :conc-name) pos r) (defun x (pos) (first pos)) (defun y (pos) (second pos)) (defun z (pos) (third pos)) (defun parse-nanobots (data) (mapcar #'parse-nanobot data)) (defun parse-nanobot (string) (cl-ppcre:register-groups-bind ((#'parse-integer x y z r)) ("pos=<(-?\\d+),(-?\\d+),(-?\\d+)>, r=(\\d+)" string) (make-nanobot :pos (list x y z) :r r))) Now, let's take a look at the ask for part 1: > Find the nanobot with the largest signal radius. How many nanobots are in range of its signals? OK, easy enough: (defun part1 (bots) (let ((strongest (find-max bots :key #'r))) (count-if (partial-1 #'nanobot-contains-p strongest (pos _)) bots))) (defun nanobot-contains-p (nb pos) (<= (manhattan-distance (pos nb) pos) (r nb))) Now, things are going to get messy with part 2: > Find the coordinates that are in range of the largest number of nanobots.
What is the shortest manhattan distance between any of those points and 0,0,0? OK, let's figure out what the bounding box of all the bots is, and then look for the point with the highest number of bots in range; except, the search space might be bigger than we thought: > (let ((bots (parse-nanobots (uiop:read-file-lines "src/2018/day23.txt")))) (loop :for bot :in bots :for pos = (pos bot) :minimize (x pos) :into x-min :maximize (x pos) :into x-max :minimize (y pos) :into y-min :maximize (y pos) :into y-max :minimize (z pos) :into z-min :maximize (z pos) :into z-max :finally (return (list x-min x-max y-min y-max z-min z-max)))) (-151734092 226831435 -103272011 144820743 -114031252 186064394) That, and the fact that there will be a lot of overlapping (not only are bots well spread out into the 3d space, but they also have huge communication radii): > (let ((bots (parse-nanobots (uiop:read-file-lines "src/2018/day23.txt")))) (loop :for bot :in bots :minimize (r bot) :into r-min :maximize (r bot) :into r-max :finally (return (list r-min r-max)))) (49770806 99277279) We are going to need something cleverer than this, but what exactly? Well, I originally solved this by [random walking the search space and drilling into the areas with the highest number of bots in range](; however, that was not always guaranteed to work (yes, I got lucky for my second star), and after taking a good look at [r/adventofcode]( and even reading the AoC creator [commenting]( on what he thought people should be implementing to solve this, I decided to give the "Octree scan" solution a try.
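(A quick note before diving in: MANHATTAN-DISTANCE, used by NANOBOT-CONTAINS-P above and again further down, is one of my utility functions and is not shown in this entry; a minimal sketch, assuming positions are equal-length lists of integers:)

```lisp
;; Assumption: not part of the original entry -- a minimal sketch of the
;; MANHATTAN-DISTANCE utility, treating positions as equal-length lists
;; of integers and summing the absolute per-axis differences.
(defun manhattan-distance (a b)
  (reduce #'+ (mapcar (lambda (x y) (abs (- x y))) a b)))
```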
The basic idea is simple, and it relies on the recursive 3-d space subdivision logic that [octrees]( implement when efficiently organizing 3-d objects in the space; what follows is an overview of the algorithm that we are going to implement to solve this: - We start with the biggest bounding box possible, and we count the number of bots in range (all) - Then we split this box in 8 equal parts, and count the number of bots in range of each - We keep on doing this until the cubes become 3-d points - We are going to need a way to prune non-optimal branches as soon as possible, or chances are we are going to get stuck in this loop for quite some time (more on this later) Let's start off by defining a BOX structure and create one instance covering all the bots in our input: (defstruct (box (:type list) (:constructor make-box%)) range-x range-y range-z) (defun make-box (x-min x-max y-min y-max z-min z-max) (let* ((box (make-box% :range-x (list x-min x-max) :range-y (list y-min y-max) :range-z (list z-min z-max))) (volume (box-volume box))) (assert (or (= volume 0) (= (log volume 2) (floor (log volume 2))))) box)) (defun bots-box (bots) (loop :for bot :in bots :for pos = (pos bot) :minimize (x pos) :into x-min :maximize (x pos) :into x-max :minimize (y pos) :into y-min :maximize (y pos) :into y-max :minimize (z pos) :into z-min :maximize (z pos) :into z-max :finally (return (let* ((largest (max (- x-max x-min) (- y-max y-min) (- z-max z-min))) (side (make-pow2 largest))) (make-box x-min (+ x-min side) y-min (+ y-min side) z-min (+ z-min side)))))) (defun make-pow2 (number) (expt 2 (ceiling (log number 2)))) (Note: we decide to make sure the box is actually a cube, and that each side is a power of 2, so we know the splitting will always be _even_) Now, while walking through the search space, we would like to pick our next cube to analyze based on: - Number of nanobots in range of the box: the higher, the better, as that's what we want to maximize, eventually - Volume
of the current box: the smaller, the better, as we want to converge to a solution (optimal or not) as soon as possible so we can use that to prune branching - Distance from origin: again, the smaller, the better We are going to create a new structure for this, STATE, and customize its constructor as follows: (defstruct (state (:type list) (:constructor make-state%) :conc-name) box num-nanobots volume distance) (defun make-state (bots box) (let ((x (caar box)) (y (caadr box)) (z (caaddr box))) (make-state% :box box :num-nanobots (count-in-range bots box) :volume (box-volume box) :distance (manhattan-distance (list x y z) (list 0 0 0))))) There are a lot of helper functions used here, so let's take a look at their implementations; COUNT-IN-RANGE first: (defun count-in-range (nanobots box) (count-if (partial-1 #'nanobot-overlaps-p _ box) nanobots)) It relies on another helper function, NANOBOT-OVERLAPS-P, which simply calculates how far the nanobot is from the box, and returns T if that distance (i.e. the Manhattan distance) is less than or equal to the nanobot's radius: (defun nanobot-overlaps-p (nb box) (loop :with distance = 0 :for (v-min v-max) :in box :for v :in (pos nb) :if (< v v-min) :do (incf distance (- v-min v)) :else if (> v v-max) :do (incf distance (- v v-max)) :never (> distance (r nb)))) Lastly, calculating one box's volume should be pretty straightforward: (defun box-volume (box) (destructuring-bind ((x-min x-max) (y-min y-max) (z-min z-max)) box (* (- x-max x-min) (- y-max y-min) (- z-max z-min)))) Let's now take a look at the meat of this whole thing: - We create the bounding box - Initialize a sorted queue with it - We start popping items from it until it's empty - Now, until we have found _one_ solution (it could be the optimal one, or a sub-optimal one), or so long as the current box has a chance of containing a better solution (i.e.
by having more nanobots than our current best solution) - We split the box in 8, add those cubes to the priority queue, rinse and repeat - Now, if we cannot split the cube further (i.e. it's a 3-d point), then we check for the number of nanobots covering this location, and if better than our best solution so far, we update that and carry on (defun part2 (bots) (let ((box (bots-box bots)) (best)) (loop :with queue = (make-hq :predicate #'state-better-p) :initially (hq-insert queue (make-state bots box)) :until (hq-empty-p queue) :for state = (hq-pop queue) :do (when (or (not best) (>= (num-nanobots state) (num-nanobots best))) (if (box-point-p (box state)) (if (or (not best) (state-better-p state best)) (setf best state)) (dolist (subbox (box-subdivisions (box state))) (hq-insert queue (make-state bots subbox))))) :finally (return (distance best))))) Easy uh?! Yeah... Let's take a look at those functions that we have never seen before; let's begin with STATE-BETTER-P, and as a reminder, this is the logic that we are going to try to implement: - Number of nanobots in range of the box: the higher, the better, as that's what we want to maximize, eventually - Volume of the current box: the smaller, the better, as we want to converge to a solution (optimal or not) as soon as possible so we can use that to prune branching - Distance from origin: again, the smaller, the better (defun state-better-p (state other) (destructuring-bind (num-nanobots volume distance) (rest state) (destructuring-bind (o-num-nanobots o-volume o-distance) (rest other) (or (> num-nanobots o-num-nanobots) (and (= num-nanobots o-num-nanobots) (or (< volume o-volume) (and (= volume o-volume) (< distance o-distance)))))))) BOX-POINT-P instead, checks if the current box is actually a point (i.e. 
each range is `0` units wide): (defun box-point-p (box) (loop :for (min max) :in box :always (= min max))) BOX-SUBDIVISIONS finally, takes care of taking a box / cube, and generating all its 8 subdivisions; one thing to note here: we are going to check for the volume of the box first, and if it's `1`, it means we cannot split it any further, and we will go ahead and return 8 new boxes, each covering for one vertex of the cube; otherwise, we will find the mid point of each range, and create sub-cubes accordingly: (defun box-subdivisions (box) (unless (box-point-p box) (if (= (box-volume box) 1) (loop :for (x y z) :in (box-vertices box) :collect (make-box x x y y z z)) (destructuring-bind ((x-min x-max) (y-min y-max) (z-min z-max)) box (let ((x-mid (+ x-min (floor (- x-max x-min) 2))) (y-mid (+ y-min (floor (- y-max y-min) 2))) (z-mid (+ z-min (floor (- z-max z-min) 2)))) (loop :for (x-min x-max) :on (list x-min x-mid x-max) :when x-max :append (loop :for (y-min y-max) :on (list y-min y-mid y-max) :when y-max :append (loop :for (z-min z-max) :on (list z-min z-mid z-max) :when z-max :collect (make-box x-min x-max y-min y-max z-min z-max))))))))) (defun box-vertices (box) (destructuring-bind (x-range y-range z-range) box (loop :for x :in x-range :append (loop :for y :in y-range :append (loop :for z :in z-range :collect (list x y z)))))) And that should be pretty much it. Final plumbing: (define-solution (2018 23) (bots parse-nanobots) (values (part1 bots) (part2 bots))) (define-test (2018 23) (433 107272899)) Et voila'! > (time (test-run)) TEST-2018/23.. Success: 1 test, 2 checks. Evaluation took: 0.132 seconds of real time 0.115931 seconds of total run time (0.105885 user, 0.010046 system) 87.88% CPU 305,851,279 processor cycles 1,768,384 bytes consed # 2021-11-19 Advent of Code: [2018/18]( Game of life kind of problem today: > On the outskirts of the North Pole base construction project, many Elves are collecting lumber. 
> > The lumber collection area is 50 acres by 50 acres; each acre can be either open ground (.), trees (|), or a lumberyard (#). You take a scan of the area (your puzzle input). > > Strange magic is at work here: each minute, the landscape looks entirely different. In exactly one minute, an open acre can fill with trees, a wooded acre can be converted to a lumberyard, or a lumberyard can be cleared to open ground (the lumber having been sent to other projects). This is what our input is going to look like: .#.#...|#. .....#|##| .|..|...#. ..|#.....# #.#|||#|#| ...#.||... .|....|... ||...#|.#| |.||||..|. ...#.|..|. The only difference is that our input is going to be bigger, i.e. 50x50 instead of 10x10; let's parse this into a 2D array: (defun parse-area (data) (let ((rows (length data)) (cols (length (first data)))) (make-array (list rows cols) :initial-contents data))) Before taking a look at the rules governing how the area updates, minute after minute, let's see what exactly we are asked to do: > After 10 minutes, there are 37 wooded acres and 31 lumberyards. Multiplying the number of wooded > acres by the number of lumberyards gives the total resource value after ten minutes: 37 * 31 = > 1147. > > What will the total resource value of the lumber collection area be after 10 minutes?
So we need to: - Figure out what the area is going to look like after `10` minutes - Count the number of acres with trees - Count the number of acres with lumber - Multiply the two together (defun part1 (area iterations) (dotimes (i iterations) (setf area (area-next area))) (resource-value area)) To see what a given area is going to look like after a minute, we will have to: - For each row - For each column in the current row - Inspect the surrounding 8 acres and count which ones are open, which ones have trees, and which ones have lumber - Update the state of the current acre accordingly Note: all the updates happen at the same time, so we need to make sure the original area is not updated as we process it. As for the update rules instead: > - An open acre will become filled with trees if three or more adjacent acres contained trees. Otherwise, nothing happens. > - An acre filled with trees will become a lumberyard if three or more adjacent acres were lumberyards. Otherwise, nothing happens. > - An acre containing a lumberyard will remain a lumberyard if it was adjacent to at least one other lumberyard and at least one acre containing trees. Otherwise, it becomes open. All of the above, i.e. rules plus algorithm, nicely translates into the following, long-winded, function: (defun area-next (area) (let* ((rows (array-dimension area 0)) (cols (array-dimension area 1)) (next (make-array (list rows cols)))) (loop :for r :below rows :do (loop for c :below cols :do (loop :with open = 0 :and trees = 0 :and lumby = 0 :for rr :from (max (1- r) 0) :upto (min (1+ r) (1- rows)) :do (loop :for cc :from (max (1- c) 0) :upto (min (1+ c) (1- cols)) :unless (and (= rr r) (= cc c)) :do (ecase (aref area rr cc) (#\. (incf open)) (#\| (incf trees)) (#\# (incf lumby)))) :finally (setf (aref next r c) (ecase (aref area r c) (#\.
(if (>= trees 3) #\| #\.)) (#\| (if (>= lumby 3) #\# #\|)) (#\# (if (and (>= lumby 1) (>= trees 1)) #\# #\.))))))) next)) At this point, all that is left to do is to calculate the resource value of a given area: (defun resource-value (area) (loop :for i :below (array-total-size area) :for ch = (row-major-aref area i) :count (eq ch #\|) :into trees :count (eq ch #\#) :into lumby :finally (return (* trees lumby)))) Now, off to part 2: > What will the total resource value of the lumber collection area be after 1000000000 minutes? _gulp_ Well, it turns out this _game_ is cyclic, i.e. after some time a given area configuration will keep on re-appearing over and over again; this means all we have to do is: - Figure out when it starts cycling - How big the cycle is - How many iterations are left to be done if we remove the cycles Easy peasy, especially if you already bumped into a similar problem before, and had an implementation of [Floyd's cycle detection]( algorithm handy: (defun part2 (area) (destructuring-bind (cycles-at cycle-size area-new) (floyd #'area-next area :test 'equalp) (let ((remaining (nth-value 1 (floor (- 1000000000 cycles-at) cycle-size)))) (part1 area-new remaining)))) Final plumbing: (define-solution (2018 18) (area parse-area) (values (part1 area 10) (part2 area))) (define-test (2018 18) (549936 206304)) And that's it: > (time (test-run)) TEST-2018/18.. Success: 1 test, 2 checks. Evaluation took: 0.877 seconds of real time 0.826163 seconds of total run time (0.811340 user, 0.014823 system) 94.18% CPU 2,018,921,962 processor cycles 47,953,424 bytes consed # 2021-11-22 Advent of Code: [2018/12]( Today we are given a bunch of pots, each containing (or not) plants; and some notes, describing how these pots change over time, based on the state of the adjacent pots: > The pots are numbered, with 0 in front of you. To the left, the pots are numbered -1, -2, -3, and so on; to the right, 1, 2, 3....
Your puzzle input contains a list of pots from 0 to the right and whether they do (#) or do not (.) currently contain a plant, the initial state. (No other pots currently contain plants.) For example, an initial state of #..##.... indicates that pots 0, 3, and 4 currently contain plants. > > Your puzzle input also contains some notes you find on a nearby table: someone has been trying to figure out how these plants spread to nearby pots. Based on the notes, for each generation of plants, a given pot has or does not have a plant based on whether that pot (and the two pots on either side of it) had a plant in the last generation. These are written as LLCRR => N, where L are pots to the left, C is the current pot being considered, R are the pots to the right, and N is whether the current pot will have a plant in the next generation. For example: > > - A note like ..#.. => . means that a pot that contains a plant but with no plants within two pots of it will not have a plant in it during the next generation. > - A note like ##.## => . means that an empty pot with two plants on each side of it will remain empty in the next generation. > - A note like .##.# => # means that a pot has a plant in a given generation if, in the previous generation, there were plants in that pot, the one immediately to the left, and the one two pots to the right, but not in the ones immediately to the right and two to the left. Here is what our input is going to look like: initial state: #..#.#..##......###...### ...## => # ..#.. => # .#... => # .#.#. => # .#.## => # .##.. => # .#### => # #.#.# => # #.### => # ##.#. => # ##.## => # ###.. => # ###.# => # ####. => # Anyways, as usual, let's get this input parsed; we are going to parse this into a list of 2 elements: in the first one, we are going to store the initial state of the pots; in the second, we are going to store all the notes describing how pots change over time.
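(A note on data structures: the HASH-SET utilities used in these entries -- MAKE-HSET, HSET-ADD, HSET-CONTAINS-P, HSET-VALUES -- come from my utilities library and are not shown in the log; given that elsewhere I iterate the `:hash-keys` of an hset directly, they are presumably thin wrappers around a plain hash table, along these lines:)

```lisp
;; Assumption: a sketch of the HASH-SET utilities, which are not shown
;; in the log -- a set represented as a hash table whose keys are the
;; members, so it can also be iterated with :hash-keys directly.
(defun make-hset (values &key (test 'eql))
  (let ((set (make-hash-table :test test)))
    (dolist (v values set)
      (setf (gethash v set) t))))

(defun hset-add (value set)
  (setf (gethash value set) t))

(defun hset-contains-p (value set)
  (nth-value 0 (gethash value set)))

(defun hset-values (set)
  (loop :for v :being :the :hash-keys :of set :collect v))
```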
I am going to be using a HASH-SET for both the initial state of the pots and the set of rules; for the initial state, I am going to simply keep track of the positions of the pots that contain a plant; for the list of notes instead, I am only going to be storing the ones causing a plant to grow, or to stay unchanged (i.e. I am going to skip all the notes with a `.` on the right-hand side of the note... more on this later): (defun parse-input (data) (list (parse-pots (first data)) (parse-notes (cddr data)))) (defun parse-pots (string) (let ((pots (make-hset nil))) (loop :for i :from 0 :for ch :across (subseq string 15) :when (eq ch #\#) :do (hset-add i pots)) pots)) (defun parse-notes (data) (let ((notes (make-hset nil :test 'equal))) (loop :for string :in data :for from = (subseq string 0 5) :for to = (char string 9) :when (eq to #\#) :do (hset-add from notes)) notes)) (defun pots (input) (first input)) (defun notes (input) (second input)) Now, for part 1, we are asked to evolve our array of pots over time, and see how things look after 20 generations: > After one generation, only seven plants remain. The one in pot 0 matched the rule looking for ..#.., the one in pot 4 matched the rule looking for .#.#., pot 9 matched .##.., and so on. > > In this example, after 20 generations, the pots shown as # contain plants, the furthest left of which is pot -2, and the furthest right of which is pot 34. Adding up all the numbers of plant-containing pots after the 20th generation produces 325. > > After 20 generations, what is the sum of the numbers of all pots which contain a plant? As with most of the Game of Life problems, we are going to _evolve_ our state a fixed number of times (i.e.
`iterations`), and then inspect the final state to come up with the challenge's answer: - We always start anew inside POTS-NEXT, as if all the pots were empty; this is the reason why we don't care about the notes resulting in the central pot becoming empty, but only the ones resulting in pots with a plant in it - As we keep track of plants using a HASH-SET, and not an array, we need to find the min / max positions first, and then iterate over all the values in between - The `(- ... 4)` and `(+ ... 4)` is to make sure we catch changes happening on the _edge_ - SURROUNDING-POTS returns the state of the pots around a certain given position (this mostly makes up for the fact that we are using a HASH-SET instead of an array) - SUM-PLANT-POSITIONS simply sums the positions of the pots with a plant (i.e. the answer to our problem) (defun part1 (input iterations) (destructuring-bind (pots notes) input (dotimes (i iterations) (setf pots (pots-next pots notes))) (sum-plant-positions pots))) (defun pots-next (pots notes) (let ((next (make-hash-table)) (min (find-min (hset-values pots))) (max (find-max (hset-values pots)))) (loop :for i :from (- min 4) :to (+ max 4) :for key = (surrounding-pots pots i) :when (hset-contains-p key notes) :do (hset-add i next)) next)) (defun surrounding-pots (pots i) (let ((string (make-string 5 :initial-element #\.))) (loop :for j :from (- i 2) :to (+ i 2) :for k :from 0 :when (hset-contains-p j pots) :do (setf (char string k) #\#)) string)) (defun sum-plant-positions (pots) (loop :for i :being :the :hash-keys :of pots :sum i)) Let's now take a look at part 2: > After fifty billion (50000000000) generations, what is the sum of the numbers of all pots which contain a plant? What?! There has to be a loop involved...
I fed this to my FLOYD loop detector utility function, but unfortunately that kept on spinning forever: (let ((input (parse-input (uiop:read-file-lines "src/2018/day12.txt")))) (flet ((next (pots) (pots-next pots (notes input)))) (floyd #'next (pots input)) (error "Never gonna give you up!"))) Well, as it turns out, after a few generations the array of pots assumes a certain configuration; a configuration which keeps on sliding to the right hand side, over and over again: 120: #.##...#.##.##.##.##.##.##... 121: .#.##...#.##.##.##.##.##.##.. 122: ..#.##...#.##.##.##.##.##.##. 123: ...#.##...#.##.##.##.##.##.## So how can we detect that? Well, we could change SUM-PLANT-POSITIONS to count _relative_ to the minimum position of a pot with a plant, instead of 0; and then tell FLOYD to use that to detect a loop: (defun sum-plant-positions (pots &optional (center 0)) (loop :for i :being :the :hash-keys :of pots :sum (- i center))) > (let ((input (parse-input (uiop:read-file-lines "src/2018/day12.txt")))) (flet ((next (pots) (pots-next pots (notes input))) (key (pots) (let ((min (find-min (hset-values pots)))) (loop :for i :being :the :hash-keys :of pots :sum (- i min))))) (floyd #'next (pots input) :key #'key) (list 'LOOP 'DETECTED))) (LOOP DETECTED) Once we have found our loop, all that is left to do is to figure out the number of cycles left to be done, how much SUM-PLANT-POSITIONS increases at each cycle, and then do some math to put together our solution: (defun part2 (input iterations) (flet ((next (pots) (pots-next pots (notes input))) (key (pots) (let ((min (find-min (hset-values pots)))) (sum-plant-positions pots min)))) (destructuring-bind (cycles-at cycle-size pots-new) (floyd #'next (pots input) :key #'key) (let* ((base (sum-plant-positions pots-new)) (pots-new-next (pots-next pots-new (notes input))) (per-cycle-increment (- (sum-plant-positions pots-new-next) base)) (cycles (- iterations cycles-at))) (assert (= cycle-size 1)) (+ base (* per-cycle-increment cycles))))))
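(FLOYD itself is another utility from my repo, not shown in the log; a minimal sketch of Floyd's tortoise-and-hare cycle detection, assuming it returns a list of the index at which the cycle starts, the cycle length, and the state at the start of the cycle, with `:key` and `:test` controlling how states are compared:)

```lisp
;; Assumption: a sketch of the FLOYD utility used above, implementing
;; Floyd's tortoise-and-hare cycle detection over the orbit of NEXT.
;; Returns (list cycles-at cycle-size state-at-cycle-start).
(defun floyd (next x0 &key (key #'identity) (test #'eql))
  (flet ((same-p (a b) (funcall test (funcall key a) (funcall key b))))
    (let ((tortoise (funcall next x0))
          (hare (funcall next (funcall next x0))))
      ;; Phase 1: find a repetition x_i = x_2i
      (loop :until (same-p tortoise hare)
            :do (setf tortoise (funcall next tortoise)
                      hare (funcall next (funcall next hare))))
      ;; Phase 2: find mu, the index of the first element of the cycle
      (let ((mu 0) (tortoise x0))
        (loop :until (same-p tortoise hare)
              :do (setf tortoise (funcall next tortoise)
                        hare (funcall next hare)
                        mu (1+ mu)))
        ;; Phase 3: find lam, the length of the cycle
        (let ((lam 1) (hare (funcall next tortoise)))
          (loop :until (same-p tortoise hare)
                :do (setf hare (funcall next hare)
                          lam (1+ lam)))
          (list mu lam tortoise))))))
```

With this, `(floyd #'next (pots input) :key #'key)` returns something destructurable as `(cycles-at cycle-size pots-new)`, which is how PART2 uses it.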
Final plumbing: (define-solution (2018 12) (input parse-input) (values (part1 input 20) (part2 input 50000000000))) And that's it: > (time (test-run)) TEST-2018/12.. Success: 1 test, 2 checks. Evaluation took: 0.060 seconds of real time 0.108866 seconds of total run time (0.107327 user, 0.001539 system) 181.67% CPU 139,759,679 processor cycles 33,689,536 bytes consed # 2021-11-23 Advent of Code: [2018/10]( In today's problem, we are given a sky map: a list of stars, characterized by their position and velocity: position=< 9, 1> velocity=< 0, 2> position=< 7, 0> velocity=<-1, 0> position=< 3, -2> velocity=<-1, 1> We are also told that, after some time, the stars will align to form a message; that message will be the answer to part 1. Let's start by parsing the sky map; we are going to define a custom structure, STAR, having one slot for the star position and one for its velocity; we are also going to store all these stars into a simple list: (defstruct (star (:conc-name nil)) pos vel) (defun parse-sky (data) (mapcar #'parse-star data)) (defun parse-star (string) (destructuring-bind (x y vx vy) (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "-?\\d+" string)) (make-star :pos (list x y) :vel (list vx vy)))) Now, how can we figure out when the stars are aligned? My idea is to take a look at the bounding box containing all the stars: as the velocity of the stars does not change, I would expect the size of this box to become smaller and smaller, until it starts getting bigger again. Well, the state of the sky when the bounding box is the smallest is probably the one in which all the stars are aligned; so dumping it on the standard output would hopefully show us the message!
FIND-MESSAGE accepts the sky map as input, and keeps on moving stars (see SKY-NEXT) minimizing the bounding box (I am using the area of the bounding box, SKY-AREA, as a proxy for its size); lastly, SKY-MESSAGE is responsible for analyzing the final sky map, and print `#` where the stars are: (defun find-message (input) (loop :for sky-prev = input :then sky :for sky = input :then (sky-next sky) :when (> (sky-area sky) (sky-area sky-prev)) :return (sky-message sky-prev))) (defun sky-next (sky) (mapcar #'star-move sky)) (defun star-move (star) (with-slots (pos vel) star (make-star :pos (mapcar #'+ pos vel) :vel vel))) (defun sky-area (sky) (destructuring-bind ((x-min x-max) (y-min y-max)) (sky-box sky) (* (- x-max x-min) (- y-max y-min)))) (defun sky-message (sky) (destructuring-bind ((x-min x-max) (y-min y-max)) (sky-box sky) (let ((output (loop :repeat (1+ (- y-max y-min)) :collect (make-string (1+ (- x-max x-min)) :initial-element #\Space)))) (loop :for star :in sky :for (x y) = (pos star) :for row = (nth (- y y-min) output) :do (setf (aref row (- x x-min)) #\#)) (with-output-to-string (s) (format s "~%~{~a~^~%~}" output))))) (defun sky-box (sky) (loop :for star :in sky :for (x y) = (pos star) :minimize x :into x-min :maximize x :into x-max :minimize y :into y-min :maximize y :into y-max :finally (return (list (list x-min x-max) (list y-min y-max))))) Let's give this a go: (setq input (parse-sky (uiop:read-file-lines "src/2018/day10.txt"))) > (find-message input) " ##### # # ##### ### #### # ##### ###### # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ##### ###### ##### # # # ##### ##### # # # # # # # ### # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ## # # # ##### # # # ### ### # ###### # ######" Good! Now part 2: > Impressed by your sub-hour communication capabilities, the Elves are curious: exactly how many seconds would they have needed to wait for that message to appear?
We should be able to answer this quite easily, by slightly tweaking FIND-MESSAGE: - Keep track of how much time has elapsed - And return it, as second value (the call to 1- is to offset the fact that the first iteration is a dummy one, given that `sky-prev` and `sky` are the same) (defun find-message (input) (loop :for time :from 0 :for sky-prev = input :then sky :for sky = input :then (sky-next sky) :when (> (sky-area sky) (sky-area sky-prev)) :return (values (sky-message sky-prev) (1- time)))) Final plumbing: (define-solution (2018 10) (sky parse-sky) (find-message sky)) (define-test (2018 10) (" ##### # # ##### ### #### # ##### ###### # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ##### ###### ##### # # # ##### ##### # # # # # # # ### # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ## # # # ##### # # # ### ### # ###### # ######" 10831)) And that's it: > (time (test-run)) TEST-2018/10.. Success: 1 test, 2 checks. Evaluation took: 0.440 seconds of real time 0.416292 seconds of total run time (0.406274 user, 0.010018 system) [ Run times consist of 0.029 seconds GC time, and 0.388 seconds non-GC time. ] 94.55% CPU 1,012,629,537 processor cycles 309,925,952 bytes consed --- Advent of Code: [2018/7]( Given a list of steps and requirements about which steps are blocked by which other step or steps, we are tasked to find the order in which the steps are going to be completed. Example: Step C must be finished before step A can begin. Step C must be finished before step F can begin. Step A must be finished before step B can begin. Step A must be finished before step D can begin. Step B must be finished before step E can begin. Step D must be finished before step E can begin. Step F must be finished before step E can begin. 
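(One small aside: PARSE-CHAR, used by the parsing code in this entry, is another utility not shown here; presumably it just turns a one-character match into a CHARACTER, along these lines:)

```lisp
;; Assumption: a sketch of the PARSE-CHAR utility, which is not shown in
;; the log -- it converts a one-character string (e.g. a regex group
;; matched by CL-PPCRE:REGISTER-GROUPS-BIND) into a CHARACTER.
(defun parse-char (string)
  (char string 0))
```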
Visually, these requirements look like this: -->A--->B-- / \ \ C -->D----->E \ / ---->F----- And the order in which the steps are going to be completed is: `CABDFE` (in the event of multiple steps being unblocked, we should select based on the lexicographical order). We are going to parse our input into an association list, mapping from steps to their dependencies: (defun parse-instructions (data) (let (rez) (loop :for string :in data :for (step . dep) = (parse-instruction string) :for step-entry = (assoc step rez) :for dep-entry = (assoc dep rez) :if step-entry :do (push dep (cdr step-entry)) :else :do (push (cons step (list dep)) rez) :unless dep-entry :do (push (cons dep nil) rez)) rez)) (defun parse-instruction (string) (cl-ppcre:register-groups-bind ((#'parse-char dep step)) ("Step (\\w) must be finished before step (\\w) can begin." string) (cons step dep))) Feeding the example above into PARSE-INSTRUCTIONS, should result in the following alist: ((#\E #\F #\D #\B) (#\D #\A) (#\B #\A) (#\F #\C) (#\C) (#\A #\C)) Now, to find the order in which the different steps are completed, we are going to proceed as follows: - We sort the instructions, placing entries with the fewest number of dependencies (ideally, 0) first, or lexicographically in case of ties (see SORT-INSTRUCTIONS, and INSTRUCTION<) - We pop the first item -- this is the next step to complete - We unblock any other instruction that might be depending on the step we just completed (see REMOVE-DEPENDENCY) - We do this until we run out of instructions (defun part1 (instructions &aux completed) (loop (if (null instructions) (return (format nil "~{~C~}" (reverse completed))) (destructuring-bind ((step) . rest) (sort-instructions instructions) (setf completed (cons step completed) instructions (remove-dependency rest step)))))) (defun sort-instructions (instructions) (sort (copy-seq instructions) #'instruction<)) (defun instruction< (i1 i2) (destructuring-bind (step1 . deps1) i1 (destructuring-bind (step2 . 
deps2) i2 (or (< (length deps1) (length deps2)) (and (= (length deps1) (length deps2)) (char< step1 step2)))))) (defun remove-dependency (instructions dependency) (loop :for (step . deps) :in instructions :collect (cons step (remove dependency deps)))) Things are going to be a tiny bit more complicated for part 2: - Each step takes a certain amount of time to complete: 60 seconds, plus a certain amount, depending on the step name (1 second for `A`, 2 seconds for `B`, ...) - We are not alone in executing these instructions -- there are 4 elves helping us out, which means multiple steps can run in parallel We are going to solve this by simulation; to do this, we are going to define a new structure, WORKERS, with the following slots: - `'BUSY-FOR`, an array as big as the number of workers we have, to keep track of how much time each worker needs to complete the assigned step - `'BUSY-WITH`, another array, this time keeping track of the step each worker is currently busy with (defstruct (workers (:conc-name nil) (:constructor make-workers%)) busy-for busy-with) (defun make-workers (n) (make-workers% :busy-for (make-array n :initial-element 0) :busy-with (make-array n :initial-element nil))) With this, the skeleton of our solution is pretty simple: we initialize our WORKERS state, and as long as there are workers..._working_...we do another tick of our simulation; when done, we simply return the elapsed time: (defparameter *workers-count* 5) (defun part2 (instructions) (let ((workers (make-workers *workers-count*)) (time 0)) (loop (setf instructions (workers-tick workers instructions)) (if (every #'null (busy-with workers)) (return time) (incf time))))) Let's now take a look at the meat of this, WORKERS-TICK: - We let all the workers progress on their assigned step, and keep track of the completed ones (as well as the workers that finished them) - We keep the workers busy for as long as there are workers available and unblocked instructions - Finally, we return the updated list of
instructions -- i.e. instructions we could not assign because either we ran out of workers, or because active workers were still working on some dependencies (defun workers-tick (workers instructions) (with-slots (busy-for busy-with) workers (flet ((try-release-workers () (loop :for i :from 0 :for left :across busy-for :for step :across busy-with :if (> left 1) :do (decf (aref busy-for i)) :else :collect i :and :do (setf instructions (remove-dependency instructions step) (aref busy-for i) 0 (aref busy-with i) nil))) (try-assign-work (available) (loop :while (and available instructions) :do (let ((i (first available))) (setf instructions (sort-instructions instructions) (aref busy-for i) 0 (aref busy-with i) nil) (destructuring-bind ((step . deps) . rest) instructions (if (> (length deps) 0) (return) (setf (aref busy-for i) (step-time step) (aref busy-with i) step available (rest available) instructions rest))))))) (let ((available (try-release-workers))) (try-assign-work available) instructions)))) (defparameter *step-duration* 60) (defun step-time (step) (+ *step-duration* (1+ (- (char-code step) (char-code #\A))))) Final plumbing: (define-solution (2018 7) (instructions parse-instructions) (values (part1 instructions) (part2 instructions))) (define-test (2018 7) ("GRTAHKLQVYWXMUBCZPIJFEDNSO" 1115)) And that's it: > (time (test-run)) TEST-2018/07.. Success: 1 test, 2 checks. Evaluation took: 0.005 seconds of real time 0.005701 seconds of total run time (0.004362 user, 0.001339 system) 120.00% CPU 13,260,235 processor cycles 2,686,944 bytes consed # 2021-11-25 Building a _local_ system with :TRIVIAL-BUILD (This was posted on [ceramic/trivial-build/issues/2]( Currently, there are at least two different ways of dealing with locally defined systems, i.e. systems defined in the working directory: 1. Create a symlink inside ~/quicklisp/local-projects (or ~/common-lisp), pointing to the current directory 2.
Add the current directory to ASDF:\*CENTRAL-REGISTRY\* (`(pushnew '*default-pathname-defaults* asdf:*central-registry*)`) Now, with the first option, everything works as expected: you call TRIVIAL-BUILD:BUILD, it spawns a new CL image, it runs ASDF:LOAD-SYSTEM in it, and since the current directory is inside ~/quicklisp/local-projects (or ~/common-lisp), your system will be loaded just fine. Unfortunately the current implementation does not seem to play nicely with the second option, i.e. updating ASDF:\*CENTRAL-REGISTRY\*; that's because the newly spawned CL image, the one in which ASDF:LOAD-SYSTEM is called, has ASDF:\*CENTRAL-REGISTRY\* in its default state, i.e. without our customizations. Is there any interest in supporting this use case, or would you rather push people to use option 1? Because if there was, here is how I _patched_ LOAD-AND-BUILD-CODE (note the line right above the ASDF:LOAD-SYSTEM call): - I serialize the current value of ASDF:*CENTRAL-REGISTRY*, in reverse order, so that when loading the entries later with PUSH, the original order is preserved; note: this happens on the host image, i.e. the one where :CERAMIC is run - For each entry, I push it back into ASDF:*CENTRAL-REGISTRY* -- this instead is run on the final app image, the newly spawned one (defun load-and-build-code (system-name entry-point binary-pathname) "Return a list of code strings to eval."
(list "(setf *debugger-hook* #'(lambda (c h) (declare (ignore h)) (uiop:print-condition-backtrace c) (uiop:quit -1)))" (format nil "(dolist (entry '~W) (push entry asdf:*central-registry*))" (reverse asdf:*central-registry*)) (format nil "(asdf:load-system :~A)" system-name) (format nil "(setf uiop:*image-entry-point* #'(lambda () ~A))" entry-point) (format nil "(uiop:dump-image ~S :executable t #+sb-core-compression :compression #+sb-core-compression t)" binary-pathname))) Let me know if you see any problems with this and most importantly if you find this somewhat interesting, in which case, I would not mind creating a pull request for this. Ciao, M. # 2021-11-26 Vlime: Make swank_repl a prompt buffer (This was posted on [vlime/pull/55]( Alright, since these changes got merged into @phmarek's fork, and since _that_ branch is the one I am using these days, I would like to provide some feedback about this; also, please note that I am running this with Vim, so expect some of the following feedback not to be relevant to Neovim users. **Performance** I have no numbers to back this up, but I noticed that forms generating some non-trivial output are more likely to cause Vim to freeze if the REPL is made a prompt buffer than when it's not; don't get me wrong, it's a bit sluggish anyway, with or without these changes, but more so with the prompt buffer than without. I have a [project]( with quite a few packages, and every time I touch the main .asd file and QL:QUICKLOAD it, Vim becomes very sluggish. **Stealing focus** The currently focused window seems to be losing focus every time a new line of output is added to the prompt buffer; this causes all sorts of issues, especially if you were in insert mode when the focus got stolen. I was messing around with a Web application of mine, and had the client fire a HEAD request every second (request which got logged by the Web server on the standard output, i.e.
on the REPL buffer); that alone seemed to make editing in a different buffer impossible (e.g. missing characters, unable to move around the buffer), to the point that I had to disable the polling on the client! I am not exactly sure what's going on, and I would not be surprised if this wasn't somehow related to the performance issue above; however, the moment I disabled this PR's changes, it all became responsive again. **Current package** While Vim itself might be the cause of the two problems mentioned above, this one is most definitely a bug with the current implementation. Previously, forms typed into the `vlime_input` buffer would be evaluated inside the package specified in the closest preceding IN-PACKAGE form; for example, imagine you had the following buffer open: 1: (in-package :cl-user) 2: 5: (in-package :foo) 6: Opening up an input buffer (i.e. `si`) from line 2, and typing `*package*` into it would cause `#<PACKAGE "COMMON-LISP-USER">` to be logged to the REPL buffer; opening up the input buffer from line number 6 instead, and typing in the same would output `#<PACKAGE "FOO">`. Well, with this pull request, it would always output `#<PACKAGE "COMMON-LISP-USER">`. **Only works while in insert mode (minor, opinionated)** I am not sure if this is Vim's fault or the plugin's implementation's, but evaluation from the prompt buffer only seems to work while in insert mode, i.e. pressing enter while in insert mode seems to be the only way to trigger the evaluation to happen. This unfortunately clashes with some of my settings / how I am used to working; I can work around this of course, re-wire my muscle memory, but I wanted to flag this anyway.
I use [delimitMate]( to keep my parens balanced (yes, it's a poor man's solution, but it gets the job done most of the time); this means that if I want to type in the form:

          1         2
012345678901234567890123
(format t "Hello world")

By the time I type the closing quote at position 22, the closing parenthesis would be there already, so I would just leave insert mode and expect the form to be evaluated; however, as soon as I leave insert mode, a new line is added to the prompt buffer and then I am forced to type it all again, the closing parenthesis included, and then press `Enter`. **Empty prompt (opinionated)** Maybe it's just me, but the first time I entered insert mode while in the prompt buffer I started wondering what that initial space was for. I thought it was a bug, having something to do with Vlime wrongly calculating the indentation level; but then I looked at the code and realized that that was the way we told the prompt buffer to look. I think we should either set this to an empty string, or maybe the classic `> ` or `* `, as that would make it super clear what's going on. **Next steps** I am OK with tinkering with this a bit more, but here is what I think we should do: - Understand if the performance issue mentioned above, as well as the focus stealing one, are experienced by everybody, i.e. Vim and Neovim users, or if it's just one man's problem (me...sigh); this will help us understand if switching to a prompt buffer for the REPL is a sound idea or not - If it is, maybe put everything behind a config flag, so people can enable / disable it, at least until it's a bit more stable (not everybody wants to deal with this) - Work around some of the usability issues mentioned above, like setting the current package, or changing the prompt string, etc. Let me know what you guys think about this. M.
# 2021-12-01 Advent of Code: [2021/01]( - An elf tripped, and accidentally sent the sleigh keys flying into the ocean - All of a sudden we found ourselves inside a submarine which has an antenna that hopefully we can use to track the keys - The submarine performs a sonar sweep of the nearby sea floor (our input) > For example, suppose you had the following report: > > 199 > 200 > 208 > 210 > 200 > 207 > 240 > 269 > 260 > 263 > > This report indicates that, scanning outward from the submarine, the sonar sweep found depths of 199, 200, 208, 210, and so on. We already have a utility function to parse a list of integers, PARSE-INTEGERS, so there isn't much we have to do here, except for calling it: (parse-integers (uiop:read-file-lines "src/2021/day01.txt")) Now, here is what we are tasked to do for part 1: > The first order of business is to figure out how quickly the depth increases, just so you know what you're dealing with - you never know if the keys will get carried into deeper water by an ocean current or a fish or something. > > To do this, count **the number of times a depth measurement increases** from the previous measurement. (There is no measurement before the first measurement.) [...] Nothing crazy; we simply do as told: (defun part1 (numbers) (loop for (a b) on numbers when b count (< a b))) For part 2 instead: > Instead, consider sums of a three-measurement sliding window. [...] Here we can re-use PART1 completely, except that we need to pass to it a different list of numbers, each representing the sum of a three-measurement sliding window; again, LOOP/ON makes for a very readable solution: (defun part2 (numbers) (part1 (loop for (a b c) on numbers when c collect (+ a b c)))) Final plumbing: (define-solution (2021 01) (numbers parse-integers) (values (part1 numbers) (part2 numbers))) (define-test (2021 01) (1557 1608)) And that's it: > (time (test-run)) TEST-2021/01.. Success: 1 test, 2 checks.
Evaluation took: 0.001 seconds of real time 0.001252 seconds of total run time (0.000647 user, 0.000605 system) 100.00% CPU 2,873,667 processor cycles 163,808 bytes consed # 2021-12-02 Advent of Code: [2021/02]( > It seems like the submarine can take a series of commands like `forward 1`, `down 2`, or `up 3`: > > - `forward X` increases the horizontal position by X units. > - `down X` increases the depth by X units. > - `up X` decreases the depth by X units. > > Note that since you're on a submarine, down and up affect your depth, and so they have the opposite result of what you might expect. > > [...] > > Calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth? OK, let's start by parsing our input, i.e. the list of instructions, into `(direction . delta)` pairs: - CL-PPCRE:REGISTER-GROUPS-BIND to parse our instructions using regexps (we could have also used SPLIT-SEQUENCE) - AS-KEYWORD to convert a string into a _keyword_, i.e.
`:forward`, `:up`, and `:down` (defun parse-instructions (data) (mapcar #'parse-instruction data)) (defun parse-instruction (string) (cl-ppcre:register-groups-bind ((#'as-keyword dir) (#'parse-integer delta)) ("(\\w+) (\\d+)" string) (cons dir delta))) (defun dir (i) (car i)) (defun delta (i) (cdr i)) Now, to get the answer for part one: - Iterate over the instructions - If `:forward`, we move our horizontal position - If `:up`, we reduce our depth - If `:down`, we increase our depth - Last, we return the product of our horizontal position and depth (defun part1 (instructions &aux (horiz 0) (depth 0)) (dolist (i instructions (* horiz depth)) (ecase (dir i) (:forward (incf horiz (delta i))) (:up (decf depth (delta i))) (:down (incf depth (delta i)))))) For part 2 instead, the `:up` and `:down` instructions seem to assume a different meaning: > In addition to horizontal position and depth, you'll also need to track a third value, aim, which also starts at 0. The commands also mean something entirely different than you first thought: > > - `down X` increases your aim by X units. > - `up X` decreases your aim by X units. > - `forward X` does two things: > > - It increases your horizontal position by X units. > - It increases your depth by your aim multiplied by X. > > Again note that since you're on a submarine, down and up do the opposite of what you might expect: "down" means aiming in the positive direction. > > [...] > > Using this new interpretation of the commands, calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth?
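The quoted aim rules can be sanity-checked against the example course from the puzzle text (`forward 5, down 5, forward 8, up 3, down 8, forward 2`), which ends with horizontal position 15 and depth 60, i.e. a product of 900. A minimal sketch, using the puzzle's sign convention directly (PART2 further down flips the signs of aim and depth, which yields the same product):

```lisp
;; Sketch: replay the example course from the puzzle text, tracking aim directly.
(let ((horiz 0) (depth 0) (aim 0))
  (dolist (i '((:forward . 5) (:down . 5) (:forward . 8)
               (:up . 3) (:down . 8) (:forward . 2)))
    (destructuring-bind (dir . delta) i
      (ecase dir
        (:forward (incf horiz delta)         ; move forward...
                  (incf depth (* aim delta))) ; ...and descend by aim * delta
        (:up (decf aim delta))
        (:down (incf aim delta)))))
  (assert (= horiz 15))
  (assert (= depth 60))
  (assert (= (* horiz depth) 900)))
```
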
Nothing fancy, let's just implement this: - Iterate over the instructions - If `:forward`, we move our horizontal position and descend / ascend based on the current aim - If `:up`, we increase our aim - If `:down`, we decrease our aim - Last, we return the product of our horizontal position and depth (defun part2 (instructions &aux (horiz 0) (depth 0) (aim 0)) (dolist (i instructions (* horiz depth)) (ecase (dir i) (:forward (incf horiz (delta i)) (decf depth (* (delta i) aim))) (:up (incf aim (delta i))) (:down (decf aim (delta i)))))) Final plumbing: (define-solution (2021 02) (instructions parse-instructions) (values (part1 instructions) (part2 instructions))) (define-test (2021 02) (1648020 1759818555)) And that's it: > (time (test-run)) TEST-2021/02.. Success: 1 test, 2 checks. Evaluation took: 0.002 seconds of real time 0.002600 seconds of total run time (0.001565 user, 0.001035 system) 150.00% CPU 6,082,778 processor cycles 327,536 bytes consed PS. My [original solution]( parsed instructions into COMPLEX numbers (e.g. `down 4` into `#C(0 -4)`); this way, for part 1 all I had to do was summing all the deltas together; for part 2 instead, I had to check if the delta was horizontal or vertical, and adjust the aim accordingly. I figured a more readable solution would be better, hence the rewrite. # 2021-12-03 Advent of Code: [2021/03]( The submarine has been making some odd creaking noises today, and after we ask it to produce a diagnostic report (our input), we are tasked to _decode_ it and see if everything is all right (or not!): > The diagnostic report (your puzzle input) consists of a list of binary numbers which, when decoded properly, can tell you many useful things about the conditions of the submarine. The first parameter to check is the power consumption. > > You need to use the binary numbers in the diagnostic report to generate two new binary numbers (called the gamma rate and the epsilon rate). 
The power consumption can then be found by multiplying the gamma rate by the epsilon rate. > > Each bit in the gamma rate can be determined by finding the most common bit in the corresponding position of all numbers in the diagnostic report. > > [...] > > The epsilon rate is calculated in a similar way; rather than use the most common bit, the least common bit from each position is used. We are not going to do anything crazy with our input; in fact, we are going to keep it as is (i.e. as a list of strings), and jump right into the core of our solution: - We calculate the gamma rate as instructed - We calculate the epsilon rate starting from the gamma rate (i.e. it's just its complement) - We parse the two strings into numbers, and multiply them (defun part1 (strings) (let* ((gamma (gamma-rate strings)) (epsilon (map 'string #'ch-complement gamma))) (* (parse-binary gamma) (parse-binary epsilon)))) (defun ch-complement (ch) (if (eq ch #\1) #\0 #\1)) (defun parse-binary (s) (parse-integer s :radix 2)) Note: we could have returned a number and not a string from GAMMA-RATE; but then, to calculate the epsilon rate, we would have had to mess with bits "a bit more" to account for any leading 0 (i.e. 0001, complemented, should be 1110, and not simply 0). Inside GAMMA-RATE we collect the most common char at each position, and then coerce the result to a string: (defun gamma-rate (strings &aux (n (length (first strings)))) (coerce (uiop:while-collecting (bit!) (dotimes (i n) (bit! (most-common-ch-at strings i)))) 'string)) (defun most-common-ch-at (strings i) (loop for s in strings for ch = (aref s i) count (eq ch #\1) into ones finally (return (if (>= ones (/ (length strings) 2)) #\1 #\0)))) Things are going to get messy for part 2: > Next, you should verify the life support rating, which can be determined by multiplying the oxygen generator rating by the CO2 scrubber rating.
> > Both the oxygen generator rating and the CO2 scrubber rating are values that can be found in your diagnostic report - finding them is the tricky part. Both values are located using a similar process that involves filtering out values until only one remains. Before searching for either rating value, start with the full list of binary numbers from your diagnostic report and consider just the first bit of those numbers. Then: > > - Keep only numbers selected by the bit criteria for the type of rating value for which you are searching. Discard numbers which do not match the bit criteria. > - If you only have one number left, stop; this is the rating value for which you are searching. > - Otherwise, repeat the process, considering the next bit to the right. > > The bit criteria depends on which type of rating value you want to find: > > - To find oxygen generator rating, determine the most common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 1 in the position being considered. > - To find CO2 scrubber rating, determine the least common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 0 in the position being considered.
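The filtering process above can be sketched in isolation on the example report from the puzzle text, whose oxygen generator rating works out to `10111`, i.e. 23; this is a hypothetical standalone version (with a CHAR= variant of the part 1 helper, and REMOVE-IF-NOT instead of the REMOVE/:TEST-NOT idiom), not the solution itself:

```lisp
;; Most common char (#\0 or #\1) at position I, ties going to #\1
(defun most-common-ch-at (strings i)
  (loop for s in strings
        count (char= (aref s i) #\1) into ones
        finally (return (if (>= ones (/ (length strings) 2)) #\1 #\0))))

;; Sketch: apply the oxygen-generator bit criteria to the example report
(let ((strings '("00100" "11110" "10110" "10111" "10101" "01111"
                 "00111" "11100" "10000" "11001" "00010" "01010")))
  (loop for i from 0
        while (rest strings) ; stop as soon as a single number remains
        do (let ((ch (most-common-ch-at strings i)))
             ;; keep only the strings whose i-th bit matches CH
             (setf strings (remove-if-not (lambda (s) (char= (aref s i) ch))
                                          strings))))
  (assert (string= (first strings) "10111"))
  (assert (= (parse-integer (first strings) :radix 2) 23)))
```
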
Thaaaaaaaaaat is a lot, with lots of details too; hopefully we are going to figure this out -- well, if you are reading this, it means I already "figured something out" ;-) Let's start from the top: (defun part2 (strings) (* (oxygen-generator-rating strings) (co2-scrubber-rating strings))) Now, to calculate the oxygen generator rating, let's proceed as follows: - Until there is only one string left - Figure out what the most common char is -- at a given position - Remove all the strings having a different char at that position (defun oxygen-generator-rating (strings &optional (n (length (first strings)))) (loop until (= (length strings) 1) for i below n for ch = (most-common-ch-at strings i) do (setf strings (remove ch strings :key (partial-1 #'aref _ i) :test-not 'eq))) (parse-binary (first strings))) To calculate the CO2 scrubber rating instead, we just have to consider least common bits; this is done by wrapping MOST-COMMON-CH-AT inside a call to CH-COMPLEMENT (the rest is the same): (defun co2-scrubber-rating (strings &optional (n (length (first strings)))) (loop until (= (length strings) 1) for i below n for ch = (ch-complement (most-common-ch-at strings i)) do (setf strings (remove ch strings :key (partial-1 #'aref _ i) :test-not 'eq))) (parse-binary (first strings))) Final plumbing: (define-solution (2021 03) (data) (values (part1 data) (part2 data))) (define-test (2021 03) (3985686 2555739)) And that's it: > (time (test-run)) TEST-2021/03.. Success: 1 test, 2 checks. Evaluation took: 0.001 seconds of real time 0.001470 seconds of total run time (0.000941 user, 0.000529 system) 100.00% CPU 3,440,150 processor cycles 131,024 bytes consed PS. My original solution was not as clean as this one: it was super messy, I coded it fully in the REPL, i.e.
there is not a single function in my solution (I am not proud of this, I am just stating a fact), and even though messy, it still took me forever to come up with it; lastly, cherry on top, to actually get the answer for part 1, I manually converted an ARRAY of Ts and NILs into 1s and 0s! Anyways, if you really, really want to take a look...[have at it]( # 2021-12-04 Advent of Code: [2021/04]( Today, we are playing Bingo with a giant squid (yes, apparently it is on the submarine with us!): > Bingo is played on a set of boards each consisting of a 5x5 grid of numbers. Numbers are chosen at random, and the chosen number is marked on all boards on which it appears. (Numbers may not appear on all boards.) If all numbers in any row or any column of a board are marked, that board wins. (Diagonals don't count.) This is the kind of input we are expected to process:

7,4,9,5,11,17,23,2,0,14,21,24,10,16,13,6,15,25,12,22,18,20,8,19,3,26,1

22 13 17 11  0
 8  2 23  4 24
21  9 14 16  7
 6 10  3 18  5
 1 12 20 15 19

 3 15  0  2 22
 9 18 13 17  5
19  8  7 25 23
20 11 10 24  4
14 21 16 12  6

14 21 17 24  4
10 16 15  9 19
18  8 23 26 20
22 11 13  6  5
 2  0 12  3  7

I am going to create a new structure, BINGO, with slots to keep track of the numbers still left to draw, and each board; for each board representation instead, I am going to use a 5x5 2D array: (defstruct (bingo (:conc-name nil)) to-draw boards) (defun parse-bingo (data) (let ((to-draw (extract-integers (car data))) (boards (parse-boards (cdr data)))) (make-bingo :to-draw to-draw :boards boards))) (defun parse-boards (data) (when data (cons (make-array '(5 5) :initial-contents (let ((lines (subseq data 1 6))) (uiop:while-collecting (rows) (dolist (s lines) (let ((row (extract-integers s))) (rows row)))))) (parse-boards (subseq data 6))))) (defun extract-integers (s) (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" s))) Now, let's play some bingo!
- For each number to be drawn - For each board - Let's mark the number on the board - If it turns out to be a winning board, let's return its score: (defun play (game) (with-slots (to-draw boards) game (dolist (n to-draw) (dolist (b boards) (when (mark-number b n) (return-from play (* (score b) n))))))) MARK-NUMBER would: - For each number of the input board - Check if equal to the number just drawn, and in case, mark it (i.e. remove it) - Finally return T if the move made the board a winning one (defun mark-number (board n) (loop for i below 5 do (loop for j below 5 when (eql (aref board i j) n) do (setf (aref board i j) nil) (return-from mark-number (board-won-p board i j))))) (defun board-won-p (board last-marked-i last-marked-j) (or (loop for i below 5 never (aref board i last-marked-j)) (loop for j below 5 never (aref board last-marked-i j)))) Lastly, the score function, i.e. the sum of all the remaining numbers of a given board: (defun score (board) (loop for i below 25 for v = (row-major-aref board i) when v sum v)) For part 2 instead, we are asked to keep on playing, until there aren't any more boards left. All we have to do is update PLAY as follows: - Collect the scores of each winning board, instead of simply returning it - Stop processing winning boards (defun play (game) (uiop:while-collecting (scores) (with-slots (to-draw boards) game (dolist (n to-draw) (dolist (b boards) (when (mark-number b n) (scores (* (score b) n)) (setf boards (remove b boards)))))))) Final plumbing: (define-solution (2021 04) (game parse-bingo) (let ((scores (play game))) (values (first scores) (car (last scores))))) (define-test (2021 04) (82440 20774)) And that's it: > (time (test-run)) TEST-2021/04.. Success: 1 test, 2 checks. Evaluation took: 0.006 seconds of real time 0.005100 seconds of total run time (0.004883 user, 0.000217 system) 83.33% CPU 15,920,633 processor cycles 851,856 bytes consed PS.
If you are curious, [this]( is the non-cleaned-up version of the code that I used to take my 2 stars home (see how I went overboard and used conditions to notify the runner about winning boards). # 2021-12-05 Advent of Code: [2021/05]( > You come across a field of hydrothermal vents on the ocean floor! These vents constantly produce large, opaque clouds, so it would be best to avoid them if possible. Uh oh! It continues: > They tend to form in lines; the submarine helpfully produces a list of nearby lines of vents (your puzzle input) for you to review. For example:

0,9 -> 5,9
8,0 -> 0,8
9,4 -> 3,4
2,2 -> 2,1
7,0 -> 7,4
6,4 -> 2,0
0,9 -> 2,9
3,4 -> 1,4
0,0 -> 8,8
5,5 -> 8,2

The task for the day: > Consider only horizontal and vertical lines. At how many points do at least two lines overlap? Let's start by reading our input; we are going to parse each line into 4-element lists, i.e. `(x1 y1 x2 y2)`: (defun parse-lines (data) (mapcar #'parse-line data)) (defun parse-line (string) (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d+" string))) With this, the solution to our problem would be as simple as: (defun part1 (lines) (count-overlaps (remove-if #'diagonalp lines))) "The devil is in the details" they used to say, so let's take a look at the details. DIAGONALP checks whether a line is diagonal, i.e. its endpoints share neither their `x` nor their `y` value: (defun diagonalp (line) (destructuring-bind (x1 y1 x2 y2) line (and (not (= x1 x2)) (not (= y1 y2))))) Inside COUNT-OVERLAPS instead: - For each line in our input - We generate all its points - We store these inside a HASH-TABLE, keeping track of duplicates (as that represents our answer) - A naive `(loop for xx from x1 to x2 ...)` would fail if `x1` is greater than `x2`; similarly, a naive `(loop for yy from y1 to y2 ...)` would fail if `y1` is greater than `y2` - <=>, a.k.a.
the spaceship operator, returns `-1` if the first value is smaller than the second, `+1` if greater, and `0` otherwise; this represents how much to move in a given direction - Once we know how much in each direction to move, we just have to figure out the number of times we have to move -- `n` in the snippet below (defun count-overlaps (lines &optional (grid (make-hash-table :test 'equal))) (dolist (l lines) (destructuring-bind (x1 y1 x2 y2) l (let ((dx (<=> x2 x1)) (dy (<=> y2 y1)) (n (max (abs (- x2 x1)) (abs (- y2 y1))))) (loop repeat (1+ n) for xx = x1 then (+ xx dx) for yy = y1 then (+ yy dy) do (incf (gethash (list xx yy) grid 0)))))) (loop for o being the hash-values of grid count (> o 1))) For part 2 instead, we are asked to consider diagonal lines as well; we could not have been any luckier, given that all we have to do is feed COUNT-OVERLAPS with the original list of lines (and not just the horizontal/vertical ones like we did for part 1): (defun part2 (lines) (count-overlaps lines)) Final plumbing: (define-solution (2021 05) (lines parse-lines) (values (part1 lines) (part2 lines))) (define-test (2021 05) (6397 22335)) And that's it: > (time (test-run)) TEST-2021/05.. Success: 1 test, 2 checks. Evaluation took: 0.155 seconds of real time 0.131351 seconds of total run time (0.107549 user, 0.023802 system) 84.52% CPU 356,617,473 processor cycles 44,400,176 bytes consed PS. This is the code I originally used to solve this: - Part 1: MIN / MAX to figure out where to start and where to stop, when using `(loop for v from ...
to ...)` kind of loops - Part 2: consed up all the ranges of points, and iterated over them ;; Part 1 (loop with grid = (make-hash-table :test 'equal) for line in input for (x1 y1 x2 y2) = line when (or (= x1 x2) (= y1 y2)) do (loop for xx from (min x1 x2) to (max x1 x2) do (loop for yy from (min y1 y2) to (max y1 y2) do (incf (gethash (list xx yy) grid 0)))) finally (return (loop for v being the hash-values of grid count (> v 1)))) ;; Part 2 (loop with grid = (make-hash-table :test 'equal) for line in input for (x1 y1 x2 y2) = line do (loop for xx in (range x1 x2) for yy in (range y1 y2) do (incf (gethash (list xx yy) grid 0))) finally (return (count-if (partial-1 #'> _ 1) (hash-table-values grid)))) (defun range (v1 v2) (cond ((= v1 v2) (ncycle (list v1))) ((< v1 v2) (loop for v from v1 to v2 collect v)) (t (loop for v from v1 downto v2 collect v)))) PPS. I fucking bricked my laptop yesterday while upgrading to Big Sur, so I had to sort that issue first before even starting to think about today's problem -- what a bummer!

      --------Part 1--------   --------Part 2--------
Day       Time   Rank  Score       Time   Rank  Score
  5   07:28:10  27193      0   07:41:09  24111      0
  4   00:30:52   2716      0   00:41:34   2704      0
  3   00:12:42   4249      0   00:48:51   5251      0
  2   00:05:42   3791      0   00:09:24   3731      0
  1   00:02:45   1214      0   00:07:16   1412      0

# 2021-12-06 Advent of Code: [2021/06]( Today we are asked to study a massive school of lanternfish; in particular, we are tasked to see how big this group will be, after 80 days: > Although you know nothing about this specific species of lanternfish, you make some guesses about their attributes. Surely, each lanternfish creates a new lanternfish once every 7 days. > > However, this process isn't necessarily synchronized between every lanternfish - one lanternfish might have 2 days left until it creates another lanternfish, while another might have 4. So, you can model each fish as a single number that represents the number of days until it creates a new lanternfish.
> > Furthermore, you reason, a new lanternfish would surely need slightly longer before it's capable of producing more lanternfish: two more days for its first cycle. So, to recap, we are given a comma separated list of numbers, each representing the age of each fish, and we are asked to measure the size of this group after 80 days, knowing that: - A lanternfish will produce a new lanternfish every 7 days - A newly spawned lanternfish will require 9 days before it can produce a new lanternfish As usual, let's start from the input (it's a comma separated list of numbers): (defun parse-ages (data) (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "\\d" (first data)))) With this, we can evolve the school of lanternfish as follows: - For each fish in the school - If it's 0 it means it can reproduce; we add the lanternfish itself back into the school (with its timer set to 6), and then we add a new lanternfish with its timer set to 8, i.e. 9 days before it reproduces: it's a new fish, so it will have to wait a little longer - Otherwise, we simply decrease the reproduction counter - We do this 80 times (defun part1 (ages) (let ((an (copy-seq ages))) (dotimes (_ 80 (length an)) (setf an (uiop:while-collecting (next) (dolist (a an) (cond ((= a 0) (next 6) (next 8)) (t (next (1- a)))))))))) For part 2 instead: > How many lanternfish would there be after 256 days? Before you try it, replacing 80 with 256 is not going to work -- you will run out of memory real quick! The big realization here is that lanternfish with the same age are all alike, and they are all going to reproduce at the same time; we don't care which one comes first; we only care about when in the future they are going to reproduce. Let's parse the input again; this time though, let's output a 9-element array, representing the number of fish reproducing in 0 days, 1 day, 2 days...
    (defun parse-generations (data &aux (generations (make-array 9 :initial-element 0))
                                        (ages (cl-ppcre:all-matches-as-strings "\\d" (first data))))
      (loop for v in ages do (incf (aref generations (parse-integer v))))
      generations)

Evolving the school of lanternfish for a given number of days (so we can use this for part 1 as well) then becomes a matter of:

- Rotating the array values to the left (i.e. aging + reproduction)
- Increasing the `6-days` slot by the number of fish that just reproduced (i.e. they will be able to reproduce again in 6 days)

    (defun evolve (generations days &aux (gg (copy-seq generations)))
      (dotimes (_ days (reduce #'+ gg))
        (let ((n (aref gg 0)))
          (dotimes (i 8)
            (setf (aref gg i) (aref gg (1+ i))))
          (setf (aref gg 8) n)
          (incf (aref gg 6) n))))

Final plumbing:

    (define-solution (2021 06) (generations parse-generations)
      (values (evolve generations 80) (evolve generations 256)))

    (define-test (2021 06) (371379 1674303997472))

And that's it:

    > (time (test-run))
    TEST-2021/06..
    Success: 1 test, 2 checks.
    Evaluation took:
      0.000 seconds of real time
      0.000528 seconds of total run time (0.000354 user, 0.000174 system)
      100.00% CPU
      1,277,524 processor cycles
      32,768 bytes consed

# 2021-12-07

Advent of Code: [2021/07](

- A whale has decided to eat our submarine
- Lucky us, a swarm of crabs (each in its own tiny submarine...LOL) is willing to help
- They want to drill a hole in the ocean floor so we can escape through it
- They need some help to figure out the exact position where to drill

In particular, given the horizontal position of each crab, we are asked to align them while minimizing the amount of fuel the tiny submarines need to get there.
Parsing the input is a no-brainer; it's a comma separated list of numbers, so the usual CL-PPCRE:ALL-MATCHES-AS-STRINGS, plus MAPCAR with PARSE-INTEGER, will do the trick:

    (defun parse-crabs (data)
      (mapcar #'parse-integer
              (cl-ppcre:all-matches-as-strings "\\d+" (first data))))

Now, to minimize the amount of fuel required to align all the crabs, I am just going to brute-force it:

- For each point between the min and max crab positions
- Calculate the amount of fuel required to move all the crabs there
- Minimize this

    (defun minimize-fuel (crabs)
      (loop for p from (find-min crabs) to (find-max crabs)
            minimize (loop for c in crabs sum (abs (- c p)))))

For part 2, the situation changes a little bit:

> As it turns out, crab submarine engines don't burn fuel at a constant rate. Instead, each change of 1 step in horizontal position costs 1 more unit of fuel than the last: the first step costs 1, the second step costs 2, the third step costs 3, and so on.

Continues:

> Determine the horizontal position that the crabs can align to using the least fuel possible so they can make you an escape route! How much fuel must they spend to align to that position?

If a crab needs to move 5 steps, it will consume 1 + 2 + 3 + 4 + 5 = 15 units of fuel; and [in general](, to move `n` steps it will consume `n * (n + 1) / 2` units of fuel:

    (defun minimize-fuel (crabs)
      (loop for p from (find-min crabs) to (find-max crabs)
            minimize (loop for c in crabs
                           sum (let ((n (abs (- c p))))
                                 (floor (* n (1+ n)) 2)))))

A little bit of refactoring plus the final plumbing:

    (defun minimize-fuel (crabs distance-fun)
      (loop for p from (find-min crabs) to (find-max crabs)
            minimize (loop for c in crabs sum (funcall distance-fun c p))))

    (define-solution (2021 07) (crabs parse-crabs)
      (values
        (minimize-fuel crabs (lambda (p1 p2) (abs (- p1 p2))))
        (minimize-fuel crabs (lambda (p1 p2 &aux (n (abs (- p1 p2))))
                               (floor (* n (1+ n)) 2)))))

    (define-test (2021 07) (359648 100727924))

And that's it:

    > (time (test-run))
    TEST-2021/07..
    Success: 1 test, 2 checks.
    Evaluation took:
      0.093 seconds of real time
      0.083847 seconds of total run time (0.081319 user, 0.002528 system)
      90.32% CPU
      215,462,807 processor cycles
      96,608 bytes consed

PS. In the heat of the moment, i.e. at something past 6 in the morning, I could not really remember the formula for the sum of an arithmetic progression; so what did I do instead? [I simulated it!](, adding 1 unit of fuel at each step...

    > (time (loop for p from (find-min crabs) to (find-max crabs)
                  minimize (loop for c in crabs
                                 sum (loop repeat (- (max c p) (min c p))
                                           for f from 1
                                           sum c))))
    Evaluation took:
      18.734 seconds of real time
      10.576204 seconds of total run time (10.243097 user, 0.333107 system)
      56.45% CPU
      43,088,527,879 processor cycles
      16 bytes consed
    185638080

# 2021-12-08

Advent of Code: [2021/08](

We managed to escape from the whale, but something is not quite working as it should:

> As your submarine slowly makes its way through the cave system, you notice that the four-digit seven-segment displays in your submarine are malfunctioning; they must have been damaged during the escape. You'll be in a lot of trouble without them, so you'd better figure out what's wrong.

Each digit of a seven-segment display is rendered by turning on or off any of seven segments named a through g:

      0:      1:      2:      3:      4:
     aaaa    ....    aaaa    aaaa    ....
    b    c  .    c  .    c  .    c  b    c
    b    c  .    c  .    c  .    c  b    c
     ....    ....    dddd    dddd    dddd
    e    f  .    f  e    .  .    f  .    f
    e    f  .    f  e    .  .    f  .    f
     gggg    ....    gggg    gggg    ....

      5:      6:      7:      8:      9:
     aaaa    aaaa    aaaa    aaaa    aaaa
    b    .  b    .  .    c  b    c  b    c
    b    .  b    .  .    c  b    c  b    c
     dddd    dddd    ....    dddd    dddd
    .    f  e    f  .    f  e    f  .    f
    .    f  e    f  .    f  e    f  .    f
     gggg    gggg    ....    gggg    gggg

So, to render a `1`, only segments `c` and `f` would be turned on; the rest would be off. To render a `7`, only segments `a`, `c`, and `f` would be turned on.

Continues:

> The problem is that the signals which control the segments have been mixed up on each display.
> The submarine is still trying to display numbers by producing output on signal wires a through g, but those wires are connected to segments randomly. Worse, the wire/segment connections are mixed up separately for each four-digit display! (All of the digits within a display use the same connections, though.)
>
> So, you might know that only signal wires `b` and `g` are turned on, but that doesn't mean segments `b` and `g` are turned on: the only digit that uses two segments is `1`, so it must mean segments `c` and `f` are meant to be on. With just that information, you still can't tell which wire (`b`/`g`) goes to which segment (`c`/`f`). For that, you'll need to collect more information.
>
> For each display, you watch the changing signals for a while, make a note of all ten unique signal patterns you see, and then write down a single four digit output value (your puzzle input). Using the signal patterns, you should be able to work out which pattern corresponds to which digit.

For example, here is what you might see in a single entry in your notes:

    acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf

> Each entry consists of ten unique signal patterns, a `|` delimiter, and finally the four digit output value. Within an entry, the same wire/segment connections are used (but you don't know what the connections actually are). The unique signal patterns correspond to the ten different ways the submarine tries to render a digit using the current wire/segment connections. Because `7` is the only digit that uses three segments, `dab` in the above example means that to render a `7`, signal lines `d`, `a`, and `b` are on. Because `4` is the only digit that uses four segments, `eafb` means that to render a `4`, signal lines `e`, `a`, `f`, and `b` are on.

_OK, I kind of get it, but what exactly are we supposed to do?_

> In the output values, how many times do digits `1`, `4`, `7`, or `8` appear?
The following algorithm should get the job done:

- For each entry
- For each of its output values
- If its length is either `2`, `4`, `3`, or `7` (i.e. the number of segments which are on for the digits `1`, `4`, `7`, and `8` respectively), then the entry is trying to render one of those digits, so we count it

However, let's parse our entries first:

    (defun parse-entries (data)
      (loop for string in data
            for parts = (cl-ppcre:all-matches-as-strings "[a-z]+" string)
            collect (cons (subseq parts 0 10) (subseq parts 10))))

    (defun inputs (entry) (car entry))
    (defun outputs (entry) (cdr entry))

Now, the algorithm to solve part 1:

    (defun part1 (entries)
      (loop for e in entries
            sum (loop for d in (outputs e)
                      count (member (length d) '(2 4 3 7)))))

An easy part 1 is usually followed by a somewhat complicated part 2, and today is no exception:

> Through a little deduction, you should now be able to determine the remaining digits.

So yeah, somehow we need to figure out which wire is connected to which segment, decode each output value, and add everything up:

> For each entry, determine all of the wire/segment connections and decode the four-digit output values. What do you get if you add up all of the output values?

OK, before we actually figure out a way to find this wire / segment mapping, let's pretend we had it already, and see what we would do with it. Given the following wiring:

     dddd
    e    a
    e    a
     ffff
    g    b
    g    b
     cccc

Our mapping would look something like: `(#\d #\e #\a #\f #\g #\b #\c)`. Now, if we _had_ this, we would go on and _decode_ the output signals:

    (defun decode (mapping signals &aux (rez 0))
      (dolist (s signals)
        (let ((d (signal->digit mapping s)))
          (setf rez (+ (* rez 10) d))))
      rez)

Before looking at SIGNAL->DIGIT though, we are going to define a table mapping each digit to the list of segments which are on when the digit is rendered; and instead of encoding the list of segments as an actual list, we are going to use a bitmask (e.g.
segment `a` -> `1`; segment `b` -> `2`; segment `c` -> `4`):

    (defparameter *digits->segments* '((0 . #b1110111)
                                       (1 . #b0100100)
                                       (2 . #b1011101)
                                       (3 . #b1101101)
                                       (4 . #b0101110)
                                       (5 . #b1101011)
                                       (6 . #b1111011)
                                       (7 . #b0100101)
                                       (8 . #b1111111)
                                       (9 . #b1101111)))

And with this, to convert a signal to a digit all we have to do is:

- For each character of the signal -- DOVECTOR
- Find the segment it's connected to -- POSITION
- Set the _right_ bit in the bitmask -- DPB
- Look up the bitmask in the table above -- RASSOC

    (defun signal->digit (mapping s &aux (segments-mask 0))
      (dovector (ch s)
        (let ((i (position ch mapping)))
          (setf segments-mask (dpb 1 (byte 1 i) segments-mask))))
      (car (rassoc segments-mask *digits->segments*)))

    > (decode '(#\d #\e #\a #\f #\g #\b #\c)
              '("cdfeb" "fcadb" "cdfeb" "cdbaf"))
    5353

Now, let's take another look at what exactly we are asked to do for part 2:

> For each entry, determine all of the wire/segment connections and decode the four-digit output values. What do you get if you add up all of the output values?

If we gloss over the "determine all of the wire/segment connections" part, and consider that we already know how to decode a message (once we have the mapping), it should be pretty easy to wire things together and find the answer for part 2:

    (defun part2 (entries)
      (loop for e in entries
            for m = (find-mapping (inputs e))
            sum (decode m (outputs e))))

Now we have to figure out what FIND-MAPPING will actually do. Well, even though the following algorithm produces the right answer (for my input at least), I honestly think I have just been super lucky, because the more I think about this, the more I question _why_ it would work. But it does seem to work, so...
Anyway, we begin by saying that each of the seven segments could be linked to any wire:

    0 -> abcdefg
    1 -> abcdefg
    2 -> abcdefg
    3 -> abcdefg
    4 -> abcdefg
    5 -> abcdefg
    6 -> abcdefg

From here, we process each signal pattern, from the _easy_ ones to the most difficult ones, and try to _refine_ our mapping until either of the following conditions holds:

- Each segment maps to a _different_ wire -- in which case we have found our mapping!
- We cannot map a segment to any wire -- in which case we will have to backtrack!

Given a signal, first we figure out, based on its length, the different digits it could refer to; for example:

- `ab` uniquely refers to the digit `1`
- `abc` uniquely refers to the digit `7`
- `abcd` uniquely refers to the digit `4`
- `abcde` could refer to any of the following digits: `2`, `3`, `5`
- `abcdef` could refer to any of the following digits: `0`, `6`, `9`
- `abcdefg` uniquely refers to the digit `8`

Then, for each of these options (e.g. a single option in the case of `ab`, three different options for `abcde`), we look at which segments the digit uses, and refine our mapping as follows:

- If a segment is used by the current digit, we know it can only be linked to one of the wires used by the signal; we can then safely remove (i.e. INTERSECTION between the current list of options for the segment, and the wires of the signal) any other option for the current segment, as choosing it would break the constraint defined by the current signal
- If a segment is not used by the current digit instead, we know it will never be linked to any of the wires used by the signal; we can safely remove (i.e. SET-DIFFERENCE between the current list of options for the segment, and the wires of the signal) all the signal wires from the segment's list of options, as again, choosing one of them would break the constraint defined by the current signal

We then continue with the next signal until either we find a valid mapping, or we hit a dead end and have to step back.
Again:

- If each segment is linked to a single wire, we found our mapping
- If a segment, for whatever reason, winds up not being linked to any wire, then we hit a dead end
- If we run out of signals (and none of the above holds), we hit another dead end
- Otherwise, for each digit that the signal could render -- POSSIBLE-DIGITS
  - And for each segment of this digit
  - We update the list of possible options as described above (i.e. INTERSECTION for the segments which are on, and SET-DIFFERENCE for the remaining ones)

    (defun find-mapping (signals)
      (labels ((recur (curr remaining)
                 (cond ((loop for x in curr always (= (length x) 1))
                        (return-from find-mapping (mapcar #'car curr)))
                       ((loop for x in curr thereis (zerop (length x))) nil)
                       ((null remaining) nil)
                       (t (let ((s (first remaining)))
                            (dolist (d (possible-digits s))
                              (let ((segs (cdr (assoc d *digits->segments*))))
                                (recur (loop for c in curr
                                             for i below 7
                                             collect (if (= (ldb (byte 1 i) segs) 1)
                                                         (intersection c (coerce s 'list))
                                                         (set-difference c (coerce s 'list))))
                                       (rest remaining)))))))))
        (recur (loop repeat 7 collect (coerce "abcdefg" 'list))
               (sort (copy-seq signals) #'possible-digits<))))

But what are the digits that a signal could possibly be trying to render? As mentioned above, we answer this by looking at the length of the signal:

    (defparameter *length->digits* '((2 1) (3 7) (4 4) (5 2 3 5) (6 0 6 9) (7 8)))

    (defun possible-digits (s)
      (cdr (assoc (length s) *length->digits*)))

Last, we said we wanted to process the _easy_ signals first; here is the predicate function to pass to SORT to make sure signals with the least number of _renderable_ digits are processed first:

    (defun possible-digits< (s1 s2)
      (< (length (possible-digits s1)) (length (possible-digits s2))))

And that's it! Final plumbing:

    (define-solution (2021 08) (entries parse-entries)
      (values (part1 entries) (part2 entries)))

    (define-test (2021 08) (330 1010472))

Test run:

    > (time (test-run))
    TEST-2021/08..
    Success: 1 test, 2 checks.
    Evaluation took:
      0.005 seconds of real time
      0.005222 seconds of total run time (0.004867 user, 0.000355 system)
      100.00% CPU
      13,017,280 processor cycles
      2,160,512 bytes consed

PS. As it turns out, part 2 could also be solved by brute-force:

- Generate all the possible mappings (it's only seven wires!)
- For each entry, try all the mappings until one successfully decodes the inputs

    > (time (let ((all-mappings (all-permutations (coerce "abcdefg" 'list)))
                  (sum 0))
              (dolist (e (parse-entries (uiop:read-file-lines "src/2021/day08.txt")))
                (dolist (m all-mappings)
                  (when (every (partial-1 #'signal->digit m) (inputs e))
                    (return (incf sum (decode m (outputs e)))))))
              sum))
    Evaluation took:
      0.384 seconds of real time
      0.370098 seconds of total run time (0.362925 user, 0.007173 system)
      96.35% CPU
      884,284,986 processor cycles
      2,194,816 bytes consed
    1010472

WTF?! It runs pretty damn fast too! I guess next time I will try to remember that sometimes an inefficient no-brainer is better than an efficient solution requiring a big brain.

# 2021-12-09

Advent of Code: [2021/09](

> These caves seem to be lava tubes. Parts are even still volcanically active; small hydrothermal vents release smoke into the caves that slowly settles like rain.

Rain...underwater? We are still on the submarine, aren't we?! Anyways...

> If you can model how the smoke flows through the caves, you might be able to avoid it and be that much safer. The submarine generates a heightmap of the floor of the nearby caves for you (your puzzle input).

Continues:

> Smoke flows to the lowest point of the area it's in. For example, consider the following heightmap:
>
>     2199943210
>     3987894921
>     9856789892
>     8767896789
>     9899965678
>
> Each number corresponds to the height of a particular location, where 9 is the highest and 0 is the lowest a location can be.
The first step is to parse the input heightmap; we are going to use a HASH-TABLE, mapping from `(row col)` to the corresponding height:

    (defun parse-heights (data &aux (heights (make-hash-table :test 'equal)))
      (loop for r below (length data)
            for string in data
            do (loop for c below (length string)
                     for ch across string
                     do (setf (gethash (list r c) heights)
                              (- (char-code ch) (char-code #\0)))))
      heights)

Now, here is what we are asked to do for part 1:

> Find all of the low points on your heightmap. What is the sum of the risk levels of all low points on your heightmap?

So, for each low point, we add one to its height, and sum everything together:

    (defun part1 (heights)
      (loop for p in (low-points heights)
            sum (1+ (gethash p heights))))

But what are the low points of our heightmap? All the points surrounded by locations with a greater height:

- For each point in the heightmap
- For each of its neighbor locations
- Check if the height of the center is smaller
- If it is, for all the neighbor locations, then it's a low point, so we collect it

    (defun low-points (heights)
      (loop for p being the hash-keys of heights using (hash-value h)
            when (every (lambda (n) (< h (gethash n heights)))
                        (neighbors heights p))
              collect p))

But given a point, what are all its neighbor locations? The ones north, east, south, and west of it, which do not fall off the grid:

    (defparameter *nhood* '((-1 0) (0 1) (1 0) (0 -1)))

    (defun neighbors (heights p)
      (loop for d in *nhood*
            for n = (mapcar #'+ p d)
            when (gethash n heights) collect n))

For part 2 instead, we are asked to calculate the sizes of the largest _basins_:

> A basin is all locations that eventually flow downward to a single low point. Therefore, every low point has a basin, although some basins are very small. Locations of height 9 do not count as being in any basin, and all other locations will always be part of exactly one basin.

And in particular:

> What do you get if you multiply together the sizes of the three largest basins?
OK, so:

- We find the basins
- Sort them by size (biggest first)
- Take the first 3 values
- Multiply them together

    (defun part2 (heights)
      (reduce #'* (subseq (sort (basins heights) #'>) 0 3)))

Let's dig a bit deeper, and see how we are going to calculate the sizes of the basins:

- For each low point (we know each basin expands around one of the low points from part 1)
- We run a BFS, scanning all the surrounding area, until we either fall off the grid (NEIGHBORS already implements this) or hit a location with height `9` (from the text: _locations of height `9` do not count as being in any basin_)
- As we navigate adjacent cells (i.e. our :NEIGHBORS function), we increase a counter representing the size of the current basin

I already have BFS available in my utility file, so implementing the above was quite straightforward:

    (defun basins (heights)
      (uiop:while-collecting (acc!)
        (dolist (lp (low-points heights))
          (let ((size 0))
            (bfs lp
                 :test 'equal
                 :neighbors (lambda (p)
                              (incf size)
                              (remove-if-not (lambda (n) (< (gethash n heights) 9))
                                             (neighbors heights p))))
            (acc! size)))))

Final plumbing:

    (define-solution (2021 09) (heights parse-heights)
      (values (part1 heights) (part2 heights)))

    (define-test (2021 09) (502 1330560))

And that's it:

    > (time (test-run))
    TEST-2021/09..
    Success: 1 test, 2 checks.
    Evaluation took:
      0.050 seconds of real time
      0.046314 seconds of total run time (0.035313 user, 0.011001 system)
      92.00% CPU
      116,896,357 processor cycles
      9,133,328 bytes consed

I wish I had been able to code this up on the spot, as the clock was ticking; instead, my [REPL buffer]( for today wound up looking anything but clean:

- I used a 2D array instead of a HASH-TABLE to represent the map, and _that_ made pretty much all the rest really **really** verbose
- I was only checking the locations above and to the left of the current one in my first BASINS attempt, and that clearly generated the wrong answer
- It took me a while before I decided a NEIGHBORS function would help with the overall readability of the solution -- very important, especially when things ain't working as they should, but go figure why I had not thought about that sooner
- Last but not least, I forgot I already had AREF-OR in my utilities file, for safely accessing array values even with out-of-bound indices; so for part 1 I ended up sprinkling a bunch of OR / IGNORE-ERRORS forms all over the place, while for part 2 I wound up re-implementing AREF-OR as SAREF...

Still got my 2 stars home anyway, and _that_ alone is a reason to celebrate!

PS.
As it turns out, part 2 could also be solved using disjoint sets / union-find:

- Each location starts as a disjoint set / basin of its own
- For each location, for each of its neighbors, if the neighbor's height is not 9, we join the two sets / basins
- For each set, we find the root element and count how many locations roll up to the same set / basin

And these counters, one per basin, represent exactly the list of sizes we were looking for:

    (defun basins (heights &aux
                   (basins (make-hash-table :test 'equal))
                   (sizes (make-hash-table :test 'eq)))
      (loop for p being the hash-keys of heights using (hash-value h)
            unless (= h 9) do (setf (gethash p basins) (make-dset h)))
      (loop for p being the hash-keys of basins using (hash-value ds)
            do (loop for np in (neighbors heights p)
                     for nds = (gethash np basins)
                     when nds do (dset-union ds nds)))
      (loop for ds being the hash-values of basins
            do (incf (gethash (dset:dset-find ds) sizes 0)))
      (hash-table-values sizes))

And just to be 100% sure, let's confirm we did not break anything:

    > (time (test-run))
    TEST-2021/09..
    Success: 1 test, 2 checks.
    Evaluation took:
      0.021 seconds of real time
      0.020646 seconds of total run time (0.017324 user, 0.003322 system)
      100.00% CPU
      49,747,608 processor cycles
      6,147,472 bytes consed

We didn't!

# 2021-12-10

Advent of Code: [2021/10](

> You ask the submarine to determine the best route out of the deep-sea cave, but it only replies:
>
>     Syntax error in navigation subsystem on line: all of them
>
> All of them?! The damage is worse than you thought. You bring up a copy of the navigation subsystem (your puzzle input).

Continues:

> The navigation subsystem syntax is made of several lines containing chunks. There are one or more chunks on each line, and chunks contain zero or more other chunks. Adjacent chunks are not separated by any delimiter; if one chunk stops, the next chunk (if any) can immediately start.
> Every chunk must open and close with one of four legal pairs of matching characters:
>
> - If a chunk opens with `(`, it must close with `)`.
> - If a chunk opens with `[`, it must close with `]`.
> - If a chunk opens with `{`, it must close with `}`.
> - If a chunk opens with `<`, it must close with `>`.

It goes on explaining the difference between _corrupted_ lines (our focus for part 1) and _incomplete_ lines:

> Some lines are incomplete, but others are corrupted. Find and discard the corrupted lines first.
>
> A corrupted line is one where a chunk closes with the wrong character - that is, where the characters it opens and closes with do not form one of the four legal pairs listed above.

What we are asked to do is process each _corrupted_ line, stop at the first invalid character, and generate a _score_ as described below:

> To calculate the syntax error score for a line, take the first illegal character on the line and look it up in the following table:
>
> - `)`: `3` points.
> - `]`: `57` points.
> - `}`: `1197` points.
> - `>`: `25137` points.

To recap:

> Find the first illegal character in each corrupted line of the navigation subsystem. What is the total syntax error score for those errors?

To solve this, we are going to use a stack; given a line, for each of its characters:

- If it's an open parenthesis, we push the matching closing one onto the stack (e.g. if `(` is read, we push `)`)
- If it's a close parenthesis: if it matches the value at the top of the stack, we pop that from the stack; otherwise it's an error, and we calculate the syntax error score

Let's start by defining a table mapping open to close parentheses:

    (defparameter *closing* '((#\( . #\))
                              (#\[ . #\])
                              (#\{ . #\})
                              (#\< . #\>)))

Next, let's define our main function, SYNTAX-SCORING, which we will use to:

- Process all the lines in our file
- Implement the stack based algorithm mentioned above
- Sum all the error scores together...and return the total

    (defun syntax-scoring (file &aux (error-score 0))
      (dolist (line file)
        (let (stack)
          (loop for ch across line do
            (cond ((find ch "([{<") (push (cdr (assoc ch *closing*)) stack))
                  ((eq ch (car stack)) (pop stack))
                  (t (return (incf error-score (syntax-error-score ch))))))))
      error-score)

The last missing bit is SYNTAX-ERROR-SCORE, and all it does is look the given character up in another table, mapping _invalid_ characters to their score:

    (defparameter *syntax-error-table* '((#\) . 3)
                                         (#\] . 57)
                                         (#\} . 1197)
                                         (#\> . 25137)))

    (defun syntax-error-score (ch)
      (cdr (assoc ch *syntax-error-table*)))

For part 2 instead, we are asked to focus on _incomplete_ entries:

> Incomplete lines don't have any incorrect characters - instead, they're missing some closing characters at the end of the line. To repair the navigation subsystem, you just need to figure out the sequence of closing characters that complete all open chunks in the line.

Once we figure out these completion strings, we need to calculate their score:

> Then, for each character, multiply the total score by 5 and then increase the total score by the point value given for the character in the following table:
>
> - `)`: `1` point.
> - `]`: `2` points.
> - `}`: `3` points.
> - `>`: `4` points.

Then we need to collect all the scores, sort them, and pick the one in the middle:

> Autocomplete tools are an odd bunch: the winner is found by sorting all of the scores and then taking the middle score. (There will always be an odd number of scores to consider.)
>
> [...]
>
> Find the completion string for each incomplete line, score the completion strings, and sort the scores. What is the middle score?

OK, first off, let's update SYNTAX-SCORING to process _incomplete_ lines as well.
In our case, an entry is _incomplete_ if it's not _corrupted_ (i.e. it did not cause an _early_ return from the LOOP form), _and_ if, when done processing all the line's characters, `stack` turns out not to be empty; in which case we:

- calculate the completion score -- see the :FINALLY clause added to the LOOP form
- collect all the scores together -- see `completion-scores`, added to the function's &AUX parameters
- sort them and pick the middle value -- the final LET* form

    (defun syntax-scoring (file &aux (error-score 0) completion-scores)
      (dolist (line file)
        (let (stack)
          (loop for ch across line
                do (cond ((find ch "([{<") (push (cdr (assoc ch *closing*)) stack))
                         ((eq ch (car stack)) (pop stack))
                         (t (return (incf error-score (syntax-error-score ch)))))
                finally (when stack
                          (push (completion-score stack) completion-scores)))))
      (values error-score
              (let* ((completion-scores (sort completion-scores #'<))
                     (n (floor (length completion-scores) 2)))
                (nth n completion-scores))))

Note that in our case `stack` actually represents the _completion string_ mentioned above, so we can simply go ahead and calculate its score as instructed:

    (defparameter *point-value-table* '((#\) . 1)
                                       (#\] . 2)
                                       (#\} . 3)
                                       (#\> . 4)))

    (defun completion-score (stack &aux (score 0))
      (dolist (ch stack score)
        (setf score (+ (* score 5) (cdr (assoc ch *point-value-table*))))))

And that's it! Final plumbing:

    (define-solution (2021 10) (data)
      (syntax-scoring data))

    (define-test (2021 10) (323613 3103006161))

Test run:

    > (time (test-run))
    TEST-2021/10..
    Success: 1 test, 2 checks.
    Evaluation took:
      0.000 seconds of real time
      0.000799 seconds of total run time (0.000505 user, 0.000294 system)
      100.00% CPU
      1,872,677 processor cycles
      97,968 bytes consed

And we are good to go!

PS. Today's problem was a breath of fresh air, really, especially compared to the problems of the last two days, and even my [pre-refactoring REPL buffer]( seems to agree with that!
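PPS. In case anyone wants to double check the scoring rule in isolation, here is a tiny, self-contained sketch (tables inlined, so it does not depend on any of the definitions above) scoring `])}>`, one of the example completion strings from the problem statement; per the statement, it should work out to 294:

    ;; "])}>" listed top-of-stack first, i.e. the order in which the
    ;; closing characters would be popped off the stack
    (let ((score 0))
      (dolist (ch '(#\] #\) #\} #\>) score)
        (setf score (+ (* score 5)
                       (cdr (assoc ch '((#\) . 1) (#\] . 2)
                                        (#\} . 3) (#\> . 4))))))))
    ;; => 294, i.e. (((0*5 + 2)*5 + 1)*5 + 3)*5 + 4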
# 2021-12-11

Advent of Code: [2021/11](

> You enter a large cavern full of rare bioluminescent dumbo octopuses! They seem to not like the Christmas lights on your submarine, so you turn them off for now.
>
> There are 100 octopuses arranged neatly in a 10 by 10 grid. Each octopus slowly gains energy over time and flashes brightly for a moment when its energy is full. Although your lights are off, maybe you could navigate through the cave without disturbing the octopuses if you could predict when the flashes of light will happen.

Each octopus has an energy level, whose value we receive as our puzzle input:

    5483143223
    2745854711
    5264556173
    6141336146
    6357385478
    4167524645
    2176841721
    6882881134
    4846848554
    5283751526

> You can model the energy levels and flashes of light in steps. During a single step, the following occurs:
>
> - First, the energy level of each octopus increases by `1`.
> - Then, any octopus with an energy level greater than `9` flashes. This increases the energy level of all adjacent octopuses by `1`, including octopuses that are diagonally adjacent. If this causes an octopus to have an energy level greater than `9`, it also flashes. This process continues as long as new octopuses keep having their energy level increased beyond `9`. (An octopus can only flash at most once per step.)
> - Finally, any octopus that flashed during this step has its energy level set to `0`, as it used all of its energy to flash.

Note, adjacent flashes can cause an octopus to flash on a step even if it begins that step with very little energy. Before any steps:

    11111
    19991
    19191
    19991
    11111

After step 1:

    34543
    40004
    50005
    40004
    34543

After step 2:

    45654
    51115
    61116
    51115
    45654

OK, our task for today:

> Given the starting energy levels of the dumbo octopuses in your cavern, simulate 100 steps. How many total flashes are there after 100 steps?

As usual, let's parse our input first.
Like we did for [2021/09](, we are going to parse this into a HASH-TABLE, mapping from `(row col)` pairs to the octopus energy level:

    (defun parse-octopuses (data &aux (octopuses (make-hash-table :test 'equal)))
      (loop for r below (length data)
            for string in data
            do (loop for c below (length string)
                     for ch across string
                     do (setf (gethash (list r c) octopuses)
                              (- (char-code ch) (char-code #\0)))))
      octopuses)

With the puzzle input taken care of, we can now try to sketch our solution:

- We safely copy our input -- we are going to mutate it, and one too many times I have been bitten by accidentally reusing a mutated list / hash-table, so when in doubt, I copy things
- Then we let the octopuses dance 100 times, and keep track of the number of flashes at the end of each dance

    (defun flash-dance (octopuses &aux (octopuses (copy-hash-table octopuses)))
      (loop repeat 100 sum (dance octopuses)))

Great; now we have to figure out how flashing octopuses influence adjacent ones during the dance. Here is what we are going to do:

- First off, we increment the energy level of each octopus, and if that winds up being greater than 9, we FLASH it
- FLASH-ing an octopus means adding the octopus itself to the set of octopuses that already flashed (remember, octopuses can flash only once per step...err, dance), plus scheduling all its neighbors for an additional energy bump
- Next, until there are no octopuses left to process -- see `remaining` -- we increase their energy level, and if that winds up being greater than `9`, we call FLASH on them too -- causing the octopus to be marked as having flashed, and its neighbors to be added to `remaining`
- Note: this is very similar to the classic DFS / BFS, with the exception that nodes (i.e.
octopuses) are marked as _visited_ not the first time we process them, but the first time their energy becomes greater than `9` - At the end, we check all the octopuses, count the ones that flashed, and reset their energy level to `0` (defun dance (curr &aux remaining (flashed (make-hset nil :test 'equal))) (flet ((flashesp (energy) (> energy 9)) (flash (p) (hset-add p flashed) (setf remaining (append (neighbors curr p) remaining)))) (loop for p being the hash-keys of curr when (flashesp (incf (gethash p curr))) do (flash p)) (loop while remaining for n = (pop remaining) unless (hset-contains-p n flashed) when (flashesp (incf (gethash n curr))) do (flash n)) (loop for p being the hash-keys of curr using (hash-value e) count (when (flashesp e) (setf (gethash p curr) 0))))) NEIGHBORS is the exact same function we used for day 9; the only difference is that *NHOOD* includes deltas for diagonal locations too: (defparameter *nhood* '((-1 0) (-1 1) (0 1) (1 1) (1 0) (1 -1) (0 -1) (-1 -1))) (defun neighbors (energies p) (loop for d in *nhood* for n = (mapcar #'+ p d) when (gethash n energies) collect n)) Cool, now let's take a look at part 2 instead: > If you can calculate the exact moments when the octopuses will all flash simultaneously, you should be able to navigate through the cavern. What is the first step during which all octopuses flash? This should be easy enough: we let the octopuses dance until the number of the ones that flashed is 100 (i.e. the size of the grid).
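Incidentally, the increment / cascade / reset logic described above is easy to sanity-check against the puzzle's small 5x5 example; here is a quick Python port (a hypothetical re-implementation for illustration, not the Lisp above):

```python
def parse(lines):
    # grid as a dict mapping (row, col) -> energy level
    return {(r, c): int(ch) for r, line in enumerate(lines) for c, ch in enumerate(line)}

def step(grid):
    """One dance step: +1 everywhere, cascade the flashes, reset flashers to 0."""
    for p in grid:
        grid[p] += 1
    flashed = set()
    pending = [p for p in grid if grid[p] > 9]
    while pending:
        p = pending.pop()
        if p in flashed:
            continue  # an octopus flashes at most once per step
        flashed.add(p)
        r, c = p
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                n = (r + dr, c + dc)
                if n != p and n in grid:
                    grid[n] += 1
                    if grid[n] > 9:
                        pending.append(n)
    for p in flashed:
        grid[p] = 0
    return len(flashed)

def render(grid, rows, cols):
    return ["".join(str(grid[(r, c)]) for c in range(cols)) for r in range(rows)]
```

Running `step` twice on the 5x5 grid quoted earlier reproduces the "After step 1" and "After step 2" grids exactly (9 flashes on the first step, none on the second).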
Let's update FLASH-DANCE accordingly: - We use `:for step from 1` instead of `:repeat 100` - We accumulate flashes into a loop variable, `flash-count` - When the current step is `100`, then we save `flash-count` onto a new `part1` variable - When the number of flashes in the current step is `100`, we break out of the function returning `(values part1 step)` (defun flash-dance (octopuses &aux (octopuses (copy-hash-table octopuses)) (part1 0)) (loop for step from 1 for flashes = (dance octopuses) sum flashes into flash-count when (= step 100) do (setf part1 flash-count) when (= flashes 100) return (values part1 step))) And that's it! Final plumbing: (define-solution (2021 11) (octopuses parse-octopuses) (flash-dance octopuses)) (define-test (2021 11) (1665 235)) Test run: > (time (test-run)) TEST-2021/11.. Success: 1 test, 2 checks. Evaluation took: 0.026 seconds of real time 0.026022 seconds of total run time (0.022644 user, 0.003378 system) [ Run times consist of 0.011 seconds GC time, and 0.016 seconds non-GC time. ] 100.00% CPU 59,922,835 processor cycles 2,554,240 bytes consed PS. [Here]( you can find my pre-refactoring REPL buffer, and as you can see, as I cleaned up my solution I mainly renamed things, and combined the _cowboy_ loops into FLASH-DANCE! # 2021-12-12 Advent of Code: [2021/12]( > With your submarine's subterranean subsystems subsisting suboptimally, the only way you're getting out of this cave anytime soon is by finding a path yourself. Not just a path - the only way to know if you've found the **best** path is to find **all** of them. OK... So, we are given the map of the cave: start / \ c--A-----b--d \ / end Except that it's encoded, like this: start-A start-b A-c A-b b-d A-end b-end And we are asked to find all the paths connecting `start` to `end` that don't visit _small_ caves more than once: > Your goal is to find the number of distinct **paths** that start at `start`, end at `end`, and don't visit small caves more than once. 
There are two types of caves: big caves (written in uppercase, like A) and small caves (written in lowercase, like b). It would be a waste of time to visit any small cave more than once, but big caves are large enough that it might be worth visiting them multiple times. So, all paths you find should visit small caves at most once, and can visit big caves any number of times. First off, let's load this map into memory. We are going to implement an adjacency list storing which cave is connected to which other one, and we are going to do this using a HASH-TABLE instead of classic association lists: (defun parse-map (data &aux (map (make-hash-table))) (dolist (s data map) (destructuring-bind (from . to) (parse-line s) (push to (gethash from map)) (push from (gethash to map))))) (defun parse-line (string) (cl-ppcre:register-groups-bind ((#'make-keyword from to)) ("(\\w+)-(\\w+)" string) (cons from to))) Next, let's walk through the cave. We are going to recursively build up all the possible paths starting from `start`, and count the ones that lead to `end`: - If we reached `end` then we found one more path to count -- add `1` - Otherwise, check all the caves reachable from the current position - And if _visitable_, add the new cave to the current path, and _recur_ (defun walk (map) (while-summing (path#) (labels ((recur (curr) (cond ((eq (car curr) :|end|) (path#)) (t (loop for next in (gethash (car curr) map) when (or (big-cave-p next) (not (member next curr))) do (recur (cons next curr))))))) (recur '(:|start|))))) And when is a cave visitable? When it's a big one; or, if it's a small one, only if we haven't set foot in it already: (defun visitablep (path next) (or (big-cave-p next) (not (member next path)))) (defun big-cave-p (cave) (not (small-cave-p cave))) (defun small-cave-p (cave) (loop for ch across (string cave) thereis (lower-case-p ch))) Cool!
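For reference, the recursive walk above ports almost mechanically to Python (a hypothetical sketch, not the original Lisp; `graph` maps each cave to the list of its neighbors), and counts the 10 distinct paths of the small example:

```python
from collections import defaultdict

def parse_map(lines):
    # adjacency list: every "a-b" edge is walkable in both directions
    graph = defaultdict(list)
    for line in lines:
        a, b = line.split("-")
        graph[a].append(b)
        graph[b].append(a)
    return graph

def count_paths(graph):
    # extend the current path one cave at a time;
    # big caves (uppercase) freely, small caves at most once
    def recur(path):
        cur = path[-1]
        if cur == "end":
            return 1
        return sum(recur(path + [nxt])
                   for nxt in graph[cur]
                   if nxt.isupper() or nxt not in path)
    return recur(["start"])
```

On the example map (`start-A`, `start-b`, `A-c`, `A-b`, `b-d`, `A-end`, `b-end`) this counts 10 paths, matching the puzzle statement.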
Let's now take a look at part 2: > After reviewing the available paths, you realize you might have time to visit a single small cave **twice**. Specifically, big caves can be visited any number of times, a single small cave can be visited at most twice, and the remaining small caves can be visited at most once. However, the caves named `start` and `end` can only be visited **exactly once each**: once you leave the `start` cave, you may not return to it, and once you reach the `end` cave, the path must end immediately. So, it looks like we will have to tweak VISITABLEP a little; for part 2, we can visit a cave if: - It's a big one (as before) - Or when a small one, if it's not `start` - Or if the number of small caves, visited more than once, is less than or equal to `1` (defun part2-visitable-p (path next) (cond ((big-cave-p next) t) ((eq next :|start|) nil) (t (<= (loop for p on (cons next path) for (c) = p when (small-cave-p c) count (> (count c p) 1)) 1)))) Let's update WALK to accept `visitablep` from the caller: (defun walk (map &optional (visitablep #'visitablep)) (while-summing (path#) (labels ((recur (path) (cond ((eq (car path) :|end|) (path#)) (t (loop for next in (gethash (car path) map) when (visitablep path next) do (recur (cons next path))))))) (recur '(:|start|))))) And that's it! Final plumbing: (define-solution (2021 12) (map parse-map) (values (walk map) (walk map #'part2-visitable-p))) (define-test (2021 12) (3000 74222)) Test run: > (time (test-run)) TEST-2021/12.. Success: 1 test, 2 checks. Evaluation took: 1.981 seconds of real time 1.867072 seconds of total run time (1.818760 user, 0.048312 system) [ Run times consist of 0.007 seconds GC time, and 1.861 seconds non-GC time. 
] 94.25% CPU 4,557,591,987 processor cycles 29,753,344 bytes consed It's not the fastest solution, I will give you that, but I love how self contained the changes for part 2 were, especially if you compare that to my [original solution](, in which I ended up copying PART1 into PART2 because I needed to pass around a new boolean flag to keep track of whether walking into a small cave was allowed or not. PS. I was using UIOP:WHILE-COLLECTING to collect all the paths first, and then pass the result to LENGTH to get the answers to today's problems; well, I did not like that, and since I found myself with a little bit of time I decided to create the WHILE-SUMMING macro: (defmacro while-summing (summers &body body) (expand-while-summing summers body)) (defun expand-while-summing (summers body) (let* ((gen-syms (mapcar #'expand-while-summing-gensym summers)) (let-bindings (mapcar #'expand-while-summing-let-binding gen-syms)) (flet-bindings (mapcar #'expand-while-summing-flet-binding summers gen-syms))) `(let* (,@let-bindings) (flet (,@flet-bindings) ,@body (values ,@gen-syms))))) (defun expand-while-summing-gensym (name) (gensym (string name))) (defun expand-while-summing-let-binding (gensym) (list gensym 0)) (defun expand-while-summing-flet-binding (name binding) `(,name (&optional (delta 1)) (incf ,binding delta))) I am sure there are already 100+ different versions of this, better ones for sure, but still, one more would not hurt, so here is mine! --- + untrace function does not work with Vlime ? while-counting # 2021-12-13 Advent of Code: [2021/13]( > You reach another volcanically active part of the cave. It would be nice if you could do some kind of thermal imaging so you could tell ahead of time which caves are too hot to safely enter. 
The submarine has a thermal camera; the camera has never been activated; to activate the camera we need to enter the code found on page 1 of the manual, a page which falls out as soon as we open the manual; luckily for us, the paper is marked with dots and includes instructions on how to fold it up. For example: 6,10 0,14 9,10 0,3 10,4 4,11 6,0 6,12 4,1 0,13 10,12 3,4 3,0 8,4 1,10 2,14 8,10 9,0 fold along y=7 fold along x=5 The first set of data represents dots, i.e. `x`, `y` pairs, with `0, 0` representing the top-left corner of the sheet; the second set contains the folding instructions. What are we asked to do with this? > How many dots are visible after completing just the first fold instruction on your transparent paper? OK, first things first, the input. We are going to store all the dots into a HASH-TABLE (with `(row col)` as keys, and Ts as values); for the folding instructions instead, we are simply going to use a list: (defun parse-input (data &aux (dots (make-hash-table :test 'equal)) folds) (dolist (s data (list dots (reverse folds))) (when-let (point (parse-dot s)) (setf (gethash point dots) t)) (when-let (fold (parse-fold s)) (push fold folds)))) (defun dots (data) (car data)) (defun folds (data) (cadr data)) (defun parse-dot (string) (cl-ppcre:register-groups-bind ((#'parse-integer col row)) ("(\\d+),(\\d+)" string) (list row col))) (defun parse-fold (string) (cl-ppcre:register-groups-bind ((#'as-keyword axis) (#'parse-integer n)) ("fold along (\\w)=(\\d+)" string) (list axis n))) Now, applying the folding instructions should not take that long: - For each fold instruction (only one in our case) - For each dot - If it's on the left of a vertical folding line, or above a horizontal folding line, leave it where it is - Otherwise, project it left / up, using the folding line as rotation center (it sounds more complicated than it actually is) - Lastly, count the number of distinct dots (defun fold (input &aux (curr (dots input))) (dolist (f (folds
input)) (let ((next (make-hash-table :test 'equal))) (destructuring-bind (axis n) f (loop for (row col) being the hash-keys of curr do (ecase axis (:x (when (> col n) (decf col (* (- col n) 2)))) (:y (when (> row n) (decf row (* (- row n) 2))))) (setf (gethash (list row col) next) t))) (setf curr next) (return-from fold (hash-table-count curr))))) Cool! What about part 2? Well, we are asked to run all the folding instructions, and to _decode_ the message used to activate the thermal camera: > Finish folding the transparent paper according to the instructions. The manual says the code is always eight capital letters. What code do you use to activate the infrared thermal imaging camera system? OK, let's update FOLD to run all the instructions (and still store the answer for part 1), and then see how we could decode the message: (defun fold (input &aux (curr (dots input)) part1) (dolist (f (folds input)) (let ((next (make-hash-table :test 'equal))) (destructuring-bind (axis n) f (loop for (row col) being the hash-keys of curr do (ecase axis (:x (when (> col n) (decf col (* (- col n) 2)))) (:y (when (> row n) (decf row (* (- row n) 2))))) (setf (gethash (list row col) next) t))) (setf curr next) (unless part1 (setf part1 (hash-table-count curr))))) (values part1 (print-paper curr))) PRINT-PAPER? Nothing crazy: - We figure out the max row and col values - Then for each point from `(0 0)` to `(row-max col-max)` - If a dot exists, we output a `#`, or a ` ` otherwise (defun print-paper (dots &aux (rows (loop for d being the hash-keys of dots maximize (car d))) (cols (loop for d being the hash-keys of dots maximize (cadr d)))) (with-output-to-string (s) (terpri s) ; print an additional newline, so I can better format the expected string (dotimes (row (1+ rows)) (dotimes (col (1+ cols)) (princ (if (gethash (list row col) dots) #\# #\Space) s)) (terpri s)))) And that's it!
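The "rotation center" trick above boils down to `coordinate -= (coordinate - n) * 2` for any dot on the far side of the fold line. A minimal Python sketch of a single fold (hypothetical code, using a set so duplicate dots collapse automatically), checked against the example's first instruction:

```python
def fold_once(dots, axis, n):
    """Reflect every dot on the far side of the fold line onto the near side."""
    folded = set()
    for x, y in dots:
        if axis == "x" and x > n:
            x -= (x - n) * 2
        elif axis == "y" and y > n:
            y -= (y - n) * 2
        folded.add((x, y))
    return folded
```

Applying `fold_once(dots, "y", 7)` to the 18 example dots leaves 17 visible dots, as the puzzle says.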
Final plumbing: (define-solution (2021 13) (input parse-input) (fold input)) (define-test (2021 13) (724 " ## ### ## ### #### ### # # # # # # # # # # # # # # # # # # # # ### ### # # # # # # ### # # # # ### # # # # # # # # # # # # # # # # ## # ## ### #### # # ## #### ")) Test run: > (time (test-run)) TEST-2021/13.. Success: 1 test, 2 checks. Evaluation took: 0.022 seconds of real time 0.011407 seconds of total run time (0.005706 user, 0.005701 system) 50.00% CPU 51,063,965 processor cycles 1,038,608 bytes consed PS. [This morning](, while solving this, I forgot to swap `x` and `y` while reading dots; all my `(row col)` were flipped, and because of that: - Had to flip the bindings inside the for loop - Ended up swapping rows for columns inside PRINT-PAPER (see `(dotimes (c rows) ...)` and `(dotimes (r cols) ...)`) It's clear and simple, I am just not meant to be productive at 0600! # 2021-12-14 Advent of Code: [2021/14]( > The incredible pressures at this depth are starting to put a strain on your submarine. The submarine has polymerization equipment that would produce suitable materials to reinforce the submarine, and the nearby volcanically-active caves should even have the necessary input elements in sufficient quantities. Continues: > The submarine manual contains instructions for finding the optimal polymer formula; specifically, it offers a **polymer template** and a list of pair **insertion** rules (your puzzle input). You just need to work out what polymer would result after repeating the pair insertion process a few times. Here is how puzzle input is looking like: NNCB CH -> B HH -> N CB -> H NH -> C HB -> C HC -> B HN -> C NN -> C BH -> H NC -> B NB -> B BN -> B BB -> N BC -> B CC -> N CN -> C A note about how the polymer is being created: > The following section defines the pair insertion rules. A rule like `AB -> C` means that when elements `A` and `B` are immediately adjacent, element `C` should be inserted between them. 
These insertions all happen simultaneously. And finally, the task for part 1: > Apply 10 steps of pair insertion to the polymer template and find the most and least common elements in the result. What do you get if you take the quantity of the most common element and subtract the quantity of the least common element? As we need to make room for new elements, I figured I would use a list of characters instead of a STRING; so, the first line we are going to COERCE it into a LIST; the remaining ones instead, we are going to store them inside a HASH-TABLE: (defun parse-input (data) (list (coerce (first data) 'list) (parse-insertion-rules (cddr data)))) (defun parse-insertion-rules (data &aux (rules (make-hash-table :test 'equal))) (flet ((rule (string) (cl-ppcre:register-groups-bind (from to) ("(\\w+) -> (\\w)" string) (setf (gethash (coerce from 'list) rules) (coerce to 'list))))) (dolist (string data rules) (rule string)))) I think we are going to _simulate_ this: - For 10 times in a row -- DOTIMES - We are going to evolve the current polymer -- TICK - And at the end, we are going to return the expected result -- RESULT (defun evolve (times input &aux (polymer (first input)) (rules (second input))) (dotimes (_ times) (setf polymer (tick rules polymer))) (result polymer)) What does TICK look like? - We traverse the polymer by pairs - See if the set of insertion rules contain one for the current pair - If it does, we _collect_ the first element of the pair, plus the output of the insertion rule (i.e. 
this is how we insert the new element between the two existing ones) - Otherwise, we _collect_ the first element of the pair only (defun tick (rules curr) (loop for (a b) on curr for (z) = (gethash (list a b) rules) if z collect a and collect z else collect a)) Last, we are asked for the difference between the number of occurrences of the most and least common elements: - We calculate the frequency of each element, and sort them - Pick the last one, the first one, and do the subtraction (defun result (polymer &aux (freqs (sort (frequencies polymer) #'< :key #'cdr))) (- (cdar (last freqs)) (cdar freqs))) Cool, let's take a look at part 2 now: > Apply **40** steps of pair insertion to the polymer template and find the most and least common elements in the result. What do you get if you take the quantity of the most common element and subtract the quantity of the least common element? Uh oh...I am pretty sure the above is not going to work: (defun evolve (times input &aux (polymer (first input)) (rules (second input))) (dotimes (i times) (setf polymer (tick rules polymer)) (pr i (length polymer))) (result polymer)) Let's take this new version of EVOLVE out for a spin: > (evolve 20 (parse-input (uiop:read-file-lines "src/2021/day14.txt"))) 0 39 1 77 2 153 3 305 4 609 5 1217 6 2433 7 4865 8 9729 9 19457 10 38913 11 77825 12 155649 13 311297 14 622593 15 1245185 16 2490369 17 4980737 18 9961473 19 19922945 8844255 I am afraid we are going to need something cleverer than that! But what exactly? Well, if we think about it, insertion rules are defined for pairs of adjacent elements only; also, if the same pair of elements is present `10` times in the polymer and an insertion rule for that pair exists, then that rule will produce `10` new output elements; so what if we simply kept track of the number of occurrences of each pair of elements in the polymer? Let's begin by _massaging_ the original polymer representation (i.e.
the LIST of characters), into a HASH-TABLE counting the frequencies of each pair of adjacent elements: (defun massage (input &aux (freqs (make-hash-table :test 'equal))) (destructuring-bind (template rules) input (loop for (a b) on template do (incf (gethash (list a b) freqs 0))) (list freqs rules))) With this, we are then going to update TICK as follows: - For each pair of adjacent elements, `a` and `b` (and their number of occurrences, `n`) - We check if an insertion rule for the pair exists (insertion rule producing the new element `z`) - If it does, then we know the new polymer will contain both `(a z)` and `(z b)`; how many of these will it contain? Well, we started from `n` occurrences of `(a b)`, so we will have `n` occurrences of `(a z)` and `(z b)` - Otherwise, if no insertion rule is defined for `(a b)`, we simply carry that pair (and all its occurrences) along (defun tick (rules freqs &aux (next (make-hash-table :test 'equal))) (loop for (a b) being the hash-keys of freqs using (hash-value n) for (z) = (gethash (list a b) rules) if z do (incf (gethash (list a z) next 0) n) and do (incf (gethash (list z b) next 0) n) else do (incf (gethash (list a b) next 0) n)) next) Last but not least, we've got to update RESULT to account for the fact that the polymer now has a different representation. How do we do this? - We know the frequency of each pair, `(a b)` - We know pairs are overlapping, i.e. for the polymer `(a b c)` we are going to have one entry for `(a b)`, one for `(b c)`, and one for `(c nil)` - If we don't want to double-count frequencies, we then have to consider only the first element of each entry (defun result (freqs &aux (ch-freqs (make-hash-table))) (loop for (a) being the hash-keys of freqs using (hash-value n) do (incf (gethash a ch-freqs 0) n)) (- (loop for v being the hash-values of ch-freqs maximize v) (loop for v being the hash-values of ch-freqs minimize v))) And that's it!
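The pair-frequency idea above, ported to Python for illustration (hypothetical code; note the one wrinkle: with Python's `zip` the last character of the template never heads a pair, so it is added back by hand, whereas the Lisp version gets it for free from the trailing `(c nil)` pair):

```python
from collections import Counter

def polymer_answer(template, rules, steps):
    # count adjacent pairs instead of building the exponentially-growing polymer
    pairs = Counter(zip(template, template[1:]))
    for _ in range(steps):
        nxt = Counter()
        for (a, b), n in pairs.items():
            z = rules.get(a + b)
            if z:  # n copies of (a b) become n copies of (a z) and n of (z b)
                nxt[(a, z)] += n
                nxt[(z, b)] += n
            else:  # no rule: carry the pair along untouched
                nxt[(a, b)] += n
        pairs = nxt
    # count only the first element of each pair to avoid double counting...
    chars = Counter()
    for (a, _), n in pairs.items():
        chars[a] += n
    chars[template[-1]] += 1  # ...then add back the template's last character
    return max(chars.values()) - min(chars.values())
```

On the example (`NNCB` plus the 16 rules quoted earlier) this yields 1588 after 10 steps and 2188189693529 after 40, the values given in the puzzle.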
Final plumbing: (define-solution (2021 14) (input parse-input) (let ((input (massage input))) (values (evolve 10 input) (evolve 40 input)))) (define-test (2021 14) (5656 12271437788530)) Test run: > (time (test-run)) TEST-2021/14.. Success: 1 test, 2 checks. Evaluation took: 0.012 seconds of real time 0.004045 seconds of total run time (0.003179 user, 0.000866 system) 33.33% CPU 28,206,847 processor cycles 976,656 bytes consed # 2021-12-15 Advent of Code: [2021/15]( > You've almost reached the exit of the cave, but the walls are getting closer together. Your submarine can barely still fit, though; the main problem is that the walls of the cave are covered in chitons, and it would be best not to bump any of them. Continues: > The cavern is large, but has a very low ceiling, restricting your motion to two dimensions. The shape of the cavern resembles a square; a quick scan of chiton density produces a map of risk level throughout the cave (your puzzle input). For example: 1163751742 1381373672 2136511328 3694931569 7463417111 1319128137 1359912421 3125421639 1293138521 2311944581 > You start in the top left position, your destination is the bottom right position, and you cannot move diagonally. The number at each position is its **risk level**; to determine the total risk of an entire path, add up the risk levels of each position you **enter** (that is, don't count the risk level of your starting position unless you enter it; leaving it adds no risk to your total). OK; as usual, let's start with the input -- let's parse ours into a HASH-TABLE: (defun parse-risk-levels (data &aux (levels (make-hash-table :test 'equal))) (loop for r below (length data) for string in data do (loop for c below (length string) for ch across string do (setf (gethash (list r c) levels) (- (char-code ch) (char-code #\0))))) levels) Next we need to find the path, starting from the top left corner and ending to the bottom right one, that minimizes the total risk, i.e. 
the sum of all the risk levels along the path. Well, it looks like we are only going to have to use A* for this, with the usual MANHATTAN-DISTANCE to the end state as heuristic: (defun lowest-risk (levels &aux (row-max (loop for (r _) being the hash-keys of levels maximize r)) (col-max (loop for (_ c) being the hash-keys of levels maximize c)) (end (list row-max col-max))) (search-cost (a* '(0 0) :goal-state end :test 'equal :neighbors (partial-1 #'neighbors levels) :heuristic (partial-1 #'manhattan-distance end)))) Pretty standard NEIGHBORS function (note: we are not allowed to move diagonally): (defparameter *nhood* '((-1 0) (0 1) (1 0) (0 -1))) (defun neighbors (levels p) (loop for d in *nhood* for n = (mapcar #'+ p d) for level = (gethash n levels) when level collect (cons n level))) Et voila`! Let's take a look at part 2 now: > The entire cave is actually **five times larger in both dimensions** than you thought; the area you originally scanned is just one tile in a 5x5 tile area that forms the full map. Your original map tile repeats to the right and downward; each time the tile repeats to the right or downward, all of its risk levels **are 1 higher** than the tile immediately up or left of it. However, risk levels above `9` wrap back around to `1`. And: > Using the full map, what is the lowest total risk of any path from the top left to the bottom right? So to repeat: - We need to expand the input map, both horizontally and vertically - Every time we do this (again, 5 times horizontally, and 5 times vertically), we use the risk levels of the input map, incremented by 1, 2, 3...(0 for the original tile, +1 for the second one, +2 for the third one...) 
- When risk levels get higher than `9`, they wrap around starting from `1` (and not `0`) Let's have the MASSAGE function take care of all that: (defun massage (levels &aux (rows (1+ (loop for (r _) being the hash-keys of levels maximize r))) (cols (1+ (loop for (_ c) being the hash-keys of levels maximize c))) (rez (make-hash-table :test 'equal))) ;; Extend the map -- horizontally first (loop for (r c) being the hash-keys of levels using (hash-value risk) do (loop for i below 5 for c1 = (+ c (* cols i)) for risk1 = (+ risk (* 1 i)) do (setf (gethash (list r c1) rez) (if (> risk1 9) (- risk1 9) risk1)))) ;; Then vertically (setf levels (copy-hash-table rez)) ;; copy because we are changing it as we scan it (loop for (r c) being the hash-keys of levels using (hash-value risk) do (loop for i below 5 for r1 = (+ r (* rows i)) for risk1 = (+ risk (* 1 i)) do (setf (gethash (list r1 c) rez) (if (> risk1 9) (- risk1 9) risk1)))) rez) We call MASSAGE first, then pass the result to LOWEST-RISK, and that's it! Final plumbing: (define-solution (2021 15) (risk-levels parse-risk-levels) (values (lowest-risk risk-levels) (lowest-risk (massage risk-levels)))) (define-test (2021 15) (745 3002)) Test run: > (time (test-run)) TEST-2021/15.. Success: 1 test, 2 checks. Evaluation took: 1.256 seconds of real time 1.193335 seconds of total run time (1.131199 user, 0.062136 system) [ Run times consist of 0.097 seconds GC time, and 1.097 seconds non-GC time. ] 94.98% CPU 2,890,660,005 processor cycles 206,780,240 bytes consed A few closing notes: - I already had [A*]( implemented in my utilities file, so part 1 was just plumbing things together -- first time I ranked below 1000!!!
- For part 2 instead, I had to use my brain a little more, but apparently neither I nor my brain was actually ready for it: first I got the wrap-around logic wrong, then I realized I was only expanding the map diagonally, and last, I bumped into some weird errors because I was iterating over and mutating the same HASH-TABLE at the same time Anyways...

          --------Part 1--------   --------Part 2--------
    Day       Time   Rank  Score       Time   Rank  Score
     15   00:10:48    464      0   00:59:10   1927      0

_"50 fucking minutes to _expand_ a map?!"_ Yes, I know, that's ridiculous!

# 2021-12-16 Advent of Code: [2021/16]( We reached open waters; we received a transmission from the Elves; the transmission uses Buoyancy Interchange Transmission System (BITS), a method for packing numeric expressions into a binary sequence; we need to figure out what the Elves want from us. (It's one of those wordy problems, so sit tight!) > The first step of decoding the message is to convert the hexadecimal representation into binary. Each character of hexadecimal corresponds to four bits of binary data: Easy: (defun parse-input (data) (hexadecimal->binary (first data))) > The BITS transmission contains a single packet at its outermost layer which itself contains many other packets. The hexadecimal representation of this packet might encode a few extra 0 bits at the end; these are not part of the transmission and should be ignored. Let's update PARSE-INPUT to account for that: (defun parse-input (data) (parse-packet (hexadecimal->binary (first data)))) Now, we know we are going to be fiddling with bits a lot (well, `0` and `1` characters more than bits), so let's create ourselves BIN, a utility function to: - extract a range of characters from a given string - parse that into a binary number PARSE-INTEGER already does all this, so BIN will just be a tiny wrapper around it: (defun bin (s start &optional (end (length s))) (parse-integer s :start start :end end :radix 2)) Now we've got everything we need to implement the parser.
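Both helpers are one-liners in most languages; for illustration, a hedged Python equivalent (the names are mine), checked against `D2FE28`, the literal-value example packet from the puzzle:

```python
def hex_to_binary(h):
    # zfill keeps the leading zeros: each hex digit must map to exactly 4 bits
    return bin(int(h, 16))[2:].zfill(len(h) * 4)

def bits(s, start, end):
    # like BIN: parse the characters s[start:end] as a base-2 number
    return int(s[start:end], 2)
```

For `D2FE28` the expanded bit string is `110100101111111000101000`, whose first three bits give version 6 and the next three give type ID 4.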
Let's begin: > Every packet begins with a standard header: the first three bits encode the packet **version**, and the next three bits encode the packet **type ID**. These two values are numbers; all numbers encoded in any packet are represented as binary with the most significant bit first. For example, a version encoded as the binary sequence `100` represents the number `4`. Continues: > Packets with type ID `4` represent a **literal value**. Literal value packets encode a single binary number. To do this, the binary number is padded with leading zeroes until its length is a multiple of four bits, and then it is broken into groups of four bits. Each group is prefixed by a `1` bit except the last group, which is prefixed by a `0` bit. These groups of five bits immediately follow the packet header. OK, let's give PARSE-LITERAL a shot: - We pop `1` bit from the stream, interpret it -- MOREP - We pop `4` more bits from the stream, and _add_ them to `literal` -- we are reading `4` bits at a time, starting from the most significant ones, so for every group we need to multiply the current result by `16` and then add the current group to it - We do this until there are no more groups to read - When done, we return the literal value and the number of consumed characters (defun parse-literal (s &optional (start 0) &aux (literal 0)) (loop (let ((morep (= (bin s start (incf start)) 1))) (setf literal (+ (* literal 16) (bin s start (incf start 4)))) (unless morep (return (values literal start)))))) > Every other type of packet (any packet with a type ID other than `4`) represent an **operator** that performs some calculation on one or more sub-packets contained within. Right now, the specific operations aren't important; focus on parsing the hierarchy of sub-packets. Continues: > An operator packet contains one or more packets.
To indicate which subsequent binary data represents its sub-packets, an operator packet can use one of two modes indicated by the bit immediately after the packet header; this is called the **length type ID**: > > - If the length type ID is `0`, then the next **15** bits are a number that represents the **total length in bits** of the sub-packets contained by this packet. > If the length type ID is `1`, then the next **11** bits are a number that represents the **number of sub-packets immediately contained** by this packet. > > Finally, after the length type ID bit and the 15-bit or 11-bit field, the sub-packets appear. OK, that's a lot to unpack, but let's see what we can do: - We pop `1` bit and do a switch/case on it - If `0`, we pop `15` bits (the number of bits used by the current packet sub-packets), and recurse into PARSE-PACKET until we cannot parse packets anymore (note: the `3` there is used to discard any padding value introduced by the hex to bin conversion) - If `1` instead, we pop `11` bits and recurse into PARSE-PACKET a number of times equal to the number we just read - Lastly, we return all the sub-packets we read, plus the number of consumed characters (defun parse-operator (s &optional (start 0) &aux pkts) (ecase (bin s start (incf start)) (0 (let* ((remaining (bin s start (incf start 15)))) (loop while (> remaining 3) do (multiple-value-bind (pkt consumed) (parse-packet s start) (setf remaining (- remaining (- consumed start)) pkts (cons pkt pkts) start consumed))))) (1 (dotimes (_ (bin s start (incf start 11))) (multiple-value-bind (pkt consumed) (parse-packet s start) (setf pkts (cons pkt pkts) start consumed))))) (values (reverse pkts) start)) All is left now, is to implement PARSE-PACKET: - The first `3` bits represent the packet version - The next `3` bits represent the packet type - Next is the call to either PARSE-LITERAL or PARSE-OPERATOR - The result is all wrapped up inside a PLIST with: `:version`, `:type`, `:literal`, and 
`:sub-packets` (defun parse-packet (s &optional (start 0)) (let ((version (bin s start (incf start 3))) (type (bin s start (incf start 3)))) (multiple-value-bind (parsed start) (if (= type 4) (parse-literal s start) (parse-operator s start)) (values (list :version version :type type :literal (if (= type 4) parsed) :sub-packets (if (/= type 4) parsed)) start)))) OK, but what's the task for part 1? > Decode the structure of your hexadecimal-encoded BITS transmission; **what do you get if you add up the version numbers in all packets?** - We take the `:version` of the current packet - We recursively calculate the version-sum of all the sub-packets - Finally we add it all up together (defun part1 (pkt) (labels ((recur (pkt) (+ (getf pkt :version) (loop for next in (getf pkt :sub-packets) sum (recur next))))) (recur pkt))) Cool! For part 2 instead, we are asked to calculate the value of the expression the packet represents: > Literal values (type ID `4`) represent a single number as described above. The remaining type IDs are more interesting: > > - Packets with type ID `0` are **sum** packets - their value is the sum of the values of their sub-packets. If they only have a single sub-packet, their value is the value of the sub-packet. > - Packets with type ID `1` are **product** packets - their value is the result of multiplying together the values of their sub-packets. If they only have a single sub-packet, their value is the value of the sub-packet. > - Packets with type ID `2` are **minimum** packets - their value is the minimum of the values of their sub-packets. > - Packets with type ID `3` are **maximum** packets - their value is the maximum of the values of their sub-packets. > - Packets with type ID `5` are **greater than** packets - their value is **1** if the value of the first sub-packet is greater than the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets. 
> - Packets with type ID `6` are **less than** packets - their value is **1** if the value of the first sub-packet is less than the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets. > - Packets with type ID `7` are **equal to** packets - their value is **1** if the value of the first sub-packet is equal to the value of the second sub-packet; otherwise, their value is **0**. These packets always have exactly two sub-packets. Of all the operators listed above, only three behave differently from builtins: >, <, and =; builtins all return T or NIL, while we want to return `1` or `0` here; let's fix that: (defun b> (a b) (if (> a b) 1 0)) (defun b< (a b) (if (< a b) 1 0)) (defun b= (a b) (if (= a b) 1 0)) > **What do you get if you evaluate the expression represented by your hexadecimal-encoded BITS transmission?** Again, another recursive solution: - If the current packet is a _literal_ one, we return the value - Otherwise we pick the _right_ operator function based on the type - We recursively _evaluate_ all the sub-packets - Finally apply the operator to the values returned by the sub-packets (defun part2 (pkt) (labels ((recur (pkt &aux (literal (getf pkt :literal))) (cond (literal literal) (t (apply (aref #(+ * min max identity b> b< b=) (getf pkt :type)) (loop for next in (getf pkt :sub-packets) collect (recur next))))))) (recur pkt))) And that's it! Final plumbing: (define-solution (2021 16) (pkt parse-input) (values (part1 pkt) (part2 pkt))) (define-test (2021 16) (943 167737115857)) Test run: > (time (test-run)) TEST-2021/16.. Success: 1 test, 2 checks.
Evaluation took: 0.002 seconds of real time 0.002154 seconds of total run time (0.001430 user, 0.000724 system) 100.00% CPU 5,031,393 processor cycles 1,062,096 bytes consed Note: replacing APPLY with CONS inside PART2 will cause the function to actually output the AST of the expression stored inside the packet: > (defun part2 (pkt) (labels ((recur (pkt &aux (literal (getf pkt :literal))) (cond (literal literal) (t (cons (aref #(+ * min max identity b> b< b=) (getf pkt :type)) (loop for next in (getf pkt :children) collect (recur next))))))) (recur pkt))) PART2 > (part2 (parse-input (uiop:read-file-lines "src/2021/day16.txt"))) (+ (MIN 2479731930) 7 (* (B< 10381 100) 8) (* 188 242 135) (* 236 42) (MAX 57613) (* 189 (B> 2027 2269116)) (* 131 104 201 105 155) (* 2920854314 (B> (+ 6 3 7) (+ 13 3 15))) 6 (* 2165 (B= (+ 14 8 5) (+ 2 15 7))) 38575271553 (* (B< (+ 12 5 3) (+ 11 6 12)) 41379) (* (B> 727598 727598) 752530) (* 2939 (B< 1007934 660540)) (MAX 498405352 113 1358) (* 3146 (B> 5 5)) 17083744931 (* 11) (MIN 214 46980) 4 (MIN 1 25270 47 240) (MAX 247 45266 27 147874 49322) (* 45822 (B< 2491 2491)) 12 (* (B= 129 1004426) 4030837) 2368 172 (MAX 186 479 31 225) (+ 62424214260) (* 3722699 (B= 232 232)) (* 220 (B< 3667 50625)) (+ 960 9) (+ 852362 59034 3 10186466 44654) (* (B> 33431 39079) 712787) (* 146 140 4 55) (+ 3682 248 76) (MAX 14 184) (MIN 2 1206 15 1476 5005) (MAX (* (+ (MIN (+ (+ (+ (MAX (MIN (MAX (MIN (MIN (+ (MIN (+ (MIN (+ (+ (MAX (* 2673)))))))))))))))))))) (* 1343 (B> 38853 33177)) (* 119 (B< (+ 3 14 2) (+ 13 8 3))) (+ (* 4 5 7) (* 6 7 10) (* 11 3 3)) (+ 202 2004893107 2899 107) (* 901241660305 (B> (+ 4 4 8) (+ 13 6 3))) (* (+ 9 3 12) (+ 9 3 15) (+ 11 4 12)) (* (B= 217 40) 2513437316) 13 (* 11 (B< 2706 2706)) (MIN 63 216 14909) (* (B< 3296 812743) 52657) 72078562 (* (B> 144588154 652519) 5077920)) You feed that to EVAL, and there you will have the answer for part 2: > (eval *) 167737115857 For comparison, here is the [code]( that I used to get the stars; 
in there: - The _parser_ and the _evaluator_ are the same thing, so to sum the versions and evaluate the expression (part 1 and part 2 respectively) I had to resort to global ^Wspecial variables to keep track of the sum so far - I was creating new strings all over the place, instead of simply playing with indices Note for my future self: when implementing a parser, think about PARSE-INTEGER: - It uses a string as its input stream - It can optionally be told to skip a certain number of characters from the beginning of the string - It returns not only the parsed object, but also the index of the next unprocessed character -- and that makes for easier composition of the different parsing functions you most likely are going to implement <3 # 2021-12-17 Advent of Code: [2021/17]( We treat ourselves to a little physics problem today: > Ahead of you is what appears to be a large ocean trench. Could the keys have fallen into it? You'd better send a probe to investigate. The probe launcher on your submarine can fire the probe with any integer velocity in the `x` (forward) and `y` (upward, or downward if negative) directions. For example, an initial `x`,`y` velocity like `0,10` would fire the probe straight up, while an initial velocity like `10,-1` would fire the probe forward at a slight downward angle. Continues: > The probe's `x`,`y` position starts at `0,0`. Then, it will follow some trajectory by moving in **steps**. On each step, these changes occur in the following order: > > - The probe's `x` position increases by its `x` velocity. > - The probe's `y` position increases by its `y` velocity. > - Due to drag, the probe's x velocity changes by `1` toward the value `0`; that is, it decreases by `1` if it is greater than `0`, increases by `1` if it is less than `0`, or does not change if it is already `0`. > - Due to gravity, the probe's `y` velocity decreases by `1`.
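The four step rules above are easy to sanity-check in isolation; here is a minimal, self-contained sketch (STEP-ONCE is a hypothetical name used only for this illustration, not part of the solution that follows):

```lisp
;; One simulation step, per the rules above: move, then apply drag and gravity.
;; STEP-ONCE is a hypothetical helper, not part of the actual solution.
(defun step-once (x y vx vy)
  (values (+ x vx)           ; x position increases by x velocity
          (+ y vy)           ; y position increases by y velocity
          (- vx (signum vx)) ; drag: vx moves by 1 toward 0
          (1- vy)))          ; gravity: vy decreases by 1

;; (step-once 0 0 7 2) => 7, 2, 6, 1
```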
OK, let's create a PROBE structure with `x`, `y`, `vx` and `vy` slots, and let's add a MOVE function as well, implementing the _step_ logic above: (defstruct (probe (:conc-name) (:constructor %make-probe)) x y vx vy) (defun make-probe (vx vy) (%make-probe :x 0 :y 0 :vx vx :vy vy)) (defun move (probe) (with-slots (x y vx vy) probe (let ((dx (<=> vx 0))) (incf x vx) (incf y vy) (decf vx dx) (decf vy))) probe) > For the probe to successfully make it into the trench, the probe must be on some trajectory that causes it to be within a **target area** after any step. The submarine computer has already calculated this target area (your puzzle input). Here is an example input: target area: x=20..30, y=-10..-5 For this, we are going to create a new structure, AREA, having slots for the min / max `x` and `y` values it represents; we are also going to implement PARSE-INPUT to extract `x1`, `x2`, `y1`, and `y2` values from our input, and create a new AREA instance off of it: (defstruct (area (:conc-name) (:constructor %make-area)) x1 x2 y1 y2) (defun make-area (x1 x2 y1 y2) (%make-area :x1 (min x1 x2) :x2 (max x1 x2) :y1 (min y1 y2) :y2 (max y1 y2))) (defun parse-input (data) (flet ((parse (string) (cl-ppcre:register-groups-bind ((#'parse-integer x1 x2 y1 y2)) ("target area: x=(-?\\d+)..(-?\\d+), y=(-?\\d+)..(-?\\d+)" string) (make-area x1 x2 y1 y2)))) (parse (first data)))) > Find the initial velocity that causes the probe to reach the highest y position and still eventually be within the target area after any step. **What is the highest y position it reaches on this trajectory?** To recap: - We know the position where the probe is shot from, i.e. `0,0` - We know where the target area is, i.e. our input - We know how the probe moves - We need to figure out the initial velocity which a) maximises the `y` value, and b) causes the probe to eventually hit the target area (without overshoot!) 
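Before worrying about search ranges, we can replay the puzzle's worked example with a throwaway simulation of the movement rules (HITSP is a hypothetical helper, independent of the PROBE and AREA structures above, with the example target area `x=20..30, y=-10..-5` hard-coded):

```lisp
;; Throwaway sketch: shoot with velocity VX,VY and return T if the probe is
;; ever within the example target area x=20..30, y=-10..-5.
;; HITSP is a hypothetical name used only for this illustration.
(defun hitsp (vx vy)
  (loop with x = 0 and y = 0
        while (>= y -10)       ; once below the target, it can never come back up
        when (and (<= 20 x 30) (<= -10 y -5)) return t
        do (setf x (+ x vx)
                 y (+ y vy)
                 vx (- vx (signum vx)) ; drag
                 vy (1- vy))))         ; gravity

;; (hitsp 7 2)   => T   ; lands on 28,-7 after seven steps
;; (hitsp 17 -4) => NIL ; flies through the area without ever being inside it
```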
We are going to brute-force it: - For all the horizontal velocities in a _sensible_ range - For all the vertical velocities in a _sensible_ range - Shoot the probe - Check that the probe does not miss the target area - Maximise `y` values (defun solve (target) (loop for vx from (vx-min target) to (vx-max target) maximize (loop for vy from (vy-min target) to (vy-max target) when (shoot target vx vy) maximize it))) The tricky part here is, of course, figuring out some _sensible_ ranges to use for the horizontal and vertical velocities, but let us worry about the simulation part first: - We shoot - We keep on moving the probe until it gets past the target - We keep track of the maximum reached `y` value, and if the probe ever makes it to the target area, we return it (defun shoot (target vx vy &aux (probe (make-probe vx vy))) (loop until (past-target-p target probe) when (on-target-p target probe) return y-max do (move probe) maximize (y probe) into y-max)) A probe gets past the target area if its `y` value is smaller than the area's _smallest_ `y` value: (defun past-target-p (target probe) (< (y probe) (y1 target))) On the other hand, a probe hits the target if its `x` and `y` coordinates are contained within the box defined by the target area min / max `x` and `y` values: (defun on-target-p (target probe) (and (<= (x1 target) (x probe) (x2 target)) (<= (y1 target) (y probe) (y2 target)))) Now, let's do some guesswork to figure out some _sensible_ ranges to use for our horizontal and vertical velocities. The maximum horizontal velocity we can shoot the probe at is the maximum `x` value of the target area; one more, and we are going to miss the target: (defun vx-max (target) (x2 target)) What about its minimum value?
Well, the probe loses speed at each step, and if we don't want to miss our target, the probe's horizontal speed cannot drop to `0` before it reaches the target area; so, if we start from the minimum `x` value of the target area, and keep on _accelerating_ towards `x=0`, we should find our minimal horizontal velocity value: (defun vx-min (target &aux (x (x1 target))) (loop while (> x 0) for vx from 0 do (decf x vx) finally (return vx))) On the vertical axis too we lose speed at every step; but what's interesting is that if we shoot up with vertical speed equal to `10`, by the time the probe reaches `y=0` again (i.e. after it has decelerated, reached its highest `y` value with vertical speed of `0`, and then come down again), its vertical speed would now be `-10`; also, the probe keeps on moving faster and faster after it has come back to `y=0`, and if we don't want to miss our target, we cannot set the vertical speed any higher than _minus target's min `y` value_: (defun vy-max (target) (- (y1 target))) The minimum value? Well, `0` of course, given that we are trying to shoot the probe as high as possible: (defun vy-min (target) (declare (ignore target)) 0) Cool! > (solve (parse-input (uiop:read-file-lines "src/2021/day17.txt"))) 5565 We change our mind for part 2, and rather than focusing on shooting as high as possible, we want to count all the distinct velocities that cause the probe to hit the target: > How many distinct initial velocity values cause the probe to be within the target area after any step?
First we change SOLVE to cater for both part 1 and part 2 requirements: - For each horizontal velocity - For each vertical velocity - If we eventually hit the target - Maximize the highest `y` value reached by the trajectory - And add `1` to a counter (defun solve (target &aux (part1 0) (part2 0)) (loop for vx from (vx-min target) to (vx-max target) do (loop for vy from (vy-min target) to (vy-max target) do (uiop:if-let (highest (shoot target vx vy)) (setf part1 (max part1 highest) part2 (1+ part2))))) (values part1 part2)) We are not done yet! We were trying to maximize the `y` value of our trajectories previously, so we had our VY-MIN always return `0`; well, for part 2 we need to look into negative vertical velocities too, and the smallest sensible value is the one aiming straight down at the smallest `y` value of the target area -- one more, and we would miss it! (defun vy-min (target) (y1 target)) And that's it! Final plumbing: (define-solution (2021 17) (target parse-input) (solve target)) (define-test (2021 17) (5565 2118)) Test run: > (time (test-run)) TEST-2021/17.. Success: 1 test, 2 checks. Evaluation took: 0.338 seconds of real time 0.307870 seconds of total run time (0.297925 user, 0.009945 system) 91.12% CPU 779,177,010 processor cycles 1,669,536 bytes consed PS. Here is my [REPL buffer]( for the day, and I kind of feel ashamed by the things you can see there: - `x1` and `x2` are min / max, while `y1` and `y2` are max / min -- why?! - PAST-TARGET-P was not working, so inside SIMULATE I ended up breaking out of the loop if the probe `y` value is smaller than `-200` -- lol - With pen and paper I (thought I) figured out the _optimal_ vertical speed for part 1; but it was not working with the input, hence the 1+ - For part 2 instead, I made the search space big enough to make it work with the example, and then fed the actual input to it -- and `304.470` seconds later, it spit out the right answer!
It's clear my brain is not _functioning_ too well under time pressure, but _whatever it takes_, right?! ;-) # 2021-12-18 Advent of Code: [2021/18]( First off, it's the last _serious_ weekend before December 25th, so expect this **not** to be easy! We descend into the ocean trench and encounter some snailfish; they say they saw the sleigh keys; they will tell us which direction the keys went only if we help them with a little math problem...sigh! > Snailfish numbers aren't like regular numbers. Instead, every snailfish number is a **pair** - an ordered list of two elements. Each element of the pair can be either a regular number or another pair. Pairs are written as `[x,y]`, where `x` and `y` are the elements within the pair. Continues: > This snailfish homework is about **addition**. To add two snailfish numbers, form a pair from the left and right parameters of the addition operator. For example, `[1,2]` + `[[3,4],5]` becomes `[[1,2],[[3,4],5]]`. OK. > There's only one problem: **snailfish numbers must always be reduced**, and the process of adding two snailfish numbers can result in snailfish numbers that need to be reduced. > > To reduce a snailfish number, you must repeatedly do the first action in this list that applies to the snailfish number: > > - If any pair is **nested inside four pairs**, the leftmost such pair **explodes**. > - If any regular number is **10 or greater**, the leftmost such regular number **splits**. > > Once no action in the above list applies, the snailfish number is reduced. OK.. > During reduction, at most one action applies, after which the process returns to the top of the list of actions. For example, if **split** produces a pair that meets the **explode** criteria, that pair **explodes** before other **splits** occur. OK...
> To **explode** a pair, the pair's left value is added to the first regular number to the left of the exploding pair (if any), and the pair's right value is added to the first regular number to the right of the exploding pair (if any). Exploding pairs will always consist of two regular numbers. Then, the entire exploding pair is replaced with the regular number `0`. OK.... > To **split** a regular number, replace it with a pair; the left element of the pair should be the regular number divided by two and rounded **down**, while the right element of the pair should be the regular number divided by two and rounded **up**. For example, `10` becomes `[5,5]`, `11` becomes `[5,6]`, `12` becomes `[6,6]`, and so on. OK..... > The homework assignment involves adding up a **list of snailfish numbers** (your puzzle input). The snailfish numbers are each listed on a separate line. Add the first snailfish number and the second, then add that result and the third, then add that result and the fourth, and so on until all numbers in the list have been used once. OK...... > To check whether it's the right answer, the snailfish teacher only checks the **magnitude** of the final sum. The magnitude of a pair is 3 times the magnitude of its left element plus 2 times the magnitude of its right element. The magnitude of a regular number is just that number. OK....... > Add up all of the snailfish numbers from the homework assignment in the order they appear. **What is the magnitude of the final sum?** Well, well, well... OK, first off we need a way to represent snailfish numbers; nothing fancy, we are just going to parse these as list of objects (i.e. 
`[`, `]`, or a digit): (defun parse-snumbers (data) (mapcar #'sread data)) (defun sread (string) (flet ((ch->number (ch) (- (char-code ch) (char-code #\0)))) (loop for ch across string unless (eql ch #\,) collect (if (find ch "[]") ch (ch->number ch))))) Let's confirm it's working as expected: > (sread "[[1,2],[3,4]]") (#\[ #\[ 1 2 #\] #\[ 3 4 #\] #\]) Now, adding two snailfish numbers together translates to: - Append the second number to the first one - Wrap it all up inside `[` and `]` characters - Then _reduce_ the number Let's do this: (defun s+ (number1 number2) (sreduce (append (list #\[) number1 number2 (list #\])))) To reduce a snailfish number: - Try to _explode_ it - On success, go back to the first step - Otherwise, try to _split_ it - On success, go back to the first step - Otherwise the number is fully reduced, and there is nothing else we can do with it (defun sreduce (number) (loop (uiop:if-let (exploded (sexplode number)) (setf number exploded) (uiop:if-let (split (ssplit number)) (setf number split) (return number))))) _Exploding_ a snailfish number, that's a tricky one! To recap what we have to do: - Find the first pair nested inside 4 pairs, say `[left, right]` - Replace the current pair with `0` - Add `left` to the first regular number _to the left_ of the exploded pair - Add `right` to the first regular number _to the right_ of the exploded pair We are going to be iterating over the tokens of the number; as we do it, we are going to keep track of: - the depth of the current element -- increase it by `1` on each `[` element, decrease it by `1` on each `]` one - the list of processed elements, i.e. _to the left_, in the form of a stack - the list of yet-to-process elements, i.e. _to the right_ This way, as soon as the depth of the current element equals `4` we will know we found the number to explode. When this happens: - The first element inside _to the right_ will be `left` in the recap above, i.e.
the element to add to the first regular number _to the left_ - The second element instead will represent `right` - The third one will be a `]` -- which we can discard, as the pair just exploded The last step, is to re-assemble the number: - We add `left` _to the left_, and reverse it (remember, it's a stack) - Then we add the `0` in place of the exploded number - We add `right` _to the right_, and append the result It sounds a bit more complicated than it actually is, so hopefully the following will help understanding what the general idea is: (defun sexplode (number &aux (depth 0) to-the-left) (loop for (v . to-the-right) on number do (cond ((numberp v) (push v to-the-left)) ((eql v #\]) (push v to-the-left) (decf depth)) ((and (eql v #\[) (/= depth 4)) (push v to-the-left) (incf depth)) (t (let ((left (pop to-the-right)) (right (pop to-the-right))) (assert (and (eql v #\[) (= depth 4))) (assert (eql (pop to-the-right) #\])) (flet ((add-first (addend list) (loop for (v . to-the-right) on list collect (if (numberp v) (+ v addend) v) into vv when (numberp v) append to-the-right into vv and return vv finally (return vv)))) (return (append (reverse (add-first left to-the-left)) (list 0) (add-first right to-the-right))))))))) _Splitting_ a snailfish number on the other hand, should hopefully be easier: - For each element of the number - If it's not a regular number, or if it's smaller than `10`, we keep it as is - Otherwise we found our number to split -- so we split it, and assemble things back together (defun ssplit (number) (loop for (v . 
to-the-right) on number if (or (not (numberp v)) (< v 10)) collect v into to-the-left else return (append to-the-left (list #\[) (list (floor v 2) (ceiling v 2)) (list #\]) to-the-right))) OK, the last missing piece of the puzzle is calculating the _magnitude_ of a snailfish number: - We are going to push the number elements into a stack - When we find a `]` it means we found a _leaf_ - So we pop one item from the stack -- the `right` part of the pair - We pop one more -- the `left` part of the pair - We pop and discard one more -- the `[` char - We do the _math_, i.e. `left * 3 + right * 2`, and push the result into the stack - Eventually, the only element inside the stack will be the magnitude of the number: (defun smagnitude (number &aux stack) (loop for (ch . remaining) on number do (case ch (#\[ (push ch stack)) (#\] (let ((right (pop stack)) (left (pop stack))) (assert (eql (pop stack) #\[)) (push (+ (* left 3) (* right 2)) stack))) (otherwise (push ch stack)))) (pop stack)) Alright, time to implement the solution for part 1: (defun part1 (numbers) (smagnitude (reduce #'s+ numbers))) LOL! Off to part 2: > What is the largest magnitude of any sum of two different snailfish numbers from the homework assignment? We are just going to brute-force this, taking into account that addition of snailfish numbers is not commutative: (defun part2 (numbers) (loop for (n1 . remaining) on numbers maximize (loop for n2 in remaining maximize (smagnitude (s+ n1 n2)) maximize (smagnitude (s+ n2 n1))))) And that's it! Final plumbing: (define-solution (2021 18) (numbers parse-snumbers) (values (part1 numbers) (part2 numbers))) (define-test (2021 18) (3987 4500)) Test run: > (time (test-run)) TEST-2021/18.. Success: 1 test, 2 checks. Evaluation took: 0.197 seconds of real time 0.187851 seconds of total run time (0.179690 user, 0.008161 system) [ Run times consist of 0.020 seconds GC time, and 0.168 seconds non-GC time.
] 95.43% CPU 453,704,694 processor cycles 403,829,360 bytes consed It conses up a bit more than I thought it would, but there are a lot of APPEND operations going on in there, so it's all right. PS. I spent way too much time trying to implement a _recursive_ solution for this, but I simply could not wrap my head around it so eventually I gave up and implemented the solution described above. I did come back to it later though, and finally made it work, though I am not sure if the [recursive solution]( reads any better than the _stack-based_ one! PPS. Looking for today's REPL buffer? It's a [mess]( PPPS. People on [r/adventofcode]( have been mentioning [Zippers]( as a way to solve today's problem; it's the first time I've heard of them, so I might as well look into this again and implement an alternate solution! # 2021-12-19 Advent of Code: [2021/19]( Solving today's problem took me way too long, so I am not going to put together a full write-up for it, not yet, especially because my solution takes around 2 minutes to complete, and conses up like hell! Evaluation took: 124.405 seconds of real time 92.907883 seconds of total run time (90.247127 user, 2.660756 system) [ Run times consist of 0.972 seconds GC time, and 91.936 seconds non-GC time. ] 74.68% CPU 286,124,018,756 processor cycles 13,399,299,136 bytes consed Very high level: - We pick a scanner and check it against every other one, and see if they overlap - To see if the two scanners overlap... - ...we keep one as is, while rotating the other in all the 24 possible directions - ...for each beacon of the first scanner, we calculate the relative distances with all the other beacons seen by the scanner - ...for each beacon of the second scanner (the rotated one), we calculate the relative distances with all the other beacons seen by the scanner - ...if these two sets of relative distances have at least 12 elements in common, then the two scanners overlap!
- ...do some math to return the scanner point, and the rotation matrix that we used to _orient_ the second scanner - We now have a new scanner (with a new origin, and a new rotation matrix), so we can: - ...convert the positions of the local beacons to the system of coordinates of the scanner used as reference - ...use the new scanner to check against all of the others Input parsing: (defun parse-scanner (paragraph &aux (paragraph (cl-ppcre:split "\\n" paragraph))) (flet ((numbers (string) (mapcar #'parse-integer (cl-ppcre:all-matches-as-strings "-?\\d+" string)))) (cons (first (numbers (first paragraph))) (mapcar #'numbers (rest paragraph))))) (defun id (scn) (car scn)) (defun beacons (scn) (cdr scn)) Some utilities to deal with matrices first: (defun row (i m) (nth i m)) (defun col (j m) (loop for row in m collect (nth j row))) (defun dot-product (v1 v2) (apply #'+ (mapcar #'* v1 v2))) (defun invert (m) (transpose m)) (defun transpose (m) (loop for i below (length m) collect (col i m))) ; Courtesy of: (defparameter *rotate-90-x* '((1 0 0) (0 0 -1) (0 1 0))) (defparameter *rotate-90-y* '(( 0 0 1) ( 0 1 0) (-1 0 0))) (defparameter *rotate-90-z* '((0 -1 0) (1 0 0) (0 0 1))) (defun m* (m1 m2) (loop for i below (length m1) collect (loop for j below (length (first m1)) collect (dot-product (row i m1) (col j m2))))) (defun compute-all-rotation-matrices (&aux (midentity '((1 0 0) (0 1 0) (0 0 1))) rez) (loop repeat 4 for rz = midentity then (m* rz *rotate-90-z*) do (loop repeat 4 for ry = (m* rz midentity) then (m* ry *rotate-90-y*) do (loop repeat 4 for rx = (m* ry midentity) then (m* rx *rotate-90-x*) do (push rx rez)))) (reverse (remove-duplicates rez :test 'equal))) (defparameter *all-rotation-natrices* (compute-all-rotation-matrices)) The beef for part 1: (defun find-beacons (scanners &aux (queue (list (list (first scanners) '(0 0 0) '((1 0 0) (0 1 0) (0 0 1))))) (rez (beacons (first scanners))) (scanners (rest scanners))) (loop while queue for (scn1 d10 r10) = (pop
queue) do (loop for scn2 in scanners do (multiple-value-bind (p1 r21) (locate-scanner scn1 scn2) (when p1 (flet ((transform21 (p2) (v+ (rotate p2 r21) p1)) (transform10 (p1) (v+ (rotate p1 r10) d10))) (let ((d20 (transform10 p1)) (r20 (m* r10 r21))) (prl (id scn1) d10 r10 (id scn2) p1 r21 d20 r20) (setf rez (append rez (mapcar (lambda (p2) (transform10 (transform21 p2))) (beacons scn2))) scanners (remove scn2 scanners :test 'equal) queue (cons (list scn2 d20 r20) queue)))))))) (remove-duplicates rez :test 'equal)) (defun locate-scanner (scn1 scn2 &aux (bb1 (beacons scn1))) (dolist (b1 bb1) (let ((dd1 (relative-to b1 bb1))) (dolist (m *all-rotation-natrices*) (let ((bb2 (rotate-all (beacons scn2) m))) (dolist (b2 bb2) (let ((dd2 (relative-to b2 bb2))) (let ((common (intersection dd1 dd2 :test 'equal))) (when (>= (length common) 12) (let ((b2-orig (rotate b2 (invert m)))) (return-from locate-scanner (values (locate b1 b2-orig m) m)))))))))))) (defun relative-to (b bb) (mapcar (lambda (o) (mapcar #'- o b)) bb)) (defun rotate-all (points matrix) (mapcar (partial-1 #'rotate _ matrix) points)) (defun rotate (point matrix) (loop for row in matrix collect (dot-product row point))) (defun locate (b1 b2 m) (v- b1 (rotate b2 m))) #; DO IT! 
(find-beacons (mapcar #'parse-scanner (cl-ppcre:split "\\n\\n" (uiop:read-file-string "src/2021/day19.txt")))) (setq beacons *) (length *) With part 1 done (and working!), part 2 should be pretty easy: - With the same algorithm we used for part 1, we figure out the list of scanners -- FIND-SCANNERS is 99% copy-pasta of FIND-BEACONS - Then we check each scanner against each other, and maximize the manhattan distance (defun find-scanners (scanners &aux (queue (list (list (first scanners) '(0 0 0) '((1 0 0) (0 1 0) (0 0 1))))) rez (scanners (rest scanners))) (loop while queue for (scn1 d10 r10) = (pop queue) do (loop for scn2 in scanners do (multiple-value-bind (p1 r21) (locate-scanner scn1 scn2) (when p1 (flet ((transform21 (p2) (v+ (rotate p2 r21) p1)) (transform10 (p1) (v+ (rotate p1 r10) d10))) (let ((d20 (transform10 p1)) (r20 (m* r10 r21))) (prl (id scn1) d10 r10 (id scn2) p1 r21 d20 r20) (setf rez (cons d20 rez) scanners (remove scn2 scanners :test 'equal) queue (cons (list scn2 d20 r20) queue)))))))) rez) #; DO IT! (find-scanners (mapcar #'parse-scanner (cl-ppcre:split "\\n\\n" (uiop:read-file-string "src/2021/day19.txt")))) (setq scanners *) (loop for (s1 . remaining) on scanners maximize (loop for s2 in remaining maximize (manhattan-distance s1 s2))) Things have gotten...interesting! # 2021-12-20 Advent of Code: [2021/20]( > With the scanners fully deployed, you turn their attention to mapping the floor of the ocean trench. When you get back the image from the scanners, it seems to just be random noise. Perhaps you can combine an image enhancement algorithm and the input image (your puzzle input) to clean it up a little. 
For example: ..#.#..#####.#.#.#.###.##.....###.##.#..###.####..#####..#....#..#..##..###..######.###...####..#..#####..##..#.#####...##.#.#..#.##..#.#......#.###.######.###.####...#.##.##..#..#..#####.....#.#....###..#.##......#.....#..#..#..##..#...##.######.####.####.#.#...#.......#..#.#.#...####.##.#......#..#...##.#.##..#...##.#.##..###.#......#.#.......#.#.#.####.###.##...#.....####.#..#..#.##.#....##..#.####....##...##..#...#......#.#.......#.......##..####..#...#.#.#...##..#.#..###..#####........#..####......#..# #..#. #.... ##..# ..#.. ..### The first section is the **image enhancement algorithm**. The second section is the **input image**, a two-dimensional grid of light pixels (`#`) and dark pixels (`.`). > The image enhancement algorithm describes how to enhance an image by **simultaneously** converting all pixels in the input image into an output image. Each pixel of the output image is determined by looking at a 3x3 square of pixels centered on the corresponding input image pixel. So, to determine the value of the pixel at (5,10) in the output image, nine pixels from the input image need to be considered: (4,9), (4,10), (4,11), (5,9), (5,10), (5,11), (6,9), (6,10), and (6,11). These nine input pixels are combined into a single binary number that is used as an index in the image **enhancement algorithm string**. The task for the day: > Start with the original input image and apply the image enhancement algorithm twice, being careful to account for the infinite size of the images. 
**How many pixels are lit in the resulting image?** OK, first off, the input: - A tuple - First element is the algorithm, with all the `#` and `.` characters replaced with Ts and NILs - The second element, a HASH-TABLE mapping from `(row col)` pairs, to whether the pixel was on (T), or not (NIL) (defun parse-input (data) (list (parse-enhancement-algorithm (first data)) (parse-image (cddr data)))) (defun parse-enhancement-algorithm (string) (map 'vector (partial-1 #'eql #\#) string)) (defun parse-image (data &aux (image (make-hash-table :test 'equal))) (loop for r below (length data) for string in data do (loop for c below (length string) for ch across string when (eql ch #\#) do (setf (gethash (list r c) image) t))) image) (defun enhancement-algorithm (input) (first input)) (defun image (input) (second input)) Next, the core of the enhancement algorithm: - All the pixels are enhanced at the same time, so we write the enhanced pixels into a brand new image instead of mutating the current one - Then, for each pixel -- plus some extra ones, to properly account for the enhancements happening on the border of the image - We fetch the 3x3 block centered on _that_ pixel -- NEIGHBORS9 - We convert that into an index - We use that to access the algorithm and figure out whether the pixel will be on or off (defun enhance-step (algo curr &aux (next (make-hash-table :test 'equal)) (row-min (loop for (r _) being the hash-keys of curr minimize r)) (row-max (loop for (r _) being the hash-keys of curr maximize r)) (col-min (loop for (_ c) being the hash-keys of curr minimize c)) (col-max (loop for (_ c) being the hash-keys of curr maximize c))) (flet ((block-3x3 (pos) (mapcar (partial-1 #'gethash _ curr) (neighbors9 pos)))) (loop for row from (- row-min 3) to (+ row-max 3) do (loop for col from (- col-min 3) to (+ col-max 3) for pos = (list row col) for i = (as-binary (block-3x3 pos)) do (setf (gethash pos next) (aref algo i))))) next) NEIGHBORS9 simply checks all the adjacent positions (including the current one), and does
this in the _right_ order -- fail to do so, and we would be accessing elements of the enhancement algorithm wrong! (defparameter *nhood* '((-1 -1) (-1 0) (-1 1) ( 0 -1) ( 0 0) ( 0 1) ( 1 -1) ( 1 0) ( 1 1))) (defun neighbors9 (pos) (loop for d in *nhood* collect (mapcar #'+ pos d))) Inside AS-BINARY, we scan the input list of surrounding pixels, and thanks to DPB and BYTE we construct the index to use while accessing the enhancement algorithm: (defun as-binary (list &aux (number 0)) (loop for ch in list for i from 8 downto 0 when ch do (setf number (dpb 1 (byte 1 i) number))) number) The last bit: the guy that does the enhancement, a given number of times (`2` in our case), and that finally returns the number of pixels which are on: (defun enhance (iterations input &aux (algo (enhancement-algorithm input)) (curr (image input))) (dotimes (_ iterations) (setf curr (enhance-step algo curr))) (loop for state being the hash-values of curr count state)) Feed this our input, and we should be getting our result back...right?!! As it turns out, the first entry of the enhancement algorithm is a `#`; that entry can only be accessed with index `0` which can only be obtained for pixels centered around a 3x3 with all the pixels off; now, since our image is _infinite_, and since all the surrounding pixels not specified by our input are expected to be off, that means that after the first enhancement step...**all the surrounding pixels will be on...WTF?!** Well, lucky for us, the last element of the enhancement algorithm is a `.`, which means that a 3x3 block with all the pixels turned on will cause the middle one to be turned off; so maybe, if after the first iteration all the _background_ pixels will be turned on, after the second one they would all be turned off again. OK, let's start by changing ENHANCE to check for the current enhancement step, and use that information to decide what the background state is going to be (i.e. 
it should be on, T, for all the odd steps):

    (defun enhance (iterations input &aux (algo (enhancement-algorithm input))
                                          (curr (image input)))
      (dotimes (i iterations)
        (setf curr (enhance-step algo curr (oddp i))))
      (loop for state being the hash-values of curr count state))

Next we update ENHANCE-STEP to use `background-lit-p` accordingly (i.e. use it as the default when accessing a pixel which is outside of the current image):

    (defun enhance-step (algo curr background-lit-p &aux
                         (next (make-hash-table :test 'equal))
                         (row-min (loop for (r _) being the hash-keys of curr minimize r))
                         (row-max (loop for (r _) being the hash-keys of curr maximize r))
                         (col-min (loop for (_ c) being the hash-keys of curr minimize c))
                         (col-max (loop for (_ c) being the hash-keys of curr maximize c)))
      (flet ((block-3x3 (pos)
               (mapcar (partial-1 #'gethash _ curr background-lit-p) (neighbors9 pos))))
        (loop for row from (- row-min 3) to (+ row-max 3)
              do (loop for col from (- col-min 3) to (+ col-max 3)
                       for pos = (list row col)
                       for i = (as-binary (block-3x3 pos))
                       do (setf (gethash pos next) (aref algo i)))))
      next)

Calling ENHANCE on the input image should now work as expected. What about part 2? It looks like `2` steps were not enough:

> Start again with the original input image and apply the image enhancement algorithm 50 times. **How many pixels are lit in the resulting image?**

Easy peasy -- the code we have for part 1 will work for part 2 as well! Final plumbing:

    (define-solution (2021 20) (input parse-input)
      (values (enhance 2 input) (enhance 50 input)))

    (define-test (2021 20) (5503 19156))

Test run:

    > (time (test-run))
    TEST-2021/20..
    Success: 1 test, 2 checks.
    Evaluation took:
      8.300 seconds of real time
      7.358258 seconds of total run time (6.756652 user, 0.601606 system)
      [ Run times consist of 0.756 seconds GC time, and 6.603 seconds non-GC time.
      ]
      88.65% CPU
      19,090,743,941 processor cycles
      2,622,069,728 bytes consed

Well, that conses a lot, but it gets the job done, so I am going to keep it as is for now! Maybe I could use a 2D BIT array, rather than a HASH-TABLE of Ts and NILs?

PS. As usual, my [REPL buffer]( -- it's not too bad...I have seen (and authored) worse!

# 2022-01-03

Advent of Code: [2021/21](

We have been challenged by the computer to play a game with it:

> This game consists of a single die, two pawns, and a game board with a circular track containing ten spaces marked `1` through `10` clockwise. Each player's **starting space** is chosen randomly (your puzzle input). Player 1 goes first.

Continues:

> Players take turns moving. On each player's turn, the player rolls the die **three times** and adds up the results. Then, the player moves their pawn that many times **forward** around the track (that is, moving clockwise on spaces in order of increasing value, wrapping back around to `1` after `10`). So, if a player is on space `7` and they roll `2`, `2`, and `1`, they would move forward 5 times, to spaces `8`, `9`, `10`, `1`, and finally stopping on `2`.

> After each player moves, they increase their **score** by the value of the space their pawn stopped on. Players' scores start at `0`. So, if the first player starts on space `7` and rolls a total of `5`, they would stop on space `2` and add `2` to their score (for a total score of `2`). The game immediately ends as a win for any player whose score reaches **at least `1000`**.

> Since the first game is a practice game, the submarine opens a compartment labeled **deterministic dice** and a 100-sided die falls out. This die always rolls `1` first, then `2`, then `3`, and so on up to `100`, after which it starts over at `1` again. Play using this die.

The task:

> Play a practice game using the deterministic 100-sided die.
> The moment either player wins, **what do you get if you multiply the score of the losing player by the number of times the die was rolled during the game?**

As usual, let's worry about the input first; this is what we are expecting:

    Player 1 starting position: 4
    Player 2 starting position: 8

And we are going to parse this into a CONS cell, with accessor functions for each player's starting position:

    (defun parse-positions (data)
      (flet ((player-position (string)
               (cl-ppcre:register-groups-bind ((#'parse-integer pos))
                   ("Player \\d+ starting position: (\\d+)" string)
                 pos)))
        (cons (player-position (first data))
              (player-position (second data)))))

    (defun player1 (positions) (car positions))
    (defun player2 (positions) (cdr positions))

To get the answer for part 1, we are just going to have to simulate the game:

- Each player's score starts at `0`
- We initialize `last-die` to `100`, and `roll-count` to `0`
- We roll the die thrice, and store its sum, taking care of applying the _right_ wrap around logic
- We increase `last-die` by 3 -- again, taking care of applying the _right_ wrap around logic
- We move player 1 forward
- We increase its score
- We increase the die roll count
- If player 1 won, we stop; otherwise, we recurse and swap player 1 data with player 2's

    (defun play (p1 p2 &optional (s1 0) (s2 0) (last-die 100) (roll-count 0)
                 &aux
                 (result (mod1 (+ (+ last-die 1) (+ last-die 2) (+ last-die 3)) 10))
                 (last-die (mod1 (+ last-die 3) 100))
                 (p1 (mod1 (+ p1 result) 10))
                 (s1 (+ s1 p1))
                 (roll-count (+ roll-count 3)))
      (if (>= s1 1000)
          (* s2 roll-count)
          (play p2 p1 s2 s1 last-die roll-count)))

MOD1's job is simply to apply the _right_ wrap around logic, i.e. bring the new value back into the `1..max` range (`max` stays `max`, while `max + 1` wraps back to `1`):

    (defun mod1 (n max)
      (loop while (> n max) do (decf n max))
      n)

Now that we're warmed up, it's time to play the real game -- part 2:

> A second compartment opens, this time labeled **Dirac dice**. Out of it falls a single three-sided die.
>
> As you experiment with the die, you feel a little strange. An informational brochure in the compartment explains that this is a **quantum die**: when you roll it, the universe **splits into multiple copies**, one copy for each possible outcome of the die. In this case, rolling the die always splits the universe into **three copies**: one where the outcome of the roll was `1`, one where it was `2`, and one where it was `3`.
>
> The game is played the same as before, although to prevent things from getting too far out of hand, the game now ends when either player's score reaches at least `21`.

The task:

> Using your given starting positions, determine every possible outcome. **Find the player that wins in more universes; in how many universes does that player win?**

Again...we are going to simulate this. Let's suppose we had a COUNT-WINS function, capable of telling us, given each player's position, how many times each player would win; with it, getting the answer for part 2 would be as simple as calling MAX with each player's win count:

    (defun part2 (positions)
      (destructuring-bind (wc1 . wc2)
          (count-wins (player1 positions) (player2 positions))
        (max wc1 wc2)))

Nice! Let's now see what our COUNT-WINS function is going to look like:

- Each player rolls the three-faced die, thrice, and collects their score
- If they won, we increase their win counter
- Otherwise, we let the other player play, and collect win counts accordingly (when recursing, player 1 and player 2 data is swapped, so the win counts will be swapped as well)

    (defun count-wins (p1 p2 &optional (s1 0) (s2 0))
      (let ((wc1 0) (wc2 0))
        (loop for d1 from 1 to 3
              do (loop for d2 from 1 to 3
                       do (loop for d3 from 1 to 3
                                for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                                for s1-next = (+ s1 p1-next)
                                do (if (>= s1-next 21)
                                       (incf wc1)
                                       (destructuring-bind (w2 . w1)
                                           (count-wins p2 p1-next s2 s1-next)
                                         (incf wc1 w1)
                                         (incf wc2 w2))))))
        (cons wc1 wc2)))

The problem with this naive solution is that it would take forever to run; the text for part 2 should have hinted at this:

> Using the same starting positions as in the example above, player 1 wins in **`444356092776315`** universes, while player 2 merely wins in `341960390180808` universes.

Let's think about this:

- each player's score is bounded -- worst case, they start a turn with a score of `20`, and even then the landing space adds at most `10` to it
- the same is valid for player positions, which always stay in the `1..10` range

So maybe there aren't that many _distinct_ states after all; maybe we can _memoize_ all these calls?! Well, it turns out we can, and it will work pretty fast too!

    (defun count-wins (p1 p2 &optional (s1 0) (s2 0) (dp (make-hash-table :test 'equal))
                       &aux (key (list p1 s1 p2 s2)))
      (uiop:if-let (wins (gethash key dp))
        wins
        (setf (gethash key dp)
              (let ((wc1 0) (wc2 0))
                (loop for d1 from 1 to 3
                      do (loop for d2 from 1 to 3
                               do (loop for d3 from 1 to 3
                                        for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                                        for s1-next = (+ s1 p1-next)
                                        do (if (>= s1-next 21)
                                               (incf wc1)
                                               (destructuring-bind (w2 . w1)
                                                   (count-wins p2 p1-next s2 s1-next dp)
                                                 (incf wc1 w1)
                                                 (incf wc2 w2))))))
                (cons wc1 wc2)))))

Final plumbing:

    (define-solution (2021 21) (positions parse-positions)
      (values (part1 positions) (part2 positions)))

    (define-test (2021 21) (998088 306621346123766))

Test run:

    > (time (test-run))
    TEST-2021/21..
    Success: 1 test, 2 checks.
    Evaluation took:
      0.066 seconds of real time
      0.064617 seconds of total run time (0.062712 user, 0.001905 system)
As I was trying to clean this up and make use of my custom DEFUN/MEMO macro, I realized it was not properly supporting functions with `&optional` and `&aux` arguments: ; in: DEFUN COUNT-WINS ; (LIST AOC/2021/21::P1 AOC/2021/21::P2 &OPTIONAL (AOC/2021/21::S1 0) ; (AOC/2021/21::S2 0)) ; ; caught WARNING: ; undefined variable: COMMON-LISP:&OPTIONAL ; (AOC/2021/21::S1 0) ; ; caught STYLE-WARNING: ; undefined function: AOC/2021/21::S1 ; (AOC/2021/21::S2 0) ; ; caught STYLE-WARNING: ; undefined function: AOC/2021/21::S2 ; ; compilation unit finished ; Undefined functions: ; S1 S2 ; Undefined variable: ; &OPTIONAL ; caught 1 WARNING condition ; caught 2 STYLE-WARNING conditions WARNING: redefining AOC/2021/21::COUNT-WINS in DEFUN WARNING: redefining AOC/2021/21::COUNT-WINS/CLEAR-MEMO in DEFUN COUNT-WINS COUNT-WINS/CLEAR-MEMO So I went on, and fixed it -- pay attention to EXPAND-DEFUN/MEMO-KEYABLE-ARGS, in which we: - Skip `&optional` - Skip `&key` - Skip `&rest` and the following parameter - Exit on `&aux` - _Unwrap_ optional / key parameter (defmacro defun/memo (name args &body body) (expand-defun/memo name args body)) (defun expand-defun/memo (name args body) (let* ((memo (gensym (mkstr name '-memo)))) `(let ((,memo (make-hash-table :test 'equalp))) (values ,(expand-defun/memo-memoized-fun memo name args body) ,(expand-defun/memo-clear-memo-fun memo name))))) (defun expand-defun/memo-memoized-fun (memo name args body) (with-gensyms (key result result-exists-p) `(defun ,name ,args (let ((,key (list ,@(expand-defun/memo-keyable-args args)))) (multiple-value-bind (,result ,result-exists-p) (gethash ,key ,memo) (if ,result-exists-p ,result (setf (gethash ,key ,memo) ,@body))))))) (defun expand-defun/memo-keyable-args (args) (uiop:while-collecting (collect) (loop while args for a = (pop args) do (cond ((eq a '&optional) nil) ((eq a '&key) nil) ((eq a '&rest) (pop args)) ((eq a '&aux) (return)) ((consp a) (collect (car a))) (t (collect a)))))) (defun 
    expand-defun/memo-clear-memo-fun (memo name)
      (let ((clear-memo-name (intern (mkstr name '/clear-memo))))
        `(defun ,clear-memo-name ()
           (clrhash ,memo))))

With this, COUNT-WINS can be updated as follows:

    (defun/memo count-wins (p1 p2 &optional (s1 0) (s2 0))
      (let ((wc1 0) (wc2 0))
        (loop for d1 from 1 to 3
              do (loop for d2 from 1 to 3
                       do (loop for d3 from 1 to 3
                                for p1-next = (mod1 (+ p1 d1 d2 d3) 10)
                                for s1-next = (+ s1 p1-next)
                                do (if (>= s1-next 21)
                                       (incf wc1)
                                       (destructuring-bind (w2 . w1)
                                           (count-wins p2 p1-next s2 s1-next)
                                         (incf wc1 w1)
                                         (incf wc2 w2))))))
        (cons wc1 wc2)))

(Yes, all we did was replace DEFUN with DEFUN/MEMO)

# 2022-01-04

Advent of Code: [2021/22](

We overloaded the submarine's reactor, and we need to reboot it:

> The reactor core is made up of a large 3-dimensional grid made up entirely of cubes, one cube per integer 3-dimensional coordinate (`x,y,z`). Each cube can be either **on** or **off**; at the start of the reboot process, they are all **off**.

Continues:

> To reboot the reactor, you just need to set all of the cubes to either **on** or **off** by following a list of **reboot steps** (your puzzle input). Each step specifies a cuboid (the set of all cubes that have coordinates which fall within ranges for `x`, `y`, and `z`) and whether to turn all of the cubes in that cuboid **on** or **off**.
For example, this is what our input is going to look like:

    on x=10..12,y=10..12,z=10..12
    on x=11..13,y=11..13,z=11..13
    off x=9..11,y=9..11,z=9..11
    on x=10..10,y=10..10,z=10..10

Let's begin by parsing this; for each line, we are going to be creating a tuple containing:

- :ON/:OFF as the first element
- Min and max x values, as the second and third elements
- Min and max y values, as the fourth and fifth elements
- Min and max z values, as the sixth and seventh elements

    (defun parse-instructions (data)
      (mapcar #'parse-instruction data))

    (defun parse-instruction (string)
      (cl-ppcre:register-groups-bind ((#'as-keyword state)
                                      (#'parse-integer x1 x2 y1 y2 z1 z2))
          ("(on|off) x=(-?\\d+)..(-?\\d+),y=(-?\\d+)..(-?\\d+),z=(-?\\d+)..(-?\\d+)" string)
        (list state
              (min x1 x2) (max x1 x2)
              (min y1 y2) (max y1 y2)
              (min z1 z2) (max z1 z2))))

Let's keep on reading:

> The initialization procedure only uses cubes that have `x`, `y`, and `z` positions of at least `-50` and at most `50`. For now, ignore cubes outside this region.

OK...

> Execute the reboot steps. Afterward, considering only cubes in the region `x=-50..50,y=-50..50,z=-50..50`, **how many cubes are on?**

Easy peasy:

- For each instruction
- _Clip_ the cuboid to the given -/+50 units region
- For each 1x1x1 cuboid in the resulting region, turn it on or off accordingly
- At the end, count the 1x1x1 cuboids which are on

    (defun part1 (instructions &aux (cuboids (make-hash-table :test 'equal)))
      (loop for (state x1 x2 y1 y2 z1 z2) in instructions
            do (setf x1 (max x1 -50) x2 (min x2 50)
                     y1 (max y1 -50) y2 (min y2 50)
                     z1 (max z1 -50) z2 (min z2 50))
               (loop for x from x1 to x2
                     do (loop for y from y1 to y2
                              do (loop for z from z1 to z2
                                       do (setf (gethash (list x y z) cuboids) state)))))
      (loop for state being the hash-values of cuboids count (eql state :on)))

Part 2 starts with:

> Now that the initialization procedure is complete, you can reboot the reactor.
>
> Starting again with all cubes **off**, execute all reboot steps.
> Afterward, considering all cubes, **how many cubes are on?**

Note: the text contains an example, the solution of which seems to be `2758514936282235`. So I am afraid we won't be able to bruteforce this -- not naively at least!

The solution that I ended up implementing -- which honestly took me way too long to get to, and which I only later found out goes, in the literature, under the name of [**coordinate compression**]( -- builds on the idea that we could _merge_ together adjacent cuboids if a) they are in the same state, i.e. :ON or :OFF, and b) the final shape is still a cuboid. So, if we figured out a way to _split_ all the input cuboids (possibly overlapping) into another set of **non-overlapping** cuboids, we could then solve the problem as follows:

- First we build a 3D array, where each element represents the state of a **non-overlapping** cuboid -- we will use a BIT array for this
- Then, for each input cuboid
- We figure out which **non-overlapping** cuboids it covers, and turn those on / off accordingly -- here we turn on / off whole NxMxO cuboids, not each of their 1x1x1 ones
- When done processing the input instructions, we scan the grid of **non-overlapping** cuboids, and sum up the _volume_ of the ones which are on

The above should translate into the following function -- we are going to be looking into COMPRESS-COORDINATES in a sec:

    (defun part2 (instructions &aux)
      (destructuring-bind (xx yy zz) (compress-coordinates instructions)
        (let ((grid (make-array (list (length xx) (length yy) (length zz))
                                :element-type 'bit :initial-element 0)))
          (flet ((index-of (item vector &aux (start 0) (end (length vector)))
                   (binary-search start end (partial-1 #'<=> (aref vector _) item))))
            (loop for (state x1 x2 y1 y2 z1 z2) in instructions
                  for i-min = (index-of x1 xx) for i-max = (index-of (1+ x2) xx)
                  for j-min = (index-of y1 yy) for j-max = (index-of (1+ y2) yy)
                  for k-min = (index-of z1 zz) for k-max = (index-of (1+ z2) zz)
                  do (loop for i from i-min below i-max
                           do (loop for j from j-min below j-max
                                    do (loop for k from k-min below k-max
                                             do (setf (aref grid i j k)
                                                      (ecase state (:on 1) (:off 0))))))))
          (loop with i-max = (1- (length xx))
                and j-max = (1- (length yy))
                and k-max = (1- (length zz))
                for i below i-max
                for x1 = (aref xx i) for x2 = (aref xx (1+ i))
                sum (loop for j below j-max
                          for y1 = (aref yy j) for y2 = (aref yy (1+ j))
                          sum (loop for k below k-max
                                    for z1 = (aref zz k) for z2 = (aref zz (1+ k))
                                    when (= (aref grid i j k) 1)
                                      sum (* (- x2 x1) (- y2 y1) (- z2 z1))))))))

So how can we split the complete 3D space into a set of **non-overlapping** cuboids? Well, we could use all the distinct `x1`, `x2`, `y1`, `y2`, `z1` and `z2` values mentioned in the input instructions; unfortunately though, this choice would still result in a list of overlapping cuboids, and that's because all the `*2` values mentioned in the input instructions turn out to be inclusive. So, what can we do instead? We could use all the `v1` and `(1+ v2)` values, and _that_ would for sure result in a set of **non-overlapping** cuboids.

    (defun compress-coordinates (instructions)
      (flet ((unique&sorted (list &aux (list (remove-duplicates list)))
               (make-array (length list) :initial-contents (sort list #'<))))
        (loop for (_ x1 x2 y1 y2 z1 z2) in instructions
              collect x1 into xx collect (1+ x2) into xx
              collect y1 into yy collect (1+ y2) into yy
              collect z1 into zz collect (1+ z2) into zz
              finally (return (list (unique&sorted xx)
                                    (unique&sorted yy)
                                    (unique&sorted zz))))))

And that's it! Final plumbing:

    (define-solution (2021 22) (instructions parse-instructions)
      (values (part1 instructions) (part2 instructions)))

    (define-test (2021 22) (570915 1268313839428137))

Test run:

    > (time (test-run))
    TEST-2021/22..
    Success: 1 test, 2 checks.
    Evaluation took:
      14.835 seconds of real time
      13.695186 seconds of total run time (13.327857 user, 0.367329 system)
      [ Run times consist of 0.162 seconds GC time, and 13.534 seconds non-GC time.
      ]
      92.32% CPU
      34,120,998,148 processor cycles
      231,646,416 bytes consed

Unfortunately this takes quite some time to generate the correct answer for part 2, and I am not sure what we could do to speed things up. When I wrapped the two LOOP forms of PART2 in TIME:

    (defun part2 (instructions &aux)
      (destructuring-bind (xx yy zz) (compress-coordinates instructions)
        (let ((grid (make-array (list (length xx) (length yy) (length zz))
                                :element-type 'bit :initial-element 0)))
          (flet ((index-of (item vector &aux (start 0) (end (length vector)))
                   (binary-search start end (partial-1 #'<=> (aref vector _) item))))
            (time
             (loop for (state x1 x2 y1 y2 z1 z2) in instructions
                   for i-min = (index-of x1 xx) for i-max = (index-of (1+ x2) xx)
                   for j-min = (index-of y1 yy) for j-max = (index-of (1+ y2) yy)
                   for k-min = (index-of z1 zz) for k-max = (index-of (1+ z2) zz)
                   do (loop for i from i-min below i-max
                            do (loop for j from j-min below j-max
                                     do (loop for k from k-min below k-max
                                              do (setf (aref grid i j k)
                                                       (ecase state (:on 1) (:off 0))))))))
            (time
             (loop for i below (1- (length xx))
                   for x1 = (aref xx i) for x2 = (aref xx (1+ i))
                   sum (loop for j below (1- (length yy))
                             for y1 = (aref yy j) for y2 = (aref yy (1+ j))
                             sum (loop for k below (1- (length zz))
                                       for z1 = (aref zz k) for z2 = (aref zz (1+ k))
                                       when (= (aref grid i j k) 1)
                                         sum (* (- x2 x1) (- y2 y1) (- z2 z1))))))))))

And ran the test again, I got the following:

    > (time (test-run))
    TEST-2021/22
    Evaluation took:
      4.088 seconds of real time
      3.189448 seconds of total run time (3.073847 user, 0.115601 system)
      [ Run times consist of 0.037 seconds GC time, and 3.153 seconds non-GC time. ]
      78.01% CPU
      9,402,713,194 processor cycles
      65,504 bytes consed
    Evaluation took:
      10.920 seconds of real time
      9.741171 seconds of total run time (9.536960 user, 0.204211 system)
      89.20% CPU
      25,115,619,739 processor cycles
      416 bytes consed
    ..
    Success: 1 test, 2 checks.
    Evaluation took:
      15.704 seconds of real time
      13.589492 seconds of total run time (13.193504 user, 0.395988 system)
      [ Run times consist of 0.084 seconds GC time, and 13.506 seconds non-GC time. ]
      86.53% CPU
      36,120,260,013 processor cycles
      231,686,928 bytes consed

This means that two thirds of the run time are spent scanning the grid of non-overlapping cuboids (FYI, it's a `828x836x828` 3D array, with `573148224` elements in it). I am going to leave this for now, but if anyone has any idea on how to optimize this, do reach out.

For comparison, my [original solution]( took around twice as long to get the answer for part 2. What did I do wrong, there?! Well, as I mopped things up for this entry, I realized that we could speed things up a little by a) using **binary search** instead of a linear scan to find the min / max indices of the **non-overlapping** cuboids, but most importantly, b) doing this search **once** per input instruction (i.e. once for each `x1`, `(1+ x2)`, `y1`, `(1+ y2)`, `z1`, `(1+ z2)` value) instead of doing it over and over again inside each one of the nested LOOP forms. Live and learn!

# 2022-01-05

Advent of Code: [2021/23](

> A group of amphipods notice your fancy submarine and flag you down. "With such an impressive shell," one amphipod says, "surely you can help us with a question that has stumped our best scientists."

Continues:

> They go on to explain that a group of timid, stubborn amphipods live in a nearby burrow. Four types of amphipods live there: **Amber** (`A`), **Bronze** (`B`), **Copper** (`C`), and **Desert** (`D`). They live in a burrow that consists of a **hallway** and four **side rooms**. The side rooms are initially full of amphipods, and the hallway is initially empty.
We are given a diagram of the situation (your puzzle input), for example:

    #############
    #...........#
    ###B#C#B#D###
      #A#D#C#A#
      #########

The amphipods would like a method to organize every amphipod into side rooms so that each side room contains one type of amphipod and the types are sorted `A`-`D` going left to right, like this:

    #############
    #...........#
    ###A#B#C#D###
      #A#B#C#D#
      #########

Amphipods can move up, down, left, or right, but there is a cost attached to each movement:

> Amphipods can move up, down, left, or right so long as they are moving into an unoccupied open space. Each type of amphipod requires a different amount of **energy** to move one step: Amber amphipods require `1` energy per step, Bronze amphipods require `10` energy, Copper amphipods require `100`, and Desert ones require `1000`. The amphipods would like you to find a way to organize the amphipods that requires the **least total energy**.

In addition, amphipods are not free to move wherever they want:

> However, because they are timid and stubborn, the amphipods have some extra rules:
>
> - Amphipods will never **stop on the space immediately outside any room**. They can move into that space so long as they immediately continue moving. (Specifically, this refers to the four open spaces in the hallway that are directly above an amphipod starting position.)
> - Amphipods will never **move from the hallway into a room** unless that room is their destination room **and** that room contains no amphipods which do not also have that room as their own destination. If an amphipod's starting room is not its destination room, it can stay in that room until it leaves the room. (For example, an Amber amphipod will not move from the hallway into the right three rooms, and will only move into the leftmost room if that room is empty or if it only contains other Amber amphipods.)
> - Once an amphipod stops moving in the hallway, **it will stay in that spot until it can move into a room**.
> (That is, once any amphipod starts moving, any other amphipods currently in the hallway are locked in place and will not move again until they can move fully into a room.)

OK, so given all these rules, and given the energy that each amphipod uses to make a move, we need to help the amphipods organize themselves while at the same time minimizing the energy required to do that:

> **What is the least energy required to organize the amphipods?**

Input first:

- we will be parsing the burrow into a CHARACTER 2D array
- we will also keep track, separately, of the `(row col)` locations of all the siderooms -- bottom ones first!

    (defun parse-input (lines &aux
                        (rows (length lines))
                        (cols (length (car lines)))
                        (burrow (make-array (list rows cols)
                                            :element-type 'character
                                            :initial-element #\Space))
                        rooms)
      (loop for row from 0 for string in lines
            do (loop for col from 0 for ch across string
                     do (setf (aref burrow row col) ch)
                        (when (and (< 1 row (1- rows)) (member col '(3 5 7 9)))
                          (let* ((type (case col (3 #\A) (5 #\B) (7 #\C) (9 #\D)))
                                 (existing (assoc type rooms)))
                            (if existing
                                (push (list row col) (second existing))
                                (push (list type (list (list row col))) rooms))))))
      (cons burrow rooms))

    (defun siderooms (rooms type) (second (assoc type rooms)))

Next, a little constant for all the _valid_ locations in the hallway (i.e. the ones **not** facing any rooms):

    (defparameter *hallway* '((1 1) (1 2) (1 4) (1 6) (1 8) (1 10) (1 11)))

We are going to treat this as a search problem:

- Starting from the input situation
- We try to move every amphipod that can actually be moved (as per the rules above) -- POSSIBLE-MOVES
- We keep on doing this until each amphipod is in the right room

Note: we don't have a heuristic to play with, hence we will go with Dijkstra's algorithm, and not A*.
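Since the state space is only given implicitly (via the neighbors function), the search boils down to a plain Dijkstra over states. A minimal generic version, sketched in Python for illustration (the function and graph names here are made up for the example, not taken from the post's Lisp code):

```python
import heapq

def dijkstra(start, goalp, neighbors):
    """Least-cost search over an implicit graph: neighbors(state) yields
    (next_state, step_cost) pairs, goalp(state) tests for the goal."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, state = heapq.heappop(queue)
        if goalp(state):
            return d
        if d > dist.get(state, float("inf")):
            continue  # stale queue entry, already relaxed via a cheaper path
        for nxt, cost in neighbors(state):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return None  # goal unreachable

# Tiny usage example on a toy graph
graph = {"a": [("b", 1), ("c", 5)], "b": [("c", 1)], "c": []}
assert dijkstra("a", lambda s: s == "c", lambda s: graph[s]) == 2
```

The same shape applies to the burrow search: the start state is the parsed burrow, `goalp` is DONEP, and `neighbors` is POSSIBLE-MOVES returning `(next-burrow . cost)` pairs.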
    (defun organize (input &aux (burrow (car input)) (rooms (cdr input)))
      (search-cost
       (dijkstra burrow :test 'equalp
                        :goalp (partial-1 #'donep rooms)
                        :neighbors (partial-1 #'possible-moves rooms))))

Let's now take a look at DONEP, the function used to break out of the search loop:

- For each sideroom -- a sideroom is characterized by a type (i.e. `#\A`, `#\B`, `#\C`, and `#\D`) and a location
- We check if it's occupied by an amphipod of matching type
- If not, it means we are not done yet

    (defun donep (rooms burrow)
      (loop for (type siderooms) in rooms
            always (loop for (row col) in siderooms
                         always (char= (aref burrow row col) type))))

We now have to figure out, given the current amphipod configuration, all the possible moves that do not violate any of the rules above; we will do this in two steps: first we will try to move amphipods from their current (and likely wrong) room into the hallway; next, from the hallway into the _correct_ room/sideroom.

For the first part, i.e. out of the current sideroom and into the hallway:

- For each sideroom location
- If it's empty or occupied by an amphipod already in the right place, we move on
- Otherwise, for each valid position in the hallway
- If not blocked, we move into that location

    ...
    (loop for (_ positions) in rooms
          do (loop for pos in positions
                   for type = (aref burrow (car pos) (cadr pos))
                   unless (or (char= type #\.)
                              (in-place-p (siderooms rooms type) burrow pos))
                     do (loop for target in *hallway*
                              unless (blockedp burrow pos target)
                                do (add-move type pos target))))
    ...

For the second step instead, i.e.
out of the hallway and into the _right_ room / sideroom:

- For each hallway location
- If occupied by an amphipod (of type `type`)
- For each of the siderooms for type `type` (from the bottom one, up to the top)
- If not blocked, we move into that location
- As we scan the siderooms of type `type`, bottom up, we stop as soon as we bump into a location occupied by an amphipod of the wrong type -- it does not make sense to move any amphipod into a sideroom above one occupied by the wrong amphipod, if later on we would be forced to make space to let that amphipod out of the room and into its right one

    ...
    (loop for pos in *hallway*
          for type = (aref burrow (car pos) (cadr pos))
          unless (char= type #\.)
            do (loop for target in (siderooms rooms type)
                     unless (blockedp burrow pos target)
                       do (add-move type pos target)
                     always (eql (aref burrow (car target) (cadr target)) type)))
    ...

We add a locally defined function, ADD-MOVE, responsible for:

- Creating a copy of the current situation, i.e. the burrow
- Swapping the content of `pos` with `target`, i.e. moving `pos` into `target`
- Calculating the cost to move there, i.e. the Manhattan distance times the amphipod's unitary energy cost

We wrap this all into a DEFUN, and there is our POSSIBLE-MOVES complete definition, in all its glory:

    (defun possible-moves (rooms burrow &aux moves)
      (labels ((add-move (type pos target &aux
                          (next (copy-array burrow))
                          (cost (* (manhattan-distance pos target)
                                   (ecase type (#\A 1) (#\B 10) (#\C 100) (#\D 1000)))))
                 (rotatef (aref next (car pos) (cadr pos))
                          (aref next (car target) (cadr target)))
                 (push (cons next cost) moves)))
        ;; From the wrong sideroom to the hallway
        (loop for (_ positions) in rooms
              do (loop for pos in positions
                       for type = (aref burrow (car pos) (cadr pos))
                       unless (or (char= type #\.)
                                  (in-place-p (siderooms rooms type) burrow pos))
                         do (loop for target in *hallway*
                                  unless (blockedp burrow pos target)
                                    do (add-move type pos target))))
        ;; From the hallway to the right sideroom
        (loop for pos in *hallway*
              for type = (aref burrow (car pos) (cadr pos))
              unless (char= type #\.)
                do (loop for target in (siderooms rooms type)
                         unless (blockedp burrow pos target)
                           do (add-move type pos target)
                         always (eql (aref burrow (car target) (cadr target)) type))))
      moves)

Let's now take a look at IN-PLACE-P, the function responsible for figuring out whether the amphipod at position `pos` needs to move out of the way or not (note that `rooms` here is already the list of siderooms for the amphipod's type, as passed by the caller):

- We get the amphipod type first
- Then, for each sideroom of the same type (again, bottom up)
- If the location of the room matches the position of the amphipod, then the amphipod is already in place
- Otherwise, we check if that sideroom is occupied by an amphipod of the right type
- If the types match, we move to the next sideroom (i.e. the one above)
- Otherwise, it means the amphipod needs to move into the hallway

    (defun in-place-p (rooms burrow pos &aux (type (aref burrow (car pos) (cadr pos))))
      (loop for r in rooms
            thereis (equal r pos)
            ;; if not in the room, at least let's make sure the one below is
            ;; occupied by an amphipod of the right type
            always (eql (aref burrow (car r) (cadr r)) type)))

The last missing piece of the puzzle: BLOCKEDP. We know our starting position, `pos`, and we know where we are headed, `target`; all we have to do is check all the locations _along the way_ and break out if any of them is occupied by another amphipod. But what are all the locations along the way?
If we are inside a room, we need to move vertically first to get to the hallway, and then horizontally; otherwise, horizontally first to get in front of the room, and then vertically into the sideroom:

- We destructure `pos` and `target` into their row / col components
- Then, using the spaceship operator <=>, we figure out in which direction we need to move horizontally to get from `pos` to `target`
- Then, we check if `pos` is in the hallway
- If it is, we move horizontally first, and then down into the room
- Otherwise, up into the hallway first, and then horizontally

    (defun blockedp (burrow pos target)
      (destructuring-bind (row1 col1) pos
        (destructuring-bind (row2 col2) target
          (let ((col-step (<=> col2 col1)))
            (flet ((check-all (row1 col1 row2 col2 row-step col-step)
                     (loop do (incf row1 row-step) (incf col1 col-step)
                           thereis (char/= (aref burrow row1 col1) #\.)
                           until (and (= row1 row2) (= col1 col2)))))
              (if (= row1 1)
                  (or (check-all row1 col1 row1 col2 0 col-step)
                      (check-all row1 col2 row2 col2 1 0))
                  (or (check-all row1 col1 row2 col1 -1 0)
                      (check-all row2 col1 row2 col2 0 col-step))))))))

And that's it:

    > (organize (parse-input (uiop:read-file-lines "src/2021/day23.txt")))
    18282

Let's take a look at part 2. It begins with:

> As you prepare to give the amphipods your solution, you notice that the diagram they handed you was actually folded up. As you unfold it, you discover an extra part of the diagram.

Uh oh... Between the first and second lines of text that contain amphipod starting positions, we are asked to insert the following lines:

>     #D#C#B#A#
>     #D#B#A#C#

> Using the initial configuration from the full diagram, **what is the least energy required to organize the amphipods?**

OK, let's _massage_ our input as instructed:

    (defun massage (lines)
      (append (subseq lines 0 3)
              (list "  #D#C#B#A#"
                    "  #D#B#A#C#")
              (subseq lines 3)))

And that's it -- our part 1 solution should be well equipped to deal with the bigger burrow!
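As an aside, the two-legged path walk described for BLOCKEDP above is easy to sanity-check in isolation. Here is a small sketch, in Python for illustration (the burrow literal and the `blocked` / `clear` names are made up for this example, not taken from the Lisp code):

```python
def blocked(burrow, pos, target):
    """True if any cell strictly after `pos` and up to `target` is occupied.
    From the hallway (row 1): move horizontally first, then down into the room.
    From a room: move up into the hallway first, then horizontally."""
    (r1, c1), (r2, c2) = pos, target
    step = (c2 > c1) - (c2 < c1)  # the spaceship operator, <=>

    def clear(r, c, rr, cc, dr, dc):
        # Walk from (r, c) to (rr, cc), checking every cell after the start
        while (r, c) != (rr, cc):
            r += dr
            c += dc
            if burrow[r][c] != '.':
                return False
        return True

    if r1 == 1:  # starting in the hallway
        return not (clear(r1, c1, r1, c2, 0, step) and clear(r1, c2, r2, c2, 1, 0))
    return not (clear(r1, c1, r2, c1, -1, 0) and clear(r2, c1, r2, c2, 0, step))

burrow = [
    "#############",
    "#...B.......#",
    "###.#C#.#D###",
    "  #A#D#C#A#  ",
    "  #########  ",
]
# A at the bottom of the first room cannot reach hallway cell (1, 6): B is in the way
assert blocked(burrow, (3, 3), (1, 6)) is True
# ...but the path to hallway cell (1, 2) is clear
assert blocked(burrow, (3, 3), (1, 2)) is False
```

This mirrors the CHECK-ALL helper: each leg advances one step at a time and fails as soon as it steps on a non-`.` cell.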
Final plumbing:

    (define-solution (2021 23) (lines)
      (values (organize (parse-input lines))
              (organize (parse-input (massage lines)))))

    (define-test (2021 23) (18282 50132))

Test run:

    > (time (test-run))
    TEST-2021/23..
    Success: 1 test, 2 checks.
    Evaluation took:
      3.197 seconds of real time
      3.089562 seconds of total run time (2.924923 user, 0.164639 system)
      [ Run times consist of 0.134 seconds GC time, and 2.956 seconds non-GC time. ]
      96.65% CPU
      7,353,654,160 processor cycles
      249,187,888 bytes consed

Nice! Unfortunately, I was not able to solve this problem on time; actually, I could not solve it until a couple of days after Christmas. In addition, the [solution]( I came up with back then was not nearly as efficient as this one: around 8 seconds for part 1, while for part 2...

    Evaluation took:
      2742.203 seconds of real time
      2570.780518 seconds of total run time (2510.998329 user, 59.782189 system)
      [ Run times consist of 17.916 seconds GC time, and 2552.865 seconds non-GC time. ]
      93.75% CPU
      6,306,877,691,085 processor cycles
      130,990,228,720 bytes consed

    50132

What did I get so wrong, back then, to end up with such a huge runtime? Well, a few things actually.

First, I did not realize I was _pre-calculating_ all the costs to move between any two locations, over and over again, every time A* invoked my heuristic function... WTF?! Well, I did realize my solution was taking forever to run, so I came up with the following heuristic, hoping it would speed things up a little:

- For each amphipod type
  - We find where these amphipods are
  - We find the minimal cost to move them in place, as if no other amphipod could get in our way (e.g.
    with two amphipods, `A1` and `A2`, and the two siderooms, `R1` and `R2`, we have two possible scenarios: `A1` moves into `R1` and `A2` into `R2`, or `A1` moves into `R2` and `A2` into `R1`; we try both, and pick whichever costs less)
- We sum all these minimum costs together, and that's it

_But we cannot calculate all these costs, all the time, right?_ So we are going to pre-calculate them, _once_, and then look the values up:

    (defun heuristic (costs amphipods &aux by-type)
      (loop for (type) being the hash-values of amphipods using (hash-key pos)
            for existing = (assoc type by-type)
            do (if existing
                   (push pos (second existing))
                   (push (list type (list pos)) by-type)))
      (loop for (type (pos1 pos2 pos3 pos4)) in by-type
            for m = (cost type)
            sum (loop for (room1 room2 room3 room4) in (all-permutations (rooms type))
                      minimize (* (+ (gethash (cons pos1 room1) costs)
                                     (gethash (cons pos2 room2) costs)
                                     #+#:part2 (gethash (cons pos3 room3) costs)
                                     #+#:part2 (gethash (cons pos4 room4) costs))
                                  m))))

Except... I made a silly mistake at the call site:

    (defun part1 (input &aux (opens (car input)) (amphipods (cdr input)))
      (multiple-value-bind (best cost path all-costs)
          (a* amphipods :test 'equalp
                        :goalp #'donep
                        :neighbors #'next
                        :heuristic (partial-1 #'heuristic (calculate-all-costs opens)))
        cost))

Spot the problem? It's the use of PARTIAL-1! Let's MACROEXPAND-1 it:

    (LAMBDA (#:MORE-ARG632)
      (FUNCALL #'HEURISTIC (CALCULATE-ALL-COSTS OPENS) #:MORE-ARG632))

I am not exactly sure what I was thinking, but I guess I assumed it would create a new local binding, assign it the result of `(calculate-all-costs opens)`, and then pass that along to HEURISTIC; instead, PARTIAL-1 keeps all its input forms as they are, meaning CALCULATE-ALL-COSTS gets invoked over and over again! Oh boy...
Something else I did, and that only later I realized was not beneficial at all (in terms of performance), was keeping track of the number of moves each amphipod had made (and of course backtracking when any of them had made two moves and still found itself _not in place_). The thing is, we only allow two types of moves, from a room to the hallway and vice versa, and each move has a cost attached to it; so why care if an amphipod keeps bouncing back and forth between the same two locations, especially if it does not make the burrow any more organized than it was before? The cost to get to that state would simply keep increasing, and after a bit A* would pick a different, cheaper branch. My original solution (**without** the heuristic!) would take around 316 seconds to organize the example burrow, and when I changed it to stop keeping track of the amphipod moves and simply let A* do its thing, the runtime went down to 264 seconds. Still a lot, but definitely better.

Last but not least: search state representation. My [A*](