This is my log... When I accomplish something, I write a * line that day. Whenever a bug / missing feature / idea is mentioned during the day and I don't fix it, I make a note of it and mark it with ? (some things get noted many times before they get fixed). Occasionally I go back through the old notes and mark with a + the things I have since fixed, and with a ~ the things I have since lost interest in.

--- Matteo Landi

# 2011-01-29

* finished reading [Rework](

# 2011-03-14

* finished reading [Learning the vi and Vim Editors: Text Processing at Maximum Speed and Power](

# 2011-04-02

* finished reading [Getting Real: The Smarter, Faster, Easier Way to Build a Successful Web Application](

# 2011-06-15

* finished reading [Growing Object-Oriented Software, Guided by Tests](

# 2012-08-10

* finished reading [Effective Java (2nd Edition)](

# 2012-08-17

* finished reading [Functional Programming for Java Developers: Tools for Better Concurrency, Abstraction, and Agility](

# 2012-08-23

* finished reading [Vim and Vi Tips: Essential Vim and Vi Editor Skills](

# 2012-09-12

* finished reading [Design Patterns: Elements of Reusable Object-Oriented Software](

# 2012-10-24

* finished reading [Simplify](

# 2012-11-03

* finished reading [Effective Programming: More Than Writing Code](

# 2013-02-01

* finished reading [Python Cookbook](

# 2013-10-01

* finished reading [Java Concurrency in Practice](

# 2015-08-01

* finished reading [The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses](

# 2015-09-03

* finished reading [Zero to One: Notes on Start Ups, or How to Build the Future Kindle Edition](

# 2015-09-07

* finished reading [The Pragmatic Programmer: From Journeyman to Master](

# 2015-11-05

* finished reading [Clean Code: A Handbook of Agile Software Craftsmanship](

# 2016-08-21

is a domain that resolves to localhost, and it works with subdomains too (would still resolve to localhost). Why is this helpful?
Because you could use it when developing applications that need subdomain configurations.

# 2016-09-12

* finished reading [The Startup Playbook: Secrets of the Fastest-Growing Startups from their Founding Entrepreneurs](

# 2016-09-13

Analyze angular performance:

    angular.element(document.querySelector('[ng-app]')).injector().invoke(['$rootScope', function($rootScope) {
        var a =;
        $rootScope.$apply();
        console.log(;
    }])

or:

    angular.element(document.querySelector('[ng-app]')).injector().invoke(['$rootScope', function($rootScope) {
        var i;
        var results = [];
        for (i = 0; i < 100; i++) {
            var a =;
            $rootScope.$apply();
            results.push(;
        }
        console.log(JSON.stringify(results));
    }])

# 2016-09-14

Move a pushed commit to a different branch:

    git checkout $destination-branch
    git cherry-pick $change-done-on-the-wrong-branch
    git push
    git checkout $origin-branch
    git revert $change-done-on-the-wrong-branch

Related reads:

-
-

# 2016-09-16

Run the last history command (like `!!`):

    fc -s

Run the last command without appending it to the history:

    HISTFILE=/dev/null fc -s

# 2016-12-26

* successfully built vim, from sources

    sudo apt-get install -y \
        liblua5.1-0-dev \
        libluajit-5.1-dev \
        lua5.1 \
        luajit \
        python-dev \
        ruby-dev
    sudo ln -s /usr/include/lua5.1 /usr/include/lua5.1/include
    cd ~/opt
    git clone
    cd vim
    ./configure \
        --with-features=huge \
        --enable-multibyte \
        --enable-largefile \
        --disable-netbeans \
        --enable-rubyinterp=yes \
        --enable-pythoninterp=yes \
        --enable-luainterp=yes \
        --with-lua-prefix=/usr/include/lua5.1 \
        --with-luajit \
        --enable-cscope \
        --enable-fail-if-missing \
        --prefix=/usr
    make

# 2016-12-28

* found a way to debug why a node app refuses to gracefully shut down

    // as early as possible
    const wtf = require('wtfnode');

    setInterval(function testTimer() {
        wtf.dump();
    }, 5000);

# 2016-12-31

* figured out a way to always update vbguest add-ons: `vagrant plugin install vagrant-vbguest` (

Also, you could instruct Vagrant to automatically install required dependencies with the following:
    required_plugins = %w(
      vagrant-hostmanager
      vagrant-someotherplugin
    )
    required_plugins.each do |plugin|
      system "vagrant plugin install #{plugin}" unless Vagrant.has_plugin? plugin
    end

Source:

# 2017-01-03

Found this nice hack to quickly get the average color of an image:

    img2 = img.resize((1, 1))
    color = img2.getpixel((0, 0))

# 2017-01-06

* downloaded all my instagram photos

- Load your profile page on the browser, and scroll to the bottom until you reach the end of your stream.
- Open the developer console, right click on the element, copy it, and dump it somewhere.
- Run the following command: `lynx -dump -listonly -image_links ../foo.html | grep cdninstagram | - 640x640 | awk '{print $2}' | xargs -n 1 -P8 wget`

# 2017-01-08

* finished reading [The Everything Store: Jeff Bezos and the Age of Amazon](

# 2017-01-18

* finished reading [Drive: The Surprising Truth About What Motivates Us](

# 2017-02-19

* finished reading [Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines](

# 2017-04-26

* finished reading [To Sell is Human: The Surprising Truth About Persuading, Convincing, and Influencing Others](

# 2017-07-25

Me of the future, do enjoy this!

    " Dear /bin/bash: fuck you and your bullshit, arcane command-line behaviour.
    "
    " Basically, I want to set this to a non-login, non-interactive bash shell.
    " Using a login/interactive bash as Vim's 'shell' breaks subtle things, like
    " ack.vim's command-line argument parsing. However, I *do* want bash to load
    " ~/.bash_profile so my aliases get loaded and such.
    "
    " You might think you could do this with the --init-file command line option,
    " which is used to specify an init file. Or with --rcfile. But no, those only
    " get loaded for interactive/login shells.
    "
    " So how do we tell bash to source a goddamned file when it loads? With an
    " *environment variable*. Jesus, you have multiple command line options for
    " specifying files to load and none of them work?
    "
    " Computers are bullshit.
    let $BASH_ENV = "$HOME/.bashrc"

# 2017-08-08

* web resource caching

Manually adding headers via express or anything:

    'Cache-Control': 'public, max-age=31536000' // allow files to be cached for a year

Nginx configuration:

    location /static {
        alias /site;

        location ~* \.(html)$ {
            add_header Last-Modified $date_gmt;
            add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
            if_modified_since off;
            expires off;
            etag off;
        }
    }

# 2017-08-27

* edit markdown files with live-reload

Install `markserv`:

    npm install markserv -g

Move into the folder containing any markdown file you would like to preview, and run:

    markserv

# 2017-09-26

* finished reading [Practical Vim: Edit Text at the Speed of Thought](

# 2017-10-13

* finished reading [Programming JavaScript Applications: Robust Web Architecture with Node, HTML5, and Modern JS Libraries](

# 2017-10-21

* Installed Common Lisp on Mac OS

    brew install roswell
    # ros install sbcl/1.3.13

# 2017-11-26

* finished reading [Don't Make Me Think, Revisited: A Common Sense Approach to Web Usability (Voices That Matter)](

# 2017-12-12

* fixed slow bash login shells on Mac OS

Unlink old (and slow) bash-completion scripts:

    brew unlink bash-completion

Install bash-completion for bash4:

    brew install bash-completion@2

Change '.bash_profile' to source the new bash_completion files:

    if [ -f $(brew --prefix)/share/bash-completion/bash_completion ]; then
        . $(brew --prefix)/share/bash-completion/bash_completion
    fi

Finally, make sure your default `bash` shell is the 4+ one:

    sudo bash -c 'echo /usr/local/bin/bash >> /etc/shells'
    chsh -s /usr/local/bin/bash

# 2017-12-15

* fixed key-repeat timings on Mac OS

We used to do this with Karabiner/Karabiner-Elements, but that option is long gone, so now we have to do it with `defaults`:

    defaults write NSGlobalDomain InitialKeyRepeat -int 11
    defaults write NSGlobalDomain KeyRepeat -float 1.3

In particular, the above sets:

- Delay until repeat: 183 ms
- Key repeat: 21 ms

# 2017-12-20

* learned how to clone repo subdirectories

This can be done by using *sparse checkouts* (available in Git since 1.7.0):

    mkdir
    cd
    git init
    git remote add -f origin

This creates an empty repository with your remote, and fetches all objects but doesn't check them out. Then do:

    git config core.sparseCheckout true

Now you need to define which files/folders you want to actually check out. This is done by listing them in `.git/info/sparse-checkout`, e.g.:

    echo "some/dir/" >> .git/info/sparse-checkout
    echo "another/sub/tree" >> .git/info/sparse-checkout

Last but not least, update your empty repo with the state from the remote:

    git pull origin master

To undo this instead:

    echo "/*" > .git/info/sparse-checkout
    git read-tree -mu HEAD
    git config core.sparseCheckout false

# 2017-12-21

* fix symbolic links on Windows/cygwin

Define the following environment variable:

    CYGWIN=winsymlinks:nativestrict

# 2017-12-27

* fucking fixed MacOS not going to sleep!

As of OS X 10.12, the nfsd process now holds a persistent assertion preventing sleep. This is mostly a good thing, as it lets NFS work on networks where wake-on-lan packets don't work. However, this means that when plugged in, OS X will no longer automatically sleep. If you unload the daemon, sleep works fine, and I noticed that vagrant starts nfsd if required on vagrant up.
This is now worse with 10.12.4, as a call to nfsd stop is no longer supported when system integrity protection is turned on. Thus the only way to let the machine sleep is to prune /etc/exports and restart nfsd. It would be nice to at least add a cli command to have vagrant prune /etc/exports (so we can script in when we want nfs pruned) or, even better, have it automatically prune exports on suspend and halt and re-add on wake and up.

For this I created the following, and added it to my .bashrc:

    function fucking-kill-nfsd() {
        # sudo sh -c "> /etc/exports"
        sudo nfsd restart
    }

[Source](

# 2018-01-08

* finished reading [Never Split the Difference: Negotiating As If Your Life Depended On It](

# 2018-06-03

* learned how to take a peek at a repo's remote

Fetch incoming changes:

    git fetch

Diff `HEAD` with `FETCH_HEAD`:

    git diff HEAD..FETCH_HEAD

If the project has a changelog, you could just diff that:

    git diff HEAD..FETCH_HEAD -- some/path/CHANGELOG

Bring changes in:

    git pull --ff

# 2018-06-05

* finished reading [The Senior Software Engineer](
* make `appcelerator` work again

Unset `_JAVA_OPTS`, as the scripts are too fragile and the `javac` command is not properly parsed:

    ~/Library/Application Support/Titanium/mobilesdk/osx/7.2.0.GA/node_modules/node-appc/lib/jdk.js

Added log messages because of uncaught exceptions.

# 2018-07-05

* finished reading [Modern Vim: Craft Your Development Environment with Vim 8 and Neovim](

# 2018-07-12

* fucking fixed weird Cygwin/SSH username issues

Symptoms: cygwin `USER` and `USERNAME` env variables are different[0]:

    $ set | egrep "USER=|USERNAME="
    USER=IONTRADING+User(1234)
    USERNAME=mlandi

Where does the value of USER come from? Take a look in '/etc/profile'; you will find the following:

    # Set the user id
    USER="$(/usr/bin/id -un)"

`id` gets its data from /etc/passwd (which, if not there, you might have to create with the following command[1]):

    mkpasswd -c > /etc/passwd

You will have to kill all the cygwin processes to see this working.
# 2018-08-01

* init'd new work windows box -- and added stuff to my-env's README

# 2018-08-19

* finished reading [Cypherpunks: Freedom and the Future of the Internet](

# 2018-08-20

* re-learned how to release on pypi

Make sure your `~/.pypirc` file is current:

    [distutils]
    index-servers:
        testpypi
        pypi

    [testpypi]
    repository:
    username: landimatte

    [pypi]
    repository:
    username: landimatte

Clean up the `dist/` directory:

    rm -rf dist/*

Build the source distribution package:

    python sdist

Release to the *test* instance of Pypi, and check everything is in place:

    twine upload dist/* -r testpypi

Release to the *public* instance of Pypi:

    twine upload dist/* -r pypi

# 2018-08-25

* force merge to keep feature branches in the history

This [post]( does a good job at explaining how to keep feature branches in the git history (i.e. forcing Git to always create a merge commit). In a nutshell, to disable _fast-forward_ merges completely:

    git config --global --add merge.ff false

While to disable them just for `master`:

    git config --global branch.master.mergeoptions "--no-ff"

# 2018-08-26

* finished reading [No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State](
* setting up Mutt on Windows

Cannot find mutt-patched, so let's go with the default:

    apt-cyg mutt

It turns out default mutt now ships with the sidebar patch -- yay!!!

Manually installed `offlineimap` using Cygwin:

    apt-cyg install offlineimap

+ build offlineimap from sources

SSL certificates on Cygwin are put inside a different directory, so I had to customize files a little to point to the right path

+ create a script to inspect the disk and symlink certificates in a known location (e.g. ~/.certs)

Used python's [keyring]( library to handle secrets (it works on MacOS too!)

urlview is not available on Cygwin; I tried with [urlscan]( for a couple of days but ended up manually building urlview from sources:

    git clone ...
    apt-cyg install \
        autoconf \
        automake \
        gettext \
        libcurses-devel
    ./configure
    autoreconf -vfi
    make

Created `~/bin/urlview` script delegating to `~/opt/urlview/urlview` (location of my local urlview repo); created `~/.urlview` containing:

    COMMAND br

Luckily for me, msmtp is available on Cygwin, so a simple `apt-cyg install ...` did the trick

# 2018-08-29

A tale of Sinon, Chai, and stubs

Imports:

    const chai = require('chai');
    const expect = chai.expect;
    const sinon = require('sinon');
    const sinonChai = require('sinon-chai');

    chai.should();
    chai.use(sinonChai);

Then inside the tests:

    expect(;
    expect(;

# 2018-09-07

* reinstalled docker-ce on Ubuntu

# 2018-09-08

* first aadbook release!
* fork vim-goobook and add support for specifying a custom program (e.g. aadbook)

# 2018-09-09

* finished reading [The Art of Invisibility: The World's Most Famous Hacker Teaches You How to Be Safe in the Age of Big Brother and Big Data](

# 2018-09-14

* finished reading [When Google Met WikiLeaks](

# 2018-09-16

* aadbook: always regenerate tokens when invoking 'authenticate'
* aadbook: don't refresh auth token, unless expired

# 2018-09-21

* reimplement vim-goobook filtering logic in pure vimscript

# 2018-09-24

* finished reading [Ghost in the Wires: My Adventures as the World's Most Wanted Hacker](

# 2018-09-26

* fix certbot on

# 2018-09-28

* make vim-goobook platform independent

# 2018-09-29

* vim-goobook completion works even on lines starting with spaces/tabs

# 2018-10-01

* aadbook: fix soft-authentication issue

# 2018-10-02

* add aadbook page to

# 2018-10-03

* make mobile friendly

# 2018-10-05

* set up a .plan file, some vim scripts, and a bash script to manage it
+ add a `logfilter` page on
? fix mutt 'indicator' colors taking over 'index' ones! I want index rules to be more important than indicator ones:
? pin specific gmail/microsoft certificates
+ PGP mutt
?
live notification fetcher plugin on

# 2018-10-06

* cleaned up mail related scripts (personal/work)
* added .plan page to
* images on are no longer cropped on mobile

# 2018-10-07

* yet another restyle -- ripped off css rules from:
* automatic generation of sitemap.xml

# 2018-10-08

* fix s format on sitemap

# 2018-10-09

* added personal GPG key to
* added fuzzy-finding support to aadbook by replacing each whitespace ` ` with `.*`
* added fuzzy-finding support to goobook

# 2018-10-10

* finished reading [Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture](

# 2018-10-11

* disabled caching of html|css files on

# 2018-10-14

* finished setting up GPG w/ Mutt!!

It's been a long journey, but I finally did it. At first I tried setting up all the pgp_* mutt commands to verify/sign/encrypt/decrypt, but while encrypting seemed to work, when I tried to decrypt my own messages (i.e. messages I had sent to my own address) Mutt errored out complaining that GPG was not able to decrypt the message -- note that manually running the `gpg` command on a copy of the message worked without any issues.

Anyway, I finally gave up and went on and added `set crypt_use_gpgme` to my .muttrc (had to re-install mutt, with the `--with-gpgme` flag).

Few related (and interesting) articles:

- Gnu PGP Intro:
- Getting Started with PGP and GPG:
- Simpler GnuPG & Mutt configuration with GPGME:
- GPG / Mutt / Gmail:

# 2018-10-20

* improved vim-syntax for .plan files
* fixed `aadbook authenticate` bug where the command could fail when initializing the Auth object, leaving the user with nothing to do but wipe everything out
?
don't advance mutt cursor when deleting/archiving threads

# 2018-10-22

* created MR for goobook, fixing pip installation -- is trying to read a file not included in the manifest
* fixed goobook crash happening when the user's contacts list contained some contacts with neither a name nor an email address
* added support for local `b:goobookprg` to vim-goobook
* added editorconfig integration to dotfiles/vim

# 2018-10-23

* expensio: created skeleton Node.js app -- logger, module loader, express, openrecord
* expensio: created skeleton Angular app -- boilerplate, Login/Register pages
? expensio: send auth tokens via email

# 2018-10-25

* expensio: added back/front end support for editing paymentmethods
* expensio: created passport/bearer based auth layer

# 2018-10-26

* expensio: almost complete auth workflow -- missing: emails
* expensio: authenticate paymentmethods-related workflows
? expensio: dynamically make 'code' required
? expensio: intercept UNIQUE CONSTRAINT and return 409
? expensio: stop reloading pm/expenses -- especially when going back after pm/expense edit

# 2018-10-27

* expensio: initial support (server/client) for groups: list, create
* expensio: initial support (server/client) for group-members: list, create
* expensio: initial support (server/client) for groupcategories: list, create
? expensio: confirm group membership
? expensio: hierarchical categories

# 2018-10-28

* expensio: add logging of HTTP requests using morgan
?
expensio: fix logging of warnings/errors with winston

# 2018-10-29

* expensio: initial support (server/client) for expenses: list, create
* expensio: edit of group categories
* expensio: fixed exception logging when `json: true`

# 2018-10-31

* vim-lsp: use loclist instead of quickfix for document diagnostics

# 2018-11-01

* aadbook: 0.1.2 -- fix potential security vulnerability related to `requests==2.19.1`

# 2018-11-03

Finally found some time to play with Microsoft's [Language Server Protocol](

* js-langserver: submitted PR for loading ESLint from the project directory -- instead of the language server one
* js-langserver: updated the LS manifest based on implemented features
* js-langserver: fixed 'rename' -- the protocol must have changed since the first implementation, and 'rename' wasn't working anymore

# 2018-11-04

+ tern.js does not recognize `async function foo() ..` as definition

# 2018-11-06

* submitted PR to rainbow_parentheses.vim for configuring the maximum number of levels to color

# 2018-11-10

* finished reading [Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots](

Played around with Tern.js internals to understand: 1) why `findDef` of an `async function` is not returning anything, 2) how to enhance js-langserver to support pretty much all the supported properties one can configure inside .tern-config/.tern-project

Regarding point 1 above, I tried to factor out a bootstrap.js file inside tern.js/lib, containing all the code necessary to start up a Tern server (code gently borrowed from tern.js/bin/tern) -- let's see if tern.js maintainers are interested in merging my PR

# 2018-11-11

* managed to get tern.js to reference `async function` definitions -- I did not realize I had `"ecmaVersion": 6` in my .tern-project file
* created a PR to tern.js to factor out server-bootstrapping logic into a different file people can use to easily start a Tern server without the HTTP wrapper

Read a few articles about [Initiative Q]( and, and I
feel like it's just a big scam: they are giving away free money in exchange for... your data, and information about your inner circles (you only have 5 invites, so you need to be careful whom you send them to; and when you receive one, since you know the sender very well and you know they only had 5 invites, you might as well simply accept it and join the community, because you really trust the sender of the invite and they wouldn't have sent it to you if the thing was a scam). Anyway, let's see how it pans out.

# 2018-11-13

* finished reading [Ripensare la smart city](

# 2018-11-25

* fixed editorconfig configuration for .vimrc --

# 2018-12-03

* completed advent of code day1-1

# 2018-12-05

* completed advent of code day1-2
* cleared Spotify cache

Couple of interesting articles on how to clear/configure Spotify's cache:

-
-

# 2018-12-06

* completed advent of code day2-1
* completed advent of code day2-2

# 2018-12-08

* completed advent of code day3-1
* completed advent of code day3-2

# 2018-12-09

* completed advent of code day4-1
* completed advent of code day4-2
* completed advent of code day5-1 -- it takes around 2 minutes to run... we can do better
* completed advent of code day5-2 -- left running at night
+ more efficient solution for day5-1 and day5-2

# 2018-12-10

* completed advent of code day6-1
* completed advent of code day6-2

# 2018-12-11

* completed advent of code day7-1 -- it did not seem that complex... yet it took me forever to implement the dependency-tree traversal

# 2018-12-12

* completed advent of code day8-1
* completed advent of code day8-2

# 2018-12-13

* completed advent of code day9-1
* completed advent of code day9-2 -- slow as balls, had to leave it going during the night
+ more efficient solution for day9-2

# 2018-12-14

* completed advent of code day10-1
* completed advent of code day10-2
* completed advent of code day11-1

# 2018-12-15

* completed advent of code day12-1
* completed advent of code day12-2
?
understand why day12-2's first set of loops needs to be skipped (> cur-gen 10000)

# 2018-12-16

* finished reading [Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy](
* completed advent of code day13-1
* completed advent of code day13-2
* completed advent of code day14-1
* completed advent of code day14-2 -- slow-ish
? make solution for day14-2 more efficient

# 2018-12-20

* completed advent of code day16-1
? figure out how to parse day16-2 input as a single file, instead of 2

# 2018-12-21

* completed advent of code day16-2
+ learn about lispwords -- vim

# 2018-12-24

* completed advent of code day17-1 -- it took me forever to get this right... I am dumber than I thought I was
* completed advent of code day17-2 -- once you get part 1 right...
* completed advent of code day18-1
* completed advent of code day18-2
* completed advent of code day19-1

# 2018-12-25

* completed advent of code day19-2

I started analyzing all the comparison instructions and the status of the registers before and after these were executed. Then, tweaking the initial state of the registers to fast forward the program execution, I managed to understand that the program was trying to calculate the sum of the divisors of a given number (where _given_ means some funky formula I have no idea of... I simply looked at the registers to see what this number was).

# 2018-12-26

* completed advent of code day21-1

I looked for all the comparison instructions where R0 (the only one we could mutate) appeared as one of the arguments; put a breakpoint on the instruction, inspected what the expected value was, used that as R0, and that's it.
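The day19-2 note above boils down to computing the sum of the divisors of some input-dependent number. A minimal Python sketch of that computation, for reference (the function name is mine, and the actual target number depends on your puzzle input):

```python
def sum_of_divisors(n):
    """Sum all divisors of n by walking candidates up to sqrt(n),
    adding each divisor d together with its complement n // d."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # don't count a perfect-square root twice
                total += n // d
        d += 1
    return total
```

For example, `sum_of_divisors(12)` returns 28 (1 + 2 + 3 + 4 + 6 + 12); on the real puzzle number this runs in microseconds, versus the hours the elfcode interpreter would take.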
# 2018-12-30

* completed advent of code day23-1 -- it took me forever to realize I was looping over the sequence I had passed to `sort` -- which is destructive -- and that some elements were gone (I should have either `copy-seq`d it or looped over the sorted list)
* completed advent of code day22-1
+ memoize decorator in common lisp

# 2018-12-31

? parse strings using regexps in common lisp

# 2019-01-03

* completed advent of code day22-2 -- the solution (a-star) runs in 48 seconds... common lisp doesn't ship with an implementation of heapq, so adding+sorting on every step is probably killing the performance
+ make day22-2 run faster -- find a good implementation of heapq

Simple Dijkstra/path-finding Python implementations:

# 2019-01-04

* completed advent of code day20-1 -- Dijkstra
* completed advent of code day20-2

# 2019-01-05

* completed advent of code day24-1 -- it took me quite some time to get all the rules implemented, but it worked eventually
* completed advent of code day24-2 -- starting from a boost of 2 I kept doubling it until I found **a** winning boost; then ran a binary search between the winning one and its half

# 2019-01-06

* completed advent of code day25-1 -- disjointset
* completed advent of code day7-2
? write a popn macro popping n elements from place

# 2019-01-07

* completed advent of code day11-2 -- before, I was generating all the squares, calculating the energy of each square, and then finding the max; now I am doing all this using for loops, without allocating much stuff. It's slow as balls, but after only 19 minutes it spat out the right solution :P
+ make solution for day11-2 more efficient

# 2019-01-08

* completed advent of code day21-2

After spending almost an hour trying to understand what the problem was doing, I gave up and peeked into Reddit to see how other people had solved it and... it turned out I got the problem all wrong!
The program loops forever, and the only way to break out of it is if (in my case) R0 is loaded with some random values, say 10 234 55 1 12 and 10 again. Now, if you load R0 with 10, it will break out immediately; if you load it with 234 it will check against 10 and then 234 (at which point it will exit); the problem asks for the number for which the program will run the maximum number of instructions, and the answer is the last value of the loop: 12. You load R0 with 12, and the program will run and check against 10, 234, 55, 1, and finally 12!

The program ran in 511 seconds, but I don't want to try to understand what the program does and simplify it (or transpile it into C or something). I can try and make opcodes run faster, but that's it.

? Make Elf opcodes run faster -- use arrays instead of consing

# 2019-01-10

* completed advent of code day23-2

I spent a few hours trying to come up with a proper solution for this, but I gave up eventually; I got so clouded by the size of the input (not just the number of nanobots, but also their coordinates and their radius sizes) that, rather than immediately searching for the correct solution, I should have just tried things in a trial and error fashion.

The solution I happened to copy from Reddit goes as follows (I am still not 100% sure it is generic, but it worked for my input so...): we sample the three-dimensional space looking for a local maximum; once we have one, we zoom in around that maximum, and keep on sampling, until the area is a point. Of course we have to take into account not only the number of nanobots covering the sampled point, but also its distance to the origin.

# 2019-01-13

* completed advent of code day15-1
* completed advent of code day15-2 -- it's official, I don't know how to code a-star. Also, I am unlucky: my solution was working with pretty much all the inputs I found online, but still it wasn't accepting mine.
Anyway, there was a bug in my a-star algorithm, and once I fixed it, it finally accepted my answer (funny how it was working OK for all the other inputs).

* completed advent of code 2018 -- now time to look at the solutions again, and see what can be improved / factored out
* fixed my laptop -- `brewski` fucked up Vim (it would not start up), and it turned out there were some issues with [python2 vs python3](

# 2019-01-17

* finished reading: [Common LISP: A Gentle Introduction to Symbolic Computation](

# 2019-01-19

* configured my inputrc to popup an editor (rlwrap-editor) with C-xC-e, when playing with Node, Python, or SBCL -- which are rlwrap-wrapped
* uninstalled `roswell` and installed plain `sbcl` instead; had to install quicklisp though, to make it work with vlime, but that was [easy](

1. Download quicklisp: `curl -O`
2. Execute the following forms from a sbcl REPL:

       (load "quicklisp.lisp")
       (quicklisp-quickstart:install)

3. Finally, instruct SBCL to always load quicklisp -- add the following to your '.sbclrc': `(load "~/quicklisp/setup.lisp")`

# 2019-01-20

* got more familiar with CL `loop`'s syntax

The `loop` macro supports iterating over a list of elements and exposing not just the current element, but also the remaining ones; it's implemented using the `:on` phrases (this will make `var` step over the cons cells that make up the list):

    (let ((data '(1 2 3 4 5 6 6)))
      (loop :for (a . remaining) :on data
            :for b = (find a remaining)
            :do (when b (return a))))

Note that you cannot simply `:for b :in remaining`, because `remaining` will be `NIL` when accessed, forcing an early exit from `loop` (this does not happen when we use [Equals-Then Iteration](

The `loop` macro supports `count` (and `counting`), so expressions like:

    :summing (if (hash-table-find 3 freqs) 1 0) :into threes

can be simplified into:

    :counting (hash-table-find 3 freqs) :into threes

# 2019-01-24

Trying to make sense of lisp indentation with Vim and Vlime, and it turns out you have to `set nolisp` (or simply not `set lisp` if you were explicitly setting it):

+ I still don't get why I should use `lispwords` though (I thought Vlime used a different set of rules), but maybe I just don't know what I am doing...

# 2019-01-26

* replaced TSlime with a custom-made mini-plugin that ships with a couple of `` mappings: one to list all the buffers and pick a terminal you want to send data to, and another to send the current selection to the selected terminal

# 2019-01-27

* converted my advent-of-code solutions repository to [quickproject](

It's important to symlink your project, or its .asd file won't properly load up:

    mkdir -p ~/quicklisp/local-projects/
    ln -s "$(pwd)" ~/quicklisp/local-projects/$(basename "$(pwd)")

Once you do this, and you open a REPL inside a _quickproject_, a simple `(ql:quickload ...)` should work just fine.

# 2019-01-28

Need to concatenate multiple images into a single PDF? [Easy](

    convert "*.{png,jpeg}" -quality 100 outfile.pdf

# 2019-02-03

+ `recursively` macro -- and use it for day08
? is it possible to do `:collecting` with a specific type
? parsing input is still annoying...
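For reference, the 2019-01-20 `LOOP`/`:on` snippet above (return the first element that shows up again later in the list) can be mirrored in Python. This is an illustrative translation only, not code from the repo:

```python
def first_repeated(items):
    """Return the first element that also occurs somewhere in the rest
    of the list -- the same scan LOOP's :on phrase performs over the
    successive tails of the list."""
    for i, a in enumerate(items):
        if a in items[i + 1:]:  # items[i + 1:] plays the role of 'remaining'
            return a
    return None
```

With the same data as the Lisp version, `first_repeated([1, 2, 3, 4, 5, 6, 6])` returns 6.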
# 2019-02-05

* implement `RECURSIVELY` macro

# 2019-02-06

* refactored day9 using doubly linked list and ring structures

`ARRAY-IN-BOUNDS-P` can be used to figure out if an index (list of indexes) is a valid array (multi-dimensional array) position.

You can loop over _lazily_ generated numbers by using `LOOP`:

    (loop :for var = initial-value-form [ :then step-form ] ...)

For example, to indefinitely loop over a _range_ of values:

    (loop :for player = 0 :then (mod (1+ player) players)
          ...)

# 2019-02-10

Lightroom shortcuts:

- R: grid overlay (press O to change the grid type; shift-O to rotate the grid)
- L: lights out (dim background, isolating the photo you are working on); press it again, and all the controls will go black
- F: fullscreen
- Y: before & after
- \: compare with original
- J: show where the image is clipping (red: overexposed, blue: underexposed)

Lightroom quick tips:

- Exposure > Tone Auto
- Reset button (bottom right corner), or double click on the effect
- Grid > Angle: you can now draw a line, and the photo will be rotated based on that
- Radial brush adjustment

# 2019-02-11

[04 – Focal Length](

- Ultra-wide angle – 14-24mm
- Wide angle – 24-35mm
- Normal – 40-75mm
- Mild tele – 85-105mm
- Medium tele – 120-300mm
- Long and exotic tele – 300-800mm

> This was a very interesting exercise for me. I used a kit Canon 18-55mm lens with the shots at 18/35/55. I then approached the hydrant and shot it at 18mm. The most noticeable impact that I saw was the loss of depth you have when you are zooming in versus physically approaching the subject. If you look at the dirt pile just over the top left of the hydrant in the third image versus the fourth, the depth of field and the scope are significantly different versus trying to shoot the same shot just physically closer without any significant zoom. Interesting exercise.
# 2019-02-16

* fixed DNS problems after upgrading Ubuntu from 16.04 to 18.04

> After not understanding why this wouldn't work I figured out that what was also needed was to switch /etc/resolv.conf to the one provided by systemd. This isn't the case in an out-of-a-box install (for reasons unknown to me).

    sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

Source:

Also, recently W10 started pushing me to use a new Snip tool, so I had to figure out how to change my AutoHotKey files to use that:

> I figured it out! Create a shortcut as described here: and then incorporate that shortcut into the following code:

    +#s::
    IfWinExist Snip & Sketch
    {
        WinActivate
        WinWait, Snip & Sketch,,2
        Send ^n
    }
    else
    {
        Run "F:\Snip & Sketch.lnk"
        sleep, 500
        send, ^n
    }
    return

Source:

# 2019-02-19

* completed advent of code 2017 day1-1
* completed advent of code 2017 day1-2

# 2019-02-20

* completed advent of code 2017 day2-1
* completed advent of code 2017 day2-2

TIL, `MULTIPLE-VALUE-LIST`:

    (loop :for values :in data
          :for (divisor number) = (multiple-value-list (evenly-divisible-values values))
          :summing (/ number divisor))

# 2019-02-21

* completed advent of code 2017 day3-1
* completed advent of code 2017 day3-2

+ is there anything similar to `WITH-SLOTS`, but for structures?

# 2019-02-23

* completed advent of code 2017 day4-1
* completed advent of code 2017 day4-2

In CL, you can use `:thereis` to break out of a `LOOP` when a given condition is `T`:

    (loop :for (a . rest) :on pass-phrase
          :thereis (member a rest :test 'equal))

There are `:never` and `:always` too!
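And a minimal sketch of how these three clauses behave, for future reference:

    (loop :for x :in '(1 2 3) :thereis (evenp x))  ; => T
    (loop :for x :in '(2 4 6) :always (evenp x))   ; => T
    (loop :for x :in '(1 3 5) :never (evenp x))    ; => T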
# 2019-02-24

* completed advent of code 2017 day5-1
* completed advent of code 2017 day5-2
* completed advent of code 2017 day6-1
* completed advent of code 2017 day6-2

# 2019-02-25

* completed advent of code 2017 day7-1
* completed advent of code 2017 day7-2

My solution for 2017 day 07 is a little messy -- gotta find someone else's solution and compare it with mine to see what can be improved

# 2019-02-26

* completed advent of code 2017 day8-1
* completed advent of code 2017 day8-2
* completed advent of code 2017 day9-1
* completed advent of code 2017 day9-2

It would be nice if there was a macro similar to `WITH-SLOTS`, but for structures, so I wouldn't have to do the following:

    (loop :with registers = (make-hash-table :test 'equal)
          :for instruction :in data
          ; XXX isn't there a better way to handle this?
          :for reg-inc = (reg-inc instruction)
          :for inc-delta = (inc-delta instruction)
          :for op = (comp-op instruction)
          :for reg-comp-1 = (reg-comp-1 instruction)
          :for value-comp = (value-comp instruction)

# 2019-02-27

* completed advent of code 2017 day10-1
* completed advent of code 2017 day10-2

A couple of interesting `FORMAT` recipes:

- `~x` to print a hexadecimal (`~2,'0x` to pad it with `0`s)
- `~(...~)` to lowercase a string

# 2019-03-01

* completed advent of code 2017 day11-1
* completed advent of code 2017 day11-2
* completed advent of code 2017 day12-1
* completed advent of code 2017 day12-2

Day11 was all about hex grids; found this interesting [article]( about how to represent hex grids, and how to calculate the distance between tiles -- I opted for the _cube_ representation. Day12 instead, disjointsets!

# 2019-03-02

* completed advent of code 2017 day13-1
* completed advent of code 2017 day13-2
* completed advent of code 2017 day14-1
* completed advent of code 2017 day14-2

Day13 part 2 kept me busy more than expected, but eventually I figured out what I was doing wrong: severity == 0 isn't the same as not getting caught.
Also, at first I implemented this by simulating the delay, and the packets passing through the different firewall layers; it worked, but was slow as balls (around 13 secs), so I decided to do some math to figure out collisions, without having to simulate anything. It paid off, as now it takes around 0.4 secs to find the solution.

Day14 part 1 was all about re-using Knot Hashes from day 10, and figuring out a hexadecimal to binary conversion. Part 2 instead, disjointsets, again!

+ implement a proper hex to binary macro or something -- `HEXADECIMAL-BINARY`

# 2019-03-03

* completed advent of code 2017 day15-1
* completed advent of code 2017 day15-2

Upgraded vim-fugitive, but since then I cannot `:Gcommit` anymore. Even when I do it from the command line, not only do I see the following error message, but when I try to save, Vim complains with: `E13: File exists (add ! to override)` ...

    Error detected while processing function 41_config[4]..119_buffer_repo:
    line 1:
    E605: Exception not caught: fugitive: A third-party plugin or vimrc is calling fugitive#buffer().repo() which has been removed.
    Replace it with fugitive#repo()
    Error detected while processing function FugitiveDetect:
    line 11:
    E171: Missing :endif

+ Need to figure this out, as it's super annoying
+ implement something similar to `SUMMATION`, but for `MAX` and `MIN` -- `MAXIMIZATION` and `MINIMIZATION`

Also, TIL (or better, remembered) that you can `(loop :repeat N ...)` to execute a loop `N` times

# 2019-03-04

* completed advent of code 2017 day16-1
* completed advent of code 2017 day16-2
* completed advent of code 2017 day17-1
* completed advent of code 2017 day17-2

+ Make day17-2 run faster -- it takes minutes for those 50_000_000 iterations on that ring buffer to complete

# 2019-03-05

* completed advent of code 2017 day18-1
* completed advent of code 2017 day18-2
* completed advent of code 2017 day19-1
* completed advent of code 2017 day19-2

Day 18: Another "here is your instruction set, create a compiler for it" kind of exercise. Part1 was OK, it did not take me long to get it right; part2 instead got messier than expected, but overall it doesn't look that _bad_ and it runs pretty quickly too, so...

# 2019-03-06

* completed advent of code 2017 day20-1
* completed advent of code 2017 day20-2

Really sloppy solution, mine. The problem statement says:

> Which particle will stay closest to position <0,0,0> in the long term?

So I said, why don't I let the simulation run for some time until the particle closest to <0,0,0> doesn't change (part1), or no more particles collide with one another (part2)? Well, it turned out 500 iterations were enough to converge to the right answers!
~ find a clever (and more generic) way to solve 2017 day20 -- implement a function to check when all the particles are moving away from <0,0,0>

# 2019-03-07

* optimized 2017 day 17

We are interested in the value next to 0 after 50000000 iterations, so we don't have to keep the ring buffer up to date; we can simply simulate insertions, and keep track of all the values inserted after value-0

# 2019-03-08

* optimized solution for 2017 day20 -- added GROUP-BY, refactored UNIQUE-ONLY to use GROUP-BY, and that's it, from O(N²) to O(N)!

? Is it guaranteed that iterating over hash-table keys would return them in insertion order? It seems to be the case for SBCL at least, but is it implementation dependent?

# 2019-03-09

* completed advent of code 2017 day21-1
* completed advent of code 2017 day21-2

I decided to represent the pixels grid as a string, so I had to implement logic for:

- accessing specific sub-squares of such grid -- `(+ (* i size) j)`
- combining sub-squares together --

It takes less than 4 seconds to complete the 18th iteration, so I did not bother optimizing further; I have however read something interesting on Reddit:

> After 3 iterations a 3x3 block will have transformed into 9 more 3x3 blocks, the futures of which can all be calculated independently. Using this fact I just keep track of how many of each "type" of 3x3 block I have at each stage, and can thus easily calculate the number of each type of 3x3 block I'll have 3 iterations later.

So yeah, this can be optimized further but you know, I am an engineer..
# 2019-03-10

* finally solved all the vim-fugitive issues I was facing since the last upgrade; it turns out some plugin functions were deprecated, and other plugins (e.g. fugitive-gitlab) were still using them -- after I upgraded those plugins as well, everything started to work just fine
* completed advent of code 2017 day22-1
* completed advent of code 2017 day22-2
* completed advent of code 2017 day23-1 -- reused code from day18 and managed to quickly get a star for part 1; for part2 it seems like I need to reverse engineer that code, which I don't want to do right now

TIL:

- I can use `:when .. :return ..` instead of `:do (when .. (return ..))`
- I can use `:with .. :initially ..` to create and initialize `LOOP`'s state instead of `LET` + initialization logic *outside* of `LOOP`

? implement MULF to multiply a place --

# 2019-03-11

* completed advent of code 2017 day24-1
* completed advent of code 2017 day24-2

+ `goobook-add` is not working anymore -- it ends up creating an account having name and email both set to the email address: `goobook-add Matteo Landi` creates a new account with name:, and address:
~ in CL, what's the idiomatic way of filtering a list, keeping the elements matching a specific value? `(remove value list :test-not 'eql)`? `(remove-if-not (lambda (x) (eql x value)) list)`?

# 2019-03-12

* completed advent of code 2017 day25-1 -- it was painful to parse the input, and as many did in the megathread, I should have manually parsed it, hopefully got some scores, and then implemented the input parser, but you know, I am late for the leaderboard anyway, so...

# 2019-03-13

* completed advent of code 2017 day23-2 -- and with it AoC 2017!!!

Oh boy, I am done reverse-engineering assembly code -- probably because I am bad at it.
A few lessons learned though:

- Use TAGBODY to create CL code matching your input
- When iterating over big ranges of numbers (like here, where we were working in the range 108100 - 125000), one could try to make such a range smaller, hopefully making it easier to figure out what the code is actually doing

Enjoy my abomination -- it's ugly, still it helped me realize we were counting non-prime numbers

    (defun solve-part-2 ()
      (let ((b 0) (c 0) (d 0) (e 0) (f 0) (g 0) (h 0))
        (tagbody
            (setf b 81 c b)
          part-a
            (prl 'part-a)
            (setf b (* b 100) b (- b -100000) c b c (- c -17000))
          part-b
            (prl 'part-b b c (break))
            (setf f 1 d 2)
          part-e
            (prl 'part-e)
            (setf e 2)
          part-d
            (prl 'part-d d e b)
            (setf g (- (* d e) b))
            (if (not (zerop g)) (go part-c))
            ; (prl 'resetting-f (break))
            (prl 'resetting-f)
            (setf f 0)
          part-c
            (prl 'part-c d e b)
            (setf e (- e -1) g e g (- g b))
            (if (not (zerop g)) (go part-d))
            (setf d (- d -1) g d g (- g b))
            (if (not (zerop g)) (go part-e))
            (if (not (zerop f)) (go part-f))
            ; (prl 'setting-h (break))
            (prl 'setting-h)
            (setf h (- h -1))
          part-f
            (prl 'part-f b c)
            (setf g b g (- g c))
            (if (not (zerop g)) (go part-g))
            (return-from solve-part-2 (prl b c d e f g h))
          part-g
            (prl 'part-g)
            (setf b (- b -17))
            (go part-b))))

Also, this! from [reddit](

The key to understanding what this code does is starting from the end and working backwards:

- If the program has exited, g had a value of 0 at line 29.
- g==0 at line 29 when b==c.
- If g!=0 at line 29, b increments by 17.
- b increments no other times on the program.
- Thus, lines 25 through 31 will run 1000 times, on values of b increasing by 17, before the program finishes.

So, given that there is no jnz statement between lines 25 and 28 that could affect things:

- If f==0 at line 25, h will increment by 1.
- This can happen once and only once for any given value of b.
- f==0 if g==0 at line 15.
- g==0 at line 15 if d*e==b.
- Since both d and e increment by 1 each in a loop, this will check every possible value of d and e less than b.
- Therefore, if b has any prime factors other than itself, f will be set to 1 at line 25.

Looking at this, then h is the number of composite numbers between the lower limit and the upper limit, counting by 17.

# 2019-03-14

? read about Y-combinator: -- I am pretty sure there are better resources out there

# 2019-03-16

~ review vim-yankring config -- I have a custom branch to override some mappings which hopefully could go; also, it stomps on vim-surround mappings too

Extracts from

> Everything is an expression. [Lisps] Most programming languages are a combination of two syntactically distinct ingredients: expressions (things that are evaluated to produce a value) and statements (things that express an action). For instance, in Python, `x = 1` is a statement, and `(x + 1)` is an expression.
>
> Every expression is either a single value or a list. [Lisps] Single values are things like numbers and strings and hash tables. (In Lisps, they’re sometimes called atoms.) That part is no big deal.
>
> Macros. [Racket] Also known in Racket as syntax transformations. Serious Racketeers sometimes prefer that term because a Racket macro can be far more sophisticated than the usual Common Lisp macro.

# 2019-03-17

? change RING related methods to use DEFINE-MODIFY-MACRO --, and

# 2019-03-19

* refactored 2018 22

- Use COMPLEX instead of (LIST X Y)
- Use CASE / ECASE instead of ASSOC
- Add poor man's heap-queue implementation to utils
- Review a-star implementation -- soon I will factor out a new function for it

The execution time dropped from ~40 seconds to ~5

In the process I have also added HASH-TABLE-INSERT to utils, and used that wherever I saw fit

# 2019-03-20

+ gotta give vim-sexp a try -- and tpope's [mappings]( in particular

# 2019-03-21

* factored out A-STAR from 2018 22

? DESTRUCTURING-BIND for COMPLEX
? how to destructuring-bind an expression, ignoring specific elements?
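One idiom I know of for the "ignoring specific elements" part: bind the element to a throwaway symbol and `(declare (ignore ...))` it (the `_` name below is just a convention, not special syntax):

    (destructuring-bind (op _ dest) '(jnz 0 label)
      (declare (ignore _))
      (list op dest))
    ;; => (JNZ LABEL)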
# 2019-03-23

* changed PARTIAL-1 to support nested substitution via SUBST
* added PARTIAL-2 (similar to PARTIAL-1, but with the returned function accepting 2 arguments instead of 1)

I am pretty excited about what I came up with. At first I got stuck when trying to use SUBST because I couldn't get nested backticks/commas to work as expected; then I realized I could do all the transformation inside the macro, and only put inside backticks the transformed argument list

    (defmacro partial-1 (fn &rest args)
      (with-gensyms (more-arg)
        (let ((actual-args (subst more-arg '_ args)))
          `(lambda (,more-arg)
             (funcall (function ,fn) ,@actual-args)))))

# 2019-03-24

* implemented DEFUN/MEMO macro to define a memoized function

Tried to see if I could somehow expose the cache used (i.e. replacing `LET` with `DEFPARAMETER`) but could not find a way to programmatically generate a name as the concatenation of the function name, and a `/memo` literal. Anyway, I am happy with it already..

    (defmacro defun/memo (name args &body body)
      (with-gensyms (memo result result-exists-p)
        `(let ((,memo (make-hash-table :test 'equalp)))
           (defun ,name ,args
             (multiple-value-bind (,result ,result-exists-p)
                 (gethash ,(first args) ,memo)
               (if ,result-exists-p
                   ,result
                   (setf (gethash ,(first args) ,memo)
                         (progn ,@body))))))))

# 2019-04-04

* implemented Summed-area Table data-structure, in Common Lisp --

# 2019-04-06

* implemented Clojure's threading macro, in Common Lisp --
* refactored 2018 day 11 to use Summed-area table!

# 2019-04-13

* finally got gcj-qualifications-3-criptopangrams accepted by the grader -- it's possible it was blowing up because of memory issues before, and I did fix that by reading the ciphertext one piece at a time (i.e.
multiple `(read)`) instead of read-line + split + parse

If a codejam solution is RE-ing out, but you cannot find any apparent flaw in the algorithm, try to invoke the GC at the end of each case -- `(sb-ext:gc)`

# 2019-04-14

Had some *fun* with [sic](, and implemented a wrapper to:

- create an SSL proxy using `socat` --
- keep track of input history using `rlwrap`
- store IRC logs using `tee`
- highlight my name with `gsed`

# 2019-04-16

* fucking fixed iterm2/tmux/term-mode weirdness -- vitality.vim lacked support for [terminal-mode](

? vitality for mintty -- mintality
+ read lispwords from project file

# 2019-04-21

Use `hexdump(1)` to view text as binary data (use `-C` to display original strings alongside their binary representation).

And why would I care so much about viewing text as binary data? Because I am trying to understand bracketed paste mode, and figure out if the extra new-line when pasting text in the terminal is intentional or a bug (readline? iTerm? iTerminal?)

# 2019-04-25

Fucking figured out why [bracketed paste mode]( had stopped working sometime during the last 3/4 months...finally!

Long story short, readline 8.0 was released this January, and with it shipped a fix for, a fix which I think broke something somewhere else. Anyway, if your bash setup includes a custom prompt command - set via PROMPT_COMMAND - that outputs some text with newline characters in it, when bracketed paste mode is on and you try to run a pasted command, chances are you would end up running not just the command you pasted but also one or more empty commands as well (depending on the number of newline chars that are generated inside the custom prompt command).
    matteolandi at hairstyle.local in /Users/matteolandi
    $ bind 'set enable-bracketed-paste on'
    matteolandi at hairstyle.local in /Users/matteolandi
    $ pwd | pbcopy
    matteolandi at hairstyle.local in /Users/matteolandi
    $ echo "CMD+V + Enter"
    CMD+V + Enter
    matteolandi at hairstyle.local in /Users/matteolandi
    $ /Users/matteolandi
    -bash: /Users/matteolandi: Is a directory
    matteolandi at hairstyle.local in /Users/matteolandi
    matteolandi at hairstyle.local in /Users/matteolandi 126
    >

Luckily for me I was able to fix my bashrc setup and move my custom prompt command `echo` calls into `$PS1` (see:, but I wonder if this wasn't a regression actually -- to be honest I don't know if it's expected to generate output from custom prompt commands or not.

* added initial support for mintty terminal to vitality.vim

# 2019-05-01

* bumped aadbook dependencies -- github sent me an alert that one of them contained a vulnerability
* set up keyring to play well with `twice` --
* finished my urlview vim wrapper --

# 2019-05-04

Participated in Google Code Jam Round 1C but did not make it to the second round:

- 1st problem: misread the description and took a pass on it -- even though, in the end, it turns out I should have started with that
- 2nd problem: the interactive one; since I have never done an interactive problem, I took a pass on it too
- 3rd problem: it did not seem too complicated at first (especially if you were aiming for completing the first part), but somehow I bumped into recursion errors and could not submit a working solution on time

Lessons learned:

- Google Code Jam is tough, and you better prepare for it... in advance
- You gotta read carefully man!

# 2019-05-05

* finally got around to implementing [cg](, a command line utility (written in Common Lisp) that tries to extract commands to run from its input -- e.g. you pipe `git log -n ..` into it, and it will recommend you to `git show` a specific commit.
It parses definitions from an RC file (err, a lisp file), and ships with FZF and TMUX plugins

# 2019-05-06

* more `cg` guessers: 'git branch -D', pruning '[gone]' branches, `sudo apt install`

? cg: nvm use --delete-prefix v8.16.0

# 2019-05-08

* changed `cg`'s Makefile not to specify `sbcl` full path -- let the SHELL figure it out, as the path might be different based on the OS

# 2019-05-09

* cg: fzf script to enable multi-select

# 2019-05-10

* cg: documentation and refactoring

I feel like `cg` is ready for people to give it a try, so maybe I will post it on Reddit... maybe..

# 2019-05-12

* cg: documentation

# 2019-05-13

All right, I think I figured out a way to avoid asking users to symlink/clone my repo inside ~/quicklisp/local-projects; putting the following at the top of my build.lisp file, right above `(ql:quickload :cg)`, should do the trick:

    ;; By adding the current directory to ql:*local-project-directories*, we can
    ;; QL:QUICKLOAD this without asking users to symlink this repo inside
    ;; ~/quicklisp/local-projects, or clone it right there in the first place.
    (push #P"." ql:*local-project-directories*)

Also, about getting "deploy" automatically installed: I guess I can throw in there a `(ql:quickload :deploy)` and get the job done, right? Still need to figure out how easy it would be to get QUICKLISP automatically installed; well, maybe I found one:

# 2019-05-15

I managed to get QUICKLISP automatically installed as part of the build process; however, I am not 100% sure that's the way to go: some users on Reddit pointed out that it might not be ideal for a build script to automatically install dependencies under someone's HOME.
Next experiments:

+ Try to get `cg` to build using travis-ci
+ Add a `./download` script to fetch the latest release

# 2019-05-16

* added command line opts to cg: -h/--help, and -v/--version
* added a `download` script to `cg`, to download the latest binary available (Linux, Mac OS, Windows)

+ cg: git branch --set-upstream-to=origin/ opts

# 2019-05-18

* got `cg` to build automatically (for Linux, Mac OS) using travis-ci
* calculated version at build time, and used `git rev-list %s..HEAD --count` to append a build number to the base version

Windows builds are kind of working too; list of caveats:

- set `language: shell` or it will hang
- created custom roswell install script to fetch binaries somewhere, add them to $PATH, and then invoke the official roswell ci script -- by then, ros is found available, so it will set it up and not compile it:

# 2019-05-19

* implemented some cg guessers
* fixed a bug with cg build script where, when bumping a version, `git rev-list` would blow up because it was unable to find the base version
* released cg 0.1.0

# 2019-05-20

* cg: fix bug with downloader -- it would blow up in case ./bin was missing
* don't output duplicate suggestions
* add page for `cg` on

# 2019-05-21

* cg: stop `fzf` from sorting `cg`'s output
* released cg 0.2.0

# 2019-06-01

* published new repo / page for: [tmux-externalpipe]( -- with it I could deprecate tmux-urlview / tmux-fpp, and hopefully simplify `cg` too

# 2019-06-05

+ ?? .file -> rm....
+ rm "is directory" -> rm -rf...

# 2019-06-17

* cg: added a few guessers (git-st -> git-add/reset/rm, rm -> rmr)
* cg: redirect stderr as well 2>&1
* cg: keep on processing guessers, even after a match -- this makes it possible for users to define multiple suggestions for the same input line
* cg: add support for guessers which return multiple suggestions

?
tmux-externalpipe: spawn a new pane for the selection

# 2019-06-19

* cleaning up my vimrc -- from

`nomodeline` -- it's dangerous
No more `nocompatible` -- it's useless
Don't use shortnames:

    /\v<\w\w\=   " should match ts=
    /\v<\w\w\w\= " should match sts=

Allow your functions to `abort` upon encountering an error -- this uses a zero-width negative look-behind assertion...

    /\v\s+function.*(abort.*)@sort!

    " Change § back to \r
    :%s/§/\r

# 2019-08-21

* Updated async.vim MR to wait for transmit buffer to be empty before closing stdin
* Asked on vim_dev about a weird ch_sendraw behavior -- not sure if it's expected behavior, or a bug

Hello everyone,

ch_sendraw seems to fail when sending large chunks of data to a job started in non-blocking mode. Is this expected?

Steps to reproduce:

- Copy the following into a file, and source it from vim

    let args = ['cat']
    let opts = {
          \ 'noblock': 1
          \}
    let job = job_start(args, opts)
    let data = repeat('X', 65537)
    call ch_sendraw(job, data)
    call ch_close_in(job)

Expected behavior:

- Nothing much

Actual behavior:

- E630: channel_write_input(): write while not connected

If on the other hand, I initialized data with 65536 bytes instead of 65537, then no error would be thrown.

In case you needed it, here is the output of `vim --version`

$ vim --version
VIM - Vi IMproved 8.1 (2018 May 18, compiled Aug 16 2019 02:18:16)
macOS version
Included patches: 1-1850
Compiled by Homebrew
Huge version without GUI.
Features included (+) or not (-): +acl -farsi -mouse_sysmouse -tag_any_white +arabic +file_in_path +mouse_urxvt -tcl +autocmd +find_in_path +mouse_xterm +termguicolors +autochdir +float +multi_byte +terminal -autoservername +folding +multi_lang +terminfo -balloon_eval -footer -mzscheme +termresponse +balloon_eval_term +fork() +netbeans_intg +textobjects -browse +gettext +num64 +textprop ++builtin_terms -hangul_input +packages +timers +byte_offset +iconv +path_extra +title +channel +insert_expand +perl -toolbar +cindent +job +persistent_undo +user_commands -clientserver +jumplist +postscript +vartabs +clipboard +keymap +printer +vertsplit +cmdline_compl +lambda +profile +virtualedit +cmdline_hist +langmap -python +visual +cmdline_info +libcall +python3 +visualextra +comments +linebreak +quickfix +viminfo +conceal +lispindent +reltime +vreplace +cryptv +listcmds +rightleft +wildignore +cscope +localmap +ruby +wildmenu +cursorbind +lua +scrollbind +windows +cursorshape +menu +signs +writebackup +dialog_con +mksession +smartindent -X11 +diff +modify_fname -sound -xfontset +digraphs +mouse +spell -xim -dnd -mouseshape +startuptime -xpm -ebcdic +mouse_dec +statusline -xsmp +emacs_tags -mouse_gpm -sun_workshop -xterm_clipboard +eval -mouse_jsbterm +syntax -xterm_save +ex_extra +mouse_netterm +tag_binary +extra_search +mouse_sgr -tag_old_static system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" 2nd user vimrc file: "~/.vim/vimrc" user exrc file: "$HOME/.exrc" defaults file: "$VIMRUNTIME/defaults.vim" fall-back for $VIM: "/usr/local/share/vim" Compilation: clang -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X -DMACOS_X_DARWIN -g -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 Linking: clang -L. 
-fstack-protector-strong -L/usr/local/lib -L/usr/local/opt/libyaml/lib -L/usr/local/opt/openssl/lib -L/usr/local/opt/readline/lib -L/usr/local/lib -o vim -lncurses -liconv -lintl -framework AppKit -L/usr/local/opt/lua/lib -llua5.3 -mmacosx-version-min=10.14 -fstack-protector-strong -L/usr/local/lib -L/usr/local/Cellar/perl/5.30.0/lib/perl5/5.30.0/darwin-thread-multi-2level/CORE -lperl -lm -lutil -lc -L/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/config-3.7m-darwin -lpython3.7m -framework CoreFoundation -lruby.2.6

Thanks,
Matteo

* Created new async.vim MR to work around ch_sendraw erroring out when data is bigger than 65536 bytes -- in this case, we would fall back to manual data chunking

# 2019-08-22

Still fighting with non-blocking vim channels, and how to close stdin making sure that all the pending write operations are flushed first

# 2019-08-24

* Cleaned up configs for mutt and friends (offlineimap, msmtp)
* Restored Ctrl+Enter behavior on
* Changed some terminal mappings:
  - Ctrl-Enter key from ✠ to ◊
  - Shift+Enter key from ✝ to Ø

Why? Because I recently switched from iTerm2 to, and I could not find a way to create custom shortcuts (e.g. to generate the maltese cross on Ctrl+Enter). I then tried using Karabiner-Elements, but it doesn't seem to support unicode, so the only solution I found was:

- Figure out a new char to use (the lozenge), whose Option+key shortcut was known (Option+Shift+v)
- Tell Karabiner-Elements to generate that sequence when Ctrl+Enter is pressed
- Change all the other settings to use the lozenge instead of the maltese cross

Too bad I could not use a more meaningful character, like the carriage return one.

PS.
Now that these key mappings have been restored too, is pretty much usable; I do not see any performance improvements (when scrolling through Tmux + Vim panes), and true color support is not there, but that's OK, I will survive I guess

# 2019-08-25

* Suppressed CR/LF warnings raised when working on Unix Git repos from Windows -- git config core.autocrlf input
* Figured out a way to prevent `git up` from updating the dotfiles submodule while on the parent repo, my-env: add `update = none` to the sub-module entry inside .gitmodules

    [submodule "dotfiles"]
        path = dotfiles
        url =
        ignore = dirty
        update = none

Why would I want that, you might ask? Well, I usually keep the dotfiles submodule up to date already (e.g. `cd dotfiles; git up`), so when I decide to sync my-env, I don't want `git up` to accidentally check out an older version of dotfiles.

* Created tmux binding to spawn a command/application in a split pane -- similar to how `!' on mutt lets you spawn an external command
* finished reading [Black Box Thinking: Why Most People Never Learn from Their Mistakes--But Some Do](

# 2019-08-27

* Added new reading.html page to, listing all the books I remember having read
* Changed ansible scripts to re-use letsencrypt certificates for the docker registry -- so I don't have to re-generate self-signed certificates every year

# 2019-08-28

* Created [cb.vim]( -- still a WIP, but it's working, with my set up at least

# 2019-08-30

* added Makefile and support for br
* added Makefile and support for cb
* removed a bunch of wrappers from ~/bin (br, cb, urlview, mutt-notmuch-py)

+ fzf ain't working on cygwin: and
+ try to ditch the following wrappers: fzf-tmux -- find a way to install them locally (make install PREFIX=..., or the same for python crap)
+ avoid updating PATH for tmux, and winpty

# 2019-08-31

* added Makefile and support for cb
* removed tmux,winpty from PATH -- now installing them into ~/local/bin
* fucking managed to run fzf on Cygwin!
Created `fzf` wrapper that would start `fzf` via cmd.exe on Windows:

    #!/usr/bin/env bash

    EXE=$(realpath ~/.vim/pack/bundle/start/fzf/bin/fzf)

    if ! hash cygpath 2>/dev/null; then
        "$EXE" "$@"
    else
        stdin=$(tempfile)
        stdout=$(tempfile)
        wrapper=$(tempfile)
        trap "rm -f $wrapper $stdout $stdin" EXIT

        # save stdin into a temporary file
        cat - > "$stdin"

        # create wrapper
        cat > "$wrapper" <<EOF
    "$EXE" $@ < "$stdin" > $stdout
    EOF

        # run fzf through cmd.exe -- stolen from:
        cmd.exe /C 'set "TERM=" & start /WAIT sh -c '"$wrapper"

        cat "$stdout"
    fi

And updated `fzf-tmux` as follows -- no need to run tmux if we would be spawning a `cmd.exe` anyway:

    EXE=~/.vim/pack/bundle/start/fzf/bin/fzf-tmux
    if [ -f ~/.vim/pack/bundle/start/fzf/bin/fzf.exe ]; then
        # While on Cygwin, fzf-tmux would normally open a tmux pane, which would open
        # a cmd.exe (yes, because on cygwin fzf has to be run through a cmd.exe).
        #
        # That's ridiculous, let's skip the tmux part completely
        EXE=fzf
    fi
    $EXE "$@"

* fucking figured out (well, partially) how to make tmux preserve current path when splitting panes: it appears I have to export CHERE_INVOKING=1. That also fixes a problem I have always had when running tmuxinator on Windows, where it would not always move into the expected directory

+ what the hell is CHERE_INVOKING, and why do I need to set it on Windows, but not on MacOS/Linux
+ backup files

Mutt was behaving weirdly when run inside tmux, and googling around I discovered that it's not recommended to set TERM to anything different from screen/screen-256color while inside tmux:

So I tried to disable the lines where I override TERM in my .tmux.conf; let's see what happens.

# 2019-09-01

* Started moving my old journal entries over to my file

Regarding CHERE_INVOKING, on Cygwin: it turns out that the cygwin default /etc/profile checks for an environment variable named CHERE_INVOKING and inhibits the change of directory if set.
So by default, when sourcing /etc/profile, cygwin would take you to your home directory, unless CHERE_INVOKING=1 is set --

* Implemented plan-rss script -- in bash for now

And there I was, thinking it would be easy to manually generate RSS:

- XML escape VS CDATA: turns out you need both: XML escape because the output has to be valid HTML, CDATA if you want the parser to ignore non-existing HTML tags
- Channel's description has to be plain text -- item's description has to be HTML compatible

+ plan-rss: new-fucking-lines
+ plan-rss: publish latest 5 entries only
+ plan-rss: rewrite in CL

# 2019-09-02

* Backed up all the into Dropbox
* plan-rss: populate
* plan-rss: fucking fixed (I guess) new-line madness
* plan-rss: pubDate matching plan entry's date

- XML-escape all your content
- Wrap it into `<pre>` ... `</pre>`
- Wrap it into `<![CDATA[` ... `]]>`
- Voila'! No need to append `<br/>`s to each line -- `<pre>` will do the trick!
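Should the "rewrite in CL" note above ever happen, the escape-then-wrap recipe could be sketched like this (`rss-description` is a hypothetical helper, not part of plan-rss):

    (defun rss-description (text)
      ;; XML-escape the content, then wrap it in <pre> inside a CDATA
      ;; section, so newlines survive without per-line <br/>s
      (format nil "<![CDATA[<pre>~a</pre>]]>"
              (with-output-to-string (s)
                (loop :for ch :across text
                      :do (case ch
                            (#\& (write-string "&amp;" s))
                            (#\< (write-string "&lt;" s))
                            (#\> (write-string "&gt;" s))
                            (t (write-char ch s)))))))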

~ plan-rss: populated `<image>` but feedly still does not show it
? Cygwin + Tmux weirdnesses: if I don't force tmux to set TERM=xterm-256color, everything will go black and white -- it turns out TERM is being set to screen, and not screen-256color
? Tmux weirdnesses: if I don't force tmux to set TERM=xterm-256color, tmux will set it to screen-256color which doesn't support italics well

# 2019-09-03
~ plan-rss: remove old pub-date logic

Regarding the image not showing on Feedly: could it be that we are not actually pointing to a proper image resource, but rather to an endpoint that will generate it?!

~ tmuxinator on windows:

# 2019-09-04
* Started caching gravatar locally, on my box; I hoped this could fix the problem with my RSS channel image not being rendered, but it did not work -- still no image on Feedly, nor in the other RSS client I am using on Mac OS
* Fixed my rubygems cygwin env

When trying to install any gem, I would be presented with the following error:

    $ gem install tmuxinator
    ERROR:  While executing gem ... (NameError)
        uninitialized constant Gem::RDoc

After some Googling around, I landed on this GitHub [page]( where they suggest manually editing the 'rubygems/rdoc.rb' file, and moving the RDoc logic inside the begin/rescue block -- where it should be

    vim /usr/share/rubygems/rubygems/rdoc.rb
    # move the last line inside begin/rescue block

Everything started working again since then

+ Add step to to guard against this rubygems/rdoc weirdness
+ plan-rss: plan.xml did not update overnight -- manually running update-plan-xml did the trick though

I cannot get any RSS client I found to parse the image that I added to my plan.xml file, but I checked the specification, and it should be correct:

- Someone on SO pointing at the `<image>` tag in the specification:
- Codinghorror's feed: -- it seems to be using ``, but somehow his image is popping up, mine not
- Another feed (taken from NetNewsWire) where image is being used:
- Some guy on SO suggesting that we use favicon.ico:
- GitHub's repo feed: -- it's ATOM, and when adding this to NetNewsWire, it seems to be using the favicon.ico

* Opened a ticket to NetNewsWire to see if RSS2.0 channel's image is supported or not --

# 2019-09-05
* enhance to install the various rubygems I am using
* enhance to abort when running bogus version of rubygems
* plan-rss: better item pubdate management: 1) new item: use date of the plan entry, 2) content differs: use $NOW, 3) re-use last pubdate

+ .plan (and plan.xml) did not update overnight.. again!
+ plan-rss: publish last 10 recently updated entries
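The three pubDate rules from the entry above can be sketched as follows -- a hedged illustration of mine, not the actual plan-rss code; `item_pub_date` and its arguments are made-up names:

```python
# Sketch of the pubDate management rules: 1) new item: use the plan entry's
# date, 2) content differs: use $NOW, 3) otherwise re-use the last pubDate.
# All names here are hypothetical, not the real plan-rss implementation.
def item_pub_date(entry_date, content, previous, now):
    """previous is None for a new item, else a (content, pub_date) tuple."""
    if previous is None:
        return entry_date                 # 1) new item: use the entry's date
    prev_content, prev_pub_date = previous
    if content != prev_content:
        return now                        # 2) content differs: use $NOW
    return prev_pub_date                  # 3) unchanged: re-use last pubDate

print(item_pub_date('2019-09-04', 'v1', None, 'NOW'))                  # -> 2019-09-04
print(item_pub_date('2019-09-04', 'v2', ('v1', '2019-09-04'), 'NOW'))  # -> NOW
```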

# 2019-09-06
* Copying more entries from the old journal into this file
~ .plan does not automatically sync on my phone, unless I open Dropbox first

# 2019-09-07
* Figured out why my .plan was not getting updated anymore -- it turns out the two cron jobs I had created (one for .plan, the other for plan.xml) were sharing the same name, so the second one (for plan.xml) would override the first one
* Changed some of my ansible scripts to run specific commands (i.e. init of avatar or init download of .plan file) only if the target file does not exist already -- if it does, the cron job would take care of updating it in due time:
* Copying more entries from the old journal into this file

~ I am still having second thoughts on whether updating old .plan entries should force an update on the relevant plan.xml ones (i.e. republish them with new content).

# 2019-09-08
* finished reading [Hackers & Painters: Big Ideas from the Computer Age](

+ aadbook does not work with python3
? offlineimap (the version I am manually building) does not work with python3
~ cg: add an option to use a single guesser only -- that would help me replace urlview completely

TIL, you can stop Vim from incrementing numbers in octal notation (i.e. on C-A) with the following

    :set nrformats-=octal

# 2019-09-09
Started looking into rewriting plan-rss into CL.  I found xml-emitter, but I am afraid I will have to submit a couple PRs for it to be able to generate valid RSS2.0

+ xml-emitter: missing support for `<guid>` isPermaLink
+ xml-emitter: missing CDATA support
+ xml-emitter: Missing atom:link with rel="self"

# 2019-09-18
+ cg rm -> rmdir

    rm '.tmux-plugins/tmux-fpp/'
    rm: cannot remove '.tmux-plugins/tmux-fpp/': Is a directory

~ git co bin/fzf-tmux # XXX why on earth would the install script delete fzf-tmux

# 2019-09-22
* finished reading [The History of the Future: Oculus, Facebook, and the Revolution That Swept Virtual Reality](
* cleaned/validated pages -- broken elements inside  in particular
* cg: better rm-rmr support for mac and linux outputs: it's funny how the output of the same command differs, just slightly, across OSes...I mean, what's the point, really?!
* aadbook: added py3 support! yay
~ fzf's install script seems to delete bin/fzf-tmux on Windows/Cygwin only -- on Mac, I ran it, and fzf-tmux was still there

Spent some time trying to get offlineimap to work with Python3, but no luck:


This is going to be annoying, especially when python2 sunsets in January next year...

Started reading John Carmack .plan archive:

I have also started learning more about multi-tenant applications, so I am going to start dropping notes here.

A multi-tenant application needs to be carefully designed in order to avoid major problems.  It:

- needs to detect the client (or tenant) that any individual web request is for;
- must separate persistent data for each tenant and its users;
- cannot allow cached data (e.g. views) to leak between tenants;
- requires separate background tasks for each tenant;
- must identify to which tenant each line of log output belongs.

What’s the definition? (of a multi-tenant application)

We don’t think there is a clear, concise definition.  In broad strokes, y'know it when y'see it.  You might say that the application is multi-tenant if the architects ever had to decide between deploying a number of separate (not pooled) instances of the app, or deploying a single instance.

Or you might say it has to do with data segregation: the idea that there are some datasets that simply shouldn't touch, shouldn't interact, and shouldn't be viewable together in any combination.  This seems to be the “definition” that most closely fits the examples we’ve come up with, and the spirit of the term (as we understand it).

Set up different domains per client:

- ...

# 2019-09-23
+ Change autohotkeys comment string: `/* .. */` is being used

In a multi-tenant app, each request that comes in can be for a separate tenant.  We need two things from the outset:

- a way to determine what tenant a request is for
- and a way to process the request in the context of that tenant.
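A minimal sketch of those two pieces, outside of any real framework -- the tenant registry and all names below are made up for illustration:

```python
# Two pieces: 1) map the request's subdomain to a tenant, 2) process the
# request with that tenant installed in a context variable, so downstream
# code can ask "which tenant am I serving?".  TENANTS is a made-up registry.
import contextvars

TENANTS = {'acme': {'name': 'ACME Inc.'}, 'globex': {'name': 'Globex Corp.'}}
current_tenant = contextvars.ContextVar('current_tenant')

def resolve_tenant(host):
    subdomain = host.split('.')[0]          # acme.example.com -> acme
    return TENANTS[subdomain]

def handle_request(host):
    tenant = resolve_tenant(host)           # 1) what tenant is this for?
    current_tenant.set(tenant)              # 2) process in that context
    return f"Hello from {current_tenant.get()['name']}"

print(handle_request('acme.example.com'))   # -> Hello from ACME Inc.
```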

Single login leading to tenants

    def current_tenant

Don't change your public API as follows:


Users will never go off wandering to another tenant, and since a user is linked to a single tenant, all the magic can happen behind the scenes without the user knowing anything about it.

Another approach is to implement a custom piece of middleware that would take care of figuring out the tenant, and change configs (e.g. DB user) accordingly:

    module Rack
      class MultiTenant
        def initialize(app)
          @app = app
        end

        def call(env)
          request = Rack::Request.new(env)
          # CHOOSE ONE:
          domain = request.host               # switch on domain
          subdomain = request.subdomain       # switch on subdomain

          @tenant = TENANT_STORE.fetch(domain)

          # Do some configuration switching stuff here ...

          @app.call(env)
        end
      end
    end


And the list of tenants could be configured as:

- in memory hash (not dynamic, will require a bounce of the server)
- database (dynamic, but you would still want to cache some data, or it would slow down every request)
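A hedged sketch of the second option: tenants live in a database, but each lookup is memoized for a while so resolving the tenant doesn't cost a query on every request.  All names are made up; the loader stands in for a real DB query:

```python
# Database-backed tenant store with a TTL cache in front of it, so that the
# lookup doesn't slow down every request.  fetch_tenant_from_db is a
# hypothetical stand-in for an actual query by domain.
import time

class CachingTenantStore:
    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader            # e.g. a DB query by domain
        self.ttl = ttl_seconds
        self.cache = {}                 # domain -> (expires_at, tenant)

    def fetch(self, domain):
        entry = self.cache.get(domain)
        if entry and entry[0] > time.time():
            return entry[1]             # still fresh: no DB round-trip
        tenant = self.loader(domain)
        self.cache[domain] = (time.time() + self.ttl, tenant)
        return tenant

calls = []
def fetch_tenant_from_db(domain):       # hypothetical loader
    calls.append(domain)
    return {'name': domain.split('.')[0]}

store = CachingTenantStore(fetch_tenant_from_db)
store.fetch('acme.example.com')
store.fetch('acme.example.com')         # served from cache
print(len(calls))  # -> 1
```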

How to provide the *right* data for each user, based on their tenant?

- have a single database, but use associations to control the siloing.  Pros: single database (less maintenance); Cons: you forget about that node-tenant relation, and all of a sudden users are able to see data from other tenants
- multiple independent copies of the application's database.  Pros: once the connection is switched, siloing is complete and guaranteed; Cons: configuration changes at run-time, possibly invalidating some of the db connection pool techniques implemented by drivers
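The first option can be sketched like this -- an in-memory stand-in for the actual database; the whole point is that every accessor must remember the tenant filter, because nothing else enforces it:

```python
# Single shared "table", siloed only through the tenant_id association.
# NODES and nodes_for are made-up names standing in for SQL.
NODES = [
    {'id': 1, 'tenant_id': 'acme', 'body': 'acme secret'},
    {'id': 2, 'tenant_id': 'globex', 'body': 'globex secret'},
]

def nodes_for(tenant_id):
    # the safe accessor: always filter on the tenant association
    return [n for n in NODES if n['tenant_id'] == tenant_id]

print([n['body'] for n in nodes_for('acme')])   # -> ['acme secret']
# ...whereas a raw, unscoped query happily returns every tenant's rows:
print(len(NODES))                               # -> 2
```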

Also, the database is not the only "feature" you might want to swap out based on the user's tenant; for example, if you were using Redis, you might want to try and implement (and then switch) [Redis namespace](

    var fooRedis = new Redis({ keyPrefix: 'foo:' });
    fooRedis.set('bar', 'baz');  // Actually sends SET foo:bar baz

# 2019-09-26
Started doing burpees -- used this timer web-app to set the pace:

Rounds: 8
Time on: 0:30
Time off: 1:00

For those unfamiliar with what a burpee is:

1) Begin in a standing position.
2) Move into a squat position with your hands on the ground. (count 1)
3) Kick your feet back into a plank position, while keeping your arms extended. (count 2)
4) Immediately return your feet into squat position. (count 3)
5) Stand up from the squat position (count 4)

Well, I initially tried to jump instead of standing up at step 5), but then I quickly realized I would not have made it alive that way, so I opted for something lighter.

Anyway, it was tough, and sometimes I even stopped before the time, but I guess I shouldn't beat myself up too much about it: in the end it's the first session.  Let's see if tomorrow I can get out of my bed.

# 2019-09-27
* fixed commentstring for autohotkey files

TIL: Vim will try and syn-color a maximum of `synmaxcol` characters.  -- `:help 'synmaxcol'`

To separate various tenants' data in Redis, we have – like SQL – two options. The first is to maintain independent Redis server instances, and switch out the connection per request. Of course, this has the disadvantage of reducing or eliminating reuse of the connection pool, as the Rails server must open a new set of connections to the Redis server on each web request.

Instead, we recommend using a simple "namespace" concept to identify each key with its tenant.

What's a Redis namespace?  When working with Redis, it's customary to namespace keys by simply prepending their name with the namespace, i.e. namespace:key_name. We often do this manually, and write classes to encapsulate this process, so that other parts of our applications needn't be aware of it.
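That encapsulation can be sketched in a few lines -- a hedged illustration with an in-memory dict standing in for a real Redis client; `NamespacedStore` is a made-up name:

```python
# Every key gets prefixed with the tenant's namespace (namespace:key_name),
# so tenants can share one Redis server without key collisions, and callers
# needn't be aware of the prefixing.  The backend dict stands in for Redis.
class NamespacedStore:
    def __init__(self, backend, namespace):
        self.backend = backend          # e.g. a real redis client
        self.namespace = namespace

    def _key(self, name):
        return f"{self.namespace}:{name}"

    def set(self, name, value):
        self.backend[self._key(name)] = value

    def get(self, name):
        return self.backend.get(self._key(name))

backend = {}                            # in-memory stand-in for Redis
acme = NamespacedStore(backend, 'acme')
globex = NamespacedStore(backend, 'globex')
acme.set('plan', 'gold')
globex.set('plan', 'trial')
print(acme.get('plan'))    # -> gold
print(sorted(backend))     # -> ['acme:plan', 'globex:plan']
```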

# 2019-09-28
I have been trying to create a private mailing list on for the last couple of days, but they seem to have put the 'Request a service' page in maintenance mode, and by the look of their tech blog it seems like they are undergoing a big infrastructure update.  Anyway, shot them an email to understand if there is something we can do about it (e.g. sign up, and register a service).

Follows another dump of notes re multitenancy...

Benefits of multitenancy:

- Segregation of data across the clients
- Manage and customize the app according to the client's needs (e.g. separate branding, separate feature flags)
- Maximize efficiency while reducing the cost needed for centralized updates and maintenance

Ways of implementing this:

- Logical separation of data
- Physical separation of data

# 2019-09-30
* enhanced vitality.vim to support escapes -- for changing the cursor shape

I have been thinking about this lately, and I don't think relying on env variables to figure out which terminal Vim or other programs are being run in, is The Right Thing.  I mean, it works pretty decently for local sessions, but the moment you ssh into a remote box and create a Tmux session for some long-running activity, you're screwed.

For example, say that I am at work (Windows, Mintty), and I log into my remote Workstation.  When you do so -- ssh into a remote box -- based on your settings, some/all of your env variables on your local host will be made available in your remote session; this could seem sexy at first, because you might have Vim-specific settings based on the running terminal emulator, but what if you create a Tmux session (which saves env variables), start some work, and decide to carry it on from home, where you have a different terminal?  You won't be destroying your tmux session, but since it was created earlier, with a different terminal emulator, some of the already pre-configured variables might not apply to the *new* client.

I believe the Right Thing to do here is to use escape sequences, when available, to query the Terminal emulator capabilities -- which is what the Mintty maintainer is suggesting people do:

Anyway, I guess for the time being I will kill myself and set/unset env variables based on the client I am connecting from, and based on the tmux environment. <3

I am going to dump here the steps to reprovision my Linux box:

    cd /some/dir
    curl -O
    vagrant up
    vagrant ssh
    cd /data
    mkdir Workspace
    ln -s $(pwd)/Workspace ~/Workspace
    ln -s $(pwd)/Workspace ~/workspace
    git clone --recurse-submodules
    ln -s $(pwd)/my-env ~/my-env
    cd my-env
    git clone --recurse-submodules
    bash --force --bootstrap --os-linux

# 2019-10-02
Was playing with my dev box earlier, trying to test persistent drives -- this way, once a new LTS is released I can destroy the box, recreate it, and pick up from where I left -- and accidentally wiped out my ConnectION workspace; I did not realize `vagrant destroy` would also delete attached drives...

SO has a solution for this:

- Mark the storage drive as hot-pluggable
- Register a pre-destroy hook, and inside of it detach the drive -- so it won't be automatically deleted

Unfortunately that does not seem to work as expected: syntax errors, triggers being invoked multiple times.  I guess I will be creating a custom `./vagrantw` wrapper to take care of all this


Getting a reference to `Window` in Angular, with AOT enabled, is super painful.  `Window` is defined as an interface, and because of that you cannot simply rely on Angular DI, but instead you have to...sacrifice a few animals to the God of forsaken programmers!

First you define a named provider:

    // file: ./window-ref.service.ts

    import { InjectionToken } from '@angular/core';

    export const WindowRef = new InjectionToken('WindowRef');

    export function windowRefFactory() {
      return window;
    }

    export const WindowRefProvider = { provide: WindowRef, useFactory: windowRefFactory }

Next you register the provider into your main module:

    // file: ./app.module.ts

    import { WindowRefProvider } from './utils/window-ref.service';

    @NgModule({
      providers: [
        WindowRefProvider,
        // ...
      ],
    })

Finally you update your dependent services / components as follows:

     import { Injectable, Inject } from '@angular/core';
     import { WindowRef } from '@connection-ui/utils/window-ref.service';

     @Injectable()
     export class HelloWorldService {
       $window: Window;

       constructor(@Inject(WindowRef) $window) {
         this.$window = $window as Window;
       }
     }
Yeah, you need to use the named injector but without any type (i.e. `any`), and then inside the constructor you cast it to `Window`; it's retarded, but that's the only solution I could come up with.


Played with mocking classes in Typescript, and stumbled upon something neat:

    type PublicInterfaceOf<Class> = {
        [Member in keyof Class]: Class[Member];
    };

    class MockHelloWorld implements PublicInterfaceOf<HelloWorld> {
      hello(name) {
        return "Mocked!";
      }
    }


? create vagrantw to properly deal with persistent drives, and prevent their removal when `vagrant destroy`-ing

# 2019-10-03
Time for another burpees training session.  Last time it took me almost 3 days to fully *recover*, so I figured I should try something lighter this time:

Rounds: 6
Time on: 0:30
Time off: 1:00

6 rounds instead of 8, and no jumping since the beginning (last time I had to stop in the middle of the second round).  Let's see what my body thinks about this tomorrow or the day after.

# 2019-10-04
* finished reading John Carmack's collection of .plan files

I am taking some time off from work next week, and I plan to do some research on Natural Language Processing (NLP); in particular I would like to experiment with Named Entity Recognition (NER) in the context of search engines (populate search filters based on users' free text input).

Also, I will be in London for the next couple of days to meet some friends, so don't expect much done (other than reading, I guess).

# 2019-10-07
Reading about Natural Language Processing (NLP), and Named Entity Recognition (NER) in particular, to be able to populate search engine filters, based on user's free text input

    mkdir ~/tmp/ner
    cd ~/tmp/ner
    virtualenv -p python3 venv
    . venv/bin/activate
    pip install nltk
    pip install numpy


I was reading this book about NLP, and there was this exercise about text tokenization, which the author solved using simulated annealing.  Now, I always found simulated annealing fascinating, and because of that I decided to dump the exercise (and solution) here for future reference.

Let's first define a function, `segment()`, that given some text (i.e. a string, like 'Hello world!'), and another string representing where such text should be broken into chunks, would split the text into chunks:

    def segment(text, segs):
      # text = 'helloworld'
      # segs = '000010000'
      # => ['hello', 'world']
      words = []
      last = 0
      for i in range(len(segs)):
        if segs[i] == '1':
          words.append(text[last:i + 1])
          last = i + 1
      words.append(text[last:])  # don't forget the chunk after the last boundary
      return words

`evaluate()` instead is a function that evaluates the _quality_ of a *segmentation*.

    def evaluate(text, segs):
      words = segment(text, segs)
      text_size = len(words)
      lexicon_size = len(''.join(list(set(words))))
      return text_size + lexicon_size

Now the simulated annealing: basically we start with a *random* segmentation kernel, and start randomly flipping *bits* on/off, evaluating how the new segmentation compares with the best one found so far; repeat this multiple times, always reducing the number of *bits* to flip (i.e. the temperature, decreasing over time)

    from random import randint

    def flip(segs, pos):
      return segs[:pos] + str(1 - int(segs[pos])) + segs[pos + 1:]

    def flip_n(segs, n):
      for i in range(n):
        segs = flip(segs, randint(0, len(segs) - 1))
      return segs

    def anneal(text, segs, iterations, cooling_rate):
      temperature = float(len(segs))
      while temperature > 0.5:
        best_segs, best = segs, evaluate(text, segs)
        for i in range(iterations):
          guess = flip_n(segs, int(round(temperature)))
          score = evaluate(text, guess)
          if score < best:
            best, best_segs = score, guess
        score, segs = best, best_segs
        temperature = temperature / cooling_rate
        print(evaluate(text, segs), segment(text, segs))
      return segs
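To see the pieces in action without the randomness, here is a quick usage sketch; `segment()`/`evaluate()` are repeated so the snippet runs standalone (with the chunk after the last boundary appended).  Note how, thanks to the repetition in the text, the good segmentation scores lower (i.e. better) than no segmentation at all:

```python
# Usage sketch for segment()/evaluate(); definitions repeated so this runs
# standalone.  segs marks word boundaries: a '1' at index i ends a chunk.
def segment(text, segs):
    words = []
    last = 0
    for i in range(len(segs)):
        if segs[i] == '1':
            words.append(text[last:i + 1])
            last = i + 1
    words.append(text[last:])  # chunk after the last boundary
    return words

def evaluate(text, segs):
    words = segment(text, segs)
    text_size = len(words)
    lexicon_size = len(''.join(list(set(words))))
    return text_size + lexicon_size

text = 'helloworldhelloworld'
good = '0000100001000010000'   # boundaries after each 'hello' and 'world'
none = '0' * 19               # no segmentation at all
print(segment(text, good))    # -> ['hello', 'world', 'hello', 'world']
print(evaluate(text, good))   # -> 14 (4 words + 10 chars of lexicon)
print(evaluate(text, none))   # -> 21: repetition makes segmentation pay off
```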

# 2019-10-08
* finished reading [Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit](
+ add dark mode support to, just for fun -- Dark mode in CSS with `prefers-color-scheme` -

# 2019-10-09
Started reading The Mythical Man-Month (TMMM)


Finally got around to creating a SPID ID, so I can - hopefully - easily access all my health records and anything else PA related, without having to use obsolete smart card readers or remember crazy PINs.

You have to request a SPID ID from one of the entity providers out there, and they might ask you for proof of identity; lucky me, Hello Bank is partnering with Poste Italiane (an entity provider), so I could easily request an ID without much of a hassle: the bank transferred all the required documents over, and in 15 mins I had my SPID created.

I then installed PosteID on my mobile, signed in with my SPID ID, created a new code (and there was silly me thinking the SPID ID would be the last username/password to remember), and that's it: every time I try to access PA websites and choose SPID as the login workflow, I receive a notification on my mobile asking me to confirm or deny the access -- living in the future!

Few handy PA links:

- Fascicolo Sanitario Elettronico (FSE):
- CUP online:


I have decided to list all the applications I have on my mobile, so I can 1) monitor which I have been constantly using vs the ones I throw away after a bit, and 2) easily re-setup my mobile in case Google refused to re-install my previously installed apps.

Here we go:

- (notes)
- Authenticator (2FA codes)
- British Airways
- Calendar (Google)
- Chrome
- Dropbox
- E-mail (non G-mail accounts)
- Feedly
- File Manager+
- Fineco
- Firefox Nightly
- Fogli (Google spreadsheets)
- Gmail
- Hello bank!
- Hoplite (roguelike game)
- Instagram
- Maps
- Materialistic (hacker news)
- My3
- Netflix
- Odyssey (endless runner snowboarding game)
+ Orario Treni (train timetables)
- PosteID (SPID identity)
- Reddit
- Ryanair
- SAP concur (tracking expenses when traveling)
- Satispay
- Settle Up
- Shazam (the future!)
- Signal
- Snapseed
- Spotify
- Stremio
- Tasker
- Teams (Microsoft)
- TV Time
- Twitter
- Waze
- WhatsApp
- Youtube

Damn, I was not expecting this to be that long...


Few "interesting" links:

- koa.js, from the creators of express.js:
- Apex ping, easy and sexy monitoring for websites, applications, and APIs:



- Train the NLP manager, then save it on disk so you don't have to train it again:
- NER Manager:


My Node.js application kept failing to start up from my new Virtualbox box, blowing up with an ENOSPC error.

    echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p


? setting up a vpn on my DO instance (for mobile, and laptop)

# 2019-10-10
I was reading this hackernews post ( about adding support for a dark-theme, and even though the overall activity took me less than one minute to complete (well, my website is very *simple*, so I am sure adding dark-theme support for a more complex website / web app is usually going to take longer), one comment made me wonder: "What if someone was interested in *muting* the browser colors, the container's, but not the content?  Should we add some JS to let people toggle between light and dark mode?"

Few examples:

- Google, does not seem to support a dark theme
- Wikipedia, does not seem to support a dark theme
- Facebook, does not seem to support this either

Again, mine is nothing compared to Google and Wikipedia, so honestly...I am not that surprised they haven't added support for dark-mode yet; but still, is it possible they haven't done it yet because, as things stand, it's not yet a "problem" for them?  Somewhere on the Web I read that one should tweak CSS only to solve a specific user problem, so maybe adding a dark-theme is not going to solve any of their problems, who knows...

Anyway, if you have dark-mode enabled at OS/browser level, going to you should be presented with a dark version of the website -- and here is what I added to my style.css to make this happen:

    @media (prefers-color-scheme: dark) {
      body {
        background-color: #444;
        color: #e4e4e4;
      }
      a {
        color: #e39777;
      }
      img {
        filter: grayscale(30%);
      }
    }


* added dark-theme to

# 2019-10-17


Rounds: 6
Time on: 0:30
Time off: 1:00

Still no jumping.

# 2019-10-20
Disabled dark-theme on

# 2019-10-27
Few bits I found around, about generating html emails, starting from markdown, with Mutt.

Mutt's editor first:

    set editor = "nvim -c 'set ft=mdmail' -c 'normal! }' -c 'redraw'"

Then a macro, activated with `H` or whatever, to pipe the body of the message to `pandoc` to generate the html version of the message, and attach it to the email:

    macro compose H "| pandoc -r markdown -w html -o ~/.mutt/temp/neomutt-alternative.html<enter><attach-file>~/.mutt/temp/neomutt-alternative.html<enter>"

And the '.vim/syntax/mdmail.vim' syntax file:

    if exists("b:current_syntax")
      finish
    endif

    let s:cpo_save = &cpo
    set cpo&vim

    " Start with the mail syntax as a base.
    runtime! syntax/mail.vim
    unlet b:current_syntax

    " Headers are up to the first blank line.  Everything after that up to my
    " signature is the body, which we'll highlight as Markdown.
    syn include @markdownBody syntax/markdown.vim
    syn region markdownMailBody start="^$" end="\(^-- $\)\@=" contains=@markdownBody

    " The signature starts at the magic line and ends at the end of the file.
    syn region mailSignature start="^-- $" end="\%$"
    hi def link mailSignature Comment

    let b:current_syntax = "mdmail"
    let &cpo = s:cpo_save
    unlet s:cpo_save


Niceties to cat / edit the content of an executable (a `which`-able program); the first one will open the specified command/executable in your configured editor

    function ew() { $EDITOR $(which "$1"); }
    complete -c ew -w which

While this other one will simply `cat` its content:

    function cw() { cat $(which "$1"); }
    complete -c cw -w which


Found this interesting thread started by Aaron Swartz, from 2008, on HTTP sessions:!topic/webpy/KHCDcwLpDfA

I was initially confused by it -- and somewhat I still am -- as I could not really understand where Aaron was getting at with this, but then I guess his biggest complaint was that sessions require servers to store information locally, on a per-user basis, breaking the *stateless* idea of the web.  Cookies are good (and Session cookies are good too, as far as I can tell), but Aaron did not seem to like the use of cookies for session management; for that, one should use Digest access authentication (

Cookies, for session management, I think are good, especially because they prevent username/password from flying around -- even if hashed -- between the clients and the server; better to generate a temporary session ID and have clients pass that to the server.

Does this break the Web's stateless-ness?  Yes of course, but is that really a problem?  I mean, if the session is stored on disk, then yes, it's a problem, because the client will have to hit the same server to restore its session (or at least it would be uselessly complicated to manage it); but if one instead used a different session storage system, like Redis, then client requests could be handled by any available server in the pool, which would be able to restore the specific client session just fine.

Anyway, the sad thing is, we are never going to know what Aaron had in mind when he started that thread.


Changed my household booklet spreadsheet to:

- Handle cash withdrawal by entering two expenses: one negative on my bank account, and one positive, as cash bucket
- Changed the payment type sheet to now read the starting balance by summing up all the "positive" expenses

Sooner or later this will have to become an app...

# 2019-10-30
Finally A/I came back online, and I was able to create a request for a mailing list (to use with the other college friends).  Anyway, the request has been created, so hopefully over the following days we will hear back from them...stay tuned!

# 2019-11-01
* xml-emitter: Add support for guid isPermaLink=false (
* xml-emitter: Add support for atom:link with rel="self" (

# 2019-11-02

Update on my journey to a Common Lisp version of plan-rss

Figured out a way to wrap "description" values inside CDATA/pre elements, but I don't think I am 100% satisfied with the solution (which is why I haven't submitted a MR yet).

So basically I added a new key argument to RSS-ITEM, `descriptionWrapIn`, a 2-element LIST, which the function would use to wrap the content of "description" (the first element before, and the second after).  Here is what I came up with:

    (xml-emitter:rss-item date
                          :link *link*
                          :description (plan-day-content day)
                          :descriptionWrapIn '("<![CDATA[<pre>" "</pre>]]>"))

Don't take me wrong, it works, but I just don't think it's a good way of implementing it.

Another approach would be to convert RSS-ITEM into a macro and let it accept a `BODY` that users can specify to further customize the content of the item.  Something along the lines of:

    (xml-emitter:with-rss-item (date :link *link*)
      (xml-emitter::with-simple-tag ("description")
        (xml-emitter::xml-as-is "<![CDATA[<pre>")
        (xml-emitter::xml-out (plan-day-content day))
        (xml-emitter::xml-as-is "</pre>]]>")))

More verbose, but more flexible too; in fact, yesterday's MRs to add `isPermaLink` to `<guid>`, and to support Atom's link with rel=self, would not be required anymore if RSS-ITEM and RSS-CHANNEL-HEADER were macros.

Channel with Atom link:

    (xml-emitter::with-rss-channel-header (*title* *link*
                                           :description (read-channel-description)
                                           :generator *generator*
                                           :image *image*)
      (xml-emitter::empty-tag "atom:link" `(("href" ,*atom-link-self*)
                                            ("rel" "self")
                                            ("type" "application/rss+xml"))))

Item with guid (isPermaLink=false) and description wrapped inside CDATA/pre blocks:

    :do (xml-emitter:with-rss-item (date :link *link* :pubDate "XXX")
          (xml-emitter::simple-tag "guid"
                                   (format NIL "~a#~a" *link* date)
                                   '(("isPermaLink" "false")))
          (xml-emitter::with-simple-tag ("description")
            (xml-emitter::xml-as-is "<![CDATA[<pre>")
            (xml-emitter::xml-out (plan-day-content day))
            (xml-emitter::xml-as-is "</pre>]]>")))

---

TIL: you can pretty print a Common Lisp condition with: `(format t "~a" cond)`

---

CL: unix-opts

Required options seem to conflict with the typical use of `--help` or `--version`:

Here is a possible workaround (adapted from another one found in the thread there):

    (defun parse-opts (&optional (argv (opts:argv)))
      (multiple-value-bind (options)
          (handler-case
              (handler-bind ((opts:missing-required-option
                               (lambda (condition)
                                 (if (or (member "-h" argv :test #'equal)
                                         (member "--help" argv :test #'equal)
                                         (member "-v" argv :test #'equal)
                                         (member "--version" argv :test #'equal))
                                     (invoke-restart 'opts:skip-option)
                                     (progn
                                       (format t "~a~%" condition)
                                       (opts:exit 1))))))
                (opts:get-opts argv))
            (opts:unknown-option (condition)
              (format t "~a~%" condition)
              (opts:exit 1))
            (opts:missing-arg (condition)
              (format t "~a~%" condition)
              (opts:exit 1)))
        (if (getf options :help)
            ...

Important bits:

- We do want to invoke the `OPTS:SKIP-OPTION` restart when any of the special options is set -- this way the library will use `NIL` and move on
- HANDLER-BIND -- and not HANDLER-CASE -- needs to be used to handle `OPTS:MISSING-REQUIRED-OPTION`, especially if we want to call INVOKE-RESTART (with HANDLER-CASE the stack is already unwound when the handler runs, thus the restart established in the function is gone)

---

* xml-emitter: RSS macros ( (also cancelled the other MRs, as this one would now give more control to the user)
* plan-rss: add pubDates (rfc2822)

# 2019-11-03
Mac OS builds on Travis-CI were not working lately, and after some quick Googling around I stumbled upon this:

The following indeed seems to fix the problem:

    before_install:
      # 2019-10-30: Homebrew/brew.rb:23:in `require_relative': ...
      # ... unexpected keyword_rescue, expecting keyword_end
      #
      - if [ "$TRAVIS_OS_NAME" = osx ]; then set -x && brew update-reset && if ! brew install gsl; then cd "$(brew --repo)" && git add . && git fetch && git reset --hard origin/master; fi; set +x; fi

But why the hell should I install `gsl`?  Let's try something else (`brew update-reset` only):

    before_install:
      # 2019-10-30: Homebrew/brew.rb:23:in `require_relative': ...
      # ... unexpected keyword_rescue, expecting keyword_end
      #
      - if [ "$TRAVIS_OS_NAME" = osx ]; then set -x && brew update-reset && set +x; fi

This one works too, so I guess I am going to stick with this for now!

---

Setting up Travis-CI deployment keys can be as easy as running `travis setup releases`, or way messier (that's the price you have to pay if you don't want to share your username/password with `travis`).
1) Add a new Oauth token:, and copy it -- only the `public_repo` scope seems to be required to upload assets
2) Run `cb | travis encrypt iamFIREcracker/plan-rss` (replace `iamFIREcracker/plan-rss` with your public repository)
3) Copy the secured token inside your .travis.yml file

    deploy:
      provider: releases
      api_key:
        secure: XXX_SECURE_XXX
      skip_cleanup: true
      file: bin/$DEST
      on:
        repo: iamFIREcracker/plan-rss
        tags: true

Also, when *all* conditions specified in the `on:` section are met, your build will deploy.  Possible options:

- `repo`: in the form `owner_name/repo_name`.
- `branch`: name of the branch (or `all_branches: true`)
- `tags` -- to have tag-based deploys
- other..

Useful links:

-
-
-
-

---

* plan-rss: v0.0.1
* plan-rss: fix Windows build issue
* plan-rss: remove trailing new-line from
* plan-rss: v0.0.2
* plan-rss: --version not working for dev version (i.e. not taking into account commits since last tag)
* plan-rss: travis ci binaries crashing (fucking had forgotten to check in some changes...)
* plan-rss: v0.0.3
+ plan-rss: ERROR instead of OPTS:EXIT so one can test opts parsing without accidentally closing the REPL

# 2019-11-04
* Use plan-rss (CL version) to generate

# 2019-11-05
* plan-rss: `--disable-pre-tag-wrapping`
* plan-rss: remove `-m`, `-s` options (use longer versions instead)

# 2019-11-06
* plan-rss: v0.0.4
+ travis-ci builds are failing on Windows -- ros is erroring out

# 2019-11-07
Not really sure what happened, but now CL travis-ci builds have started working again...

Anyway, I am going to add the .sh script into every CL repo I am building via travis-ci, as using a single script hosted on gist/github turned out not to be a good idea; I mean, when the script works, fine, but when it doesn't and I have to update it, then I am forced to update all the repos too, as each of them would still be pointing to the old version of the script.
Also, having the script checked into the repo seems like the right thing to do, especially if one wants reproducible builds.

    #!/usr/bin/env bash

    ROSWELL_VERSION=
    ROSWELL_URL="$ROSWELL_VERSION/roswell_${ROSWELL_VERSION}"

    OS_WIN=$(uname -s | grep -e MSYS_NT)
    if [ -n "$OS_WIN" ]; then
      ROSWELL_IN_PATH=$(echo $PATH | grep -F /tmp/roswell)
      if [ -z "$ROSWELL_IN_PATH" ] ; then
        echo "/tmp/roswell not found in \$PATH"
        exit 1
      fi

      echo "Downloading Roswell (v$ROSWELL_VERSION) from: $ROSWELL_URL"
      curl -L "$ROSWELL_URL" \
        --output /tmp/
      unzip -n /tmp/ -d /tmp/
    fi

    # Run roswell's CI script, and since it will find `ros` already available
    # in $PATH, it will not try to build it but instead will install the
    # specified CL implementation + quicklisp
    curl -L "$ROSWELL_VERSION/scripts/"

All good! I also remembered I had asked roswell maintainers about adding Windows support to -- -- and it appears they did it! Switching from the custom .sh (wrapping the official one) to simply using the official one was pretty easy:

1) Define the following ENV variables:

    env:
      - PATH=~/.roswell/bin:$PATH
      - ROSWELL_INSTALL_DIR=$HOME/.roswell

2) Call roswell's CI script:

    install:
      - curl -L | sh

... And that's it! Bonus: you might want to define `LISP` too, and make it point to a specific version -- again, reproducible builds are the right thing to do!

---

* added to plan-rss and cg
* plan-rss: use MD5 of plan entries as content of `` tags
* plan-rss: v0.0.5
+ cg: parse git push output to quickly get a reference of the last commit ID -> ' f5455f40f1..6443447668 story/US2709-Workstreams-Make-workstreams-visible-only-to-their-members -> story/US'

# 2019-11-08

* plan-rss: preserve empty lines -- we are using ~& instead of ~% (use ~^ to avoid the last newline)
* plan-rss: v0.0.6

# 2019-11-10

Typescript interfaces are not available at runtime: they are used at compile time for type-checking, and then thrown away.
So, if we want to implement run-time checking and throw an error if the decorator is being applied to a non-compatible object, we are forced to:

- Define a dummy class implementing the interface
- Create an instance of such class
- Store its `Object.keys()`

---

Downgrading an Angular attribute directive is simply **not** supported: People suggest that you can wrap `downgradeComponent()` calls and change `component.restrict` from 'E' to 'A' or 'EA', but I could not make it work -- and to be honest I am kind of happy that it did not work, as otherwise I would have probably left the hack there, and it would have broken again on the next upgrade of the framework. So what to do about it?

- Migrate the old component in need of the new directive (duh)
- Maintain two directives: one for Angular, and one for Angular.js

---

Fucking figured out how, with AutoHotkey, to activate a window and cycle through all the similar windows, with the same hotkey. At first I thought I would implement it with something like:

- if the focused window does not match the "activation criteria", then activate one that matches it
- otherwise, activate the next matching window

Well, it turns out AutoHotkey already supports all this, via [window groups](; the idea is simple: first you define a group of windows (all with a given title, class, or exe), and then you call `GroupActivate` instead of `WinActivate`. For example:

- Define the ChromeGroup

    GroupAdd, ChromeGroup, ahk_exe chrome.exe

- Activate the group / cycle its windows using `Hyper+k`:

    F15 & k::GroupActivate, ChromeGroup, R

(`R` is for activating the most recent window of the group:

---

? cg: br regexp is not working under Windows

# 2019-11-11

? cb.vim is adding ^M when pasting on Windows -- :.!cb --force-paste doesn't though...
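For what it's worth, the trailing ^M problem comes down to CRLF-terminated clipboard content being split on bare LF. A tiny Python sketch (the sample text is mine) of the difference:

```python
# CRLF-terminated text, like what a Windows clipboard tends to hold
text = "first line\r\nsecond line\r\n"

# Splitting on "\n" alone leaves a trailing "\r" -- which Vim displays as ^M
naive = text.split("\n")
assert naive[0] == "first line\r"

# Splitting on universal newlines strips the "\r\n" pairs cleanly
assert text.splitlines() == ["first line", "second line"]
```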
# 2019-11-12

More Typescript niceties.

Earlier I thought it wasn't possible to enforce (at compile-time) that a decorator be applied only to a class implementing a specific interface, and because of it I wound up implementing run-time checks (check the notes of some previous .plan file). Well, I was wrong!

First you define an interface for the constructor:

    export interface UrlDescriptorConstructor {
      new (...args: any[]): UrlDescriptor;
    }

(for reference, here is what `UrlDescriptor` looks like)

    export interface UrlDescriptor {
      generatePathname(): string;
      generateParams(): any;
    }

Next you change the signature of your decorator to expect the target class to be a constructor for that type:

    export function SearchHrefDirective(target: UrlDescriptorConstructor) {
    ...

# 2019-11-16

Trying to understand why copy-pasting things on Vim/Windows generates trailing ^M...

Let's create a file with `\r\n`s (that's how the clipboard appears to be populated when I copy things from Chrome):

    $ echo -en "OS cliboards are hart...\r\naren't they?\r\n" > clipboard.txt
    $ cat clipboard.txt
    OS cliboards are hart...
    aren't they?
    $ cat clipboard.txt -A
    OS cliboards are hart...^M$
    aren't they?^M$

Let's open up vim now (`vim -u NONE`) and do some experiments:

1) read 'clipboard.txt', under the cursor

    :read clipboard.txt

The content is loaded, no trailing ^M; at the bottom I see:

    "clipboard.txt" [dos format] 2 lines, 40 characters

2) load 'clipboard.txt' using `read!`

    :read! cat clipboard.txt

The content is loaded, no trailing ^M

3) load the content into a variable, then paste it under the cursor

    :let @@ = system('cat clipboard.txt') | exe 'normal p'

The content is loaded, trailing ^M are added

Let's now push the file into the OS clipboard:

    $ cat clipboard.txt | clip.exe

Open up vim again, and:

4) paste from the `*` register (works with `Shift+Ins` too):

    :exe 'normal "*p'

OS clipboard is read, no trailing ^M

5) load the content of the clipboard using `read!`

    :read!
powershell.exe Get-Clipboard

The content is loaded, no trailing ^M

6) Read the content of the clipboard into a variable, then paste it under the cursor

    :let @@ = system('powershell.exe Get-Clipboard') | exe 'normal p'

OS clipboard is read, trailing ^M are added

So what's the difference between *reading* a file, and calling another program that reads it and outputs its content?

---

? add Makefile to my-env/dotfiles, so I can simply run make to rebuild what changed

# 2019-11-18

Angular, AOT, and custom decorators.

Planning on using custom decorators with Angular, and AOT? Well, don't... or at least, beware!

We did implement a custom decorator lately, to add behavior to existing Angular components in a mixins kind of fashion, and while everything seemed to be working just fine on our local boxes -- where the app runs in JIT mode -- once we switched to prod mode -- where AOT is enabled -- all components decorated with our custom decorator stopped working, completely, like Angular did not recognize them at all.

After some fussing around, I noticed that the moment I added a dummy `ngOnInit` to such components, all of a sudden the `ngOnInit` defined by our custom decorator would be run; but none of the other `ng*` methods enriched by our decorator, just `ngOnInit`, the one that we also re-defined inside the Angular component. This led me to this Angular ticket:, where someone seemed to have a pretty good understanding of what was going on:

    Appears the reason here is that Angular places `ngComponentDef` generated
    code before all `__decorate` calls for class. `ɵɵdefineComponent` function
    takes `onInit` as a reference to `Class.prototype.ngOnInit` and overridding
    it later has no any effect.
    ...
Here's a pseudo-code of what happens:

```
class Test {
  ngOnInit() { console.log('original') }
}

const hook = Test.prototype.ngOnInit;
Test.prototype.ngOnInit = () => { console.log(1); }

hook(); // prints 'original'
```

In case of JIT compilation `componentDef` is created lazily as soon as `getComponentDef` gets called. It happens later.

Someone at the end even tried to suggest a possible solution,, but when I noticed that the author suggested that we added `ngOnInit` to the final component, I hoped there would be a better way to work around this (especially because we would be forced to redefine `ngOnInit`, `ngOnChanges`, `ngAfterViewInit` and `ngOnDestroy`). Further digging took me to this other Angular ticket:, where I found a couple of not-so-promising comments, left almost at the start of the thread:

    This is a general "problem" with AOT: It relies on being able to statically
    analyze things like lifecycle hooks, properties with @Input on them,
    constructor arguments, ... As soon as there is dynamic logic that adds new
    methods / properties, our AOT compiler does not know about this.

Immediately followed by:

    As we are moving to AOT by default in the future, this won't go away.
    Sorry, but closing as infeasible.

All right, to recap:

- Typescript compiler, `tsc`, supports decorators just fine
- Angular's compiler is a custom compiler, different from `tsc`
- Angular's compiler is not 100% feature-compatible with `tsc`
- Since Angular's compiler does not properly implement support for custom decorators, it's not possible to use custom decorators that enrich objects' prototypes (Angular would not notice any enriched method)

So it turns out the components annotated with our custom decorator do have to implement the `ng*` methods -- we cannot rely on the decorator adding them -- otherwise the AOT compiler would not know of them, and would not call them at the right time... Sweet! The solution?
Create a base class implementing these `ng*` methods, and have the components annotated with our custom decorator *extend* this base class; the only problem is that we would also have to add `super()` to the component's constructor -- which could be annoying -- but it's either this or no custom decorators, so I guess this will do just fine.

```
export class ObserveInputsBase implements OnChanges, OnDestroy {
  ngOnChanges(changes: SimpleChanges): void {}
  ngOnDestroy(): void {}
}

@Component({ ... })
@CustomDecorator
export class TodoComponent extends ObserveInputsBase {
  constructor() {
    super();
  }
}
```

---

The thing is, :read! appears to be stripping ^M already, irrespective of the use of ++opts:

    :read !powershell.exe Get-Clipboard
    :read ++ff=unix !powershell.exe Get-Clipboard
    :read ++ff=dos !powershell.exe Get-Clipboard

To be honest, I'd expect the second one, where we specify ++ff=unix, to leave trailing ^M, but somehow that's not happening. For your reference (after `vim -u NONE`):

    :verbose set ff

Returns 'fileformat=unix'

    :verbose set ffs

Returns 'fileformats=unix,dos'

So am I correct if I say that there is something "weird" going on with system()? I also found the following at the end of system()'s help page, but somehow the experienced behavior is not the documented one:

    To make the result more system-independent, the shell output
    is filtered to replace <CR> with <NL> for Macintosh, and
    <CR><NL> with <NL> for DOS-like systems.

I even tried to give systemlist() a go, but each entry of the array still has that trailing ^M, so it really seems like Vim cannot properly guess the fileformat from the command output. I am really in the dark here.

# 2019-11-25

plan-rss

Previously, testing PARSE-OPTS was a bit of a pain, as your REPL could end up getting accidentally killed if the passed-in options caused the parsing logic to invoke OPTS:EXIT; it goes without saying that this was very annoying.
The fix for this was simple though:

1) Create a new condition, EXIT
2) ERROR this condition (with the expected exit code) instead of directly invoking OPTS:EXIT
3) Invoke PARSE-OPTS inside HANDLER-CASE, and invoke OPTS:EXIT when the EXIT condition is signalled

This way:

- When run from the command line, OPTS:EXIT will be called, and the app will terminate
- When run from the REPL, an unmanaged condition will be signalled, and the debugger will be invoked

---

* plan-rss: add --max-items option

# 2019-11-26

? plan-rss: use #'string> when merging, and get rid of the call to REVERSE
+ reg-cycle: use localtime() instead of timers

# 2019-12-01

* aoc: 2019/01
+ PR, PRL defined inside .sbclrc, is not available when running the REPL via VLIME --
+ loading :aoc does not work cleanly anymore -- have to load RECURSIVELY first, then all the ARGS-* related functions needed for PARTIAL, then it will eventually work...

# 2019-12-02

Unfortunately I wasted too much time because of a wrong use of RETURN: you can indeed use RETURN (which is the same as RETURN-FROM NIL) only inside a construct that establishes a block named NIL -- DO, DOTIMES, DOLIST, LOOP, PROG -- otherwise you have to use RETURN-FROM followed by the name of the enclosing block (e.g. the name of the surrounding function). You can also create a named block with BLOCK, and RETURN-FROM it:

---

* aoc: 2019/02

# 2019-12-03

* aoc: 2019/03 -- needs some clean up tho
? ONCE-ONLY:

# 2019-12-04

I am trying to look into compilation errors thrown when loading my :AOC project, but even though I shuffled functions/macros around to make sure functions are defined before they are getting used, it still errors out ...

    The function AOC::ARGS-REPLACE-PLACEHOLDER-OR-APPEND is undefined. It is
    defined earlier in the file but is not available at compile-time.

I am afraid I need to re-study how Lisp code is parsed/compiled, and whether there is anything we can do about it. I am on the go right now, so this will have to wait until tomorrow.

---

Why on earth would someone use rlwrap with `--pass-sigint-as-sigterm`?
I mean, all my *-rlwrap scripts had it, but it was kind of dumb if you think about it... Most of the time rlwrap is used with REPLs, and REPLs usually catch SIGINT to interrupt running operations (e.g. break out of an infinite loop), so why would someone kill the REPL, instead of letting the REPL deal with it? I am sure there are legit use cases for this, it's just that I cannot think of any.

---

Damn, my Lisp is really rusty! Today I re-learned about:

- :THEREIS and :ALWAYS keywords on LOOP

---

A few days ago I threw PR/PRL inside .sbclrc hoping I could handily use them from any REPL I had open. Well, I was right and wrong at the same time: I could indeed use PR/PRL in the REPL, but only while inside the :CL-USER package. So I moved those macros inside a new file, made a package out of them (pmdb, poor man's debugger, lol), and loaded it from my .sbclrc file; after that I was finally able to use PR/PRL from any package -- too bad I had to use the package qualifier to use them (i.e. PMDB:PR). To fix that, I went on and added :pmdb to the :use form of the package I was working on. Alternatively, I could have created two vim abbreviations like:

    inoreabbr pr pmdb:pr
    inoreabbr prl pmdb:prl

but for some reason, abbreviations ain't working while editing Lisp files -- I bet on parinfer getting in the way...

---

* aoc: 2019/04
? SCASE: a switch for strings -- and maybe use it on 2019 day 03
? figure out why abbreviations are not working on Lisp files

# 2019-12-05

* aoc: 2019/05 -- fucking opcodes :-O

# 2019-12-06

* aoc: 2019/06

# 2019-12-07

* aoc: 2019/07 -- part2 missing still, fucking opcodes

# 2019-12-08

* aoc: 2019/08
+ aoc: 2019/08/2 -- figure out a way to test for the output string

# 2019-12-09

* aoc: 2019/07
+ aoc: 2019/05 -- use intcode as per 2019/07
+ aoc: 2019/08 -- add test for output part2

# 2019-12-10

Wasted the night trying to get a solution for AOC 2019/10, but somehow it ain't working.
The logic is simple:

    for each asteroid, as 'curr'
      initialize 'remaining' as **all** asteroids
      for each remaining asteroid, as 'other'
        calculate the line of sight between 'curr' and 'other'
        remove from 'remaining' any asteroid hidden **behind** 'other'
    maximize 'remaining'

So the biggest challenge here would be to calculate the sight direction vector between 'curr' and 'other'. A few things to take into account though:

- when dir is (0 0): return (0 0)
- when dir-x is 0: return the vertical unit vector (preserving direction)
- when dir-y is 0: return the horizontal unit vector (preserving direction)
- when dir-y divides dir-x: reduce it (so we can return (2 1) instead of (4 2))
- when dir-x divides dir-y: reduce it (so we can return (1 2) instead of (2 4))
- otherwise return dir, as-is

All good, except it doesn't work -- and I have no idea why...

# 2019-12-11

* aoc: 2019/09 -- "parameters in relative mode can be read from or written to." GODDAMMIT!
* aoc: 2019/10/a -- my direction-reduce function was able to reduce (5, 0) into (1, 0), (5, 5) into (1, 1), (2, 4) into (1, 2), but **not** (4 6) into (2 3) -,-

# 2019-12-12

* aoc: 2019/10/2 -- phases are hard
* aoc: 2019/11 -- phases are hard
* aoc: 2019/12/1
~ aoc: 2019/10 needs some serious refactoring around phases, and vector math
+ aoc: 2019/11/2 -- figure out a way to test for the output string

# 2019-12-13

My virtualbox date and time is getting out of sync when the laptop is suspended. A temporary fix was to 1) install ntp, 2) restart the service every hour:

    $ crontab -l
    28 * * * 0 sudo service ntp restart

But there has to be a better way to fix this / prevent this from happening

---

* aoc: 2019/13 -- intcode problem, with some AI (well ... a simple `(- ball-x bar-x)` seems to do just fine)

# 2019-12-14

* aoc: 2019/14/1
? refactor partial-1/partial-2 to accept a form, instead of a FN and ARGS
? also, why is the following form not compiled?
    (partial-2 #'* -1 (- _1 _2))

It spits the following error:

    The value -1 is not of type (OR (VECTOR CHARACTER) (VECTOR NIL) BASE-STRING SYMBOL CHARACTER)

# 2019-12-15

You cannot really schedule `sudo` commands with crontab, you'd better use root's crontab instead... So, to fix datetime drifting on Virtualbox:

    # crontab -l
    28 * * * * service ntp restart

---

AOC 2019/12: That was freaking painful -- for me at least.

Part 1 was easy: parse input, apply gravity, apply velocity, repeat, calculate energy. Part 2? Not quite! At first I tried to look at the output of some of the variables to figure out if there were any patterns, and noticed:

- the *barycenter* of the system (i.e. sum of all the xs, sum of all the ys, sum of all the zs) was constant
- the overall velocity of the system (i.e. sum of all the vxs, sum of all the vys, sum of all the vzs) was 0

But unfortunately none of the above turned out to be useful/usable. Something else that I played a lot with was looking at each moon separately: I would still simulate the whole universe of course, but inspect one moon at a time, again, hoping to find patterns to exploit. I was hoping that each moon had a different cycle, and that the overall system cycle could be calculated as the LCM of the single moon cycles, but each moon was influencing the others, so my logic was flawed. Anyway, I put this on hold, tried again, and eventually gave up and looked for some help on Reddit, and this is what brought me to the solution [0]:

    Note that the different dimensions are independent of each other. X's don't
    depend on y's and z's.

The idea that the overall cycle could be calculated as the LCM of other *independent* systems' cycles was right; unfortunately I had failed to understand what these *independent* systems were.

[0]

---

* aoc: 2019/15
* aoc: 2019/12/02
+ aoc: 2019/15: figure out a proper way to explore the map (now it goes on for 50000 steps)
+ aoc: 2019/15: why are my BFS costs off by 1?
+ aoc: print-hash-table-as-grid

# 2019-12-16

* aoc: 2019/16/1
+ define CIRCULAR, and use it for 2019/16, and 2018/01:

# 2019-12-18

* aoc: 2019/17
* aoc: 2019/16/2
~ find a proper solution for 2019/17/2 -- right now I got the solution by manually looking at the map, and figuring out program inputs. Also, there is no proper solution at the moment :)
? explain the realization about 2019/16/2

# 2019-12-19

* aoc: 2019/19
~ aoc: proper solution for 2019/19/2 (figure out how to calculate the direction of the upper border of the beam -- for now I manually calculated it by looking at the output)

# 2019-12-20

* aoc: 2019/14/2
+ aoc: 2019/14/2: better heuristic (binary search between cargo-limit and 0)
+ create a function to return the sortable-difference between two numbers: a < b ? -1 : a > b ? 1 : 0, and use it for 2019/14/2, phases, robots

# 2019-12-21

* aoc: 2019/20

# 2019-12-22

* aoc: 2019/21

# 2019-12-23

* aoc: 2019/23

# 2019-12-24

* aoc: 2019/24/1

# 2019-12-25

* aoc: 2019/24/2
~ aoc: 2019/24: better logic for recursive neighbors
~ aoc: 2019/24: how to efficiently add levels to the state? currently we end up with way more states, and that takes processing time

# 2019-12-26

* aoc: 2019/22/1
? aoc: 2019/22 -- explain the math behind it, the modeling

# 2019-12-27

* aoc: 2019/22/2
* aoc: 2019/25
? aoc: 2019/22/2: explain all that math stuff, and move some of it inside utils.lisp
~ aoc: 2019/25: automate it?!

# 2019-12-30

* aoc: 2019/18/1
+ aoc: 2019/18/1: 20s to complete part 1...
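The sortable-difference helper mentioned in the 2019-12-20 notes (a < b ? -1 : a > b ? 1 : 0) is the classic three-way comparison. A quick Python sketch of the idea (the function name is mine):

```python
def spaceship(a, b):
    """Three-way comparison: -1 if a < b, 1 if a > b, 0 otherwise."""
    return (a > b) - (a < b)  # bools coerce to 0/1, so no branching needed

assert [spaceship(1, 2), spaceship(2, 1), spaceship(3, 3)] == [-1, 1, 0]
```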
# 2019-12-31

Migrated all the mercurial repositories from Bitbucket to Github:

    cd ~/tmp
    git clone
    hg clone ssh://
    git init logfilter.git
    ../fast-export/ -r ../logfilter
    git remote add origin
    git push origin --all
    git push origin --tags
    # repeat

---

* migrated logfilter to github -- thanks bitbucket
* migrated strappo-api to github -- thanks bitbucket
* migrated weblib to github -- thanks bitbucket
* migrated strappo-analytics to github -- thanks bitbucket
* migrated strappon to github -- thanks bitbucket
* migrated getstrappo to github -- thanks bitbucket
* migrated deploy-strappo to github -- thanks bitbucket
* migrated osaic to github -- thanks bitbucket
* migrated expensio to github -- thanks bitbucket
* migrated medicinadellosport to github -- thanks bitbucket
* migrated webpy-facebook-login to github -- thanks bitbucket
* aoc: 2019/18/2 -- this is fucking it!
+ aoc: 2019/18/2: 400s to complete part 2... I blame it on HEAPQ
+ aoc: 2019/18/2: don't manually update the input to generate the 4 sub-vaults
+ aoc: add DIGITS function to utils, and use it inside 2019/04 and 2019/16
+ aoc: 2019/17: use PRINT-HASH-MAP-GRID
+ aoc: 2019/20: figure out a way to programmatically tell whether a portal is on the outside or on the inside
~ aoc: add goal-p to search algo, and review dijkstra and bfs..
+ aoc: add FLATTEN to utils
+ common lisp: use quickutils!
? update logfilter repo references to the new github location
? update osaic repo references to the new github location

# 2020-01-02

?
? PARTIAL-1: evaluate only once, and not every time the lambda function is called

# 2020-01-03

* add quickutils to aoc
~

# 2020-01-04

* add ADJACENTS to aoc.utils
* change BFS to use QUEUE instead of the poor man's HEAPQ

# 2020-01-05

Imported yet another trick from SJL, this time to automatically load all the .lisp files contained in a directory -- this way I don't have to update aoc.asd every time I add a new day.
First you define a new class, and then implement the generic function COMPONENT-CHILDREN:

    (defclass auto-module (module) ())

    (defmethod component-children ((self auto-module))
      (mapcar (lambda (p)
                (make-instance 'cl-source-file
                               :type "lisp"
                               :pathname p
                               :name (pathname-name p)
                               :parent (component-parent self)))
              (directory-files (component-pathname self)
                               (make-pathname :directory nil
                                              :name *wild*
                                              :type "lisp"))))

Next, you use `:auto-module` in your components list as follows (2nd line):

    (:file "intcode")
    (:auto-module "2017")
    (:module "2018" :serial t
     :components ((:file "day01")

---

TIL, VALUES in Common Lisp is SETF-able!

    (loop :with remainder
          :do (setf (values n remainder) (truncate n base))
          :collect remainder
          :until (zerop n))

---

* vim: add mapping to (in-package) the current file in the repl
+ quickutil: pull request to fix a problem with DIGITS: use `n` instead of `integer` when calling :until
~ quickutil: pull request to change DIGITS to optionally return digits in the reverse order (this way I can ditch DIGITS-REVERSE)

# 2020-01-11

* Fix DIGITS compilation error: `integer` is not a variable #64

# 2020-01-15

Today I installed `roswell` again, because I wanted to test `xml-emitter` with abcl, but somehow the runtime seemed corrupted:

    $ brew install roswell
    ... boring output
    $ ros install sbcl
    sbcl-bin/1.4.0 does not exist.stop.
    $ ros help
    $ sbcl-bin/1.4.0 does not exist.stop.
    $ Making core for Roswell...
    $ sbcl-bin/1.4.0 does not exist.stop.
    $ sbcl-bin/1.4.0 does not exist.stop.

Googling for the error I stumbled upon this GH issue:, and after nuking ~/.roswell, `ros setup` successfully installed the latest version of sbcl.

---

I was erroneously trying to use `cl-travis` instead of plain `roswell`. Why erroneously? `cl-travis` uses `CIM`:, which turns out to have been deprecated in [2017]( What's the replacement? `ros` and similar -- so better to use `ros` directly.
The command `ros`, however, won't be available unless you tweak a couple of ENV vars:

    global:
      - PATH=~/.roswell/bin:$PATH
      - ROSWELL_INSTALL_DIR=$HOME/.roswell

# 2020-01-16

A couple of TILs (not all from today really, but whatever):

- `:serial T` inside ASDF systems makes sure that components are compiled in order; otherwise, in case of inter-file dependencies, one is forced to use `:depends-on`, which is annoying
- you need to run `(ql:update-dist "...")` to download an updated version of a dist, or `(ql:update-all-dists)` to download them all

---

Fixed xml-emitter#6, and added tests support to the project

Fixing #6 was pretty easy; in fact, double-quoting WITH-RSS optional arguments was all we had to do. Adding unit tests support to the module, on the other hand, wound up being not as easy as I initially thought it would be, but still, I am quite happy with the final result. I split the tests support into 3 different sub-activities:

1) Add a package (and a system) for the tests: `:xml-emitter/tests`

2) Add a Makefile to run the tests

This turned out to be more annoying than expected:

- The interactive debugger needs to be disabled when tests fail, or the CI job would hang forever
- The spawned REPL needs to be shut down when tests pass, or again, it would cause the CI job to hang forever
- Certain implementations did not exit with a non-0 code when tests failed

And all this needed to be done outside of the testing system (hence in the Makefile) because I still wanted to leave things interactive when people tried to run the tests with `(asdf:test-system ...)`

3) Add a .travis.yml file to test the library with multiple implementations

The result:

- Makefile

    .PHONY: test test-sbcl test-ros

    lisps := $(shell find .
-type f \( -iname \*.asd -o -iname \*.lisp \)) cl-print-version-args := --eval '\ (progn \ (print (lisp-implementation-version)) \ (terpri))' cl-test-args := --eval '\ (progn \ (ql:quickload :xml-emitter/tests :verbose T) \ (let ((exit-code 0)) \ (handler-case (asdf:test-system :xml-emitter) \ (error (c) \ (format T "~&~A~%" c) \ (setf exit-code 1))) \ (uiop:quit exit-code)))' all: test # Tests ----------------------------------------------------------------------- test: test-sbcl test-sbcl: $(lisps) sbcl --noinform $(cl-test-args) test-ros: $(lisps) ros run $(cl-print-version-args) $(cl-test-args) - .travis.yml language: generic env: global: - PATH=~/.roswell/bin:$PATH - ROSWELL_INSTALL_DIR=$HOME/.roswell - ROSWELL_VERSION= - ROSWELL_URL="$ROSWELL_VERSION/scripts/" matrix: - LISP=abcl - LISP=allegro - LISP=ccl - LISP=clisp - LISP=cmucl - LISP=ecl - LISP=sbcl install: - curl -L $ROSWELL_URL | sh script: - make test-ros This is it! # 2020-01-17 Finally got around to refactor 2019/15, and fix the +1 issue The issue was pretty dumb... when reading 2 from the robot I ended up setting oxygen with the current position, and not with the position after the movement! Thanks to some Reddit folk, I ended up simplifying the solution by implementing a simple dfs to explore the map (no more 5000 ticks random walking) It feels good! # 2020-01-18 EVAL-WHEN is there to tell the file compiler whether it should execute code at compile-time (which it usually does not do for function definitions) and whether it should arrange the compiled code in the compiled file to be executed at load time. This only works for top-level forms. Common Lisp runs the file compiler (remember we are talking about compiling files, not executing in a REPL) in a full Lisp environment and can run arbitrary code at compile time (for example as part of the development environment's tools, to generate code, to optimize code, etc.). 
If the file compiler wants to run code, then the definitions need to be known to the file compiler. Also remember that during macro expansion the code of the macro gets executed to generate the expanded code. All the functions and macros that the macro itself calls to compute the code need to be available at compile time. What does not need to be available at compile time is the code the macro form expands to. This is sometimes a source of confusion, but it can be learned, and then using it isn't too hard.

But the confusing part here is that the file compiler itself is programmable and can run Lisp code at compile time. Thus we need to understand the concept that code might be running in different situations: in a REPL, at load time, at compile time, during macro expansion, at runtime, etc.

Also remember that when you compile a file, you need to load the file afterwards, if the compiler needs to call parts of it later. If a function is just compiled, the file compiler will not store the code in the compile-time environment, and also not after finishing the compilation of the file. If you need the code to be executed, then you need to load the compiled code -> or use EVAL-WHEN -> see below.

---

All right, I replaced a bunch of `(if (< a b) -1 (if (> a b) 1 0))` expressions with `(signum (- a b))`, but I wonder: doesn't a mathematical/cs function exist to refer to this? It seems too common a pattern not to have an agreed name -- Its name is: three-way comparison, a.k.a. spaceship operator

---

Figured out how to customize the Clozure CL REPL prompt... it couldn't be easier, could it?!

    ;; What sorcery is this?
    ;; In summary:
    ;;
    ;; - Use the first argument as conditional (I believe it represents the number
    ;;   of pending stacktraces or something)
    ;; - If none, call PACKAGE-PROMPT -- we need to re-add one argument to the stack
    ;;   or invoking user defined functions would fail
    ;; - If there are pending exceptions to process, print the lot followed by `]`
    ;;
    ;; FORMAT directives 101
    ;;
    ;; ~[...~] Conditional expression. This is a set of control strings, called
    ;;   clauses, one of which is chosen and used. The clauses are separated by ~;
    ;;   and the construct is terminated by ~]. Also, ~:; can be used to mark a
    ;;   default clause
    ;; ~/name/ calls the user defined function, NAME
    ;; ~:* ignores backwards; that is, it backs up in the list of arguments so
    ;;   that the argument last processed will be processed again. ~n:* backs up
    ;;   n arguments.
    ;;
    ;; Source:
    (setf ccl::*listener-prompt-format* "~[~:*~/package-prompt/~:;~:*~d]~]")

---

~ aoc: abcl: hangs on 2018/22
~ aoc: allegro: attempts to call RECURSIVELY, which is undefined....
~ aoc: ccl: hangs on 2018/22
~ aoc: clisp: =: NIL is not a number (last 1AM test appears to be 2019/18)
~ aoc: cmucl: Structure for accessor AOC/2019/24::TILES is not a AOC/2019/24::ERIS: NIL
~ aoc: ecl: hangs on 2019/20

# 2020-01-19

All right, I feel dumb... For almost one year I have been dreaming of a `WITH-SLOTS` macro to help with structures and not only classes, and today I stumbled upon a piece of Common Lisp code where the author was using `WITH-SLOTS` with a structure. "That has to be a homemade/library macro, right?" I thought. Well, it turns out `WITH-SLOTS` works with slots, and structures use slots too, so I have been waiting all this time... for nothing. Let's look at the bright side of this, though: now I can finally use it!

    (defstruct foo bar baz)
    => FOO

    (let ((foo (make-foo :bar 123 :baz 'asd)))
      (with-slots ((bar1 bar) (baz1 baz)) foo
        (format T "~a ~a~&" bar1 baz1)))
    => 123 ASD
    NIL

Reference:

---

?
implement topaz's paste, in common lisp: # 2020-01-24 Got presented with the following info messages this morning, while booting up my dev box: [default] GuestAdditions seems to be installed (6.0.14) correctly, but not running. Got different reports about installed GuestAdditions version: Virtualbox on your host claims: 5.2.8 VBoxService inside the vm claims: 6.0.14 Going on, assuming VBoxService is correct... Got different reports about installed GuestAdditions version: Virtualbox on your host claims: 5.2.8 VBoxService inside the vm claims: 6.0.14 Going on, assuming VBoxService is correct... VirtualBox Guest Additions: Starting. VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules. This may take a while. VirtualBox Guest Additions: To build modules for other installed kernels, run VirtualBox Guest Additions: /sbin/rcvboxadd quicksetup VirtualBox Guest Additions: or VirtualBox Guest Additions: /sbin/rcvboxadd quicksetup all VirtualBox Guest Additions: Kernel headers not found for target kernel 4.15.0-72-generic. Please install them and execute /sbin/rcvboxadd setup VirtualBox Guest Additions: Running kernel modules will not be replaced until the system is restarted All right then, let's install them and see what happens: $ sudo apt-get install linux-headers-$(uname -r) ... $ sudo /sbin/rcvboxadd setup VirtualBox Guest Additions: Starting. VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules. This may take a while. VirtualBox Guest Additions: To build modules for other installed kernels, run VirtualBox Guest Additions: /sbin/rcvboxadd quicksetup VirtualBox Guest Additions: or VirtualBox Guest Additions: /sbin/rcvboxadd quicksetup all VirtualBox Guest Additions: Building the modules for kernel 4.15.0-72-generic. update-initramfs: Generating /boot/initrd.img-4.15.0-72-generic VirtualBox Guest Additions: Running kernel modules will not be replaced until the system is restarted Reload the virtual box and.... 
    $ vagrant reload
    ...
    [default] GuestAdditions 6.0.14 running --- OK.
    ==> default: Checking for guest additions in VM...
    ...

# 2020-01-26

Few Lisp book recommendations (in this order):

* Common Lisp: A Gentle Introduction to Symbolic Computation
* Practical Common Lisp
? Paradigms of Artificial Intelligence Programming (PAIP)
+ Common Lisp Recipes (CLR):
+ Patterns of Software, by Richard P. Gabriel
? On Lisp (Paul Graham):
? Let Over Lambda:

---

* finished reading [Practical Common Lisp](

# 2020-01-31

* finished reading [The Mythical Man-Month](

# 2020-02-02

Finally got around to enhancing pmdb.lisp (Poor Man's debugger) with a few
macros to facilitate profiling of slow functions -- not really debugging per
se, I will give you that, but since I did not have a Poor Man's profiler
package yet and I wanted to get this done quickly, I figured I could shove it
inside pmdb.lisp and start using it (I can always refactor it later).

Anyway, these are the macros that I added:

- DEFUN/PROFILED: a DEFUN wrapper to create functions that, when executed,
  would keep track of their execution times
- PROFILE: a macro (similar to TIME) to evaluate a given expression and output
  information about profiled functions (e.g. % of CPU time, execution time,
  number of calls)

Say you TIMEd an expression and decided you wanted to try and understand where
the bottlenecks are; the idea is to replace the DEFUNs of all the potential
bottleneck-y functions with DEFUN/PROFILED, and then replace TIME with
PROFILE.
For example, say you started from something like:

    (defun fun-1 (&aux (ret 0)) (dotimes (n 10000) (incf ret)) ret)
    (defun fun-2 (&aux (ret 0)) (dotimes (n 2000000) (incf ret)) ret)
    (defun fun-3 (&aux (ret 0)) (dotimes (n 200000000) (incf ret)) ret)

    (defun main () (fun-1) (fun-2) (fun-3))

To measure the execution time of MAIN, you would probably do the following:

    (time (main))
    ; Evaluation took:
    ;   0.396 seconds of real time
    ;   0.394496 seconds of total run time (0.392971 user, 0.001525 system)
    ;   99.49% CPU
    ;   910,697,157 processor cycles
    ;   0 bytes consed
    ;
    ; 200000000

Perfect! But what's the function that kept the CPU busy the most? Replace all
the DEFUNs with DEFUN/PROFILED:

    (defun/profiled fun-1 (&aux (ret 0)) (dotimes (n 10000) (incf ret)) ret)
    (defun/profiled fun-2 (&aux (ret 0)) (dotimes (n 2000000) (incf ret)) ret)
    (defun/profiled fun-3 (&aux (ret 0)) (dotimes (n 200000000) (incf ret)) ret)

Replace TIME with PROFILE, et voila', you now have a better idea of where your
bottleneck might be:

    (profile (main))
    ; Profiled functions:
    ;   98.00% FUN-3: 449 ticks (0.449s) over 1 calls for 449 (0.449s) per.
    ;   2.00% FUN-2: 8 ticks (0.008s) over 1 calls for 8 (0.008s) per.
    ;   0.00% FUN-1: 0 ticks (0.0s) over 1 calls for 0 (0.0s) per.
    ;
    ; 200000000

It works with nested function calls too:

    (defun/profiled fun-4 (&aux (ret 0)) (dotimes (n 200000000) (incf ret)) ret)
    (defun/profiled fun-5 (&aux (ret 0)) (dotimes (n 200000000) (incf ret)) (fun-4) ret)

    (defun main () (fun-1) (fun-2) (fun-3) (fun-4) (fun-5))

    (profile (main))
    ; Profiled functions:
    ;   39.00% FUN-5: 880 ticks (0.88s) over 1 calls for 880 (0.88s) per.
    ;   21.00% FUN-4: 464 ticks (0.464s) over 1 calls for 464 (0.464s) per.
    ;   20.00% FUN-5/FUN-4: 451 ticks (0.451s) over 1 calls for 451 (0.451s) per.
    ;   19.00% FUN-3: 430 ticks (0.43s) over 1 calls for 430 (0.43s) per.
    ;   0.00% FUN-2: 9 ticks (0.009s) over 1 calls for 9 (0.009s) per.
    ;   0.00% FUN-1: 0 ticks (0.0s) over 1 calls for 0 (0.0s) per.
    ;
    ; 200000000

Percentages would start to lose meaning here (FUN-5/FUN-4's execution time
would be included in FUN-5's), but I am still quite happy with the overall
result.

PS. If you read PCL, you should notice quite a few similarities with the
WITH-TIMING macro explained at the end of the book.

---

I am going to leave it here (another e-learning platform):

# 2020-02-03

Vim was segfaulting when opening lisp files, and after bisecting my
ft_commonlisp `autogroup` I found the culprit:

    au syntax lisp RainbowParenthesesLoadRound

I dug deeper and found the line in the plugin that Vim was tripping over:

    for each in reverse(range(1, s:max_depth))

It did not take me long to get to this vim issue:, and replacing
`reverse(range(...))` with a simple `range()` with negative 'stride' did solve
the problem for me:

    diff --git a/autoload/rainbow_parentheses.vim b/autoload/rainbow_parentheses.vim
    index b4963af..455ad22 100644
    --- a/autoload/rainbow_parentheses.vim
    +++ b/autoload/rainbow_parentheses.vim
    @@ -106,7 +106,7 @@ func! rainbow_parentheses#load(...)
         let b:loaded = [0,0,0,0]
       endif
       let b:loaded[a:1] = s:loadtgl && b:loaded[a:1] ? 0 : 1
    -  for each in reverse(range(1, s:max_depth))
    +  for each in range(s:max_depth, 1, -1)
       let region = 'level'. each .(b:loaded[a:1] ? '' : 'none')
       let grp = b:loaded[a:1] ? 'level'.each.'c' : 'Normal'
       let cmd = 'sy region %s matchgroup=%s start=/%s/ end=/%s/ contains=TOP,%s,NoInParens fold'

Opened a MR for this:

# 2020-02-06

I got tired of the colors of my prompt (probably because the default color
palettes used by most terminal applications are kind of terrible, and I don't
want to bother finding a new colorscheme), so here is what I am experimenting
with:

- username: bold (previously pink)
- hostname: cyan / pink (previously yellow / blue)
- path: underline (previously green)

It's definitely less colored than it used to be, and I am pretty sure I will
tweak it again over the following days, but let's see if it works.

---

+

# 2020-02-08

* finished reading [Remote](
? bunny1 in common lisp

# 2020-02-13

+ remap '/" to F13, and then use it as a hyper when held down (e.g. +S as CTRL)
+ remap '[" to F14, and then use it as a hyper when held down (e.g. +w to move
  forward one word, +b to move backward one word)

# 2020-02-14

The Old Way Using Decorate-Sort-Undecorate

This idiom is called Decorate-Sort-Undecorate after its three steps:

- First, the initial list is decorated with new values that control the sort
  order.
- Second, the decorated list is sorted.
- Finally, the decorations are removed, creating a list that contains only the
  initial values in the new order.

From:

Spent a good 30 mins trying to find this reference... somehow I remembered it
was called Wrap-Sort-Unwrap, and Google wasn't returning anything meaningful.
After the fact, I was able to find this in the Python Cookbook Recipes by
searching for pack/unpack.

# 2020-02-15

All right, I finally got around to getting `sic(1)` up and running on my
laptop, and I am quite... OK with the result. Why sic and not irssi, or
weechat, you might ask; I don't know, I guess I just got bored...
Anyway, [sic]( stands for simple irc client, and it really lives up to its
name:

- lets you connect to a server (plain connection, no SSL)
- supports `PASS` authentication
- outputs a single stream of messages (yes, all the messages received on the
  different channels you joined are just... commingled)

That's it, really.

Suckless tools might seem... lacking at first -- well, I guess they really are
-- but there is a reason for that: they usually are highly composable, which
means that most of the time you can combine them with other tools (suckless
tools, or unix tools in general) and, say, implement a feature that was
originally missing.

For example, `sic(1)` only supports plaintext connections; not a problem:
create a secure TCP connection using `socat(1)`, and have `sic(1)` connect to
the local relay instead:

    socat tcp-listen:6697
    sic -h -p 6697 -n your-nickname

Or, `sic(1)` does not support colors, but everyone would agree that a certain
level of color does not hurt; easy, pipe `sic(1)`'s output into `sed(1)`, et
voila:

    sic -h -p 6697 -n your-nickname \
        | sed "s/your-nickname/${RED}your-nickname${DEFAULT}/"

Or, you want to store every message ever received or sent? Easy, that's what
`tee(1)` is for:

    sic -h -p 6697 -n your-nickname \
        | tee -a ~/irc/server.log \
        | ...

Or you can add a typing history using `rlwrap(1)`, or on-connect auto-commands
(e.g. join #vim) by using a combination of shell pipes, `echo(1)` and
`cat(1)`.

Anyway, I guess you got the idea... and if anything, I hope I made you a bit
curious about this whole _compose-all-the-things_ philosophy.

These are the scripts I ended up creating -- feel free to poke around and tell
me if anything interesting comes into your mind:

- [sicw](
- [sicw-rlwrap](

---

Say you have an interactive process like a REPL or something; how can you send
some data/text to it before entering interactive mode, waiting for the user to
type anything?
For example: how can you pass some _auto-commands_ like `:j #suckless` and
`:j #lisp` to `sic(1)`, so that you don't have to manually type them every
time you connect to the server? Note, you still want to be able to manually
send messages to the server after the auto-commands have been sent.

Well, I found the solution somewhere on StackOverflow (unfortunately I cannot
find the link to the page anymore):

    #!/usr/bin/env bash

    echo -en "$@"
    cat -

What this `input` script does is:

- `echo(1)` its input arguments

      $ echo -en 'foo\nbar\n'
      foo
      bar

- Then start reading from stdin (i.e. enter interactive mode)

Clever, isn't it?

    $ input ':j #suckless\n' ':j #vim\n' | sic

---

* cb: added support for xsel(1) -- not that I needed it, but since I stumbled
  upon it when looking at other people's dotfiles repos, I figured I should
  just steal it
+ add SelectAndSendToTerminal("^Vap") to my .vimrc -- currently there is
  `SendSelectionToTerminal`, but it forces me to use nmap/xmap instead of
  nnoremap/xnoremap, and I have to manually save/restore the state of the
  window (annoying)
? add kr to the family of 2-letters-named tools, to run `keyring` remotely --
  will have to be careful to make sure people on the same network cannot snoop
  my passwords ;-)

# 2020-02-16

+ R-type notation for documenting Javascript functions
? Auto-curry recursive magic spell (from Composing Software)

# 2020-03-03

When we estimate, it is worth considering worst-case scenarios as well as
best-case scenarios in order to provide a more accurate estimate. A formula I
often use in estimation templates is the following (derived from a book I read
a long time ago):

    [Best + (Normal * 2) + Worst] / 4

Keep a risk register alongside the estimates; review the estimates and the
risk register at the end of the sprint.

Estimating work is never easy. No two tasks are the same, and no two people
are the same. Even if they were, the environment (code base) would most
certainly not be the same.
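For illustration, the weighted formula above can be sketched as a tiny Python helper (the `estimate` name and the sample inputs are hypothetical, not from any particular estimation template):

```python
def estimate(best, normal, worst):
    # Weighted three-point estimate, as per the formula above:
    # the normal case counts twice, best and worst once each.
    return (best + normal * 2 + worst) / 4

# e.g. best case 2 days, normal 4 days, worst 10 days
print(estimate(2, 4, 10))  # -> 5.0
```

The weighting pulls the estimate towards the normal case, while still letting a bad worst-case scenario drag it up.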
Getting to a precise estimation of work is almost impossible without doing the
work itself.

Source:

# 2020-03-05

Convert markdown files to PDF on MacOS:

    $ brew install pandoc
    $ brew install homebrew/cask/basictex
    $ ln -s /Library/TeX/Root/bin/x86_64-darwin/pdflatex /usr/local/bin/pdflatex
    $ pandoc -o multi-tenancy.pdf

# 2020-03-06

TIL: when you have an unresponsive SSH connection, you can kill it with `~.`

Also, you can type `~?` and get the list of supported escape codes:

    remote> cat # then type ~?
    Supported escape sequences:
     ~.   - terminate connection (and any multiplexed sessions)
     ~B   - send a BREAK to the remote system
     ~C   - open a command line
     ~R   - request rekey
     ~V/v - decrease/increase verbosity (LogLevel)
     ~^Z  - suspend ssh
     ~#   - list forwarded connections
     ~&   - background ssh (when waiting for connections to terminate)
     ~?   - this message
     ~~   - send the escape character by typing it twice

---

* finished reading [Composing Software](
? I realized my kindle highlights for books I haven't purchased through amazon
  are not getting displayed here: Need to figure out a way to extract them

# 2020-03-07

Today I finally got around to toying a bit with MacOS Karabiner-Elements, and
customized my configuration even further so that:

- `'` behaves exactly as `` (i.e. ``-modifier when held down, `` when tapped)
- `[` behaves exactly as `` (i.e. _movement_-modifier when held down, `` when
  tapped)

What's the deal with these _movement_-modifiers?
Just to give you an idea:

- ` + h` -> ``
- ` + j` -> ``
- ` + k` -> ``
- ` + l` -> ``
- ` + p` -> _paste from clipboard_
- ` + b` -> _one word backward_
- ` + w` -> _one word forward_

For the record, these are the relevant blocks of my karabiner.json file that
enabled this:

    {
      "description": "Tab as modifier",
      "manipulators": [
        {
          "description": "tab as modifier",
          "from": { "key_code": "tab" },
          "to": [ { "set_variable": { "name": "tab_modifier", "value": 1 } } ],
          "to_after_key_up": [ { "set_variable": { "name": "tab_modifier", "value": 0 } } ],
          "to_if_alone": [ { "key_code": "tab" } ],
          "type": "basic"
        },
        {
          "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ],
          "description": "tab + h as left_arrow",
          "from": { "key_code": "h" },
          "to": [ { "key_code": "left_arrow" } ],
          "type": "basic"
        },
        {
          "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ],
          "description": "tab + j as down_arrow",
          "from": { "key_code": "j" },
          "to": [ { "key_code": "down_arrow" } ],
          "type": "basic"
        },
        {
          "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ],
          "description": "tab + k as up_arrow",
          "from": { "key_code": "k" },
          "to": [ { "key_code": "up_arrow" } ],
          "type": "basic"
        },
        {
          "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ],
          "description": "tab + l as right_arrow",
          "from": { "key_code": "l" },
          "to": [ { "key_code": "right_arrow" } ],
          "type": "basic"
        },
        {
          "conditions": [ { "name": "tab_modifier", "type": "variable_if", "value": 1 } ],
          "description": "tab + p as shift insert",
          "from": { "key_code": "p" },
          "to": [ { "key_code": "v", "modifiers": [ "left_command" ] } ],
          "type": "basic"
        }
      ]
    }

    {
      "description": "open_bracket as modifier",
      "manipulators": [
        {
          "description": "open_bracket as modifier",
          "from": { "key_code": "open_bracket" },
          "to": [ { "set_variable": { "name": "open_bracket", "value": 1 } } ],
          "to_after_key_up": [ { "set_variable": { "name": "open_bracket", "value": 0 } } ],
          "to_if_alone": [ { "key_code": "open_bracket" } ],
          "type": "basic"
        },
        {
          "conditions": [ { "name": "open_bracket", "type": "variable_if", "value": 1 } ],
          "description": "open_bracket + w as left_alt + right_arrow (one word forward)",
          "from": { "key_code": "w" },
          "to": [ { "key_code": "right_arrow", "modifiers": [ "left_alt" ] } ],
          "type": "basic"
        },
        {
          "conditions": [ { "name": "open_bracket", "type": "variable_if", "value": 1 } ],
          "description": "open_bracket + b as left_alt + left_arrow (one word backward)",
          "from": { "key_code": "b" },
          "to": [ { "key_code": "left_arrow", "modifiers": [ "left_alt" ] } ],
          "type": "basic"
        }
      ]
    }

---

? implement these very same mappings on Windows (AutoHotKey + SharpKeys)

# 2020-03-08

English writing lessons:

- Use verb + adverb, not the opposite
  Good: I have worked closely with...
  Bad: I have closely worked with...
- Use 'be experienced with' instead of 'have experience with'
- Don't use 'seem'... be assertive: either one is, or is not!
- Use 'further' instead of 'improve'

---

TIL: you can pass '60m' to `docker logs --since ...` to get the logs of the
last hour... I knew how to set dates/times with `--since`, and have always
been doing the math in my head to figure out the correct cut-off date-time
value; being able to `--since 60m` is just... pure gold!

# 2020-03-09

MacOS Bundle IDs:

    function mac-os-bundle-id() {
      lsappinfo info -only bundleid "$*" | cut -d '"' -f4
    }

Note: this only works if the application you are querying the bundle ID for is
running.

# 2020-03-15

Asking for help on [reddit]( about best practices for implementing multitenant
Node.js applications:

Hello Everyone, I am trying to add [multitenancy]( support to an existing
Node.js/Express.js application, and while the Web is full of articles on how
to properly segregate data (e.g. an additional 'tenant_id' column, different
databases), I am currently struggling to find best practices on how to
properly implement it... let me explain.
Irrespective of your chosen data segregation model, there comes a point
throughout the life-cycle of the user request where you will have to: 1)
figure out what the current tenant is, 2) use that information to fetch the
_right_ data (e.g. run custom _by-tenant-id_ queries, or select the proper
database connection).

I am particularly interested in hearing how others have implemented this,
especially given that points 1 and 2 might be a few asynchronous function
calls away.

There are 3 possible options I could think of to solve this problem:

1. Use libraries like [continuation-local-storage](, or its successor
   [cls-hooked](
   - these libraries rely on experimental APIs and require monkey-patching of
     other asynchronous libraries to make sure the context is indeed properly
     propagated
   - when users ask about [maturity](, their authors suggest to heavily test
     internally and make sure the context is never lost
   - other users on [reddit]( confirm they had a lot of issues with cls, and
     because of that only used it for logging purposes
2. Pass a context object around, **everywhere** (a la Golang's [context](
   - it's a **lot** of work, especially for an already existing application
     (and I am not [alone]( in thinking this)
3. Defer the initialization of the used services until the request is
   received, and inject this context object as a constructor argument.
   - this seems to require fewer changes compared to the previous option, but
     again, it forces you to defer the initialization of **all** your services
     until you know what the request tenant is

Now, all this journey has led me to believe that maybe this is not the _right_
way to implement multitenancy inside Node.js applications; maybe, instead of
having a single Node.js application able to serve requests coming from
different tenants, we should just _multi-instantiate_ this application and
have each of these instances serve one specific tenant only.
This is one realization that I had when thinking about all this:

- Service multitenancy (i.e. a single service, able to serve requests coming
  from different tenants) is not required, but could save you some money later
  on, as it would let you utilize hardware resources to their full extent
- Service multi-instantiation (i.e. different services, each one specialized
  to serve a specific tenant) is not required, but it's **inevitable**: there
  are only so many requests you will be able to serve with a single
  multitenant service

So probably I shouldn't worry much about how to add multitenancy support to an
existing Node.js application, but rather _multi-instantiate_ it and put a
broker in front of it (e.g. Nginx).

Let me know your thoughts.

---

Interesting article about [Zone.js]( the dummy implementation is ridiculously
simple!

    var _currentZoneFrame = {zone: root, parent: null};

    class Zone {
      static get current() {
        return _currentZoneFrame.zone;
      }

      run(callback, applyThis, applyArgs) {
        _currentZoneFrame = {parent: _currentZoneFrame, zone: this};
        try {
          return callback.apply(applyThis, applyArgs);
        } finally {
          _currentZoneFrame = _currentZoneFrame.parent;
        }
      }
    }

This is how you are expected to use it:

    var zoneA = Zone.current.fork({name: 'zoneA'});

    function callback() {
      // _currentZoneFrame: {zone: zoneA, parent: rootZoneFrame}
      console.log('callback is invoked in context', Zone.current.name);
    }

    // _currentZoneFrame = rootZoneFrame = {zone: root, parent: null}
    zoneA.run(function () {
      // _currentZoneFrame: {zone: zoneA, parent: rootZoneFrame}
      console.log('I am in context', Zone.current.name);
      setTimeout(zoneA.wrap(callback), 100);
    });

Too bad, the [TC39 proposal]( has been withdrawn!

---

* finished reading [Graph Algorithms: Practical Examples in Apache Spark and
  Neo4j](

# 2020-03-16

?
ctags and javascript files:

# 2020-03-17

_Rant against software development in 2020: start_

We added `prettier` to our repo because, you know, it feels good not to have
to worry about indentation and style; so we linted all of our files (yes, it
was a massive changeset), and then carried on.

Except... changesets pushed by different team members looked different from
one another, as if `prettier` settings were not properly parsed... and yet all
of them had their IDEs properly configured.

Well, it turns out 'prettier-vscode', the `prettier` plugin for Visual Studio
Code, does not pick up '.prettierignore' unless it's in the project working
directory:

And guess what? Since we have a monorepo, we have different '.prettierignore'
files, one per package... that's why changesets were looking different.

Long story short:

- `prettier` does not support that behavior:
- 'prettier-vscode' does not want to support that, unless `prettier` supports
  that:

So yeah, users, go fuck yourselves! Well, of course there are [workarounds](,
but still, `git` works that way, `npm` works that way... why not implement a
similar behavior? I am pretty sure there are reasons, but still...

_Rant against software development in 2020: end_

---

Few notes on how to implement multitenancy with Elasticsearch:

[shared-index](, i.e. a single index, shared by all the tenants, in which all
the documents have a tenant identifier:

    PUT /forums
    {
      "settings": {
        "number_of_shards": 10
      },
      "mappings": {
        "post": {
          "properties": {
            "forum_id": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }

    PUT /forums/post/1
    {
      "forum_id": "baking",
      "title": "Easy recipe for ginger nuts",
      ...
    }

This approach works, but all the documents for a specific tenant will be
scattered across all the shards of the index. Enter [document routing](:

    PUT /forums/post/1?routing=baking
    {
      "forum_id": "baking",
      "title": "Easy recipe for ginger nuts",
      ...
    }

By specifying 'baking' as the routing value, we are basically telling
Elasticsearch to always use a single shard...
which one? `shard = hash(routing) % number_of_primary_shards` (not that we
care, as long as it's always the same).

Oh, and when searching data (with routing), don't forget to filter by the
tenant identifier, or you will accidentally return other tenants' data:

    GET /forums/post/_search?routing=baking
    {
      "query": {
        "bool": {
          "must": {
            "match": { "title": "ginger nuts" }
          },
          "filter": {
            "term": { "forum_id": "baking" }
          }
        }
      }
    }

But... can we do better, and have Elasticsearch do the filtering for us? Yes,
[aliases]( to the rescue! When you associate an alias with an index, you can
also specify a filter and routing values:

    PUT /forums/_alias/baking
    {
      "routing": "baking",
      "filter": {
        "term": { "forum_id": "baking" }
      }
    }

Now we can use the 'baking' alias when indexing, and forget about _routing_:

    PUT /baking/post/1
    {
      "forum_id": "baking",
      "title": "Easy recipe for ginger nuts",
      ...
    }

Similarly, when searching, we can omit the _routing_ parameter, and the
filters too!

    GET /baking/post/_search
    {
      "query": {
        "match": { "title": "ginger nuts" }
      }
    }

Next step: see if it's possible to implement ACLs so that one user can connect
to Elasticsearch and only see their aliases: neither the aliases used by other
tenants, nor the big, fat, shared index.

# 2020-03-21

I have got a bit serious about trying to use `w3m` for web browsing (yeah, I
know, a bit ironic coming from me, given that my team is working on a SPA
unable to load shit without Javascript enabled).

First things first... mappings!
    keymap r RELOAD
    keymap H PREV
    keymap L NEXT
    keymap C-U PREV_PAGE
    keymap C-D NEXT_PAGE
    keymap gg BEGIN
    keymap w NEXT_WORD
    keymap b PREV_WORD
    keymap zz CENTER_V
    keymap o COMMAND "GOTO /cgi-bin/b1.cgi; NEXT_LINK"
    keymap t COMMAND "TAB_GOTO /cgi-bin/b1.cgi; NEXT_LINK"
    keymap ( NEXT_TAB
    keymap ) PREV_TAB
    keymap gt TAB_LINK
    keymap d CLOSE_TAB
    keymap x CLOSE_TAB
    keymap :h HELP
    keymap :o OPTIONS
    keymap :q QUIT
    keymap gs SOURCE
    keymap V EXTERN
    keymap p GOTO /cgi-bin/cb.cgi
    keymap P TAB_GOTO /cgi-bin/cb.cgi
    keymap yy EXTERN "url=%s && printf %s "$url" | cb && printf %s "$url" &"

I am going to change these quite a lot over the following weeks, but these
should be enough to get you started using it.

A few gems:

- You won't be able to read all the pages you visit inside `w3m`... so when
  you bump into one, just press `V` (or `M`) to open the current page in a
  configured external browser
- There is no support for a default search engine (a bit annoying), but given
  that you can customize the hell out of `w3m`, I was able to work around that
  with the following: `COMMAND "GOTO /cgi-bin/b1.cgi; NEXT_LINK"`. This
  renders the content of a cgi script that I created, and moves the cursor to
  the next link (so I can then press ``, type anything I want to search for,
  press `` to quit editing the text field, press `` to focus the Submit
  button, and press `` again to submit... easy, uh?). Anyway, this is the
  content of my cgi script (it's a simple wrapper for [bunny1](

      #!/usr/bin/env bash
      set -euo pipefail

      cat <<EOF
      bunny1 wrapper
      EOF

- Another interesting thing you can do by using a custom cgi script is opening
  within `w3m` the URL saved in the OS clipboard. How? 1) read the content of
  the clipboard (assuming it's a URL), 2) tell `w3m` to go to that URL (yes,
  your cgi script can send commands to `w3m`):

      #!/usr/bin/env bash
      set -euo pipefail

      echo "w3m-control: GOTO $(cb --force-paste)"

- This last tip will teach you how to _copy_ the current page URL into the OS
  clipboard. We are going to use the `EXTERN` command for this (i.e. invoke an
  _external_ command and pass the current URL as the first argument) and in
  particular we will: 1) set `url` to the input URL, 2) print `$url` (without
  the trailing new-line) and 3) pipe that into [cb]( to finally copy it into
  the OS clipboard.

      keymap yy EXTERN "url=%s && printf %s "$url" | cb && printf %s "$url" &"

Few interesting articles I found when setting this all up:

- Bunch of `w3m` related tricks:
- A vim like firefox like configuration for w3m:

---

Finally decided to give [Alacritty]( a go, and damn... now I wish I had tried
it earlier: scrolling in `vim` inside a `tmux` session is finally fast again!
It supports `truecolors` too, so what more could I ask for?

Well, one thing maybe I would ask for: [support for multiple windows](. We can
_multi-instantiate_ it, and that's fine, but those instances would not be
recognized as being part of the same group (i.e. multiple entries in the dock,
no support for `cmd + backtick`)... and that would make it really hard for
people to set up keystrokes to focus a specific instance of a group (currently
with ``, I have a mapping to focus the group, and then automatically inject:
`cmd + opt + 1` to select the first instance of the group, `cmd + opt + 2` for
the second...).

There is a `--title` option, but Keyboard Maestro would not let me select an
application given its title (also, I do want the title of the window to be
dynamic, based on the current process...).
There is a `--class` option, but it seems to work on Linux only... so what
else? Well, after a bit of fiddling I finally found a solution:

    $ cp -r /Applications/ /Applications/
    $ vim /Applications/ # And change CFBundleName / CFBundleDisplayName accordingly

It's ugly, I know it, it's a bit like installing the same application twice
with a different name, but who cares? I mean, scrolling is damn fast now <3

PS. remember to re-sync these '/Applications' directories the next time a new
release of `alacritty` is published.

# 2020-03-22

Re-set up SSH on my W10 laptop:

1. Re-install `cygrunsrv` and `openssh`
2. Open a cygwin terminal, **as administrator**, and run:

       $ ssh-host-config -y
       $ vim /etc/sshd_config # make sure 'PubkeyAuthentication yes' is there, and not commented out
       $ cygrunsrv --start cygsshd

3. Add a Firewall rule to let in all incoming connections on TCP port 22

Note: if you do this on a company's laptop, `sshd` will try and contact the
company's AD when authenticating connections, so make sure your laptop is
connected to the Intranet.

Links:

-
-

# 2020-03-24

Today I learned you can quickly set up SSH proxies by adding the following to
your ~/.ssh/config file:

    Host
        ProxyCommand ssh -q nc %h %p

With this, when we connect to the server by git push, git pull or other
commands, git will first SSH to the proxy host. As the ssh client will check
the config file, the above rule makes it set up a proxy by SSH-ing to the
proxy and relaying the connection to %h (the git server) with port %p (22 by
default for SSH) by nc (you need to have nc installed on the proxy). This way,
the git connection is forwarded to the git server.

And why would this be relevant? Think about this scenario:

1. Main laptop: decent CPU / memory, connected to the home WIFI, and to the
   company VPN
2. Dev laptop: good CPU / memory, connected to the home WIFI, but not to the
   company VPN (because you know, you don't want to connect and re-enter auth
   tokens... **twice**)

Well, with this proxy trick set up, you would still be able to access
configured internal services (e.g. git) from the dev laptop without it being
connected to the company VPN. Quite handy!

---

Something is messing up my bash prompt when running it inside Alacritty:

1. Open a new instance

       $ open -n /Applications/ --args --config-file /dev/null

   The new instance is opened, and the configured prompt is printed:

       matteolandi at hairstyle.local in /Users/matteolandi
       $ █

2. Press `up-arrow` to access the last executed command:

       matteolandi at hairstyle.local in /Users/matteolandi
       $ open -n /Applications/ --args --config-file /dev/null█

3. Press `ctrl+a` to get to the beginning of the line

Expected behavior: the cursor moves indeed to the beginning of the line

    matteolandi at hairstyle.local in /Users/matteolandi
    $ █pen -n /Applications/ --args --config-file /dev/null

Actual behavior: the cursor stops somewhere in the middle of the line

    matteolandi at hairstyle.local in /Users/matteolandi
    $ open -n█/Applications/ --args --config-file /dev/null

Note 1: if I now press `ctrl+e` (to get to the end of the line), the cursor
gets past the current end of the line:

    matteolandi at hairstyle.local in /Users/matteolandi
    $ open -n /Applications/ --args --config-file /dev/null █

Note 2: if I hit `ctrl+c` (to reset the prompt), press `up-arrow` again, and
start hammering `left-arrow`, it eventually gets to the beginning of the line:

    matteolandi at hairstyle.local in /Users/matteolandi
    $ open -n /Applications/ --args --config-file /dev/null
    ^C
    matteolandi at hairstyle.local in /Users/matteolandi
    130 > █pen -n /Applications/ --args --config-file /dev/null

And if I now press `ctrl+e` (again, to get to the end of the line), the cursor
does actually move to the end of the line:

    matteolandi at hairstyle.local in /Users/matteolandi
    130 > open -n /Applications/ --args --config-file /dev/null█

I opened a new [ticket]( on the project's GitHub page -- hopefully someone
will get back to me with some insights.

# 2020-03-26

My reg-cycle plugin is not quite working as expected... sometimes you yank
some text... then yank something else, but when you try to paste what you
yanked the first time (`p` followed by `gp`), other stuff will be pasted
instead (like text previously deleted). Few things to look at:

- `:help redo-register`
-

? fix reg-cycle

# 2020-03-27

Fucking figured out why my `PS1` was breaking input with Alacritty: it turns
out I was using non-printing characters in the prompt... without escaping
them! Hence `bash` was not able to properly calculate the length of the
prompt, hence `readline` blew up! You can read more about escaping
non-printing characters here:

# 2020-04-04

A new version of Alacritty has been released (0.4.2), so time to do the
_dance_:

1. Update the main app:

       brew cask install --force alacritty

2. Do the dance:

       cd /Applications
       for suffix in fullscreen main scratch w3m; do
         target=Alacritty-$suffix
         echo " -> $"
         rm -rf $
         cp -rf $
         echo "Patching $"
         sed -i "s/Alacritty/$target/" $
         echo "Done"
         cat $ | grep VersionString -A1
       done
       cd -

That's it!

---

Been working on a little tool called `ap`: the activity planner.
You feed it a list of **activities** (ID, effort, and dependencies), and a
list of **people** (ID, allocation, and whether they are already working on a
specific activity):

    activity act-01 5
    activity act-02 3
    activity act-03 5
    activity act-04 3
    activity act-05 3
    activity act-06 3
    activity act-07 3
    activity act-08 5
    activity act-09 3
    activity act-10 5
    activity act-11 5
    activity act-12 5 act-03
    activity act-13 10
    person user-01 80 act-01
    person user-02 20 act-02
    person user-03 60 act-03
    person user-04 80 act-04
    person user-05 60 act-05
    person user-06 20 act-06
    person user-07 80

And it will output what it thinks the _best_ execution **plan** is:

    act-04 0 3.75 user-04
    act-05 0 5.0 user-05
    act-01 0 6.25 user-01
    act-03 0 8.333333 user-03
    act-13 0 12.5 user-07
    act-06 0 15.0 user-06
    act-02 0 15.0 user-02
    act-10 3.75 10.0 user-04
    act-08 5.0 13.333333 user-05
    act-07 6.25 10.0 user-01
    act-09 8.333333 13.333333 user-03
    act-11 10.0 16.25 user-04
    act-12 10.0 16.25 user-01

Where each line tells:

- The ID of the activity (column 1)
- When _someone_ will start working on it (column 2)
- When the work will be done (column 3)
- Who this _someone_ actually is (column 4)

This also tells us that all the planned activities will be completed after
`16.25` days of work.

Now, actual days of work can be converted into calendar days, and week-ends
can be excluded...

    act-04 2020-03-30 2020-04-02 user-04
    act-05 2020-03-30 2020-04-06 user-05
    act-01 2020-03-30 2020-04-07 user-01
    act-03 2020-03-30 2020-04-09 user-03
    act-13 2020-03-30 2020-04-15 user-07
    act-06 2020-03-30 2020-04-20 user-06
    act-02 2020-03-30 2020-04-20 user-02
    act-10 2020-04-02 2020-04-13 user-04
    act-08 2020-04-06 2020-04-16 user-05
    act-07 2020-04-07 2020-04-13 user-01
    act-09 2020-04-09 2020-04-16 user-03
    act-11 2020-04-13 2020-04-21 user-04
    act-12 2020-04-13 2020-04-21 user-01

...and with this we can then plot a Gantt chart and, I don't know, maybe make
some decisions about it.
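The working-days-to-calendar-days conversion mentioned above can be sketched as follows (a hypothetical helper, not `ap`'s actual code; here fractional efforts are simply rounded up to whole days, which is not necessarily what `ap` does):

```python
import math
from datetime import date, timedelta

def add_working_days(start, effort):
    # Advance `start` by `effort` working days, skipping week-ends;
    # fractional efforts are rounded up to whole calendar days.
    current = start
    remaining = math.ceil(effort)
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday (0) to Friday (4)
            remaining -= 1
    return current

# 2020-03-30 was a Monday; 3.75 days of effort round up to 4 working days
print(add_working_days(date(2020, 3, 30), 3.75))  # -> 2020-04-03
```

Running the whole plan through a helper like this is what turns the `16.25`-days figure into actual calendar dates.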
Finding the optimal execution plan turned out to be more complicated than anticipated; in fact, the moment I remove activity pre-allocations from the example above, `ap` blows up and crashes (apparently the search space becomes too large).  _There must be a problem with your implementation_ you might argue, and I would not disagree; the reason why I am putting this into writing is so I can clarify the problem in my mind (never heard of rubber-ducking before?!).

Anyway, `ap` internally uses A* [1] to find the best execution plan; for the sake of readability, we can say that each state object is just a list of `workers`, where each worker is defined by two fields:

- `been-working-on`: the list of activities completed _so far_
- `free-next`: when the worker will next be available to pick up something else

With this, we can easily project our `target-date` (i.e. when all the in-progress activities will be completed) by simply taking the `max` of all the workers' `free-next` values.

But what's the **cost** to move from one state to another?  Well, that depends on:

1. The activity itself (i.e. its `effort`)
2. Who is to claim that activity (i.e. someone with a _high_ `allocation` value will complete the activity in less time)
3. When all the activities claimed so far will be completed

For example, let's imagine:

- the current `target-date` (before the worker claims the new activity) is '13'
- the current worker will be busy until day '10' (their `free-next` value)
- and it will take them '5' days to complete the newly claimed activity

In this case, the _new_ target date (again, _max of all the `free-next` values_) will be '15', so the cost of moving into this new state is `15 - 13 = 2`; if on the other hand `target-date` was '17', then the cost to get into this new state would have been '0' (i.e. the _projected_ target date would not have changed).

So far so good... I hope.
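In code, the cost calculation just described might look something like this (Python, purely for illustration; the field names mirror the prose above, but everything else is an assumption about `ap`'s internals):

```python
def target_date(workers):
    """Projected completion date: the max of all the workers' free-next values."""
    return max(w["free-next"] for w in workers)

def claim_cost(workers, worker, effort, allocation):
    """Cost of the state transition where `worker` claims a new activity.

    A worker allocated at `allocation` percent needs
    effort / (allocation / 100) actual days to burn `effort` days of work.
    The cost is how much the projected target date moves.
    """
    before = target_date(workers)
    done_at = workers[worker]["free-next"] + effort / (allocation / 100.0)
    return max(0.0, done_at - before)
```

With the numbers from the example: a worker free at '10' claiming an activity that takes them '5' more days moves the projected target date from '13' to '15', i.e. a cost of '2'; against a target date of '17', the cost is '0'.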
How can we now _speed things up_ a bit, and suggest to A* which states to evaluate first, so that `ap` does not blow up when the search space increases?  That's where the heuristic comes into play -- as a reminder, in order for A* to find the optimal solution, the heuristic function has to be _admissible_ [2], i.e. it should never _overestimate_ the cost required to reach the target state.

Let's look at a couple of examples:

1. Sum of all the remaining activities' `effort`.  This assumes that all the remaining activities are completed one after the other (and not in parallel), so it's easy to see how this would end up overestimating the remaining cost.  This heuristic is clearly non-admissible.

2. Sum of all the remaining activities' `effort`, divided by the total number of `workers`.  This is a bit better than the previous one, as at least it assumes that work can actually happen in parallel; however, not **all** of it can actually be done in parallel, so this too is doomed to overestimate under certain circumstances.

Please note that using a non-admissible heuristic does not necessarily imply A* will stop _working_; it will probably find a solution, sometimes even the optimal one, but it really depends on the problem, your input, and how lucky you are.

For example, let's see what happens when we try to use the first heuristic:

    (defun heuristic (activities state)
      (let* ((effort-remaining (effort-remaining activities state)))
        effort-remaining))

With this, `ap` would still return '16.25' as the closest completion date; but when I try that without taking pre-allocations into account, it returns '22.5' (clearly a success, given that the planner is not blowing up anymore, but the solution found is _sub-optimal_).

Is there an **admissible** heuristic that we can use with this problem?  Have I modelled the problem poorly, and painted myself into a corner?
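For reference, the two candidate heuristics are trivial to write down (Python, purely for illustration; `remaining_efforts` is a hypothetical list holding the `effort` of each not-yet-claimed activity):

```python
def h_serial(remaining_efforts):
    """Heuristic 1: pretend all remaining activities run one after the other.

    Overestimates whenever any two activities could overlap, so it is
    not admissible.
    """
    return sum(remaining_efforts)

def h_spread(remaining_efforts, worker_count):
    """Heuristic 2: spread the remaining effort evenly across all workers.

    Always lower than (or equal to) heuristic 1, but as noted above it
    can still overestimate in some circumstances, so it is not
    admissible either.
    """
    return sum(remaining_efforts) / worker_count
```

On the example input, `h_serial([5, 3, 5])` is `13` while `h_spread([5, 3, 5], 2)` is `6.5`: the second never exceeds the first.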
[1]*_search_algorithm
[2]

# 2020-04-05

Put together a little script to generate the RSS feed URL given a YouTube page.  YouTube does not include such a URL in the HTML page, but it's easy to guess it (thanks to [1]):

1. Go to the YouTube channel you want to track
2. View the page’s source code
3. Look for the following text: channel-external-id
4. Get the value for that element (it’ll look something like UCBcRF18a7Qf58cCRy5xuWwQ)
5. Replace that value into this URL:

Here you go:

    #!/usr/bin/env bash

    set -euo pipefail

    URL=${1?YouTube URL required}

    channel_id=$(curl "$URL" \
        | grep --only-matching --extended-regexp 'data-channel-external-id="[^"]*"' \
        | head -n1 \
        | cut -d\" -f2)

    echo "$channel_id"

[1]

---

Very interesting (and quite lengthy) Q&A session on remote working, held by the Basecamp guys:

# 2020-04-07

Another interesting video from the Basecamp guys, on how they run Basecamp.  Check it out:

Home:
- 3 sections: HQ, teams, projects
  - HQ i.e. company space
  - Teams i.e. teams, or departments (how people are organized)
  - Projects i.e. how work is organized
- each of these sections has **tools**
- each person has access to HQ, and only those teams and projects they have been invited to

Tools:
- Message board
  - project/team/company announcements: so when a new joiner is added, they can easily browse past announcements
- Todos
  - Lists (or scopes, or chunks of work)
  - Each list has items
  - An item can:
    - be assigned to someone
    - notify a list of people when done
    - have a due date assigned
    - and store additional notes
- Hill-chart
  - simple way to understand _where things stand_ at a given time (not based on the number of tasks completed/remaining, as tasks are not linear)
  - two sides: left, figuring things out (lots of unknowns); right, making it happen (just need time to implement it, but no blockers in sight)
  - each dot is a scope of work, and people can drag it around the hill curve
  - scopes are dragged manually, not systematically...
we want to exercise people's gut feelings
  - you can roll back a project and see how things moved on the hill-chart, over time
- Docs & Files
  - Dropbox, Google Drive ...
  - Can create folders
  - Can _color_ documents -- should you need some clues
- Campfire
  - realtime chat, built-in
- Schedule
  - project/team/company shared calendar
- Automatic Check-ins
  - scheduled questions sent to everybody in the company
  - Click on someone's name, and it will pull up all their answers to a specific automatic check-in (it works similarly for other features too)

General:
- Everything is _commentable_
- Almost everything has a deep-link associated with it, so you can share it with anyone, and if they have the right permissions they will be able to see it
- You can leave **boosts** (like reactions, or 16-character-long comments) on messages or on comments
- From each team, or each project, if you scroll down past the Tools section, you get access to the **whole** history of the team/project.  Organized stuff at the top (your tools), scroll through the history down at the bottom
- The same **history** concept exists for the whole **company**.  There is a "Latest activity" section where you can see what's happening, in realtime, across all the teams, across all the projects
- There is a "My Schedule" section that lists all the events you are involved with; as with the Schedule tool, you can get a link and display this information inside your personal Google Calendar, or whatever
- You can open someone's profile, and access all that is on their plate (i.e. all the to-dos currently assigned to them)
- You can open someone's profile, and access all of their activities... not just to-dos, but comments too

Basecamp for Basecamp:
- Discuss multiple things in a **team**... but very specific ones inside **projects**
- If what you want to do can be summarized in one to-do item, or a message...
then it goes into a team; if not, then it goes into a project
- Pretty much all the **teams** have everybody on them -- very open company.  Only a few teams have a few people only -- mostly for executives to discuss strategy.
- Tons of projects, and they usually last for 1 or 2 weeks (once completed, you can archive them).  For example: every new release of the iOS Basecamp app has a new project, with to-dos, hill-chart and everything
- They have a project called OUT, with only the Schedule tool turned on, where people tell when they will **not** be working.  The nice thing about the Schedule tool is that you can import that into Google Calendar, Outlook, or iCal.
- Idea pitches are managed completely asynchronously on Basecamp: a big writeup on the message board (usually the _deep_ thinking is done by one person), with enough details for programmers to be able to come up with high-level estimates, but that's it (too many details, at this stage, are almost a waste of time, especially if the idea does not turn out to be a good one).
- How to manage contractors?
  1. create a new company (otherwise they would have access to your HQ)
  2. invite them to projects.

HQ:
- Announcements
  - Everyone has to see this stuff
- All Talk
  - Chat-room, social stuff, everyone has access to it
- Dates
  - Company coming/past meetups
- Reference Materials
  - Policies
  - Logos for products
  - National Holidays
- 5x12s: every month, 5 random people, 1h to talk about stuff which is not work -- all of this is transcribed
- Fan mail
  - Forward mail into Basecamp, so it shows up there
- Scheduled check-ins:
  - Every Monday morning: what did you do this weekend?
  - Every month: what are you reading?
-- helps build a culture, especially for remote teams

Example project: Hey (a new email platform they are launching):
- 2 developers, but everyone invited to the project
- Tools: Message board, To-dos, Campfire
- All starts with a _shaping-doc_ (read more at:
  - high-level what the project is about
  - edge cases maybe
  - few low-fidelity mockups
  - the team creates the tasks
- To-dos
  - Mixed to-dos for the programmer and the designer
  - Different scopes (lists of to-dos), like 'sign-up flow'

Another example project: 'What Works [2020]' project (one project per year):
- Message board
  - Heartbeats: every team lead, every 6 weeks (they run 6-week dev cycles), writes about what they have done over the last cycle
  - Kickoffs: every team lead, every 6 weeks, writes about what they will be working on
  - Just summaries for the rest of the company to read, and stay up to date
  - One heartbeat / kickoff per team, pretty much
- Automatic check-ins
  - Every Monday, 9am: what will you be working on this week?  Rough estimate of what we think we will be working on
  - Every day, at 4:30pm: what did you do today?  Not just a list of to-dos... as most of the time, what you are working on does not have a to-do attached.  Own words, 15 minutes tops
  - Managers can stop asking people for updates, and read them instead
- Schedule
  - When 6-week cycles start and end
- To-dos

# 2020-04-08

Is it possible to implement Basecamp's automatic check-ins with Teams?  I think it is, with some duct tape of course.

- Each Teams channel has an email address associated with it; you send an email to it, and a few seconds later that message pops up on Teams
- A Jenkins job can be configured to run periodically (e.g. every day @ 9am)
- You can easily send email from the command line as follows: `echo 'body' | mutt -s 'Subject'`

So...

1. Gather your Microsoft Teams' channel email
2. Set up your script:

    #!/bin/bash -ex

    /bin/mutt -s 'Automatic check-in: What will you be working on this week?'
$TO_EMAIL < 

A2 (depends on)
P1: A1
P2: A2

P2 cannot start working on A2 until A1 is finished, so the best is T(A1) + T(A2), while I suspect the algo is going to spit out max(T(A1), T(A2))

# 2020-04-11

Ode to [checklists](

Good checklists are:
- Short enough to memorize
- Only include the key points

For example (quite relevant these days):
- Use soap
- Wet both hands completely
- Rub the soap until it forms a thick lather covering both hands completely
- Wash hands for at least 20 seconds [not included in this study, but we know now it takes at least 20 seconds to break down viral bugs including Coronavirus so I’m putting it here for posterity]
- Completely rinse the soap off

Other examples, closer to the software development industry:

Before asking peers to review your pull request, make sure that:
- [ ] Successfully tested on Jenkins
- [ ] Product team has signed off the implemented workflow
- [ ] Bonus: Screenshots/screencast demo included

Before approving a pull request and merging that into master, make sure that:
- [ ] Code is readable
- [ ] Code is tested
- [ ] The features are documented
- [ ] Files are located and named correctly
- [ ] Error states are properly handled
- [ ] Successfully tested on Jenkins

Before pushing a new version out:
- [ ] Look into failing tests (if any), and confirm if intermittent
- [ ] Announce on whichever social media you are using, the main features / bugfixes / changes being released

After a successful release:
- [ ] Do some smoke-tests to confirm it's all good
- [ ] Run any post-release action (e.g.
database migration, if not automatic, support tickets updated)
- [ ] Announce on social media that the release has completed successfully
- [ ] Archive completed stories

When adding a new Defect to the backlog, make sure it includes:
- [ ] Description
- [ ] Steps to reproduce
- [ ] Expected behavior
- [ ] Actual behavior
- [ ] Bonus: Screenshot/video demonstrating the bug
- [ ] Bonus: any additional pre-analysis

I believe we could go on forever, but this seems promising.

# 2020-04-12

* finished reading [The 4-Hour Work Week: Escape the 9-5, Live Anywhere and Join the New Rich](

# 2020-04-13

Flipped the order of notes in my .plan file... again!

The main reason why I switched from _most-recent-last_ to _most-recent-first_ last August was that I thought it would make it easier for a casual reader to access the most recent entry; however, that made it really difficult for someone to dump the whole file and read it in chronological order.

Anyway, now that I can create an RSS feed off my .plan file (see:, there is no more need for adding new notes at the top of my .plan file, so I decided to flip the order of my notes back to _most-recent-last_.

This is what I used last August to reverse the order of all the notes:

    " Select all the end of lines *not* preceding a new day note
    /\v\n(# [0-9]{4})@!
    " Change all the end of lines with a § -- an unlikely-to-be-used character
    :%s//§
    " Sort all the collapsed notes (! is for reverse, :help sort)
    :'<,'>sort!
    " Change § back to \r
    :%s/§/\r

It still does not properly support the scenario where a `§` character is being used in a note, but I checked my .plan file before running it, and it's going to be all right.

---

* finished reading [It Doesn't Have to Be Crazy at Work](

# 2020-04-16

Source: how to remember tar flags 🎏

    tar -xzf 👈 eXtract Ze Files!
    tar -czf 👈 Compress Ze Files!

This never gets old!
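(Back to the 2020-04-13 note above: the same most-recent-first to most-recent-last flip can also be done outside of Vim; a minimal Python sketch, splitting right before each `# YYYY-MM-DD` day header instead of collapsing notes onto `§`-joined lines:)

```python
import re

def flip_notes(plan: str) -> str:
    """Reverse the order of the day notes in a .plan file.

    Splits right before each day header (a lookahead version of the
    Vim pattern used in the recipe above), then rejoins in reverse.
    """
    notes = re.split(r"\n(?=# [0-9]{4}-[0-9]{2}-[0-9]{2})", plan)
    return "\n".join(reversed(notes))
```

Running it twice gives back the original file, which makes it handy for flip-flopping... again.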
# 2020-04-18

Wanted to try and set up some _automatic check-ins_ for the team; as people work remotely, I would like to experiment with reducing the number of scheduled meetings, but before implementing that and, say, killing every daily standup and having people write their daily status update, I thought we should first give people a place where they could talk about anything, and stay up to date with each other's lives (if they want).

So rather than start asking people what they will be working on during the coming week, I thought we should start with something lighter, something social, something like: how was your weekend?

The task is quite easy: every Monday morning at 8am, have Jenkins fire an email (later published on Teams) to ask people: what have you guys done over the weekend?  People into sharing would then reply to the thread, or simply read what others have been up to.  Nothing crazy, nothing creepy, just a bit of sharing to make the people we work with a bit more interesting.

All right, let's talk implementation:

> every Monday morning at 8am

We have people working from Jakarta, Noida, Hyderabad, Pisa, Scotland, and New York.  So what does that 8am really mean here?  Well, ideally we would want people to process this as soon as they get in (i.e. skim through the inbox / Teams to see if there is anything critical, then spend a few minutes _sharing_), so the notification needs to be there when Jakarta people get in.

> have Jenkins

Jenkins lets you easily schedule when jobs need to run (Build Triggers > Build periodically > Schedule), and it supports custom timezones too:

    TZ=Asia/Jakarta
    # Every Monday morning, at 8am
    # MINUTE HOUR DOM MONTH DOW
    0 8 * * 1

> fire an email ...
Nothing could have been easier: on a Jenkins job configuration page, Build > Add build step > Execute shell, then copy the following into the 'Command' field (replace `$TO_EMAIL` with the email address associated with the specific MSFT Teams channel):

    #!/bin/bash -ex

    /bin/mutt -s 'How was your weekend?' $TO_EMAIL < 

    git
    #
    #
    # So here we manually source the file, if available
    if [ -f /usr/share/bash-completion/completions/git ]; then
        source /usr/share/bash-completion/completions/git
    fi

    function g() { git "$@"; }
    __git_complete g __git_main

# 2020-05-22

? book: tools for thought

# 2020-05-23

Simplified my poor man's yankstack implementation to use `localtime()` instead of timers.  The idea is pretty simple; every time you invoke it:

- Take a timestamp
- Check that timestamp against another one (a buffer variable) representing when the function was last executed
- If the two timestamps are more than 5 seconds apart, it means the user wants to start from register 0; otherwise 1, 2, 3...

---

Changed SendToTerminal (STT) to strip out command prompts (if configured).  Sometimes you just want to _send_ a line like the following one, without worrying about stripping the `$ ` bit out of it.

    $ fortune

So I changed STT to automatically strip it out for me!

# 2020-05-24

Random quote from a podcast I recently listened to:

> A lot of people say, just do a support model... and I think everybody in the business room looks at this and laughs... because you don't trade time for money... only poor people do that... rich people, they trade products for money.

# 2020-05-25

On indenting lisp files with Vim.

First things first: Vim has Lisp mode (`:h lisp`), and that alone should give you good-enough indentation for lisp files.
    au FileType lisp setlocal lisp

However, Vim's default way of indenting s-expressions is not always perfect, so if you happen to use [Vlime]( (and if you don't, I highly recommend that you do), then you can switch to Vlime's `indentexpr` by simply disabling lisp mode (the plugin will do the rest):

    " Weirdly, you have to set nolisp for (the correctly working) indentexpr to work.
    " If lisp (the vim option) is set, then it overrules indentexpr.
    " Since an indentexpr is set by vlime (and working!) it would be great to
    " add nolisp to the default configuration, I guess.
    au FileType lisp setlocal nolisp

Lastly, you can go back to using lisp mode, but tell Vim to invoke an external program like [scmindent]( when indenting s-expressions (make sure `lispindent` is in your `$PATH`):

    au FileType lisp setlocal lisp
    au FileType lisp setlocal equalprg=lispindent

You might prefer this last approach, especially when collaborating with others, as it does not rely on a specific combination of editor / plugin, but on an external tool that others can easily configure into their workflow.

A few final words about Vim's `lispwords` (`:h lispwords`), and in particular a question that has bugged me for quite some time: do you need to set them up when using Vlime's `indentexpr`, or an external program, to indent your lisp files?  Yes, especially if you care about Vim properly indenting s-expressions as you type them: `indentexpr` is only used in conjunction with the `=` operator (i.e. when manually indenting blocks of code), while `lispwords` are used when typing in new code.

# 2020-05-29

~ small utility to create an RSS feed URL given another URL (i.e. give it a YouTube URL, and it will generate a feed for it; give it a GitHub URL, and it will generate a feed for it)

# 2020-05-31

? add a Vim operator for neoformat: this requires the current formatter to support reformatting of partial areas, in which case, one could surround the area to re-format with markers (e.g.
comments, that the formatter is likely to ignore), and then use these markers to extract the formatted area of interest and stick it back into the buffer # 2020-06-01 Been finally giving [vim-sexp]( a try, and I gotta be honest, I am quite liking it. I have also been reading about _structural code editing_, and landed on this [SO page]( about _barfing_ and _slurping_: slurping and barfing are the essential operations/concepts to use one of the modern structural code editors. after getting used to them I'm completely incapable of editing code without these. Of the ~20 people that sit with me writing clojure all day all of them use these all the time. so saying that they are "helpful for lisp coders" is a very tactful and polite understatement. slurp: to include the item to one side of the expression surrounding the point into the expression barf: to exclude either the left most or right most item in the expression surrounding the point from the expression That's an alright definition, but when would you actually use it? Another [answer]( to the same question tries to shed some light on this: so a session might be: (some-fn1 (count some-map)) (some-fn2 (count some-map)) aha, a let could come in here to refactor the (count some-map): (let [c (count some-map)]|) (some-fn1 c) (some-fn2 c) But the let isn't wrapping the 2 calls, so we want to pull in (slurp) the next 2 forms inside the let s-exp, so now at the cursor position, slurp twice which will give after first: (let [c (count some-map)] (some-fn1 c)|) (some-fn2 c) and then on second: (let [c (count some-map)] (some-fn1 c) (some-fn2 c)|) It all starts to make sense. Let's now have a look at Tpope's [vim-sexp-mappings](, and in particular what he thought some sane mappings should be for slurpage and barfage: " Slurping/Barfing " " angle bracket indicates the direction, and the parenthesis " indicates which end to operate on. 
    nmap <( <Plug>(sexp_capture_prev_element)
    nmap >) <Plug>(sexp_capture_next_element)
    nmap >( <Plug>(sexp_emit_head_element)
    nmap <) <Plug>(sexp_emit_tail_element)

So going back to the example above, where the cursor was on the outermost closing paren:

    (let [c (count some-map)]|)
    (some-fn1 c)
    (some-fn2 c)

To slurp the next expression in, you would end up pushing the closing paren the cursor is on to the right (past the next element), and the chosen mapping, `>)`, really tries to capture that...

...again, it all starts to make sense...

---

* Changed `lispindent` to support reading ./.lispwords:
* Changed `lispindent` to properly format FLET, MACROLET, and LABELS forms:

# 2020-06-07

* Changed my `lispindent` to read `.lispwords` from the current directory, from its parent, from its parent's parent... until the process gets past the user's home directory.
* scmindent: it leaves trailing empty spaces when keywords are surrounded by empty lines
* scmindent: skip commented-out lines
* Created Vim plugin, vim-lispindent, to:
  1. automatically `set equalprg` for lisp and scheme files,
  2. parse `.lispwords` and
  3. set Vim's `lispwords` accordingly

? vim-lispindent: download `lispindent` if not found on $PATH
~ vim-lispindent: support rules with LIN = 0 (we want to remove these from `lispwords`)
? vim-lispindent: README + doc/
? cg: pass the last executed command (e.g. '$!') and use that to _influence_ the order of suggested commands to execute
? scmindent: cases inside `HANDLER-CASE` are not _properly_ indented

# 2020-06-11

TIL: only the Quartz Java scheduler (i.e. the one used by Jenkins) [0] supports the H character to _evenly_ distribute jobs across a defined time range [1]

[0]
[1]

# 2020-06-20

What?!

    $ curl

    Weather report: Pisa, Italy

         \  /       Partly cloudy
       _ /"".-.     20 °C
         \_(   ).
← 6 km/h /(___(__) 10 km 0.0 mm ┌─────────────┐ ┌──────────────────────────────┬───────────────────────┤ Sat 20 Jun ├───────────────────────┬──────────────────────────────┐ │ Morning │ Noon └──────┬──────┘ Evening │ Night │ ├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤ │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ │ _ /"".-. 22 °C │ _ /"".-. 24..25 °C │ _ /"".-. 22..25 °C │ _ /"".-. 20 °C │ │ \_( ). ↑ 6-8 km/h │ \_( ). → 10-12 km/h │ \_( ). → 12-15 km/h │ \_( ). → 9-14 km/h │ │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ └──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘ ┌─────────────┐ ┌──────────────────────────────┬───────────────────────┤ Sun 21 Jun ├───────────────────────┬──────────────────────────────┐ │ Morning │ Noon └──────┬──────┘ Evening │ Night │ ├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤ │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ │ _ /"".-. 23..24 °C │ _ /"".-. 25..26 °C │ _ /"".-. 23..25 °C │ _ /"".-. 22 °C │ │ \_( ). ↑ 4-5 km/h │ \_( ). → 9-10 km/h │ \_( ). → 11-14 km/h │ \_( ). 
↘ 6-10 km/h │ │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ └──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘ ┌─────────────┐ ┌──────────────────────────────┬───────────────────────┤ Mon 22 Jun ├───────────────────────┬──────────────────────────────┐ │ Morning │ Noon └──────┬──────┘ Evening │ Night │ ├──────────────────────────────┼──────────────────────────────┼──────────────────────────────┼──────────────────────────────┤ │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ \ / Partly cloudy │ │ _ /"".-. 25..27 °C │ _ /"".-. 27..28 °C │ _ /"".-. 24..26 °C │ _ /"".-. 22 °C │ │ \_( ). ↑ 4-5 km/h │ \_( ). → 8-10 km/h │ \_( ). → 10-14 km/h │ \_( ). ↘ 6-11 km/h │ │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ /(___(__) 10 km │ │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ 0.0 mm | 0% │ └──────────────────────────────┴──────────────────────────────┴──────────────────────────────┴──────────────────────────────┘ This is pretty dope! # 2020-06-30 ? ap: parsing errors are not loggedf to stdout ... and there is no exit status too ? book: the medici effect ? book: the innovator's dilemma # 2020-07-01 ? cb.vim: can it work like a standard register? + cb.vim: vim-fugitive: ["x]y copies the current git-object, but how can I do the same with ,y and send it to cb? # 2020-07-03 A little bit off-topic from the usual stuff I write about, but I just want to drop here some links to videos about DJ techniques (don't ask why I got into these): * Strobing/Strobe Technique: * Another video about Strobing/Chasing: * Back-spinning: * Finally the original DJ battle video that got me into this rabbit hole: --- TIL: Ansible won't re-create a container, when you remove one of its ENV variables -- and apparently, that's by design! 
Apparently ansible would not recreate the container when ENV variables are removed, because the condition of the remaining ENV variables still being set would still hold true.  The work-around is to inject a dummy ENV variable, and keep updating its value every time we want to force ansible to recreate the container (for whatever reason).

# 2020-07-04

A bit of vim g* niceties:

    g8: show the byte sequence (UTF-8) of the character under the cursor
    g<: see the output of the last command
    gF: go to file, and line number (./path/file.js:23) -- handy if you have a stack-trace in a buffer
    gv: re-select the last selection
    gi: go back to the last insert location
    gq: wrap text (e.g. break at the specified text-width)

# 2020-07-06

Rails Conf 2013 Patterns of Basecamp's Application Architecture by David Heinemeier Hansson:

HTML: document-based programming, or delivery mechanism?  Google Maps vs blogs (and to some extent, SPAs vs server-side rendered applications, or hybrid applications)

To make things fast:

1. Key-based cache expiration (or generational caching)
   - Model name + id / "new" + last-updated-at (if available)
2. Russian Doll nested caching
   - A comment is inside a post, a post is inside a project
   - When you change a comment, the system automatically touches the parent, then the parent touches its parent... (where "touches" really means updating the last-modified-date timestamp)
   - The MD5 of a partial is included in the cache key, which means every time you update a partial, the new view will be rendered (and cached again); if the partial is not updated, the old, cached value is still safe to use
   - Templates are inspected, and caches are invalidated based on `render` calls
   - What if you want to show local time?  You add UTC to the template, add a `data-time-ago` attribute, and inject some JavaScript on the client to convert the UTC value into a local one
   - What if you have admin links (Edit, Delete, Copy, Move) which are only visible to a subset of users?
We cannot access the logged-in user when rendering/caching the view, so a similar solution to the problem above: annotate the link with `data-visible-to`, and then some JavaScript will take care of removing the link which is not visible 3. Turbolinks process persistence With Turbolinks, your app could be as fast as the backend is able to serve responses: 50/150 ms? The experience is great; higher than that, and you are not going to notice much of a difference from a complete page-refresh. 4. Polling for updates # 2020-07-12 TIL: `git` fixups / autosquashes: --- * finished reading [Sails.js in Action 1st Edition]( # 2020-07-14 * finished reading [Lisp for the Web]( * finished reading [Lisp Web Tales]( --- RailsConf 2016 - Turbolinks 5: I Can’t Believe It’s Not Native! by Sam Stephenson: Basecamp3: - 18 months - 200+ screens - 5 platforms (desktop web, mobile web, android native, ios native, and email) - 6 rails devs, 2 android, 2 ios - built 3 open source frameworks: Trix (editor), Action Cable, and Turbolinks 5 Turbolinks for iOS (or Android): - Intercepts link-click events, and creates a new view on the stack - There is a single WebView only, and that's shared by all the windows on the stack (smaller memory footprint) - You can customize the user agent, and on the backend for example, hide certain areas which would not look good on the APP - Native (kind-of) app in 10 minutes - You can go fully native on specific screens -- hybrid approach - You can create native menus, by simply extracting metadata from the received page # 2020-07-15 TIL: `cal` can be instructed to display not just the current month, but also surrounding ones, with `-$N` (if `$N` is 3, `cal` will show the current month, the previous, and the next one): $ cal -3 June 2020 July 2020 August 2020 Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa 1 2 3 4 5 6 1 2 3 4 1 7 8 9 10 11 12 13 5 6 7 8 9 10 11 2 3 4 5 6 7 8 14 15 16 17 18 19 20 12 13 14 15 16 17 18 9 10 11 12 13 14 15 21 22 23 24 
25 26 27 19 20 21 22 23 24 25 16 17 18 19 20 21 22 28 29 30 26 27 28 29 30 31 23 24 25 26 27 28 29 30 31

Source:

# 2020-07-17

* finished reading [Lisp Hackers](
* finished reading [Full Stack Lisp](

# 2020-07-18

? lispindent: backquote templates are not properly indented

This is the output:

    (defmacro with-tansaction (conn &body body)
      `(let ((*connection* ,conn))
        (cl-dbi:with-transaction *connection*
          ,@body)))

While I would expect:

    (defmacro with-tansaction (conn &body body)
      `(let ((*connection* ,conn))
         (cl-dbi:with-transaction *connection*
           ,@body)))

Note how `(cl-dbi:with-transaction ...)` and its body should be indented by an additional space

# 2020-07-28

Oh boy, time-zones are a bitch!

Spent the day looking at the different DB-related libraries I am using in my web app (MITO, CL-DBI, and CL-MYSQL), trying to chase down a problem with DATE fields; these fields, even though stored with the **right** value inside the database, once _inflated_, ended up with an unexpected time-zone offset (in MITO's terminology, _inflating_ is the action of transforming data, after it's read from the database, so it's easier to work with inside the controller).

Before we dive into the details, let's make it clear that I want my web app to store DATE/DATETIME/TIMESTAMP values in UTC (everyone should, really)... and here I thought the following SETF statement would get the job done:

    (ql:quickload :local-time)
    => (:LOCAL-TIME)

    (setf local-time:*default-timezone* local-time:+utc-zone+)
    => #

It all starts with the _model_ trying to save a date object; it reads it as a string from the request, converts it to a `local-time:timestamp`, and passes it down:

    (defvar date "2020-07-28")
    => DATE

    (local-time:parse-timestring date)
    => @2020-07-28T00:00:00.000000Z

MITO will then process the value twice; it will try to _deflate_ it first inside DEFLATE-FOR-COL-TYPE (_deflate_ is the opposite of _inflate_, i.e.
make a memory value ready to be processed by the database access layer) -- this step will turn out to be a noop for us, since we are using `local-time:timestamp` already:

    (defgeneric deflate-for-col-type (col-type value)
      (:method ((col-type (eql :date)) value)
        (etypecase value
          (local-time:timestamp value)
          (string value)
          (null nil))))

Then it will apply some MySQL-specific conversions inside CONVERT-FOR-DRIVER-TYPE:

    (defgeneric convert-for-driver-type (driver-type col-type value)
      (:method (driver-type (col-type (eql :date)) (value local-time:timestamp))
        (local-time:format-timestring nil value :format *db-date-format*)))

This basically converts the `local-time:timestamp` back into a YYYY-MM-DD string:

    (defvar *db-date-format* '((:year 4) #\- (:month 2) #\- (:day 2))) => *DB-DATE-FORMAT*
    (local-time:format-timestring nil ** :format *db-date-format*) => "2020-07-28"

And this is how the data is actually getting stored inside the database. Let's have a look now at what happens when we try to read the data back.
I am using a MySQL database, and CL-DBI internally uses CL-MYSQL to actually talk to the database; CL-MYSQL in particular tries to convert all the DATE, DATETIME, and TIMESTAMP objects into CL [universal time](, and it does it with the following function:

    (defun string-to-date (string &optional len)
      (declare (optimize (speed 3) (safety 3))
               (type (or null simple-string) string)
               (type (or null fixnum) len))
      (when (and string (> (or len (length string)) 9))
        (let ((y (parse-integer string :start 0 :end 4))
              (m (parse-integer string :start 5 :end 7))
              (d (parse-integer string :start 8 :end 10)))
          (unless (or (zerop y) (zerop m) (zerop d))
            (encode-universal-time 0 0 0 d m y)))))
    => STRING-TO-DATE

Let's apply this function to the value stored inside the database and see what it returns:

    (string-to-date **) => 3804876000

This is what MITO receives, before _inflating_ it; so let's _inflate_ this and see if we get back to where we started:

    (local-time:universal-to-timestamp *) => @2020-07-27T22:00:00.000000Z

What?! That's what I thought too. The first thing I tried was to change STRING-TO-DATE, and force a UTC offset when invoking ENCODE-UNIVERSAL-TIME:

    (defun string-to-date (string &optional len)
      (declare (optimize (speed 3) (safety 3))
               (type (or null simple-string) string)
               (type (or null fixnum) len))
      (when (and string (> (or len (length string)) 9))
        (let ((y (parse-integer string :start 0 :end 4))
              (m (parse-integer string :start 5 :end 7))
              (d (parse-integer string :start 8 :end 10)))
          (unless (or (zerop y) (zerop m) (zerop d))
            (encode-universal-time 0 0 0 d m y 0)))))
    => STRING-TO-DATE
    (string-to-date "2020-07-28") => 3804883200
    (local-time:universal-to-timestamp *) => @2020-07-28T00:00:00.000000Z

This is better! What happened?
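As an aside, the same pitfall is easy to reproduce outside of Lisp; here is a minimal Python sketch (an analogy I am adding, not part of the original investigation): `time.mktime` interprets a broken-down time in the process's local time zone, much like ENCODE-UNIVERSAL-TIME does when no time-zone argument is given, while `calendar.timegm` always assumes UTC, like passing an explicit `0` offset:

```python
import calendar
import os
import time

# Broken-down time for 2020-07-28 00:00:00; the trailing -1 lets mktime
# figure out DST on its own.
tup = (2020, 7, 28, 0, 0, 0, 0, 0, -1)

os.environ["TZ"] = "Europe/Rome"  # CEST in July, i.e. UTC+2
time.tzset()

local_epoch = time.mktime(tup)    # local zone applied, like ENCODE-UNIVERSAL-TIME
utc_epoch = calendar.timegm(tup)  # always UTC, like the explicit 0 offset

print(int(utc_epoch - local_epoch))  # → 7200, the same 2-hour shift seen above
```

The two results differ by exactly the local UTC offset, which is the 2-hour discrepancy observed in the Lisp session.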
Basically, SETF-ing `local-time:*default-timezone*` changes the behavior of the LOCAL-TIME library, but does not change how built-in date-related functions operate; which means that if the runtime thinks it's getting run in CEST, then CEST is the time-zone it will try and apply unless instructed not to. Alright, it kind of makes sense, but how can we fix this? Clearly we can't change CL-MYSQL and hard-code the UTC time-zone in there, as everyone should be free to use the library under whichever time-zone they like. Enter the `TZ=UTC` environment variable.

    $ TZ=UTC sbcl

    (ql:quickload :local-time) => (:LOCAL-TIME)
    (setf local-time:*default-timezone* local-time:+utc-zone+) => #
    (defvar date "2020-07-28") => DATE
    (local-time:parse-timestring date) => @2020-07-28T00:00:00.000000Z
    (defvar *db-date-format* '((:year 4) #\- (:month 2) #\- (:day 2))) => *DB-DATE-FORMAT*
    (local-time:format-timestring nil ** :format *db-date-format*) => "2020-07-28"
    (defun string-to-date (string &optional len)
      (declare (optimize (speed 3) (safety 3))
               (type (or null simple-string) string)
               (type (or null fixnum) len))
      (when (and string (> (or len (length string)) 9))
        (let ((y (parse-integer string :start 0 :end 4))
              (m (parse-integer string :start 5 :end 7))
              (d (parse-integer string :start 8 :end 10)))
          (unless (or (zerop y) (zerop m) (zerop d))
            (encode-universal-time 0 0 0 d m y)))))
    => STRING-TO-DATE
    (string-to-date **) => 3804883200
    (local-time:universal-to-timestamp *) => @2020-07-28T00:00:00.000000Z

_Lovely time-zones..._

# 2020-08-03

Had some trouble updating my brew formulae today:

    $ brew cask upgrade java
    ==> Upgrading 1 outdated package:
    Error: Cask 'java' definition is invalid: Token '{:v1=>"java"}' in header line does not match the file name.

It seems like my java cask is [unbelievably old](

    $ rm -rf "$(brew --prefix)"/Caskroom/java
    $ brew cask install java
    Updating Homebrew...
    ==> Auto-updated Homebrew!
    Updated Homebrew from 36c10edf2 to 865e3871b.
    No changes to formulae.
    ==> Downloading
    Already downloaded: /Users/matteolandi/Library/Caches/Homebrew/downloads/4a16d10e8fdff3dd085a21e2ddd1d0ff91dba234c8c5f6588bff9752c845bebb--openjdk-14.0.2_osx-x64_bin.tar.gz
    ==> Verifying SHA-256 checksum for Cask 'java'.
    ==> Installing Cask java
    ==> Moving Generic Artifact 'jdk-14.0.2.jdk' to '/Library/Java/JavaVirtualMachines/openjdk-14.0.2.jdk'.
    🍺 java was successfully installed!

# 2020-08-09

Today I decided to spend some time to scratch one of the itches I have been having lately, when working with CL (Vim + Vlime): closing all the splits except for the current one, the Lisp REPL (terminal buffer), and Vlime's REPL. When I work on a CL project, I usually arrange my splits as follows:

    +------------------+------------------+
    | file.lisp        | CL-USER>         |
    |                  |                  |
    |                  |                  |
    |                  +------------------+
    |                  | VLIME REPL       |
    |                  |                  |
    |                  |                  |
    +------------------+------------------+

Well, that's when I start working... a few minutes/hours later, it might well look similar to the below:

    +------------+------------+-----------+
    | file.lisp  | file2.lisp | CL-USER>  |
    |            |            |           |
    |            |            |           |
    |            |            |           |
    +------------+------------+-----------+
    | file1.lisp | file3.lisp | VLIMEREPL |
    |            |            |           |
    |            |            |           |
    |            |            |           |
    +------------+------------+-----------+

So, how can I easily and quickly get from this messy state back to where I started off, with a single file plus the terminal and Vlime REPLs? `:only` would unfortunately close all the open splits except for the current one, which would force me to create new splits for the terminal and Vlime REPLs. I ended up creating the following VimL function:

    function! LispCurrentWindowPlusVlimeOnes() abort " {{{
      let focused_win = win_getid()
      for tab in gettabinfo()
        for win in tab.windows
          if win != focused_win
            let bufname = bufname(winbufnr(win))
            if bufname =~? 'vlime'
              continue
            else
              let winnr = win_id2win(win)
              execute winnr . 'close'
            endif
          endif
        endfor
      endfor
    endfunction " }}}

Which can be summarized as follows:

- Get a hold of the focused split's Window-ID
- Iterate all the splits of the current tab
- If the current split is **not** the focused one
- and the loaded buffer does not contain 'vlime' (that will match both my terminal REPL, and Vlime's one)
- then we should close the split

And that's it! Wire those mappings in (I like to override the `:only` ones) and you should be ready to roll:

    nnoremap <c-w>o :call LispCurrentWindowPlusVlimeOnes()<cr>
    nnoremap <c-w>O :call LispCurrentWindowPlusVlimeOnes()<cr>
    nnoremap <c-w><c-o> :call LispCurrentWindowPlusVlimeOnes()<cr>

# 2020-08-10

TIL: DevOps folks have [design patterns too](, like the _ambassador_ pattern, described [here]( and [here](

# 2020-08-11

+ read Structure and Interpretation of Computer Programs (SICP)

# 2020-08-16

* finished reading [Common Lisp Recipes (CLR)](

# 2020-08-18

? book: Living with Complexity

---

The JavaScript That Makes Basecamp Tick, by Javan Makhmali, 2014:

Basecamp (2):

- 40 app servers
- 100 total servers
- 1/2 PB file storage
- 1 TB database
- 1 TB caching
- 70 million requests per day
- 50-60ms Rails response times

The Asset Pipeline:

- Tool for compiling, concatenating, and compressing asset files
- -> erb first, then coffee compiler, then javascript...
- Bunch of _require_ comments that bring in assets from somewhere else, and add them to the pipeline
- All of this will later be replaced by webpacker
- You can also configure an `asset_host`, so the generated ` ...

# 2021-05-02

Splitting routes into packages, with Caveman

If you use [caveman]( to build your Web applications, sooner or later you will realize that it would not let you create an `<app>` instance in one package, and use DEFROUTE in another. Why?
Because the library internally keeps a HASH-TABLE mapping `<app>` instances to the package that was active at the time these instances were created, and since DEFROUTE looks up the instance in this map using *PACKAGE* as key, if the current package is different from the one which was active when the `<app>` was created, then no instance will be found, an error will be signalled, and you will be prompted to do something about it:

    (defmethod initialize-instance :after ((app <app>) &key)
      (setf (gethash *package* *package-app-map*) app))

    (defun find-package-app (package)
      (gethash package *package-app-map*))

Of course I was not the first one bumping into this (see [caveman/issues/112](, but unfortunately the one workaround listed in the thread did not seem to work: it's not a matter of carrying the `<app>` instance around (which might or might not be annoying), but rather of _changing_ *PACKAGE* before invoking DEFROUTE so that it can successfully look up the instance inside *PACKAGE-APP-MAP* at macro expansion time. But clearly you cannot easily change *PACKAGE* before executing DEFROUTE, or otherwise all the symbols used from within the route definition will most likely be unbound at runtime (unless of course such symbols are also accessible from the package in which the `<app>` instance was created). So where do we go from here? Well, if we cannot _override_ *PACKAGE* while calling DEFROUTE from within a different package, then maybe what we can do is _add_ *PACKAGE* to *PACKAGE-APP-MAP* and make sure it links to the `<app>` instance you already created!
    (defun register-routes-package (package)
      (setf (gethash package *package-app-map*) *web*))

Add a call to REGISTER-ROUTES-PACKAGE right before your first call to DEFROUTE and you should be good to go:

    (in-package :cl-user)
    (defpackage chat.routes.sessions
      (:use :cl :caveman2 :chat.models :chat.views :chat.routes))
    (in-package :chat.routes.sessions)

    (register-routes-package *package*)

    (defroute "/" ()
      (if (current-user)
          (redirect-to (default-chatroom))
          (render :new :user)))

Ciao!

# 2021-05-03

Calling MAKE-INSTANCE with a class name which is only known at run-time

In CL, how can you instantiate a class (i.e. call MAKE-INSTANCE) if all you have is a string representing its name (and the package it was defined in)?

    > (intern "chat.channels.turbo::turbo-channel")
    |chat.channels.turbo::turbo-channel|
    > (intern (string-upcase "chat.channels.turbo::turbo-channel"))
    |CHAT.CHANNELS.TURBO::TURBO-CHANNEL|
    > (read-from-string "chat.channels.turbo::turbo-channel")
    CHAT.CHANNELS.TURBO::TURBO-CHANNEL

At first I gave INTERN a go (I am not exactly sure why, but that's where my mind immediately went), but then irrespective of the case of the string I feed into it, it would end up wrapping the symbol in escape characters (i.e. `|`s). READ-FROM-STRING on the other hand seemed to get the job done, though I would like to avoid using it as much as possible, especially if the string I am feeding it with is getting generated _externally_ (e.g. sent over an HTTP request). Well, as it turns out, if you have the strings in `package:symbol` format, then you can parse out the package and symbol names and use FIND-PACKAGE and FIND-SYMBOL to get the symbol you are looking for:

    > (find-symbol (string-upcase "turbo-channel")
                   (find-package (string-upcase "chat.channels.turbo")))
    CHAT.CHANNELS.TURBO::TURBO-CHANNEL
    :INTERNAL
    > (make-instance *)
    #

PS. @zulu.inuoe on Discord was kind enough to post this utility function that he uses:

> this is a thing I use but uhh no guarantees.
it also doesn't handle escape characters like | and \

    (defun parse-entry-name (name)
      "Parses out `name' into a package and symbol name.
    eg.
      symbol => nil, symbol
      package:symbol => package, symbol
      package::symbol => package, symbol"
      (let* ((colon (position #\: name))
             (package-name (when colon (subseq name 0 colon)))
             (name-start
               (if colon
                   (let ((second-colon (position #\: name :start (1+ colon))))
                     (if second-colon
                         (prog1 (1+ second-colon)
                           (when (position #\: name :start (1+ second-colon))
                             (error "invalid entry point name - too many colons: ~A" name)))
                         (1+ colon)))
                   0))
             (symbol-name (subseq name name-start)))
        (values package-name symbol-name)))

    > (multiple-value-bind (package-name symbol-name)
          (parse-entry-name "chat.channels.turbo::turbo-channel")
        (find-symbol (string-upcase symbol-name)
                     (if package-name
                         (find-package (string-upcase package-name))
                         *package*)))
    CHAT.CHANNELS.TURBO::TURBO-CHANNEL
    :INTERNAL

# 2021-05-06

> Is it possible to imagine a future where “concert programmers” are as common a fixture in the world's auditoriums as concert pianists? In this presentation Andrew will be live-coding the generative algorithms that will be producing the music that the audience will be listening to. As Andrew is typing he will also attempt to narrate the journey, discussing the various computational and musical choices made along the way. A must-see for anyone interested in creative computing.

Andrew Sorensen Keynote: "The Concert Programmer" - OSCON 2014 ([link to video](

# 2021-05-16

Configuring lightline.vim to display the quickfix's title

In case anyone is interested in displaying the quickfix title while on quickfix buffers (works with location-list buffers too), all you have to do is load the following into the runtime:

    let g:lightline = {
          \ 'component': {
          \   'filename': '%t%{exists("w:quickfix_title")? " ".w:quickfix_title : ""}'
          \ },
          \ }

Before:

    NORMAL [Quickfix List] | -

And after:

    NORMAL [Quickfix List] :ag --vimgrep --hidden --smart-case --nogroup --nocolor --column stream-name | -

Related link:

---

Lack's Redis session store does not appear to be thread-safe

Steps to reproduce:

- Load the necessary dependencies:

    (ql:quickload '(:clack :lack :lack-session-store-redis :cl-redis))

- Setup a dummy "Hello, World" application:

    (defparameter *app*
      (lambda (env)
        (declare (ignore env))
        '(200 (:content-type "text/plain") ("Hello, World"))))

- Setup the middleware chain, one that uses Redis as session storage:

    (setf *app* (lack:builder
                 (:session :store ( :connection (make-instance 'redis:redis-connection
                                                               :host #(0 0 0 0)
                                                               :port 6379
                                                               :auth "auth")))
                 *app*))

- Start the app:

    > (defvar *handler* (clack:clackup *app*))
    Hunchentoot server is started.
    Listening on

- Run `wrk` against it:

    $ wrk -c 10 -t 4 -d 10
    Running 10s test @
      4 threads and 10 connections

Expected behavior:

- The test runs successfully, and the number of requests per second is reported

Actual behavior:

- Lots of errors are signaled, from unexpected errors:

    Thread: 7; Level: 1
    Redis error: NIL ERR Protocool error: invalid multibulk length
    [Condition of type REDIS:REDIS-ERROR-REPLY]
    Restarts:
     R 0. ABORT - abort thread (#)
    Frames:
     F 0. ((:METHOD REDIS:EXPECT ((EQL :STATUS))) #) [fast-method]
     F 1. (REDIS::SET "session:fdf15f46896842bcd84ef2e87d2321e97419f70e" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
     F 2. ((:METHOD LACK.MIDDLEWARE.SESSION.STORE:STORE-SESSION (LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE T T)) #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "..
     F 3. (LACK.MIDDLEWARE.SESSION::FINALIZE #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "session" :EXPIRES NIL :SERIALIZER #

to connection errors:

    [Condition of type REDIS:REDIS-CONNECTION-ERROR]
    Restarts:
     R 0. RECONNECT - Try to reconnect and repeat action.
     R 1. ABORT - abort thread (#)
    Frames:
     F 0. (REDIS::SET "session:5b391bd4699e86f5b6e757451572c0a3ba825557" "KDpQQ09ERSAxICg6SEFTSC1UQUJMRSAxIDcgMS41IDEuMCBFUVVBTCBOSUwgTklMKSk=")
     F 1. ((:METHOD LACK.MIDDLEWARE.SESSION.STORE:STORE-SESSION (LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE T T)) #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "..
     F 2. (LACK.MIDDLEWARE.SESSION::FINALIZE #S(LACK.MIDDLEWARE.SESSION.STORE.REDIS:REDIS-STORE :HOST #(0 0 0 0) :PORT 6379 :NAMESPACE "session" :EXPIRES NIL :SERIALIZER # # #) [fast-met..

A similar error was mentioned on `cl-redis`'s GitHub space ([cl-redis/issues#19](, to which the author replied that one of the reasons why this might happen is when trying to share the same connection between multiple threads. So I took a look at the implementation of the Redis store, and it looks like it is indeed sharing the same Redis connection between different HTTP requests (i.e. multiple threads). The following seems to fix the problem, though I would like to hear @fukamachi's view on this, before submitting a pull-request:

    diff --git a/src/middleware/session/store/redis.lisp b/src/middleware/session/store/redis.lisp
    index 9e489d9..3ccfaec 100644
    --- a/src/middleware/session/store/redis.lisp
    +++ b/src/middleware/session/store/redis.lisp
    @@ -54,9 +54,8 @@
     (defun redis-connection (store)
       (check-type store redis-store)
       (with-slots (host port auth connection) store
    -    (unless (redis::connection-open-p connection)
    -      (setf connection
    -            (open-connection :host host :port port :auth auth)))
    +    (setf connection
    +          (open-connection :host host :port port :auth auth))
         connection))

     (defmacro with-connection (store &body body)

# 2021-05-21

Example of how to use [AbortController]( to cancel an action when input changes:

    let currentJob = Promise.resolve();
    let currentController;

    function showSearchResults(input) {
      if (currentController) currentController.abort();
      currentController = new AbortController();
      const { signal } = currentController;

      currentJob = currentJob
        .finally(async () => {
          try {
            startSpinner();
            const response = await fetch(getSearchUrl(input), { signal });
            await displayResults(response, { signal });
          } catch (err) {
            if (err.name === 'AbortError') return;
            displayErrorUI(err);
          } finally {
            stopSpinner();
          }
        });
    }

Source: [twitter](

# 2021-05-27

A few days ago Replit announced support for [every programming language](! I had never heard of [NixOS]( (that's what enables them to support "every programming language"), but I decided to give it a go anyway, and after a little bit of a struggle I was able to put together the following Repls:

- [Common Lisp]( -- of course the first one had to be about CL
- [Common Lisp w/ SSL]( -- it turns out you have to mess with `LD_LIBRARY_PATH` to get SSL to work with `:cl+ssl`
- [Game of Life in Common Lisp]( -- why not...
- [10 PRINT.BAS]( -- ever heard of the famous one-line Commodore 64 BASIC program to generate mazes? They even wrote a whole [book]( about it!

This is pretty cool, well done Replit!

---

This is the famous one-line Commodore 64 BASIC program to generate random mazes (the Commodore 64 uses the [PETSCII]( character set, not ASCII):

    10 PRINT CHR$(205.5+RND(1)); : GOTO 10

While this is a Common Lisp equivalent:

    (loop (princ (aref "/\\" (random 2))))

If the above runs a little bit too fast in the REPL (it most likely will, these days), then you can try this alternative:

    (loop
      (finish-output)
      (princ (aref "/\\" (random 2)))
      (sleep (/ 1 30)))

Happy retro-hacking!

# 2021-05-31

TIL: Untracked files are stored in the third parent of a stash commit.

> Untracked files are stored in the third parent of a stash commit. (This isn't actually documented, but is pretty obvious from The commit which introduced the -u feature, 787513..., and the way the rest of the documentation for git-stash phrases things...
or just by doing git log --graph stash@{0})
>
> You can view just the "untracked" portion of the stash via:
>
>     git show stash@{0}^3
>
> or, just the "untracked" tree itself, via:
>
>     git show stash@{0}^3:
>
> or, a particular "untracked" file in the tree, via:
>
>     git show stash@{0}^3:

Source: [stackoverflow](,as%20part%20of%20the%20diff)

# 2021-06-02

Web scraping for fun and profit^W for a registration to get my COVID-19 jab

Rumor says that in a few days everyone in Tuscany, irrespective of their age, could finally register for their first shot of the vaccine; that means "click day", i.e. continuously polling the registration website until your registration category is open. I am sure a little bird will eventually announce at what time of day the service will open, so there is no need to stay on the lookout for the whole day, but I figured I could use this as an excuse to do a little bit of Web scraping in Common Lisp, so here it goes. Dependencies first:

    (ql:quickload "drakma")     ; to fire HTTP requests
    (ql:quickload "local-time") ; to keep track of when the scraping last happened
    (ql:quickload "st-json")    ; you guessed it
    (ql:quickload "uiop")       ; to read env variables

The next step is writing our MAIN loop:

- Wait for categories to be activated: new categories can be added, or existing ones can be activated
- Send an SMS to notify me about newly activated categories (so I can pop the website open, and try to register)
- Repeat

    (defun main ()
      (loop
        (with-simple-restart (continue "Ignore the error")
          (wait-for-active-categories)
          (send-sms))))

WAIT-FOR-ACTIVE-CATEGORIES is implemented as a loop form that:

- Checks if categories have been activated, in which case the list of recently activated ones is returned
- Otherwise it saves the current timestamp inside *LAST-RUN-AT* and then goes to sleep for a little while

    (defvar *last-run-at* nil
      "When the categories scraper last ran")

    (defun wait-for-active-categories ()
      (loop :when (categories-activated-p) :return it
            :do (progn
                  (setf *last-run-at* (local-time:now))
                  (random-sleep))))

Let's now dig into the category processing part. First we define a special variable, *CATEGORIES-SNAPSHOTS*, to keep track of all the past snapshots of categories we scraped so far; next we fetch the categories from the remote website, see if any of them got activated since the last scraping, and last we push the latest snapshot into *CATEGORIES-SNAPSHOTS* and do some _harvesting_ to make sure we don't run out of memory because of all the scraping done so far:

    (defvar *categories-snapshots* nil
      "List of categories snapshots -- handy to see what changed over time.")

    (defun categories-activated-p ()
      (let ((categories (fetch-categories)))
        (unless *categories-snapshots*
          (push categories *categories-snapshots*))
        (prog1 (find-recently-activated categories)
          (push categories *categories-snapshots*)
          (harvest-categories-snapshots))))

Before fetching the list of categories we are going to define a few utilities: a condition signaling when HTML is returned instead of JSON (usually a sign that the current session expired):

    (define-condition html-not-json-content-type () ())

The URLs for the registration page (this will be used later, when generating the SMS) and for the endpoint returning the list of categories:

    (defparameter *prenotavaccino-home-url* "")
    (defparameter *prenotavaccino-categories-url* "")

A cookie jar to create a _persistent_ connection with the website being scraped:

    (defvar *cookie-jar* (make-instance 'drakma:cookie-jar)
      "Cookie jar to hold session cookies when interacting with Prenotavaccino")

With all this defined, fetching the list of categories becomes as easy as firing a request to the endpoint and confirming that the response actually contains JSON and not HTML: if JSON, parse it and return it; otherwise, simply try again.
The idea behind "trying again" is that when the response contains HTML, it will most likely contain a redirect to the home page with a newly created session cookie, and since the same cookie jar is getting used, the simple fact that we processed that response should be enough to make the next API call succeed:

    (defun parse-json (response)
      (let* ((string (flexi-streams:octets-to-string response)))
        (st-json:read-json string)))

    (defun fetch-categories ()
      (flet ((fetch-it ()
               (multiple-value-bind (response status headers)
                   (drakma:http-request *prenotavaccino-categories-url*
                                        :cookie-jar *cookie-jar*)
                 (declare (ignore status))
                 (let ((content-type (cdr (assoc :content-type headers))))
                   (if (equal content-type "text/html")
                       (error 'html-not-json-content-type)
                       response))))
             (parse-it (response)
               (st-json:getjso "categories" (parse-json response))))
        (parse-it (handler-case (fetch-it)
                    (html-not-json-content-type (c)
                      (declare (ignore c))
                      (format t "~&Received HTML instead of JSON. Retrying assuming the previous session expired...")
                      (fetch-it))))))

Note: we retry only once, so if two consecutive responses contain HTML instead of JSON, the debugger will pop open.
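The retry-exactly-once flow above can be sketched in miniature; here is a hypothetical Python rendition I am adding for illustration (the exception class and the two injected functions are made-up stand-ins for the Lisp condition and the FETCH-IT/PARSE-IT local functions, not part of the original code):

```python
class HtmlNotJsonError(Exception):
    """Raised when the endpoint answers with HTML instead of JSON (expired session)."""

def fetch_with_one_retry(fetch_it, parse_it):
    # First attempt; if it fails because the session expired, the failed
    # response has already refreshed the session cookie, so retry exactly once.
    try:
        response = fetch_it()
    except HtmlNotJsonError:
        response = fetch_it()  # a second failure propagates to the caller
    return parse_it(response)

# Usage sketch: a fake fetcher that fails on the first call only.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    if calls["n"] == 1:
        raise HtmlNotJsonError()
    return '{"categories": []}'

print(fetch_with_one_retry(fake_fetch, lambda r: r))  # the retried call succeeds
```

The key design point mirrored here is that the retry is bounded: a second consecutive failure is not swallowed but propagated, which in the Lisp version is what lands you in the debugger.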
Finding all the recently activated categories consists of the following steps:

- for each category just scraped
- compare it with its counterpart in the previous snapshot of categories
- a category is considered _recently_ activated if it is active and either it was not present in the second-to-last snapshot, or it was present but was inactive or had a different title (yes, sometimes the title of a category is changed to advertise that people of a different age can now sign up)

Anyway, all of the above translates to the following:

    (defun category-name (cat) (st-json:getjso "idCategory" cat))
    (defun category-title (cat) (st-json:getjso "title" cat))
    (defun category-active-p (cat)
      (and (eql (st-json:getjso "active" cat) :true)
           (eql (st-json:getjso "forceDisabled" cat) :false)))

    (defun find-recently-activated (categories)
      (flet ((category-by-name (name categories)
               (find name categories :key #'category-name :test #'equal)))
        (let ((prev-categories (first *categories-snapshots*)))
          (loop :for cat :in categories
                :for cat-prev = (category-by-name (category-name cat) prev-categories)
                :when (and (category-active-p cat)
                           (or (not cat-prev)
                               (not (category-active-p cat-prev))
                               (not (equal (category-title cat-prev)
                                           (category-title cat)))))
                :collect cat))))

Harvesting is pretty easy: define a maximum length threshold for the list of snapshots and when the list grows bigger than that, start popping items from the back of the list as new values are pushed to the front:

    (defparameter *categories-snapshots-max-length* 50
      "Maximum number of categories snapshots to keep around")

    (defun harvest-categories-snapshots ()
      (when (> (length *categories-snapshots*) *categories-snapshots-max-length*)
        (setf *categories-snapshots*
              (subseq *categories-snapshots* 0 *categories-snapshots-max-length*))))

Note: I could have removed "consecutive duplicates" from the list, or prevented these from getting stored in the list to begin with, but I am going to leave this as an exercise for the reader
;-)

Two pieces of the puzzle are still missing: RANDOM-SLEEP, and SEND-SMS. For RANDOM-SLEEP we decide the minimum number of seconds that the scraper should sleep for, and then add some _randomness_ to it (as if the remote site cared whether we pretended to act like a _human_, but let's do it anyway):

    (defparameter *sleep-seconds-min* 60
      "Minimum number of seconds the scraper will sleep for")
    (defparameter *sleep-seconds-jitter* 5
      "Adds a pinch of randomness -- see RANDOM-SLEEP")

    (defun random-sleep ()
      (sleep (+ *sleep-seconds-min* (random *sleep-seconds-jitter*))))

For the SMS part instead, we are going to use Twilio; first we define all the parameters required to send SMSs:

- Account SID
- Auth token
- API URL
- From number
- To numbers (space separated values)

    (defun getenv-or-readline (name)
      "Gets the value of the environment variable, or asks the user to provide a value for it."
      (or (uiop:getenv name)
          (progn
            (format *query-io* "~a=" name)
            (force-output *query-io*)
            (read-line *query-io*))))

    (defvar *twilio-account-sid* (getenv-or-readline "TWILIO_ACCOUNT_SID"))
    (defvar *twilio-auth-token* (getenv-or-readline "TWILIO_AUTH_TOKEN"))
    (defvar *twilio-messages-api-url*
      (format nil "" *twilio-account-sid*))
    (defvar *twilio-from* (getenv-or-readline "TWILIO_FROM_NUMBER"))
    (defvar *twilio-to-list*
      (split-sequence:split-sequence #\Space (getenv-or-readline "TWILIO_TO_NUMBERS")))

Next we assemble the HTTP request, fire it, and do some error checking so that an error is signaled in case we failed to deliver the message to any of the _to_ numbers specified inside *TWILIO-TO-LIST*:

    (defun send-sms (&optional (body (sms-body)))
      (flet ((send-it (to)
               (drakma:http-request *twilio-messages-api-url*
                                    :method :post
                                    :basic-authorization `(,*twilio-account-sid* ,*twilio-auth-token*)
                                    :parameters `(("Body" . ,body)
                                                  ("From" . ,*twilio-from*)
                                                  ("To" . ,to)))))
        (let (failed)
          (dolist (to *twilio-to-list*)
            (let* ((jso (parse-json (send-it to)))
                   (error-code (st-json:getjso "error_code" jso)))
              (unless (eql error-code :null)
                (push jso failed))))
          (if failed
              (error "Failed to deliver **all** SMSs: ~a" failed)
              t))))

Last but not least, the SMS body; we want to include in the message which categories got activated since the last time the scraper ran, and to do so we:

- Take the latest snapshot from *CATEGORIES-SNAPSHOTS*
- Temporarily set *CATEGORIES-SNAPSHOTS* to its CDR, as if the latest snapshot hadn't been recorded yet
- Call FIND-RECENTLY-ACTIVATED, effectively pretending that we were inside CATEGORIES-ACTIVATED-P and were trying to understand if anything got recently activated (dynamic variables are fun, aren't they?!)

    (defun sms-body ()
      (let* ((categories (first *categories-snapshots*))
             (*categories-snapshots* (cdr *categories-snapshots*)))
        (format nil "Ora attivi: ~{~A~^, ~} -- ~a"
                (mapcar #'category-title (find-recently-activated categories))
                *prenotavaccino-home-url*)))

Note: "Ora attivi:" is Italian for "Now active:"

And that's it: give `(main)` a go in the REPL, wait for a while, and eventually you should receive an SMS informing you which category got activated!

    > (find "LastMinute" (first *categories-snapshots*)
            :key #'category-name :test #'string=)
    #S(ST-JSON:JSO
       :ALIST (("idCategory" . "LastMinute")
               ("active" . :TRUE)
               ("forceDisabled" . :FALSE)
               ("start" . "2021-06-02")
               ("end" . "2021-07-31")
               ("title" . "Last Minute")
               ("subtitle" . "ATTENZIONE. Prenotazione degli slot rimasti disponibili nelle prossime 24H.")
               ("message" . "Il servizio aprirà alle ore 19:00 di ogni giorno")
               ("updateMaxMinYear" . :TRUE)))
    > (sms-body)
    "Ora attivi: Last Minute --"

Happy click day, Italians!

# 2021-06-07

* finished reading [Object-Oriented Programming in COMMON LISP: A Programmer's Guide to CLOS](

# 2021-06-11

? That will never work: Netflix
? No Rules Rules: Netflix and the Culture of Reinvention
? book: Designing Data-Intensive Applications
? book: Microservices Patterns
? book: Grokking Algorithms

# 2021-06-17

* Opened [cl-cookbook/pull/385]( Don't wrap SWANK:CREATE-SERVER inside BT:MAKE-THREAD

# 2021-06-19

Remote live coding a Clack application with Swank and Ngrok

Today I would like to show you how to create a very simple CL Web application with Clack, and how to use Swank and [Ngrok]( to enable remote [live coding]( Here is what we are going to wind up with:

- Clack application running on `localhost:5000` -- i.e. the Web application
- Swank server running on `localhost:4006` -- i.e. the "live coding" enabler
- Ngrok tunnel to remotely expose `localhost:4006` (in case you do not have SSH access into the server the app is running on)

As usual, let's start off by naming all the systems we are depending on:

- `clack`, Web application environment for CL
- `ngrok`, CL wrapper for installing and running Ngrok
- `swank`, what enables remote "live coding"
- `uiop`, utilities that abstract over discrepancies between implementations

Now, `ngrok` has not been published to Quicklisp yet, so you will have to manually download it first:

    $ git clone
    $ sbcl --noinform
    > ...

...and then make sure it is QL:QUICKLOAD-able:

    (pushnew '(merge-pathnames (parse-namestring "ngrok/") *default-pathname-defaults*)
             asdf:*central-registry*)

Now you can go ahead and load all the required dependencies:

    (ql:quickload '("clack" "ngrok" "swank" "uiop"))

Let's take a look at the MAIN function.
It starts the Web application, the Swank server, and Ngrok:

    (defun main ()
      (clack)
      (swank)
      (ngrok))

For the Web application:

- We define a special variable, *HANDLER*, to hold the Clack Web application handler
- We start the app, bind it on `localhost:5000`, and upon receiving an incoming request we delegate to SRV
- Inside SRV, we delegate to APP (more on this later)
- Inside APP, we finally process the request and return a dummy message back to the client

    (defvar *handler* nil)

    (defun clack ()
      (setf *handler* (clack:clackup #'srv :address "localhost" :port 5000)))

    (defun srv (env) (app env))

    (defun app (env)
      (declare (ignore env))
      '(200 (:content-type "text/plain") ("Hello, Clack!")))

You might be wondering: "why not invoke CLACK:CLACKUP with #'APP? Why the middle-man, SRV?" Well, that's because Clack would dispatch to whichever function was passed in at the time CLACK:CLACKUP was called, and because of that, any subsequent re-definitions of that function would not be picked up by the framework. The solution? Add a level of indirection, and as long as you only ever change APP, and never SRV, you should be all right! Let's now take a look at how to set up a Swank server. First we ask the user for a secret with which to [secure]( the server (i.e. it will accept incoming connections only from those clients that know the secret), then we write the secret onto ~/.slime-secret (yes, it's going to overwrite the existing file!), and last we actually start the server:

    (defun getenv-or-readline (name)
      "Gets the value of the environment variable, or asks the user to provide a value for it."
(or (uiop:getenv name) (progn (format *query-io* "~a=" name) (force-output *query-io*) (read-line *query-io*)))) (defvar *slime-secret* (getenv-or-readline "SLIME_SECRET")) (defparameter *swank-port* 4006) (defun swank () (write-slime-secret) (swank:create-server :port *swank-port* :dont-close t)) (defun write-slime-secret () (with-open-file (stream "~/.slime-secret" :direction :output :if-exists :supersede) (write-string *slime-secret* stream))) Last but not least, Ngrok: we read the authentication token, so Ngrok knows the account the tunnel needs to be associated with, and then we finally start the daemon by specifying the port the Swank server is listening to (as that's what we want Ngrok to create a tunnel for) and the authentication token: (defvar *ngrok-auth-token* (getenv-or-readline "NGROK_AUTH_TOKEN")) (defun ngrok () (ngrok:start *swank-port* :auth-token *ngrok-auth-token*)) Give the MAIN function a go, and you should be presented with a similar output: > (main) Hunchentoot server is started. Listening on localhost:5000. ;; Swank started at port: 4006. [10:13:43] ngrok setup.lisp (install-ngrok) - Ngrok already installed, changing authtoken [10:13:44] ngrok setup.lisp (start) - Starting Ngrok on TCP port 4006 [10:13:44] ngrok setup.lisp (start) - Tunnnel established! Connect to the tcp:// "tcp://" Let's test the Web server: $ curl https://localhost:5000 Hello, Clack! It's working. Now open Vim/Emacs, connect to the Swank server (i.e. host: ``, port: `12321`), and change the Web handler to return something different: (defun app (env) (declare (ignore env)) '(200 (:content-type "text/plain") ("Hello, Matteo!"))) Hit the Web server again, and this time it should return a different message: $ curl https://localhost:5000 Hello, Matteo! Note how we did not have to bounce the application; all we had to do was re-define the request handler...that's it! Happy remote live coding! PS. 
The above is also available on [GitHub](, and on [Replit](

# 2021-06-28

* finished reading [10 PRINT CHR$(205.5+RND(1)); : GOTO 10](

# 2021-08-23

* finished reading [The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture](

# 2021-08-27

> The power of Lisp is its own worst enemy.

[The Lisp Curse](

# 2021-08-30

* finished reading [Grokking Simplicity](

---

Grokking Simplicity notes

This page of the book says it all:

> We have learned the skills of professionals
>
> Since we're at the end of the book, let's look back at how far we've come and do a high-level listing of the skills we've learned. These are the skills of professional functional programmers. They have been chosen for their power and depth.
>
> Part 1: Actions, Calculations, and Data:
>
> - Identifying the most problematic parts of your code by distinguishing actions, calculations, and data
> - Making your code more reusable and testable by extracting calculations from actions
> - Improving the design of actions by replacing implicit inputs and outputs with explicit ones
> - Implementing immutability to make reading data into a calculation
> - Organizing and improving code with stratified design
>
> Part 2: First-class abstractions
>
> - Making syntactic operations first-class so that they can be abstracted in code
> - Reasoning at a higher level using functional iteration and other functional tools
> - Chaining functional tools into data transformation pipelines
> - Understanding distributed and concurrent systems by timeline diagrams
> - Manipulating timelines to eliminate bugs
> - Mutating state safely with higher-order functions
> - Using reactive architecture to reduce coupling between cause and effects
> - Applying the onion architecture (Interaction, Domain, Language) to design services that interact with the world

And here is the list of primitives presented in the book to work with timelines.
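Before getting to the timeline primitives, here is a tiny sketch of the very first skill in that list -- extracting calculations from actions.  This is my own illustration, not the book's code; all the names (`calc_total`, `update_total_dom`, `fake_dom`) are hypothetical:

```javascript
// Calculation: pure, no I/O, trivially testable and reusable
function calc_total(cart) {
  var total = 0;
  for (var i = 0; i < cart.length; i++)
    total += cart[i].price;
  return total;
}

// Action: the only piece that touches the outside world (a fake DOM here)
function update_total_dom(cart, dom) {
  dom.total = calc_total(cart);
}

var fake_dom = { total: 0 };
update_total_dom([{ name: "shoes", price: 10 }, { name: "socks", price: 2 }], fake_dom);
console.log(fake_dom.total); // 12
```

The point of the refactoring is that `calc_total` can now be tested without any DOM at all, while the action shrinks to a one-liner.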
Serialize tasks with `Queue`:

    function Queue(worker) {
      var queue_items = [];
      var working = false;

      function runNext() {
        if(working) return;
        if(queue_items.length === 0) return;
        working = true;
        var item = queue_items.shift();
        worker(item.data, function(val) {
          working = false;
          setTimeout(item.callback, 0, val);
          runNext();
        });
      }

      return function(data, callback) {
        queue_items.push({
          data: data,
          callback: callback || function(){}
        });
        setTimeout(runNext, 0);
      };
    }

Example:

    function calc_cart_worker(cart, done) {
      calc_cart_total(cart, function(total) {
        update_total_dom(total);
        done(total);
      });
    }

    var update_total_queue = Queue(calc_cart_worker);

Serialize tasks and skip duplicate work with `DroppingQueue`:

    function DroppingQueue(max, worker) {
      var queue_items = [];
      var working = false;

      function runNext() {
        if(working) return;
        if(queue_items.length === 0) return;
        working = true;
        var item = queue_items.shift();
        worker(item.data, function(val) {
          working = false;
          setTimeout(item.callback, 0, val);
          runNext();
        });
      }

      return function(data, callback) {
        queue_items.push({
          data: data,
          callback: callback || function(){}
        });
        while(queue_items.length > max)
          queue_items.shift();
        setTimeout(runNext, 0);
      };
    }

Example:

    function calc_cart_worker(cart, done) {
      calc_cart_total(cart, function(total) {
        update_total_dom(total);
        done(total);
      });
    }

    var update_total_queue = DroppingQueue(1, calc_cart_worker);

Synchronize (i.e. cut) multiple timelines with `Cut`:

    function Cut(num, callback) {
      var num_finished = 0;
      return function() {
        num_finished += 1;
        if(num_finished === num)
          callback();
      };
    }

Example:

    var done = Cut(3, function() {
      console.log("3 timelines are finished");
    });

    done();
    done();
    done(); // only this one logs

Make sure a function is called only once with `JustOnce`:

    function JustOnce(action) {
      var alreadyCalled = false;
      return function(a, b, c) {
        if(alreadyCalled) return;
        alreadyCalled = true;
        return action(a, b, c);
      };
    }

Example:

    function sendAddToCartText(number) {
      sendTextAjax(number, "Thanks for adding something to your cart. Reply if you have any questions!");
    }

    var sendAddToCartTextOnce = JustOnce(sendAddToCartText);

    sendAddToCartTextOnce("555-555-5555-55"); // only this one sends the text
    sendAddToCartTextOnce("555-555-5555-55");
    sendAddToCartTextOnce("555-555-5555-55");
    sendAddToCartTextOnce("555-555-5555-55");

Encapsulate state inside _reactive_ `ValueCell`s:

    function ValueCell(initialValue) {
      var currentValue = initialValue;
      var watchers = [];
      return {
        val: function() { return currentValue; },
        update: function(f) {
          var oldValue = currentValue;
          var newValue = f(oldValue);
          if(oldValue !== newValue) {
            currentValue = newValue;
            forEach(watchers, function(watcher) {
              watcher(newValue);
            });
          }
        },
        addWatcher: function(f) {
          watchers.push(f);
        }
      };
    }

Example:

    var shopping_cart = ValueCell({});

    function add_item_to_cart(name, price) {
      var item = make_cart_item(name, price);
      shopping_cart.update(function(cart) {
        return add_item(cart, item);
      });
      var total = calc_total(shopping_cart.val());
      set_cart_total_dom(total);
      update_tax_dom(total);
    }

    shopping_cart.addWatcher(update_shipping_icons);

Derived values can instead be implemented with `FormulaCell`s:

    function FormulaCell(upstreamCell, f) {
      var myCell = ValueCell(f(upstreamCell.val()));
      upstreamCell.addWatcher(function(newUpstreamValue) {
        myCell.update(function(currentValue) {
          return f(newUpstreamValue);
        });
      });
      return {
        val: myCell.val,
        addWatcher: myCell.addWatcher
      };
    }

Example:

    var shopping_cart = ValueCell({});
    var cart_total = FormulaCell(shopping_cart, calc_total);

    function add_item_to_cart(name, price) {
      var item = make_cart_item(name, price);
      shopping_cart.update(function(cart) {
        return add_item(cart, item);
      });
    }

    shopping_cart.addWatcher(update_shipping_icons);
    cart_total.addWatcher(set_cart_total_dom);
    cart_total.addWatcher(update_tax_dom);

---

Location sharing alternative to the Google built-in one @idea

- It would be fun
- You would not have to share data with Google (why would anyone share it with you instead? My friends and family probably would!)

---

Streamlining answering the donors questionnaire @idea

"Which cities have you visited during the last 4 weeks?"  "Which countries have you visited over the last 6 months?"

These are some of the questions donors get asked over and over again before they are cleared for donation, and they always catch me off guard.  Can this information be automatically pulled from Google?  What about my wife's data: would I have access to it?  During COVID, most of the time it's not just about the places you have visited, but also the places the people you live with have...

# 2021-09-06

* `gem install`-d all the things again.  I guess the latest `brew upgrade` that I ran last week might have broken a few things (and while at it, I also `rvm install --default ruby-3.0.0`)

? implement rainbow-blocks for Vim (see: A simple hack would be to define 5/6 syntax rules for `(...)` blocks and configure them so that they can only exist within an outer block, i.e. block1; block2 only inside block1; block3 only inside block2.

# 2021-09-08

Notes about reverse engineering Basecamp's Recording pattern

Before we begin: I have never used RoR or programmed in Ruby, so apologies if some of the things mentioned are inaccurate or completely wrong; I simply got interested in the pattern, and tried my best to understand how it could have been implemented.
So bear with me, and of course feel free to suggest edits.

Main concepts:

- Buckets are collections of records, i.e. _recordings_, having different types, e.g. `Todo`, `Todoset`
- Recordings are just pointers, pointing to the actual data record, i.e. the _recordables_
- Recordables are immutable records, i.e. every time something changes, a new _recordable_ record is created and the relevant recording is updated to point to it
- Events are created to link together 2 recordables (the current version and the previous one), the recording, the bucket, and the author of the change
  - This enables data versioning
  - This enables data auditing (at the recording level, at the bucket level, and at the user level)

It all started with this [tweet](

> Probably the single most powerful pattern in BC3 ❤️. Really should write it up at one point.

This is what a Basecamp URL looks like: `/1234567890/projects/12345/todos/67890`

In it:

- `1234567890` is the account ID -- it identifies a company's instance, it represents a tenant
- `12345` is the bucket ID, pointing to a record whose `bucketable_type` is `Project`
- `67890` is the recording ID, pointing to a record whose `recordable_type` is `Todo`

A bucket is not a project, but links to one (the project is the _bucketable_ entity):

    class Bucket < ActiveRecord::Base
      ...
      belongs_to :account

      delegated_type :bucketable, types: %w[ Project ]
      # this adds bucketable_id, bucketable_type to the schema
      # ... or `belongs_to :bucketable`

      has_many :recordings, dependent: :destroy
      ...
    end

A recording is not a todo, but links to one (the todo is the _recordable_ entity):

    class Recording < ActiveRecord::Base
      ...
      belongs_to :bucket, touch: true
      belongs_to :creator, class_name: 'Person', default: -> { Current.person }
      ...
      delegated_type :recordable, types: %w[ Todo Todolist Todoset Dock ]
      # this adds recordable_id and recordable_type to the schema

      delegate :account, to: :bucket
    end

    module Recordable
      extend ActiveSupport::Concern

      included do
        has_many :recordings, as: :recordable
        # same as specified inside `Recording` as `delegated_type`
      end
    end

Recordables are immutable, and do not usually have all the created-at, updated-at metadata... recordings do!

Recordings have parents, and children; it almost becomes a graph database, with a tree structure:

- recording.recordable_type: Todo
- recording.parent.recordable_type: Todolist
- recording.parent.parent.recordable_type: Todoset
- recording.parent.parent.parent.recordable_type: Dock
  - Chat::Transcript, Message::Board, Todoset, Schedule, Questionnaire, Vault

The above, i.e. the tree structure, is most likely implemented via the `Tree` concern:

    module Recording::Tree
      extend ActiveSupport::Concern

      included do
        belongs_to :parent, class_name: 'Recording', optional: true
        has_many :children, class_name: 'Recording', foreign_key: :parent_id
      end
    end

Note: I don't assume the above to be correct, but it should get the job done; I question whether it would make sense to add a _type_ field on this parent/child relation, but maybe it's not strictly required (i.e. you always know that a todo's parent is a todolist).

As stated above, recordings are pointers, almost like symlinks: when you update a todo, you don't actually update the record; you create a new one and update the recording to point to it.  Since URLs are all keyed-up accordingly (i.e. they use recording IDs, not recordable ones), the URL does not change, but the actual todo the recording points to does.

Controllers then look something like this:

    class TodosController < ApplicationController
      def create
        @recording = @bucket.record new_todo
      end

      def update
        @recording.update! recordable: new_todo
      end

      private

      def new_todo
        params.require(:todo).permit(:content, :description, :starts_on, :due_on)
      end
    end

And this is what `Bucket.record` looks like (note: `recordable.save!` is my guess at the statement elided at the start of the transaction):

    class Bucket < ActiveRecord::Base
      ...
      def record(recordable, children: nil, parent: nil, status: :active, creator: Current.person, **options)
        transaction do
          recordable.save!
          options.merge!(recordable: recordable, parent: parent, status: status, creator: creator)
          recordings.create!(options).tap do |recording|
            Array(children).each do |child|
              record child, parent: recording, status: status, creator: creator
            end
          end
        end
      end
    end

This enables you to do things like "Copy message to a new project" way more efficiently: previously you had to set up a job that literally copied the message, the comments, and the events onto the new project; now, all you have to do is set up a new recording pointing to the existing _immutable_ recordable.

For example:

- A project has messages -- messages are recordables
- Messages have comments -- comments are recordables
- When you copy a message from one project to another, you will have to create a recording not just for the message to be copied, but for all its existing comments as well (that's what `children` inside `Bucket.record` is for)

Note: `children` inside `Bucket.record` is expected to be recordables, not recordings; this means we will have to pass in all the child recordables, e.g. ``.

Note: `Bucket.record` (at least the version above, [shown in the presentation]( does not seem to recurse, which means you can only create recordings for the parent entity and its direct children, i.e. messages and comments, but not `Todoset`s, `Todolist`s _and_ `Todo`s.

Note: is this more efficient, really?  Even though you will not have to copy the immutable records (i.e. the recordables), you will still have to recreate the same recordings tree, right?
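To convince myself of the pointer semantics, here is a tiny JavaScript sketch -- entirely my own modeling and naming, not Basecamp's actual code -- of copying a recording without copying its recordable:

```javascript
var nextId = 1;
var recordables = {};  // immutable data rows, keyed by id
var recordings = {};   // pointer rows: { id, bucketId, recordableId, parentId }

function createRecordable(data) {
  var id = nextId++;
  recordables[id] = Object.freeze(data);  // recordables never change
  return id;
}

function record(bucketId, recordableId, parentId) {
  var id = nextId++;
  recordings[id] = { id: id, bucketId: bucketId, recordableId: recordableId, parentId: parentId || null };
  return recordings[id];
}

// A message with one comment, recorded in bucket 1...
var messageId = createRecordable({ type: "Message", body: "hello" });
var commentId = createRecordable({ type: "Comment", body: "nice!" });
var rec1 = record(1, messageId);
record(1, commentId, rec1.id);

// ..."copied" into bucket 2: a new pointer only, zero data duplicated
var rec2 = record(2, messageId);
console.log(rec1.recordableId === rec2.recordableId); // true
```

Even in this toy version, the copy only adds rows to the (cheap) pointer table, which is the efficiency argument the pattern makes.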
After you copy a recording, you should end up with something like this (`.recordable_id` here is my guess at the elided accessor):

    >> Recording.find(111111111).recordable_id
    => 33333333333333
    >> Recording.find(222222222).recordable_id
    => 33333333333333

Again: two pointers, i.e. recordings, pointing to the same immutable record, i.e. the recordable.

Note: a recordable has many recordings, not one; this is because the same recordable object can be pointed to by two different recordings.

This Recording pattern works in tandem with `Event`s:

- They link to the newer version of the recordable, and the previous one
- They belong to a `Recording`: this way you can easily see all the times a recording was updated
- They belong to a `Bucket`: this way you can easily see all the times a bucket was updated
- They belong to a `Person`: this way you can easily see all the changes done by one user
- They have one `Detail`, i.e. a hash containing some additional metadata linked to the current event

    class Event < ActiveRecord::Base
      belongs_to :recording, required: false
      belongs_to :bucket
      belongs_to :creator, class_name: 'Person'

      has_one :detail, dependent: :delete, required: true

      include Notifiable...Requested
      ...
    end

and with the `Eventable` concern:

    module Recording::Eventable
      extend ActiveSupport::Concern
      ...
      included do
        has_many :events, dependent: :destroy

        after_create :track_created
        after_update :track_updated
        after_update_commit :forget_adoption_tracking, :forget_events
      end

      def track_event(action, recordable_previous: nil, **particulars)
        Event.create! \
          recording: self,
          recordable: recordable,
          recordable_previous: recordable_previous,
          bucket: bucket,
          creator: Current.person,
          action: action,
          detail:
      end
      ...
    end

`track_event` here is the generic method, but more specific ones exist as well, for more common use cases, so we don't have to manually plumb arguments.
We track an event every single time an eventable is created or updated:

    after_create :track_created
    after_update :track_updated

The private method `track_created` simply delegates to `track_event`:

    private

      def track_created
        track_event :created, status_was: status
      end

A recordable has many events; again, the same recordable could be referenced by multiple recordings, and since events are created when recordings are created, that's how you end up with multiple events.

OK, great, nice, but what schema would enable such a pattern?  Again, not 100% accurate, but hopefully close enough:

- Buckets (id, bucketable_type, bucketable_id, ...)
- Recordings (id, bucket_id, recordable_type, recordable_id, parent_id, ...)
- Events (id, bucket_id, recording_id, recordable_id, previous_recordable_id, user_id, ...)
  - Note: recordables are _untyped_ here, but the type information is just one join-table away, i.e. it's available inside the linked recording
- Projects (id, ...)
- Messages (id, ...)

And this seems to be it, really.

Note: this pattern heavily relies on Rails delegated types (see [rails/rails/pull#39341]( to automatically traverse the recording table and get to the recordable entry.  My first thought was: how will this perform?  How is the extra join required to fetch any linked entity going to affect performance?  If it's working for Basecamp, hopefully it's working decently enough for other applications as well, but I guess it also depends on how much the different entities are linked together.

Note: in the end, this is not that different from 'time-based versioning' (read: [Keeping track of graph changes using temporal versioning](, with recordings being the _identity_ nodes, and recordables being the _state_ objects.
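To tie versioning and auditing together, here is one last hedged JavaScript sketch -- my own modeling and names, not Basecamp's implementation -- of what updating a recording does under this schema: swap the pointer to a brand-new immutable recordable, and log an Event linking the new and previous versions:

```javascript
var events = [];
var nextRecordableId = 2;

function updateRecording(recording, recordables, newData, creator) {
  var previousId = recording.recordableId;
  var id = nextRecordableId++;
  recordables[id] = Object.freeze(newData);  // new immutable version
  recording.recordableId = id;               // the pointer moves...
  events.push({                              // ...and the change is audited
    recordingId: recording.id,
    recordableId: id,
    previousRecordableId: previousId,
    creator: creator,
    action: "updated"
  });
}

var recordables = { 1: Object.freeze({ content: "Buy milk" }) };
var recording = { id: 10, bucketId: 1, recordableId: 1 };

updateRecording(recording, recordables, { content: "Buy oat milk" }, "matteo");
console.log(recording.recordableId);          // 2
console.log(events[0].previousRecordableId);  // 1
```

Nothing is ever destroyed: the old recordable (id 1) is still around, and the event row is exactly the versioning/auditing trail described above.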
Some references:

- [Basecamp 3 Unraveled]( -- where an overview of the Bucket/Recording pattern is given
- [On Writing Software Well #3: Using globals when the price is right]( -- where the `Eventable` concern and the `Event` model are shown
- [On Writing Software Well #5: Testing without test damage or excessive isolation]( -- where the `Recording` model is shown
- [GoRails: Recording pattern (Basecamp 3)](

# 2021-09-10

? book: The Dream Machine

# 2021-09-13

* finished reading [Tools for Thought: The History and Future of Mind-Expanding Technology](

# 2021-09-15

Nice little vim/fugitive gem: use `!` to open a command prompt with the file under the cursor.

    nnoremap ! :!

It's very handy if you want to delete an untracked file (see netrw comes with an identical mapping too!