Most websites require your email address to create an account. Email-based authentication allows the website to 1) contact you and 2) uniquely identify you.
Email does a good job of enabling contact - you can retrieve forgotten passwords just as easily as you can email your grandmother - but it does a pretty poor job of identifying you. Given your email address alone, there's no context or understanding about you, the user. Because of this, companies like Clearbit have built large businesses around enriching email addresses with data sourced from around the web, making them more useful as identities.
This lack of context & understanding is in contrast to other common mechanisms of identifying users, such as OAuth. Login with Google, Twitter or Facebook are a few examples of OAuth-based logins that go beyond just an identity - they let 3rd party developers tap into the closed data these companies have (for better or for worse).
This moves an identifier beyond a mere identifier, and toward a user, with interests, friends, contacts, followers, profile pictures, genders, birthdates, and more.
This is powerful. As a developer, supporting “Login with [social platform]” lets you tap into this enriched data for numerous things:
Bootstrap network effects (eg prompting users to subscribe to all their Twitter followers immediately after registration)
Provide mechanisms of virality (eg automatically crosspost content they create to Facebook)
Provide better onboarding (eg pre-populating user accounts with avatars, interests and more)
There are several downsides to OAuth-based authentication, both for developers building atop these platforms and for users that authenticate with them:
Platforms can change the rules. Developers are constrained by what platforms allow. Google and Facebook are not credibly neutral open protocols - they're closed platforms with ever-changing functionality. There have been many instances in the past where platforms changed the rules and crippled developers building atop them.
Developers get an incomplete picture. A user's audience is commonly split across multiple platforms. OAuth'ing with Facebook isn't very beneficial to me or the developer's app if 90% of my social graph exists on Twitter. It's unrealistic for developers to expect users to manually link every single social platform to get a full picture.
Users can't exit with interoperability. If I decide I want to exit Twitter after the Elon Musk acquisition, it's not possible to export my Twitter followers and use them elsewhere. Even worse - if Twitter decides to ban me, that's it: there's no way 3rd party developers building atop Twitter can make use of this data anymore.
Crypto wallets, used alongside crypto-enabled social protocols, are an alternative that I believe will bridge the gap between an identity and a user and improve upon the aforementioned issues:
They are credibly neutral. Building atop sufficiently decentralized protocols, not platforms, guarantees that the underlying functionality won't change after I, as a developer, have built a sustainable business.
It's easier to get a full picture of a user's social graph. As a developer, I can permissionlessly ingest the full social graphs of all web3 social platforms, just given a user's wallet address with no intervention from the user needed. If a user has never used Farcaster but is active on Lens, no problem -- I can still programmatically ingest both to get the complete picture.
Users can exit with interoperability. If users dislike (or are banned from) a specific client built atop a certain social graph, they can move to another client and continue interacting with the followerbase they've built at the protocol-level. Similarly, developers can continue making use of all the underlying data.
Though, crypto wallets are not without downsides. Adoption is currently low - everyone has an email but not everyone has a crypto wallet. Additionally, emails allow for easy message exchange but wallets currently do not (though many protocols are trying to fix this), so it's challenging to reach users strictly by their wallets. Lastly, privacy controls are non-existent: if I didn’t want to share a list of my on-chain followers with a certain platform, there’s no current method of preventing this (in contrast to the granular, user-controlled permissions that OAuth uses).
At Paragraph, we've built several integrations involving wallets-as-identity and it's been a very positive experience. For example, when a user connects their wallet and signs in, we detect if they have a Farcaster account. If so, we permissionlessly pull in their avatar, name, & users they're following, and bootstrap their Paragraph account by prompting them to subscribe to their followers. No gatekeeping, no OAuth, no action needed from the user - nothing beyond just a wallet address & credibly neutral social protocols.
Throughout any given day, my mood & state of mind fluctuates depending on many factors: quality of sleep, what I ate that day, how my personal relationships are going, how my company is doing, etc.
Because of this, I try to avoid a fixed schedule as much as possible. I view my day as fluid: I have a collection of different states of mind throughout the day, and I have a collection of work items that need to be done, so I try to match these up.
When I'm feeling creative, I write code or do frontend design. When I'm feeling insightful, I think about longer-term company strategy. When I'm feeling extroverted, I focus on user outreach and customer conversations. Mapping my states of mind to the appropriate deliverable lends itself to my best work. The opposite is also true - forcing tasks in an improper state of mind often produces worse results.
This is a simplified view - it certainly won't be possible in all jobs or on all days - but I'm fortunate enough right now that I'm able to abide by this as much as possible, given that an early-stage startup has time spent mainly between building and talking to users (and both of these have a spectrum of sub-tasks that tap into different states of mind). This is in contrast to my time spent at Google, where I often had days filled with back-to-back meetings.
Deliberately not working is also important. If I'm feeling awful and particularly unproductive, I prefer to make a conscious decision to step away and rest, in contrast to making little progress on something while beating myself up over the lack of productivity.
Productivity & automation tools can be powerful. I was delighted to come across a recent thread on Hacker News discussing a supercharged automation tool: Huginn. This open-source software performs automated tasks by using 'agents' to watch for 'events', and triggering 'actions' based on these events.
For example, if there's a sudden spike in discussion on Twitter with the terms "San Francisco Earthquake", Huginn can send a text to my phone. Or, if a time-sensitive flight deal is posted on one of the many deal-finding websites out there, Huginn can send me an email with the price and a link to Google Flights.
Compared to other popular automation tools (IFTTT, Zapier), Huginn has the following benefits:
Self-hosted & completely private
Powerful data processing: write your own JS or use shell scripts
Liquid templating
I wanted to go beyond just automation and introduce some organization - I wanted all notifications to be cataloged & delivered in a centralized way. A personal Slack workspace seemed like the perfect solution for this - I can have a #flights channel for flight deals, or a #trending channel for the, er, pending San Francisco emergencies.
I also wanted all of this to be free. Huginn has pretty lax runtime resource requirements (even able to run on a Raspberry Pi, with some tweaking), so a free GCP micro tier instance was perfect for this.
Let's formalize what I specifically wanted to accomplish with Huginn. Note that this is a small subset of the things possible with Huginn - check out the project's GitHub for more inspiration.
Twitter notifications: whenever keywords of interest are tweeted (such as my projects or blog), I want to get notified immediately. Whenever a spike occurs for other keywords ("San Francisco Emergency"), notify me.
Hacker news notifications: whenever an article hits the frontpage discussing something I'm interested in, notify me.
Flight deals: if a flight deal is posted online to one of the many websites I follow (Secret Flying, ThePointsGuy, FlyerTalk), and the flight originates from a nearby airport, notify me.
Product deals: if a product I'm interested in is posted on Slickdeals, notify me.
Amazon price drops: if a product I'm interested in drops below some predefined price threshold, notify me.
I want all notifications to be sent to me via a personal Slack workspace, on different channels.
The easiest way to install Huginn is via Docker. Luckily, Google Compute Engine supports deploying Docker containers natively on a lean container-optimized OS.
There are a few key things we need to do in order to have a successful Huginn deploy on the f1-micro (free tier) instances.
Enable and create a swap file.
f1-micro instances have 614MB of memory. This is not enough to run Huginn out of the box - doing so will cause Docker to encounter Error Code 137 (out of memory) errors. To solve this, we need to create a swap file in the VM. Note that a swap file will decrease the performance of Huginn - if you're interested in better performance for a price, consider deploying on a larger VM.
Disk-based swap is disabled by default in container-optimized OS. To enable and set the swap file every time the VM is booted, we can use a custom startup script (shown below).
Mount the Huginn MySQL database to a directory on the host.
By default, Huginn creates a MySQL database inside the container. This is problematic, as the container now relies on state, and your database will get deleted every Huginn upgrade. We can use a volume mount to mount the database in the container to a directory in the host. Alternatively, you can mount a persistent disk and write the database to it.
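If you want to try this locally before touching GCE, a plain Docker equivalent of the same volume mount can be sketched like this (container name and host path are illustrative; the paths mirror the GCE settings used in this post):

```shell
# Sketch: run Huginn with the MySQL data directory mounted on the host,
# so the database survives container upgrades. Assumes Docker is installed.
docker run -d --name huginn \
  -p 3000:3000 \
  -v /var/lib/mysql:/var/lib/mysql \
  huginn/huginn
```

The key part is the `-v` flag: the container writes its database to `/var/lib/mysql`, which now lives on the host rather than inside the container's ephemeral filesystem.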
Head over to GCP, create a new project, and create a new instance.
On the instance creation page, use the following settings:
Machine type: f1-micro
Check 'Deploy a container image to this VM instance'
Container image URL is docker.io/huginn/huginn
Add a Directory volume mount. The mount and host paths should be /var/lib/mysql
We also need to add in a startup script. This script lets us 1) enable and turn on a swap file, and 2) change permissions of the volume mount on the host. The latter is required or MySQL won't be able to start.
#! /bin/bash
# Enable disk-based swap (disabled by default on container-optimized OS)
sysctl vm.disk_based_swap=1
# Allocate a 2GB swap file, lock down its permissions, and activate it
fallocate -l 2G /var/swapfile
chmod 600 /var/swapfile
mkswap /var/swapfile
swapon /var/swapfile
# Loosen permissions on the volume mount so MySQL inside the container can start
chmod 777 /var/lib/mysql
After the VM is created, head to your VM's external IP (port 3000) and you should be greeted with the default Huginn login page!
I suggest reserving a static external IP for this VM, so the IP doesn't change. You can even take it a step further and associate this to a domain name - like automation.colinarms.com - for ease of access.
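As a sketch, promoting the VM's current ephemeral IP to a static one can be done with the gcloud CLI (the address name, IP and region below are placeholders - the region must match your VM's):

```shell
# Promote an in-use ephemeral external IP to a reserved static address.
# Placeholders: address name, IP and region.
gcloud compute addresses create huginn-ip \
  --addresses=203.0.113.10 \
  --region=us-central1
```

You can also reserve a fresh static IP from the Cloud Console under VPC network > External IP addresses, then attach it to the VM.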
Now, let's dive into Huginn.
At a high level, Huginn relies on two key concepts: agents and events. Agents monitor things on your behalf and create events (possibly only when some criteria are met). An example agent is an RssAgent, which monitors an RSS feed for new articles. Events created by the RssAgent can be passed to a TriggerAgent, which uses a regex filter to listen only for keywords of interest; finally, it emits a formatted message, perhaps to a SlackAgent, which sends a message to a Slack channel.
You can imagine how this works in practice. For the flight deals use case, for example: we can create an RssAgent for the Secret Flying RSS feed. The TriggerAgent can listen to these events and filter for "San Francisco Airport", and the SlackAgent can message my #flights channel when a match occurs.
Multiple agents for a single use case can be grouped into a Scenario - in the above example, a Flight Deal Scenario would make sense.
Let's walk through this example. If you want to get started immediately, you can download my agents and import them into Huginn directly.
On Huginn, create a new RSS Agent. Configure the following params:
Name your agent something descriptive. I used "Secret Flying RSS Agent".
Schedule your agent for however frequently you'd like it to check for updates. I used 30 mins.
Keep events for some period of time. I used 7 days.
Use the following JSON in the options:
{
"expected_update_period_in_days": "2",
"clean": "true",
"url": "https://www.secretflying.com/feed/"
}
The expected update period is how long Huginn should wait for new events from this agent - if nothing arrives within that window, the agent is considered not working.
Save your agent, and give it a manual run - you should see events populate from the underlying feed.
Now, create a TriggerAgent. We want to filter newly posted articles for only nearby airports - in my case, San Francisco or San Jose airport.

Fill it out similarly to the first agent, but this time select your RSS Agent as this agent's source. Events from the RSS Agent will be fed into it.
Use the following JSON in the options:
{
"expected_receive_period_in_days": "2",
"keep_event": "false",
"rules": [
{
"type": "regex",
"value": "(sfo|san francisco|sjc|san jose)",
"path": "title"
}
],
"message": "[Secret Flying] <a href=\"{{url}}\">{{title}}<\/a> {{description}}"
}
We're filtering the RSS feed's title for my nearby airports, and emitting a message with the URL, title and description.
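As a quick sanity check, you can try the trigger's alternation against a few sample titles outside of Huginn. The titles below are made up, and `grep -iE` approximates the rule case-insensitively - if Huginn's regex matching turns out to be case-sensitive against your feed's (often uppercase) titles, add uppercase variants to the rule:

```shell
# Made-up sample feed titles, one per line
titles='SAN FRANCISCO to TOKYO, Japan from $450 roundtrip
NEW YORK to LONDON, UK from $320 roundtrip
SJC to CANCUN, Mexico from $280 roundtrip'

# Case-insensitive version of the TriggerAgent rule
printf '%s\n' "$titles" | grep -iE 'sfo|san francisco|sjc|san jose'
```

The first and third titles match ("SAN FRANCISCO" and "SJC"); the New York deal is filtered out.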
Lastly, create a SlackAgent. After registering a new Slack workspace and creating a new Slack webhook, set the source to the Trigger Agent you just created.
Put the following JSON in the options:
{
"webhook_url": "https://hooks.slack.com/services/your-webhook-url",
"channel": "#flights",
"username": "Huginn",
"message": "{{message}}",
"icon": ""
}
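Under the hood, the SlackAgent posts to Slack's incoming-webhook API, so you can verify the webhook works independently of Huginn with a one-off curl (keep the placeholder URL from above - substitute your real webhook URL):

```shell
# One-off test of a Slack incoming webhook, outside of Huginn.
# The URL is a placeholder; channel/username overrides work on legacy webhooks.
curl -X POST \
  -H 'Content-type: application/json' \
  --data '{"channel": "#flights", "username": "Huginn", "text": "Test message from curl"}' \
  https://hooks.slack.com/services/your-webhook-url
```

If the webhook is valid, Slack replies with `ok` and the message appears in the channel.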
Save your agent, and eventually you should begin receiving flight deals!
This example describes a single use case for what's possible with Huginn. If you're interested in the other use cases I described above, you can download and import them into your own installation.
This post just scratches the surface of what's possible. With Huginn, let your Agents monitor on your behalf and free up your time for more important things.