PostHog – Essential product insights for startups

Standard

I’ll admit it: I used to treat metrics as an afterthought when creating a software product.

Build the product according to client specs, add some Google Analytics to capture the basics like pageviews, and that’s a wrap!

This approach may have been valid for projects with well-defined user requirements, where the market was already known, but startup products are different.

Startups need analytics from the start

Startups are testing an unknown market with an offering they cannot possibly know will fit in that market. Most startups fail and, of those that fail, most fail due to poor product-market fit or marketing problems.

If market alignment is so existential for startups then it makes sense that metrics & insights should inform the development and evolution of a startup product, not be added at the end.

But, as with most things, saying something is important is much easier than implementing it.

Startups need flexibility

I’ve used several tools over the years for insights & analytics (most notably Firebase & Google Analytics). All are great on their own merits, but none have provided an all-in-one solution to my small startup needs. For a small startup scenario in the EU, I ideally need something that provides:

  • Control over the data. Data privacy is always important, especially in the EU where rules are arguably tougher than in the US.
  • Open source. Not only does open source give reassurance over how the data you are collecting is being used, but it also allows self-hosting if required in the future.
  • Scalable pricing. Small startups are generally unwilling or unable to pay upfront for tools. Having generous free tiers and fair scalable pricing allows startups to test their assumptions with minimal risk.
  • Developer experience. Small teams have very limited time & resources, so being able to move quickly is a must.
  • Usability. Having a tool that can equally be used by developers, marketers and managers is important, especially if the developers need to hand off the project at some point.

PostHog

PostHog is a new entrant but already fulfils all of the criteria above. It is an all-in-one customer insight platform that empowers startups to gather essential metrics, providing a clear pathway to refine their product and align it with market needs.

PostHog offers a suite of advanced features that go beyond basic tracking. Startups can leverage A/B testing to experiment with different versions of their product, ensuring they make data-driven decisions. Session replays allow teams to watch real user interactions, providing invaluable insights into user experience and potential pain points. Additionally, feature flags enable developers to roll out new features gradually, testing their impact without a full release.
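As a flavour of how lightweight event capture can be, here is a sketch of the payload PostHog’s public /capture endpoint accepts. The helper name and example values below are illustrative, not part of any official SDK — in practice the posthog-js library handles all of this for you:

```javascript
// A minimal sketch of recording a custom event against PostHog's public
// /capture HTTP endpoint. The helper name and example values are
// illustrative, not SDK API.
function buildCaptureEvent(apiKey, distinctId, event, properties = {}) {
  return {
    api_key: apiKey,          // project API key from your PostHog settings
    event,                    // e.g. "signup_completed"
    distinct_id: distinctId,  // a stable identifier for this user
    properties,
    timestamp: new Date().toISOString(),
  };
}

// In a real app you would POST this payload as JSON to your PostHog
// instance's /capture/ endpoint (cloud or self-hosted).
const payload = buildCaptureEvent("phc_example_key", "user_123", "signup_completed", {
  plan: "free",
});
```

The same shape works against a self-hosted instance, which is part of what makes the open-source option so attractive.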

PostHog also has a user-friendly interface, making it significantly easier to navigate than Google Analytics. This ease of use means startups can quickly set up and start collecting data without the steep learning curve.

As an open-source solution, PostHog offers the flexibility to self-host, addressing privacy concerns that come with third-party hosting. The approach is similar to Supabase’s open-core model, providing the best of both worlds: robust features with the option to maintain complete control over your data.

Try it on your next project

In summary, PostHog is not just another analytics tool; it’s a comprehensive user insights platform. It is specifically designed to help startups find their product-market fit. With an easy setup, the flexibility of self-hosting, and scalable pricing, PostHog is a valuable asset for any startup aiming to understand its users better and grow smarter.

What is gross margin? And should dev teams be talking more about it?


Gross Margin?

For startup companies, particularly those in the Software as a Service (SaaS) sector, gross margin is not just a financial metric; it’s a key indicator of product viability and business sustainability.

In simple terms, for a SaaS company, gross margin is the difference between what each customer costs to serve (third-party services and dev costs) and what that customer pays for their subscription.

The formula for subscription gross margin (source: https://www.drivetrain.ai/strategic-finance-glossary/saas-gross-margin)

Understanding this metric could be existential for a startup that is looking to scale, yet it’s a topic often left out of daily conversations among development teams. But what has it even got to do with the dev teams involved in building SaaS products?

Unpacking Gross Margin in the SaaS Context

Gross margin is calculated by subtracting the cost of goods sold (COGS) from revenue, then dividing that number by the revenue, and finally multiplying by 100 to get a percentage. For SaaS companies, COGS typically includes:

  • Servers and hosting for the software platform, e.g. Vercel, GitHub, CDNs, AWS, GCP, Azure.
  • Licensing for third-party integrations and services, e.g. Stripe, OpenAI, Algolia.
  • Expenses related to onboarding new customers (excluding sales and marketing).
  • Customer support and account management.
  • Fees and commissions to various partners.
  • Employee salaries related to operating expenses, broken out by core function such as development, DevOps, customer support.

(Source: https://www.drivetrain.ai/strategic-finance-glossary/saas-gross-margin)
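The calculation described above — revenue minus COGS, divided by revenue, times 100 — can be sketched in a few lines (the figures are illustrative):

```javascript
// Gross margin as described above: (revenue - COGS) / revenue * 100.
function grossMarginPct(revenue, cogs) {
  if (revenue <= 0) throw new Error("revenue must be positive");
  return ((revenue - cogs) / revenue) * 100;
}

// e.g. $100k of subscription revenue with $15k of COGS:
const margin = grossMarginPct(100_000, 15_000); // 85
```

Plugging in your own hosting bills and third-party service fees is a quick way to see which line items are eating your margin.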

This metric is crucial as it reflects the efficiency and scalability of a SaaS product. High gross margins (80%+), which are common in the SaaS industry due to low incremental costs, suggest that a company can cover its operating expenses and invest in growth initiatives.

Dev teams can have a direct impact on this metric by the actions they take and the architectural choices they make.

If this metric has a direct effect on whether or not the company has a future, and development teams’ actions directly influence it, then it makes sense that this metric should play some part in team planning.

Practical Steps for Developers to Enhance Gross Margin

Understanding the importance of gross margin is one thing, but what can individual developers at SaaS startups actually do to influence this key metric positively? Here are some practical strategies:

  1. Evaluate Third-Party Services: While third-party services can add significant functionality to your product, they often come at a high cost. Where possible, consider the use of Free and Open Source Software (FOSS) alternatives that can provide similar functionality without the recurring fees.
  2. Build vs. Buy Decisions: Always weigh the costs and benefits of building a solution in-house versus purchasing it. Building in-house can be more cost-effective long-term, especially if it gives you more control over your service offerings and reduces dependency on external vendors. On the other hand, ask honestly how much building in-house will cost, factoring in total development hours, support and maintenance.
  3. How Much Do I Cost?: Developers at SaaS startups should periodically reflect on how they utilize their own time, considering the associated costs and benefits of their activities. Time is a finite resource, and how it’s spent can directly impact the company’s financial metrics, particularly gross margin. For instance, focusing on automating routine processes, improving system efficiencies, or eliminating costly dependencies can have a significant positive effect on the company’s profitability. This reflective practice not only fosters a more financially aware culture within the team but also encourages developers to make strategic choices that contribute directly to the business’s bottom line.
  4. Optimize Code Efficiency: Avoid wasteful code patterns that consume unnecessary resources. Efficient code reduces server load, which can save on hosting costs and improve scalability. Ask things like:
    • How many round trips are we doing to the server?
    • How much data is going over the wire?
    • Could this be cached?
    • Can we spread the load between various vendors to maximize ‘free’ tiers?
    • What will happen at scale?
    • Is there a different service I can use?
    • Do end users really need this expensive feature?
    • Is this index appropriate and efficient?
  5. Provide Robust Support: Efficient support systems can drastically reduce the cost of customer service over time. By ensuring that your code is maintainable and your documentation is thorough, you help reduce the need for extensive support resources.
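One of the questions above — “Could this be cached?” — is often the cheapest win. Here is a deliberately tiny sketch of caching an expensive lookup in memory with a time-to-live; production code would need size limits and cache invalidation:

```javascript
// Wrap an expensive lookup so repeat calls within ttlMs skip the round trip.
// A deliberately tiny sketch -- no eviction limits or invalidation.
function cached(fn, ttlMs) {
  const store = new Map();
  return (key) => {
    const hit = store.get(key);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value; // cache hit: no round trip
    const value = fn(key);
    store.set(key, { value, at: Date.now() });
    return value;
  };
}

let calls = 0;
const fetchPlan = cached((userId) => { calls++; return { userId, plan: "free" }; }, 60_000);
fetchPlan("u1");
fetchPlan("u1"); // served from cache; calls is still 1
```

Every cache hit is a server request (and its cost) that never happens, which feeds directly into the COGS side of the gross margin formula.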

Changing Team Development Cycles

For SaaS startup teams, integrating an understanding of gross margin into the development cycle involves a few strategic changes:

  • Regular Financial Updates: Including gross margin insights in regular project or company updates can help tech teams understand business outcomes and see the bigger financial picture.
  • Training and Development: Offering basic financial training to developers, focusing on how SaaS business models work and the importance of metrics like gross margin, can enhance decision-making at all levels.
  • Cross-Department Collaboration: Encouraging collaboration between finance and development teams can ensure that technical decisions are made with a clear understanding of their financial implications.

Conclusion

For SaaS startups, especially where agility and efficient scaling are crucial, gross margin is not just a number for the finance team to worry about. It’s a vital sign of how well the company’s offerings meet market needs without sacrificing profitability. By bringing gross margin into the conversations that development teams are having, SaaS startups can foster a more holistic approach to building and scaling their products. This not only ensures better financial health but also aligns product development with long-term business success.

AI all the things?

Standard

AI is incredibly powerful and it is relatively easy to add a rudimentary integration to new and existing software. It’s easy to get caught up in the hype and see every problem as solvable with an AI hammer.

But just because you can, doesn’t always mean you should.

Don’t get me wrong, I use AI-augmented tools every day and am amazed at what they do for my productivity. I also create AI-augmented features in the software that I build.

However, if I reach for the LLM AI “hammer” first, I bypass the opportunity to achieve better results and user experience. By focusing on the root problem at hand and structuring my data a bit better, I could negate the need for AI and achieve better outcomes.

For example, if your software needs to match job seekers with job specs, you could reach for the AI hammer to do the work, but you don’t need to. Why? Because AI often yields poorer-quality results than a more structured approach: breaking the constituent parts of a user profile and a job spec into structured data, matching those structured pieces, and applying human oversight to make the final judgement on a ‘good’ match.

For example, if your software needs to generate a list of similar job titles to one listed in a job post, you could reach for the AI hammer to do the work, but you don’t need to. Why? Because it might be cheaper, quicker and yield adequate results using existing databases like the US O*NET database of careers and salaries.
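The structured-matching idea can be sketched very simply — compare a candidate’s skills with a job spec’s required skills directly, no LLM involved. The scoring scheme below is illustrative:

```javascript
// Sketch of structured matching: score a candidate against a job spec by
// direct skill overlap. The 0..1 scoring scheme is purely illustrative.
function matchScore(candidateSkills, requiredSkills) {
  const have = new Set(candidateSkills.map((s) => s.toLowerCase()));
  const matched = requiredSkills.filter((s) => have.has(s.toLowerCase()));
  return matched.length / requiredSkills.length; // 1 = all requirements met
}

const score = matchScore(["JavaScript", "SQL", "Svelte"], ["javascript", "sql"]); // 1
```

The result is deterministic, explainable, and essentially free to compute — three things an LLM call can’t promise.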

The point is, AI can do many things, but it’s not a panacea. You might find you get better results by exploring the root user problem and structuring new or existing data to solve the problem more accurately.

A Career in Development: It’s not all about me


The software industry is an incredible industry to be part of.

There is always something new to learn.

Over the past 25 years I’ve had a lot of fun, had great job satisfaction, and learned so much.

However, to keep myself grounded, I try to remind myself of two things:

  • You can never stop learning.
  • It’s not all about you.

My journey from a junior to senior dev has taught me some ongoing lessons about myself, technology and, most importantly, that prioritizing user needs above most things is key to any successful project.

I may love what I do and get a kick out of learning, but if it doesn’t benefit users, at the end of the day it’s of limited use.

Let me explain why I believe users don’t get enough attention from development teams, by detailing my personal journey…

The Early Days: Going Deep

Early in my software development career (dotcom boom and bubble), I was eager to solve any problems with a technology solution.

Ask Jeeves

I would get my head deep into a niggling problem and come out the other side with an enormous sense of achievement. I would agonise over minor, but arguably essential, technical issues and feel real accomplishment when I succeeded.

At that point in my career, I wasn’t aware that customers don’t really care about technology.

If I were to try to explain to a customer what I’d been doing, their fully justified reaction would probably have been “Why would you spend so much of my money doing that?! It has very little visible effect on the problem that I have!”.

Customers were at the end of the waterfall model; they were of little concern while I was deep in code at the other end.

Levelling Up: Empathy

Things started to change when I was put directly in front of users.

Once you feel the pain and see the importance to individuals of the solutions you’re developing, it changes your perspective of your role as a developer.

Learn Empathy

If you want a developer to “care” and build useful products, have them experience the real-world pain they are trying to address.

The main takeaway from being personally deployed into client teams was realizing that users only really care about their problems and whether what you’re doing will directly help or hinder them.

Users don’t care about cool new technologies, frameworks, edgy design patterns, architectures, or highly technical concerns; they just want something to make their life easier, now.

Taking the blinkers off: Going Wide

Experiencing the real-world pain of clients broadened my focus from purely technical challenges to understanding and solving real-world user problems.

This transition from deep technical immersion to a wider perspective taught me the value of simplicity and direct impact. It wasn’t just about using the latest technology or dabbling with complex solutions anymore; it was about making a tangible difference in the clients’ lives with efficient and straightforward solutions.

Sometimes a complex solution was required, but sometimes there was another, simpler, cheaper, pain-free, way.

The ‘other’ way only became visible when we understood a client’s pain points, took a step back, and looked wide, instead of deep, for a solution.

It doesn’t add up: It shouldn’t be this hard

But, at this point in my career (early 2010s), even the “easier” way still seemed too difficult, took too long, and cost too much money.

There was a huge disconnect between clients’ expectations (in terms of timescale, cost and complexity) and how frustratingly difficult it was to develop production-ready software.

It just didn’t add up.

Technological solutions to everyday problems still seemed to be the preserve of large teams and clients with deep pockets prepared to wait for 6 months or more for a solution.

Front-end development felt too verbose, server management a speciality for proper geeks, and the ideal interaction between the frontend & backend a dark art.

Clients didn’t appreciate, or, more importantly, place value on, these [highly relevant] concerns.

A New Dawn: Scalable Services, Front-End Frameworks & Powerful Devices

In my personal opinion, around the early 2010s things started to change for the better, especially for web and mobile developers.

For example, frameworks like AngularJS (2010) & Knockout.js (2010) started gaining traction.

Scalable “serverless” services like AWS Lambda (2014) started to roll out.

Mobile phones were now mainstream and able to handle JavaScript-intensive webpages and highly interactive apps.

Like them or loathe them, JavaScript frameworks made it easier for a huge number of teams to produce complicated web software quicker and with less code. While these frameworks marked significant shifts in my personal journey and the industry at large, they represent only a fraction of the myriad of tools and technologies that shaped the developer field at the time.

IPhone 5 Release – 2012

“Serverless” offerings started to reduce the need to manage expensive dedicated on-premises servers and reduced the need for a highly skilled team to manage them.

The pervasiveness of mobile devices made for an explosion in the appetite for, and number of, apps.

All combined, smaller teams could now start to produce complicated scalable software in less time, with less code and with [generally] less cost.

Median project schedules are shorter now (in months) than they used to be: https://www.qsm.com/articles/long-term-trends-40-years-completed-software-project-data

Late 2010s: Levelling the Field for Small Teams & Tech Startups

Towards the end of the 2010s, scalable services started to really change the game for small teams and start-ups.

In the early stages of a start-up, resources are scarce, time is of the essence, and the pressure to deliver is immense.

Leveraging frameworks, scalable services, IaaS (Infrastructure as a Service), PaaS (platform as a service), BaaS (back end as a service) & SaaS (software as a service) turned out to be a game-changer for smaller teams and start-ups. It allowed teams to focus on core product, rather than getting bogged down by the intricacies of backend infrastructure or deployment complexities.

This shift not only accelerated development cycles but also instilled a sense of confidence in the ability of smaller teams to scale and adapt as needed.

On average, today’s developers deliver about 40% as much new and modified code per project as they did 40 years ago.

Quantitative Software Management: https://www.qsm.com/articles/long-term-trends-40-years-completed-software-project-data

Being able to concentrate more on solving client issues and less on “invisible-to-the-client” concerns was something that resonated with my own experiences so far.

At the same time, perhaps the most significant realization in my journey was understanding that the failure of tech start-ups (or innovative ideas in general) is not predominantly due to technology issues.

Instead, it often boils down to marketing mishaps and a poor product-market fit.

The new breed of scalable services started to allow teams to concentrate more time & resources on finding out if their idea is going to resonate with users and less on “invisible” technical issues.

Why startups really fail: Failory.com

Recognizing that a great product needs an equally great go-to-market strategy changed how I approach my role as a member of any organisation.

It’s not just about building; it’s about building what’s needed and ensuring it reaches the right audience. All startup team members, developers included, need to recognise this existential fact and continuously work with this concern at the forefront.

What Next? Do more with less

The journey that I’ve been on has been incredible, but it’s only just beginning.

While I believe we are seeing a convergence in web frameworks (signals, pre-compilation over framework libraries, SSR and hydration) and an acceptance of the value of the “serverless” model, we are still only in the infancy of the tech industry.

AI is definitely changing things, quickly.

Quantum computing has the potential to massively accelerate what we are capable of.

And who knows what advances in human interfaces are in store for us in the next decade (think brain-computer interfaces or VR/AR instead of smartphones).

Although there will be change, some things will remain the same:

  • If you build something useless, it won’t get used.
  • The process of finding out whether something is useful to users can be seriously expedited if you build on existing tools, knowledge & services (rather than reinventing the wheel).

For me, this means I will continue to strive to do more with less. Failing faster, with minimal cost, does mean more failures but continually gets us closer to a win more quickly & cost effectively.

I’ll also continue to trust my experience, extrapolate from past successes/mistakes, know when to learn deep or learn wide, when to build or buy, embrace the uncertainty, and understand that empathy for customers is just as important a skill as technical expertise.

N.B. All of the above are just my personal experiences and opinions. I would love to hear if anyone else had similar experiences or had different experiences that have shaped their outlook differently.

My career is still in “development”. Gathering more experiences and opinions will never not be beneficial.

Svelte: Web development made simple


*this post is the 4th of a 4 part series of posts exploring Supabase, Vercel & Svelte. This post goes deeper into Svelte (and its application framework SvelteKit)

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

In a previous post, I explored why a trio of technologies—Supabase, Vercel, and Svelte (along with its framework, SvelteKit)—currently constitutes my ideal tech stack.

This time, I’m turning the spotlight on Svelte and SvelteKit. Both have rekindled my passion for front-end development, presenting a fresh perspective on building interactive web applications.

The Pursuit of Developer Happiness

I’ve considered myself to be a “full stack developer” for a long time, but the front-end web development part of the stack was always my “weak” area. I always neglected this part of my arsenal because it didn’t bring me joy. Doing “more with less” is the mantra that brings happiness to my work, and web development, up until recently, just seemed to get in the way of my productivity.

Recently, I’ve fallen back in love with front-end development due to Svelte (and SvelteKit).

It Shouldn’t be this Difficult

Over the years I’ve seen websites and apps become more complicated.

I’ve gone through several iterations of JavaScript frameworks, tools and libraries to catch up with this complexity. Over these iterations, unfortunately, the developer experience (DX) has not significantly improved and has become arguably worse.

The situation is exacerbated by the need to support an ever-growing array of devices and screen sizes, requiring responsive and adaptive design techniques.

User expectations for rich, app-like experiences have soared, demanding more sophisticated front-end and back-end logic, including real-time interactions and improved performance.

Additionally, the importance of SEO, accessibility, and security has led to yet further layers of complexity.

Lastly, development workflows have become more intricate. Developers need to be aware of CI/CD pipelines, cloud-based services, and the necessity for cross-disciplinary skills spanning design, development, and deployment.

All these factors have contributed to a landscape where web development demands a broader, more sophisticated skill set.

Developers need to constantly have a deeper understanding of a rapidly changing technology ecosystem.

Using an “opinionated” application framework has helped me to manage these complexities and increase my productivity.

For complex web apps, vanilla JavaScript is arguably always an option. However, I can imagine being highly frustrated by the verbosity of the resulting code. To reduce the amount of code, I could imagine ending up writing a pseudo framework to abstract most of the verbosity. For those who dare, I salute you. But I’d personally rather skip straight to using a battle-tested framework to reduce code complexity.

However, not all frameworks are created equal.

For me, an ideal framework is one that can do more with less code without compromising on flexibility and speed. A great developer experience is also a must to maximize developer productivity.

In Svelte, and its application framework SvelteKit, I feel that I’ve got a great mix.

My Svelte Discovery: From Experimentation to Production

My adventure with Svelte began as a weekend experiment.

After enjoying a recent foray into VueJS, I was curious to explore a framework that compiled down to vanilla JS but promised a declarative coding experience.

The initial trial was for a small project aimed at enhancing a local community initiative. The objective was straightforward: deliver an engaging, performant web experience with minimal overhead for both the developers and the end-users.

To my delight, the project was not only a success in terms of its community impact but also a revelation in web development efficiency and simplicity.

Svelte, coupled with SvelteKit for seamless full-stack integration, transformed how I now approach front-end development, leading me to adopt it for several subsequent production projects.

Workflow Integration and Development Joy

Svelte’s integration into the developer workflow is seamless.

Its component-based architecture—enhanced by reactivity and compile-time optimizations—fits perfectly with modern development practices.

Furthermore, SvelteKit enriches this experience by offering a convention-over-configuration approach to building applications, from static sites to SEO-friendly SSR (Server-Side Rendered) applications and everything in between.

This framework has a unique proposition: write less code, without losing expressiveness or functionality.
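As a flavour of that proposition, here is the kind of minimal component Svelte lets you write — a sketch of a classic click counter, where the markup and handler are illustrative:

```svelte
<script>
  // Local component state: Svelte's compiler makes this reactive,
  // no setState calls or hooks required.
  let count = 0;
</script>

<button on:click={() => (count += 1)}>
  Clicked {count} {count === 1 ? 'time' : 'times'}
</button>
```

The equivalent in most other frameworks needs noticeably more ceremony; here the state, the update, and the markup fit in a dozen lines.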

For a developer who values both productivity and performance, Svelte’s proposition is incredibly appealing.

Unified App and API Development

A pivotal moment for me was utilizing SvelteKit’s ability to cohesively handle application logic, SSR, and backend APIs within the same project repository.

SvelteKit’s file-based routing and server-side capabilities mean that an application’s front end and its backend API can live side by side. This co-location streamlines the development process, especially for small teams or projects where agility and speed are paramount.

Performance Outsourced

SvelteKit, the application framework built on top of Svelte, offers several performance advantages over other JavaScript frameworks. These advantages can make it an attractive choice for developers focused on building highly efficient and fast web applications.

Here are some of the notable performance benefits:

  1. Compilation Step: Unlike frameworks that rely heavily on runtime interpretation (like React or Vue), Svelte moves much of the work to compile time. This means Svelte compiles components into highly optimized vanilla JavaScript at build time, reducing the need for a heavy runtime library. As a result, the final code shipped to the browser is leaner and faster to execute.
  2. Less Boilerplate Code: Svelte’s design philosophy emphasizes simplicity and minimalism, resulting in less boilerplate code. This not only makes development faster but also leads to smaller bundle sizes, which directly impacts load times and performance.
  3. Built-in Page Routing and SSR: SvelteKit comes with built-in support for page routing and server-side rendering (SSR), which can significantly improve the performance of web applications. SSR ensures that pages are rendered quickly on the server, reducing the initial load time, while client-side routing allows for seamless navigation without reloading the page. All this helps to create a smoother user experience.
  4. Efficient Reactivity Model: Svelte’s reactivity model is designed to be very efficient. It updates the DOM directly when the state changes, without the need for a virtual DOM diffing algorithm. This results in faster updates and interactions, as there is less computational overhead involved in making UI changes.
  5. Integrated Tooling: SvelteKit offers an integrated development environment. With features like hot module reloading, development time is sped up and optimizations are easier to implement.

These advantages make SvelteKit an appealing option for developers prioritising performance and efficiency in their web projects. However, the best choice of framework also depends on specific project requirements, existing developer skill sets, and other factors.

However, for me, working with small greenfield startup teams, the choice has been simple.

Embracing Svelte: A Testament to Developer-Focused Design

In conclusion, my journey with Svelte and SvelteKit has been nothing short of transformative.

These tools have not only simplified the development process and reduced codebase size; using Svelte has also resulted in superior end-user experiences—fast, engaging, and accessible web applications.

For developers and teams navigating the complexities of modern web development, looking for a balance between productivity, performance, and user experience, Svelte and SvelteKit offer a compelling, developer-friendly pathway.

*this post is the 4th of a 4 part series of posts exploring Supabase, Vercel & Svelte.

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

Vercel: Developer-focused, powerful & cost-effective


*this post is the 3rd of a 4 part series of posts exploring Supabase, Vercel & Svelte. This post goes deeper into Vercel

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

In a previous post, I outlined why Supabase, Vercel & Svelte make up my current ideal technology stack. This post delves deeper into Vercel and why it has helped me fall in love with full-stack development again.

Productivity as a Priority

As a “seasoned” software engineer, I’ve battle-tested numerous deployment & hosting platforms.

I am always on a quest for a more streamlined, developer-friendly approach. The goal is always to give developers space to focus on the things that really matter.

That search has led me, again and again, to Vercel in the past few years.

Here’s my personal take on why Vercel has become an indispensable tool in my development stack.

Discovering Vercel: Weekend project to production apps

My journey with Vercel began during a weekend project launching a web app for a local charity.

The goal was to deploy an easy-to-use, easily maintained web application, created with minimal fuss and at minimal cost to the charity.

Over the course of the weekend, I was able to create a web app & CI/CD pipeline at zero ongoing cost to the charity.

Since then I’ve used Vercel on a number of other production projects.

Seamless integration with developer workflows

The biggest selling point of Vercel is its seamless integration with developer workflows.

An estimated 100m+ developers incorporate GitHub somewhere in their workflow; I am one of them.

Vercel hooks easily into your GitHub repo. After linking to GitHub, Vercel automatically recognises what type of project you’ve got going on, then builds and deploys your web-based and/or Node project every time a branch is pushed.

It may sound like a simple thing, but it saves a huge amount of time, complexity, and expense for a significant number of web & node developers.

This single feature, coupled with a host of other performance and developer-focused features, makes Vercel highly persuasive for web and node developers.

Develop App & API together

A big game changer for me was the ability to develop & deploy my [web] apps and APIs at the same time.

API and APP in the same repo

If you include an “API” folder in your repo, Vercel “automagically” spins up a serverless function to serve the endpoints.
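As a flavour of this, a minimal Vercel serverless function might look like the following. The file path and greeting logic are illustrative:

```javascript
// Placed at api/hello.js in the repo, Vercel exposes this function at
// /api/hello with no extra configuration. The greeting logic is illustrative.
function handler(req, res) {
  const name = (req.query && req.query.name) || "world";
  res.status(200).json({ greeting: `Hello, ${name}!` });
}

module.exports = handler; // Vercel also accepts `export default handler`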

Being able to develop an App and API in the same codebase is a big win for smaller startup teams where the code is changing rapidly.

There is no disconnect between the two; if a new feature or API route is deployed, they are deployed with the front-end changes and not out of sync.

Even better, if you are developing with Node for your API, and a JavaScript framework for the front end, the same developer can understand and update both aspects.

Performance as someone else’s problem

Vercel doesn’t just deploy your application; it helps optimize and scale it.

Leveraging global content delivery networks (CDNs) and smart caching strategies helps your application load fast, regardless of where your users are.

Whether your webpage or API experiences a handful of users or a sudden surge in traffic, performance remains consistent.

This peace of mind, knowing that scalability and optimization aren’t something I have to actively manage, allows me to focus on other, client-centric, tasks.

Vercel Pricing (March 2024)

The Cost

The pricing tiers underline the feeling that Vercel was built for developers, by developers.

For smaller teams, being able to try out ideas quickly at no cost allows them to iterate at speed.

Not having to raise a purchase order or ask engineering for some server resources, just to create a proof of concept, is perfect for startups and small teams.

Even when you get past the proof of concept stage, the pricing is extremely persuasive for small and large teams alike.

Embracing the Future with Vercel

My experiences with Vercel have made it a staple in my development workflow.

The ease of use, performance optimizations, and scalability it offers are fantastic for the types of projects I regularly work on (small team start-ups).

For any development team in a similar situation looking to streamline their development & deployment processes, I can’t recommend Vercel highly enough.

*this post is the 3rd of a 4 part series of posts exploring Supabase, Vercel & Svelte.

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

Supabase: Making development easy, scalable & affordable

*this post is the 2nd of a 4 part series of posts exploring Supabase, Vercel & Svelte. This post goes deeper into Supabase

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

In the previous post I outlined why Supabase, Vercel & Svelte make up my current ideal technology stack. This post delves deeper into Supabase and why it has helped me fall in love with full-stack development again.

What is Supabase?

Supabase is a managed service which encompasses (but is not limited to) all of the following: authentication, database, file storage & serverless functions.

Supabase is similar to other “backend as a service” (BaaS) offerings such as Firebase, but with a few notable differences: the project is open source and is centred around an open-source relational database (Postgres).

What makes it so useful?

As I have lamented in the past, app development is complicated. Anything that reduces stack complexity can help focus developers on the things that really matter.

I tried Supabase on a weekend project for a local charity and achieved so much in that single weekend that I now consider myself an advocate for the product.

Following that experience, I have now used Supabase successfully for two additional production projects and plan to use it in the future for similar scenarios (small team startups).

Advocating Supabase at a JavaScript meetup. Slides below…

Creating a relatively simple app over a weekend is not a huge accomplishment. There are other services and no-code platforms that can do something similar in the same timescales.

However, experience has taught me to get into the weeds with a product and then extrapolate into the future to gauge the real value of a tech stack. Low code and no-code tools are great, but at some point, in a growing project, you will hit a wall.

What makes Supabase stand out is that, coupled with other developer tools like Svelte, it can be at least as productive as no-code tools without the drawbacks, e.g. vendor lock-in, limited customization, up-front costs and scalability limits.

Embracing Open-Source and Community

My gravitation towards Supabase is also influenced by its open-source ethos which promotes transparency, collaboration, and community-driven innovation.

Being open to open source is more than just idealism; it’s also pragmatic.

The Supabase project is open source, i.e. the code that runs its managed service can be downloaded and used on a server of your choosing.

If Supabase decides to increase the managed service cost to a level where it no longer makes sense to use it, you can manage the services yourself elsewhere.

Supabase has been completely transparent about its open-core business model from the start; hopefully this model continues to work for them.

However, relying on open-source projects is not without potential pitfalls, especially when open-source companies’ heads get turned by greedy VCs and they start profiteering.

At one time, Elastic was my tool of choice for multi-faceted search, but the change in licence by the company has left a bad taste.

However, even though open-source licences can change, it is still better than the closed-source alternative where you are completely at the vendor’s whims from day one.

Simplifying the Complex

Creating apps is a complicated process even without having to worry about managing servers.

Delegating responsibility for managing auth, database, and storage to a managed service allows small teams to concentrate on more impactful concerns.

Not only does Supabase take these concerns away from you, but it does it all through an easy-to-use dashboard.

The developer experience in general has been, dare I say it, enjoyable.

Using the Supabase tools and libraries has successfully reduced the complexity and lines of code in my apps.

The Security Model: Easy to Understand

The row-level security model in PostgreSQL is simple to configure and understand.

It presents a straightforward yet robust framework that drastically reduces the risk of misconfiguration, making security accessible to the whole team, even newcomers, from day one.

However, it’s not perfect.

I have experienced different approaches to securing data. My least favourite was to implement the security rules entirely in code, i.e. lots of if/then statements hidden away where only the core development team could understand or change them.

In contrast, in my opinion, the “best” way I have experienced is to use declarative authorization rules, defined in the data schema e.g. Amplify authorization rules.

In the example below, any user can read from the “Todo” table/graphql type, but only the person who created the row can update or delete their own data.

# Configure schema and auth rules
# in one place
type Todo
  @model
  @auth(rules: [{ allow: public, operations: [read] }, { allow: owner }])
{
  content: String
}
-- Implementing something similar
-- using Postgres/Supabase
create policy "Allow select, update and delete for users based on id"
  on "public"."Todo"
  as permissive
  for all
  to public
  using ((auth.uid() = id));

create policy "Read for all users"
  on "public"."Todo"
  as permissive
  for select
  to public
  using (true);

It would be great if Supabase could cater for this type of declarative security; if anyone knows whether it can, please reach out.

Scalability and Performance: Meeting Tomorrow’s Needs Today

Premature optimization is the root of all evil; let someone else grapple with the demon

In the past, I have spent countless hours trying to eke out marginal performance gains in case my app went viral. Spoiler alert… it didn’t… and I’ll never get those hours back.

Let someone else (with probably more expertise) obsess about performance and scalability.

Supabase’s seamless scalability ensures that as you grow, your backend does too—smoothly and reliably. This peace of mind allows you to focus your energies on innovation and enhancing user experience, secure in the knowledge that your technological foundation is a given.

The Cost-Effectiveness of Dreaming Big

Free and Pro pricing (March 2024)

In the world of startups, where every resource counts, Supabase’s pricing model is perfect.

The free tier is generous enough to battle-test your idea. The follow-on tiers are predictable and fair.

It’s not just on infrastructure costs where Supabase shines. The savings in developer hours it enables through its exceptional developer experience are significant.

Again, this efficiency allows you to channel resources into areas that directly amplify user value and platform growth.

A Comparison with the Giants

In my career, I have used other back-end-as-a-service offerings, and Supabase compares favourably for the projects I’ve been doing lately, i.e. small-team startups.

I have used all of the following comparable technologies in production environments: Firebase, Retool, AWS Amplify, Budibase.

I have tried, but not implemented the following tools: Planetscale

I have not tried, but want to look at, the following: Parse, NHost, Backendless, AppWrite

My advice, if any is needed, is to look at your particular situation and try out any or all of the tools above on a pet project.

The “try out” part is key: all these services have wonderful marketing websites that promise the earth. It’s not until you get down into the weeds on developer experience and pricing that their suitability becomes clearer.

The Power Trio: Svelte, Supabase, & Vercel – My [current] Tech Stack of Choice

*this post is the 1st of a 4 part series of posts. This post gives a quick overview of Svelte, Vercel & Supabase, the following posts will go deeper into the technologies.

Part 1: Intro
Part 2: Supabase
Part 3: Vercel
Part 4: Svelte

Choosing the right stack for your project is akin to setting the foundations of a building. It needs to be solid, reliable, and scalable.

As a software engineer for over 26 years, I’ve finally found a stack that feels like home: Svelte, Supabase, and Vercel.

This trio has not only supercharged my own productivity but has also proven indispensable in my role as CTO of a small startup. When every decision weighs heavily on our budget and future, finding a stack that offers ease of use, scalability, reliability & cost-effectiveness, has been thoroughly reassuring.

Supabase, Svelte & Vercel

Why SvelteKit, Supabase, & Vercel?

Each component of this stack brings something unique to the table.

Svelte, with its simplicity and speed, allows us to build web applications that are incredibly fast and easy to maintain. It eliminates the complexity typically associated with front-end development and makes the developer experience delightfully smooth. This has been a game-changer for us: in a startup environment, resources are limited, and we need to move quickly without sacrificing quality.

Then there’s Supabase, an open-source Firebase alternative, which has been a revelation. It offers the backend services we need – authentication, database, and storage – without the overhead of managing these systems ourselves. Its PostgreSQL foundation means we’re building on top of a powerful, open-source database. Not only that, its easy-to-use APIs save us countless hours that would otherwise be spent on backend development.

Vercel provides a seamless deployment and hosting solution that integrates perfectly with SvelteKit and GitHub. Its global CDN ensures our applications are fast, no matter where our users are. Its commitment to developer experience makes deploying our applications as simple as a git push. In the fast-paced environment of a startup, Vercel’s scalability and ease of use are invaluable.

Productivity Gains

The synergy between Svelte, Supabase, and Vercel has significantly boosted our productivity. The reduction in context switching, the streamlined development process, and the ease of deployment means we can go from idea to production incredibly fast. In a small startup, where each member often wears multiple hats, being able to focus more on solving our users’ problems and less on the intricacies of our tech stack is a massive advantage.

Scalability for Startups

For any early-stage startup, the ability to scale efficiently is critical. This stack ensures that we’re not just building for the present but are also prepared for future growth. Supabase and Vercel, in particular, offer scalable solutions that grow with us. Both ensure that we can handle increased loads without a hitch (and without surprise bills). This peace of mind allows us to focus on innovation and delivering value to our users, rather than worrying about our infrastructure.

A Personal Reflection

My personal journey through the realms of large corporations and startup agencies has taught me the importance of choosing the right tools. In the past, I’ve dealt with the complexities of custom builds and the challenges of managing primitive services on platforms like AWS, Azure & GCP. While powerful, they often require a significant investment in time and resources to manage effectively.

In my current role, where the margin for error is slim, and our budgets are tight, the simplicity, efficiency, and scalability of the Svelte, Supabase, and Vercel stack have been a blessing. It’s a setup that supports rapid growth and innovation, aligning perfectly with the transparent, agile, forward-thinking ethos of our startup.

My 26 years of experience across different sectors of the tech industry have solidified my belief that software development is hard, the tech stack should make it easier, and abstracting the difficult parts away to scalable services should always be considered.

With this particular tech stack, I have personally found a great balance between interoperability, scalability and extensibility. A word of caution though: what works for one team is not necessarily right for another. For our situation, however, it’s been a testament to how the right set of tools can not only enhance productivity but also empower a team to focus on what truly matters – creating value for the users.

NIDC 2023 – Startup Priorities

Recently I had the absolute pleasure of talking at the 2023 Northern Ireland Developers Conference.
My talk was titled “The Art of Bootstrapping: Focus on the things that REALLY matter” and took a look at what the priorities of a startup should be.

TL;DR: According to the data, startups spend too much time and resources on relatively unimportant things, while under-investing in product-market fit. To get an idea of what “product-market fit” equates to in practice for startups, see the following advice from Andrew Gazdecki (founder of Acquire.com).

The NIDC is my favourite conference to attend for several reasons, but the main one is that it is a conference by developers, for developers. Everyone is there to share or to learn. In the spirit of sharing, I wanted to share the slides from the talk in case they can help anyone else.

High-Speed Web: Start with the End in Mind

The speed of your website is important to users. But in this age of pay-per-compute, reducing processing all along the chain is important to keep owner costs down too.

Fortunately, in today’s modern web, both of these seemingly competing requirements actually complement each other; reducing processing costs can be achieved by improving site speed.

To explore how to improve site speed and reduce overall processing, let’s start with the end in mind and work backwards.

Starting with the customer, what constitutes a ‘good’ user experience in terms of speed?

Ultimate User Experience?

Imagine you are a first-time visitor to your site. The ideal situation would be an immediate view of the site, instantly available on the device you’re using: no JavaScript processing, no querying for data, no image optimizations, just the browser showing HTML and [inline] CSS (or even one big snapshot image of your content).

Bear with me; I know some sites need to show dynamic data and some do not. But remember, at this point you’re just an ordinary user. You don’t care what goes on in the browser or on the server, what static vs dynamic means, or what pain it takes to achieve results. You just want a great experience.

As a user, I want to see the content immediately. By the time I’ve made sense of the initial page (0–3.8 seconds), I want to interact with it.

If the data I am viewing is updated server-side while the page is open, these updates should be pushed to me automagically. Getting new data should not rely on me fetching it, e.g. by hitting some kind of refresh button to call back to the server.

If I leave the site and come back to it, by the time I have indicated that I wish to view the page (preemptive rendering?), it should already be fully loaded, with no stale data on the screen. If any updates have been made to the page since I last saw it, the changes, and only the changes, should have been pushed to my device, using as little bandwidth as possible.
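
This push-rather-than-poll behaviour can be sketched with Server-Sent Events, whose wire format is a web standard. The helper below just frames a payload as an SSE message; the event name and payload are my own illustration:

```javascript
// Frame a payload as a Server-Sent Events message.
// The wire format is standard: an optional "event:" line, a "data:" line,
// and a blank line to terminate the message.
function sseMessage(eventName, payload) {
  return `event: ${eventName}\ndata: ${JSON.stringify(payload)}\n\n`;
}

// Server-side (sketch only): hold the response open and write frames as
// the data changes, instead of waiting for the client to poll.
// res.writeHead(200, { "Content-Type": "text/event-stream" });
// res.write(sseMessage("stock-update", { sku: "A1", qty: 3 }));
```

In the browser, an EventSource listening on that endpoint would receive each frame without any refresh button being pressed.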

All sounds great? But are these demands even possible using the latest tools/technologies for web and server?

Server Side Rendering

Arguably one of the most important things is to show the content of your site in the quickest way possible.

Generally, the steps that a browser takes to display a web page are as follows:

  1. A request for a webpage is made to the server.
  2. The server decodes the request, and the page and its resources (files) are downloaded.
  3. The web browser uses the page resources to build the page.
  4. The page is then rendered (displayed) to the user.

The bottleneck in this process is steps 2 and 3, i.e. downloading the resources and ‘building’ the page as the resources become available. Rendering a ‘built’ page, step 4, is what a browser is really good at.

Can we improve, or even skip, steps 2 and 3 to give users a better experience?

If the customer’s browser isn’t going to do the hard work of pulling all the resources together to build the page, who’s going to do it?

Combining and building the page can be performed on the server instead. The resulting ‘built’ page can then be the resource that is served to the customer.

Server-side rendering (SSR) is an ‘old’ method of serving web pages. Before client-side JavaScript was as powerful as it is today, SSR was a common way to serve dynamic web pages.

What’s changed is that you can now use JavaScript on both the client and the server. You can replace PHP, Java, ASP etc. on the server with Node.js. This certainly helps with code reuse between client and server, but are we actually any further forward?

The principles are still the same: the client browser makes a call to the server, the server dynamically creates a webpage containing an initial view of the page, and the server then delivers the page to the client.

Server-side rendering certainly solves the processing problem for the customer, but we’ve just pushed the problem server-side. Not only have we increased the amount of processing that we, the site owner, have to pay for; we are also not much further forward in improving the overall speed of the site. Certainly, the customer may see a more complete view of the site sooner, and the server may have better access to some of the resources, but the overall amount of processing needed to build the page stays roughly the same.

If we, as site owners, are now paying for the processing to ‘build’ an initial view of the site, how can we make this process as efficient as possible?

Static First

The vast majority of content on the web changes infrequently. Even on sites with ‘dynamic’ content, usually only a small amount of the total content of a page is truly dynamic.

If the same content is going to be viewed by more than one visitor, why generate it more than once? It might not be a big issue if you only have 2 visitors. But even with 11 visitors, you still might be doing 10x more processing than was needed.

If as much of your content as possible is precompiled, i.e. the data has already been fetched, the styles applied and the HTML generated preemptively, it can be delivered to the user quicker. In fact, if the content is already compiled, we can take the server out of the chain completely for this interaction and allow the browser to access the content directly from a location close to the customer.

The ‘static first’ strategy is to compile a static view of the page first, serve it from a CDN, delay enabling JavaScript until the page has loaded, and then hydrate any stale dynamic data.

By adopting static first you can potentially reduce your hosting costs to cents per month, as opposed to dollars, AND provide a blisteringly fast experience for your customers.

But what about pages that are never viewed? To statically generate an entire site, you need to generate and manage updates for all potential routes in your website. You might be generating hundreds or thousands of web pages that real users may never visit. However, although ‘real’ users may not visit these pages, it is likely, and welcome, that at least one crawler bot will want to access them (unless the content is not for the eyes of search engines).
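
To make the ‘static first’ idea concrete, here is a toy pre-compile step: render every page once at build time so request time needs no processing at all. The page shape and template are invented for illustration:

```javascript
// Toy static-site generation: each page is rendered exactly once, at build
// time, no matter how many visitors later request it.
function renderPage(page) {
  return [
    "<!doctype html>",
    `<html><head><title>${page.title}</title></head>`,
    `<body><h1>${page.title}</h1><p>${page.body}</p></body></html>`,
  ].join("\n");
}

function buildSite(pages) {
  // A real build step would write these files to disk and upload them to a CDN.
  const files = {};
  for (const page of pages) {
    files[`${page.slug}.html`] = renderPage(page);
  }
  return files;
}
```

Serving one of these files is a plain CDN read; the render cost was paid once, pre-emptively.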

Caching Vs Static Site

So if having assets ready and available on the network, close to the user, is preferable, is this not just server caching?

Yes and no. There are a number of different caching options available for your site. You can cache the individual items referenced by your page, e.g. images, CSS files, database queries, or you can cache the page itself, or both.

A static first strategy tries to cut the cord with the server: it does not require database query caching, and it processes as much as possible into a single cacheable unit, i.e. page caching.

Caching is generally performed dynamically, i.e. it happens when one or more users access a particular page. Static site generation is performed pre-emptively, i.e. before any users access the page. Both aim to make reused assets as available, and as close to a user’s device, as possible; the difference is whether this is done entirely pre-emptively or dynamically.

Static First, Only Update Dynamic Data

However, frequent updates may be unavoidable depending on the type of site. For dynamic sites, it is not feasible to continually pre-compile all views for all users, especially when the data is changing frequently.

But remember again, your user does not care. Mostly, they don’t understand the difference between static and dynamic sites; they want to see the important content, fast.

You can aim to statically compile as much of the page as possible beforehand, but the ‘dynamic’ parts will involve some sort of processing. A first-time user may accept the page loading with a placeholder where the dynamic data should be, and the data ‘hydrating’ on page load. On the other hand, the user may prefer a slightly slower page load if the page arrives with the data already fully ‘hydrated’. The choice probably depends on what makes your customers happiest.

Subsequent Visits & Data caching

Up until now we’ve generally been concentrating on the scenario when customers first visit your site. When a customer visits your site again, the situation is slightly different. A returning customer will already have, on their device, many of the resources for the page. If nothing has changed, they may even already have everything to show the page again.

As a returning user, it makes little sense for me to have to contact the server again, have a new view generated by the server and download a new page. If only a small subsection of the page I already have has changed, this is unnecessary processing.

The ideal situation is for the server to push updates to my browser. When this happens, my browser doesn’t have to continually ask if new data is available. An even better scenario is if the server has already pushed me the data before I open the page again.

Even if you don’t consider websockets and/or service workers, you still have the opportunity to cache API data on the server. If a piece of data has not changed since the last time your browser (or any other browser) asked the server for it, regenerating and resending it in full introduces unnecessary processing. Not for the faint-hearted, but API caching can be achieved using the ETag header of an HTTP call.

Final Note. Software Development is hard.

There are only two hard things in Computer Science: cache invalidation and naming things.

— Phil Karlton

There are lots of difficult things about software development, but cache invalidation is the devil.

To reduce processing, all the methods above require caching in some shape or form. Static website generation is just page caching by another name. Putting things in a cache is easy; knowing when to update them is incredibly difficult.

If a page on your website changes URL, should you rebuild your entire site in case other pages reference the changed page? Does this re-introduce more processing than it’s worth?

If you statically compile your stylesheets inline into the page, what happens when a stylesheet changes? Does every page need to be compiled again, even if it doesn’t make use of the changed style?

If a property in a row of data in your database changes, how do you invalidate your api cache? What if the data is referenced by another item in the cache?

If you are a masochist and like solving these types of problems, have at it. For the rest of us mere mortals, look to use a tool or service that helps manage these problems.

Here is a non-exhaustive list of some newer tools and services that help in this space:

Hugo – Static website builder

Shifter – Compile your WordPress site to static

Vercel – JAM Stack + React Server Side Rendering + Hosting

Netlify – JAM Stack + Hosting

Webiny – Almost serverless Static website builder + API Generation

Svelte – JavaScript framework that precompiles to plain JS so you don’t need to deploy and load a framework with your webapp.