Theo's Blog

Why We're Building Ping

Content creation is hard.

Most of the tools creators use every day were not built for creators. Even fewer were built by creators.

We’re here to change that.

At T3 Tools, we’re building a team of people who are passionate about media and content creation. We stream. We make content. We’re creators, so we feel the pains creators experience every day.

After collective decades of dealing with these problems, we’re building solutions.

Introducing our first creator-focused tool, Ping.

Ping brand

Have you ever tried streaming with your friends before? If you have, then you know it’s one of the most painful processes in content creation. Interaction is the lifeblood of live content, and the tools to interact with your friends live are rough.

Existing solutions like StreamYard and Stream Club were built for people who want to be creators, but sadly they fail those who already create. Most professional creators avoid them entirely, instead using a combination of Zoom/Discord, OBS, screen capture and duct tape.

We think we can do better.

We Build Different

Creator tooling companies are often eager to reinvent the car. We’d rather build the world’s best tires.

When T3 builds, we stay focused on three things: Simplicity, Quality, and Integration.

Simplicity

Simple doesn’t always mean easy. Content creation will never be easy, and we’re not pretending to have changed that.

Simple means the thing does what it says on the label. It’s sad that this isn’t always the case. Creators have been burned so many times by unclear promises and weird gotchas. We won’t do that.

Quality

Most applications are built around breaking down barriers for the lowest common denominator. Discord can’t increase call quality if someone might call in from 3G on their phone.

We don’t compromise on quality for users with compromised connections. Low latency, high quality feeds are the default experience when you’re using our products.

Integration

Ping integrates directly with the tools you already know and love. Every participant in a Ping call can be added as a separate source (via browser embeds). This enables full control of your layout without assumptions about what you want.

By “unbundling video conferencing”, we’ve enabled some super cool use cases we never imagined while building. Many blur the lines between the virtual world, real world, and internet, like Ironmouse interviewing Sykkuno in a virtual anime room or ProjektMelody collaborating with a real-world chef.

By building software that complements OBS instead of replacing it, we’ve built something much more powerful than any existing solution.

What’s Next?

We have lots of fun things in our future. Hang tight.

If you’d like access to Ping early, be sure to request a demo on the homepage!

If you want to stay up to date on what we’re building, join the Discord server and follow Theo on Twitter.

An Inconsistent Truth: Next.js and Type Safety

Imagine a world where Next.js was architected around type safety.

“But doesn’t Next.js already work with TypeScript?”

Yes. I even recommend the Next.js TypeScript template.

Type safety goes deeper than TypeScript support.

What is Type Safety?

“…type safety is the extent to which a programming language discourages or prevents type errors”


It’s important to recognize first and foremost that type safety isn’t a boolean ‘on/off’ state. Type safety is a set of pipes running between your furthest-off dependency and your user.

Throughout my career, I’ve seen a number of systems that handle types in various ways. For the sake of simplicity, I’m going to over-generalize the structure of a system into a few parts:

  • Data store (SQL, Mongo, Worker KV)
  • Backend (interface to data store)
  • API + Schema layer (REST/Swagger, GraphQL, gRPC)
  • Client (Frontend web app, mobile app, video game)

I’ve been lucky to work primarily in systems where each of these pieces is type safe. At Twitch, we used PostgreSQL for data, Golang for backend, GraphQL for APIs, and React + TypeScript for the front end. Each piece was type safe, and tools like GraphQL allowed us to write a “type contract” between different type systems (in this case a GraphQL schema).

Given the separation of concerns and focus, combined with the varied technologies on frontend and backend, this architecture made a lot of sense.

Given a full-stack TypeScript app using Next.js, I think we can do much better.

Building Better Type Systems

Going to start this with a question:

When working in a type safe system, should you be writing more types, or less?

This question may seem dumb. “Of course you would have more type definitions in the better typed system!”

The best type systems should require no types to be written at all.

But how??!

Type Inference

Credits to Alex for this fantastic meme

Writing type definitions for every piece of your code does not make a type safe system.

Good type systems are built on top of strongly typed dependencies and models. Type safety comes when the rest is inferred from there.
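To make this concrete, here’s a minimal sketch in plain TypeScript (all names are hypothetical) of inference doing the work: only the model value is spelled out, and everything downstream is derived from it.

```typescript
// Only the model value is written out by hand.
const defaultUser = { id: "u_1", name: null as string | null };

// Derived, not hand-written: `typeof` lifts the shape from the value.
type User = typeof defaultUser;

function greet(user: User): string {
  return `Hello ${user.name ?? "stranger"}`;
}

const message = greet({ id: "u_2", name: "Theo" });
```

Rename `name` on `defaultUser` and the `greet` call site errors immediately, with no hand-written `User` type to update.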

Say I have a model in SQL:

model User {
  id   String  @id @default(cuid())
  name String?
}

We know, given a User, that we have an id string that is unique and we might have a name that is a string. If we were interfacing with this in TypeScript, the TS definition would look something like:

interface User {
	id: string;
	name: string | null;
}

Here’s where the Theo spice comes in: you should never have to write types that look this much like your data models.

Tools like Prisma serve as a beautiful “translation layer” between your SQL data and your TypeScript backend.

export const getUserById = async (userId: string) => {
  return await prisma.user.findFirst({ where: { id: userId } });
};

That’s it.

The type safety doesn’t come from defining our own types. It comes from the source of truth being honored and all further contracts being inferred from that source.

Next.js often breaks that contract.

Next can be a (type) safety risk

This statement is bold, but this problem is large enough to justify it IMO. Know that it comes from a place of love.

It is hard to believe that the biggest breach in contract for my type system exists within any given file in the Next.js pages directory, but it’s the concern I’m here to shout about.

// pages/user-info/[id].ts
export default function UserInfo(props) {
  return <div>Hello {props.user?.name}</div>;
}

export async function getServerSideProps(context) {
  const id = context.params?.id;
  const user = await prisma.user.findFirst({ where: { id: id } });

  return { props: { user } };
}


This seems innocent enough, right? Drop this code into your Next.js pages dir and everything passes.

Sadly, there are numerous type errors that this will silently allow you to introduce, such as:

  • Modifying the schema (rename name to username)
  • Selecting different values from the prisma.user call
  • Changing the key you return user under in getServerSideProps
  • Erroneously deleting the getServerSideProps function (…yes I’ve done this before)

Even putting aside the egregious allowance of implicit-any that allows most of these failures to be possible, the recommended mitigation strategies don’t do enough. Let’s take a look at a few.

Manually Typing Props

// pages/user-info/[id].ts
import type { User } from "@prisma/client";

export default function UserInfo(props: { user?: User }) {
  return <div>Hello {props.user?.name}</div>;
}

Yay we did it! If we were to change name to username in the schema, we’d get a type error here!

But what if we modify the getServerSideProps function?

export async function getServerSideProps(context) {
  const id = context.params?.id;
  const user = await prisma.user.findFirst({
    where: { id: id },
    select: { id: true }, // We only select ID now (so `name` isn't included)
  });

  return { props: { user } };
}

Note that we only made one change here: we started selecting the values we needed more carefully.

Sadly, since the page component presumed the entire User was coming down the wire, this will silently pass type checks. Since the user?.name call is optionally chained, this case will not throw an error, but that will only make debugging more painful.
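To see the failure mode in isolation, here’s a self-contained sketch (plain TypeScript, hypothetical types) of how a stale prop type plus optional chaining hides the missing field:

```typescript
// The shape the component still believes is coming down the wire.
type FullUser = { id: string; name: string | null };

// What the server actually returns once we `select` only the id.
const fromServer = { id: "abc123" };

// The cast mirrors the untyped serialization boundary; TypeScript
// has no way to see that `name` was dropped server-side.
const props = { user: fromServer as FullUser };

// No compile error, no runtime throw: just a silently absent value.
const greeting = `Hello ${props.user?.name ?? ""}`;
```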

Next’s provided inference helper: InferGetServerSidePropsType

// pages/user-info/[id].ts
import type {
  GetServerSidePropsContext,
  InferGetServerSidePropsType,
} from "next";

export const getServerSideProps = async (
  context: GetServerSidePropsContext
) => {
  const id = context.params?.id;
  const user = await prisma.user.findFirst({ where: { id: id } });

  return { props: { user } };
};

// Infer types from getServerSideProps
type ServerSideProps = InferGetServerSidePropsType<typeof getServerSideProps>;

// Assign inferred props in exported page component
export default function UserInfo(props: ServerSideProps) {
  return <div>Hello {props.user?.name}</div>;
}
Shout out to Brandon (Blitz.js) and Luis (Vercel) for pointing out that I entirely missed the provided inference type in the Next.js docs.

The goal here is to use the types of your getServerSideProps function as a source of truth via inference. Funny enough, I’ve written a number of helpers to do this myself before.

As happy as I am to know this exists, I’ve already run into some painful edges with Next’s provided InferGetServerSidePropsType.

To use this correctly, I had to have decent familiarity with Next’s internal typings and read through this GitHub issue thoroughly. Even with that prerequisite, I found it shockingly easy to accidentally return a non-implicit any type, which does not throw any errors under the provided Next.js tsconfig.

This method also requires you to manually type both the server-side function and the component props. There’s nothing implicit about the relationship; those prop types could easily be re-assigned or mis-assigned :(

Manually typing API endpoints

This path is vaguely hinted at in the Next.js docs, but it requires breaking our solution up a bit. I will also be including React Query to make this example significantly less burdensome (I would have used Vercel’s swr package, but I was unable to find a TypeScript example in their docs).

// pages/api/get-user-by-id.ts
import type { NextApiRequest, NextApiResponse } from "next";
import type { User } from "@prisma/client";

export type UserRequestData = {
  user: User | null;
};

export default async (
  req: NextApiRequest,
  res: NextApiResponse<UserRequestData>
) => {
  const { userId } = req.query;

  const user = await prisma.user.findFirst({ where: { id: userId as string } });
  res.status(200).json({ user });
};

// pages/user-info/[id].ts
import { UserRequestData } from "../api/get-user-by-id";
import { useQuery } from "react-query";
import { useRouter } from "next/router";

const getUserById = async (userId: string) => {
  const response = await fetch("/api/get-user-by-id?userId=" + userId);

  // Assign type imported from server code
  return (await response.json()) as UserRequestData;
};

export default function UserInfo() {
  const { query } = useRouter(); // Get userId from query params

  // Fetch from server with loading and error state
  const { data, isLoading } = useQuery<UserRequestData>(["user", query.id], () =>
    getUserById(query.id as string)
  );

  if (isLoading) return null;

  if (!data) return <div>Error: user not found</div>;

  return <div>Hello {data.user?.name}</div>;
}

This one may look like a lot, but for a full-stack backend and frontend with typesafety across both, it’s not bad. It’s important to note that, by moving from getServerSideProps to React Query (or swr), we have moved the data fetching from the server to the client in pursuit of type safety.

There are definite benefits to this approach. By putting the type definition so close to the API, we are making the “contract of what is returned” more reliable to consume.

There are definite negatives as well. The verbosity compared to the earlier options is apparent and absurd. We’ve given up a lot of our SSR benefits. But have we gained a lot in terms of type safety?

I’d argue no.

By defining the types manually, we’re still leaving a lot of surface area for error. What if I import the wrong type? What if I fetch from the wrong URL? What if I forget to call .json() (which I totally did when writing this example)?

I think we can do better.

An Inconsistent Truth

All of the type failures encountered in the above examples stem from roughly the same core issue: the “types” and the “sources of data” are not tied together implicitly. By separating the source of data and the source of truth, we introduce space for errors.

Let’s repeat that for those in the back.

By separating the source of data and the source of truth, we introduce space for errors.

This is a big part of why I love Prisma so much. Your “source of truth” is the schema.prisma file. Everything else is inferred from there. You will not be writing your own type defs with Prisma.
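The same philosophy works in plain TypeScript, too. A small sketch (hypothetical names): the runtime value is the source of truth, and the type is inferred from it, so the two can never drift apart.

```typescript
// The runtime value IS the source of truth.
const roles = ["admin", "editor", "viewer"] as const;

// Derived union ("admin" | "editor" | "viewer"), no hand-written duplicate.
type Role = (typeof roles)[number];

// The type guard checks against the same value the type came from.
function isRole(value: string): value is Role {
  return (roles as readonly string[]).includes(value);
}
```

Add a role to the array and both the `Role` union and the guard update for free.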

To be clear, Next is solving a very different problem and can’t generate a bunch of types out of a model file. Still though, I’d love if getServerSideProps worked similarly.

The closest we can get right now is InferGetServerSidePropsType. It is the safest way to honor the contracts inherent to TypeScript across the client and server barrier while using server side function helpers in Next.

Sadly, digging deeper into the provided types has only made me more cynical. There are some scary typedefs within Next’s provided types, GetServerSideProps in particular:

export type GetServerSideProps<
  P extends ASTRO_ESCAPED_LEFT_CURLY_BRACKET [key: string]: any } = ASTRO_ESCAPED_LEFT_CURLY_BRACKET [key: string]: any },
  Q extends ParsedUrlQuery = ParsedUrlQuery,
  D extends PreviewData = PreviewData
> = (
  context: GetServerSidePropsContext<Q, D>
) => Promise<GetServerSidePropsResult<P>>;

The P extends bit that auto-assigns a generic object as the return type is…very scary. Way too easy to trigger. IMO, this first arg should be mandatory if this prop is going to be used.
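To illustrate why that default is scary, here’s a self-contained sketch (hypothetical type names) of a generic that falls back to an index signature of `any` when no argument is supplied:

```typescript
// Stand-in for Next's pattern: P defaults to { [key: string]: any }.
type PropsGetter<P extends { [key: string]: any } = { [key: string]: any }> =
  () => { props: P };

// No generic supplied, so P silently falls back to the any-valued default...
const getProps: PropsGetter = () => ({ props: { usr: { nme: "typo" } } });

// ...which means even this typo'd access compiles and runs without complaint.
const oops = getProps().props.usr.nme;
```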

After chatting with some folks at Vercel, it’s clear they’re working to make this better. A lot of the generic export type issues I’ve laid out here can be sourced back to a more generic export typing issue in TypeScript itself (ty Balázs for pointing me to this).

All that said, I think we can work around these problems :)

Exploring Outside Of Next

Typesafe APIs

Before I go too deep here, I should make my bias clear. I’m a tRPC fanboy.

tRPC takes full stack type inference to the next level by relying on the types defined in your router as a “schema” on your client. Blitz.js does something similar with queries. Both wrap React Query with typesafe definitions at the API level, which enables some “magic” with type consistency.

While this example uses Next, tRPC does not require you use it. It doesn’t even require React. Any TypeScript server and client can serve and consume a tRPC router.

// pages/api/trpc/[trpc].ts
import * as trpc from "@trpc/server";
import { z } from "zod";

const appRouter = trpc.router().query("get-user-by-id", {
  input: z.object({
    userId: z.string(),
  }),
  async resolve({ input }) {
    const user = await prisma.user.findFirst({ where: { id: input.userId } });

    return { user };
  },
});

export type AppRouter = typeof appRouter;

// pages/user/[id].ts
import { useRouter } from "next/router";
import trpc from "../utils/trpc";

export default function UserInfo() {
  const { query } = useRouter();

  // trpc.useQuery will call the "get-user-by-id" api with { userId: query.id }
  const { data } = trpc.useQuery(["get-user-by-id", { userId: query.id }]);

  if (!data) return <div>Error: user not found</div>;

  return <div>Hello {data.user?.name}</div>;
}
It’s important to note that the trpc.useQuery call is as close to 100% typesafe as you can get (hell, even in this case it will type error because query.id isn’t guaranteed to exist).

The "get-user-by-id" string will auto-complete, and type error if it is not a real query in your tRPC router. The input will error if it doesn’t match the zod schema in your query/mutation. The data is typed identically to the return types of your resolve function (even if you use Map and Date, superjson can convert those too). Also - unlike the earlier example, this one can also work with SSR.

Server Components

This is the React 18 solution. “Just call the backend code in the component”.

// components/user.server.tsx
export const UserInfo: React.FC<{ userId: string }> = (props) => {
  const user = prisma.user.findFirst({ where: { id: props.userId } });

  return <div>Hello {user?.name}</div>;
};

// pages/user/[id].tsx
import { Suspense } from "react";
import { useRouter } from "next/router";
import { UserInfo } from "../../components/user.server";

export default function UserInfoPage() {
  const { query } = useRouter();

  return (
    <Suspense fallback={<div>Loading...</div>}>
      <UserInfo userId={query.id as string} />
    </Suspense>
  );
}

Server components are really damn cool. I think they will help significantly reduce the number of places where this problem exists. I also suspect the transition towards server components will take a long while, and important cases like header metadata will be missed unless explicitly SSR’d ahead of time.

Server components are the future. What about now?

A Proposal: _props.ts

I want to preface this with a few things

  • I’m writing this out of immense love for Vercel and Next.js. This stack is the most productive I’ve ever felt and I don’t suspect that will change any time soon. I’m betting my company on it.
  • I’m far from a TypeScript expert - especially on the maintainer side. TypeScript is a whole different beast when you are working on libraries that provide generics. Thank you Tanner and KATT for giving me a glimpse into that world.
  • I have no intention of implementing any of the things I discuss here. I’m very happy with my tRPC + Next setup and don’t want to move. This is purely theoretical.

All that said, hear me out.

Lightly Inspired

Think of this ergonomically as an in-between of the new Next.js Middleware syntax of _middleware.ts and the philosophy behind Blitz.js query resolvers.

// pages/user-info/_props.ts
export async function getServerSideProps(context) {
  const id = context.params?.id;
  const user = await prisma.user.findFirst({ where: { id: id } });

  return { props: { user } };
}

// pages/user-info/[id].ts
import Props from "./_props"; // This will have to be some wizardry or a compile step

// This should lint error if the type was assigned
// to something other than _props in the same dir
export default function UserInfo(props: Props) {
  return <div>Hello {props.user?.name}</div>;
}

This is a very rough sketch of what I have in mind. My “general thought” is a file-level barrier between “the thing run on the server” and “the thing run on server AND client”, with an implicit type contract (potentially generated) through the creation of these files. Could even spit out a useServerSideProps hook 🤔

Under the hood I would expect this to use something similar to the InferGetServerSidePropsType example earlier. I can see potential ways to extend this further, such as additional keys you can return or other named files i.e. _dynamicProps.ts or _staticProps.ts.

Generally, I like the idea of “files with an underscore run on the server”, and that thought brought me here. I think it can go really far, especially when combined with a compiler. Not many other companies are in a position to change all the pieces to build something like this.

It’s been proven that full stack type inference is possible with modern TypeScript tooling. Let’s work towards a future where that’s the default 🙂

Thank You

This was a long one. I know it may seem harsh towards Next and Vercel, but that was not my intent at all. I’m critical out of love. I would never have written this much about something I didn’t intend to use for years. I bet my company on this stack. I feel like we’re working in a stack from the future.

Want to shout out a bunch of people who gave feedback on this article; I would have looked way stupider without y’all.

Shoutout to Alex (tRPC), Balázs (NextAuth.js), Luis (Vercel), Lee (Vercel), Brandon (Blitz.js), Jacob (Cloudflare), Tanner (TanStack/React Query), Jonas (ThirdWeb) and everyone else who I’m forgetting.

Extra stuff

If you got this far, you might like my rants on Twitter as well.

If you want to see this tech in action, check out this 2+ hour deep dive building a full stack app with Prisma, PlanetScale, Next.js, TypeScript, Vercel, tRPC, and Tailwind.

Quitting Your Dream Job (Twice)

Sometimes it’s important to do the thing that feels a little stupid.

I’m starting a company.

How did I get here? I’m just a music-loving skate nerd from a small farm town. My first “real job,” a contract dev role, was more a generous gift than an earned position. If the hiring manager didn’t happen to have the same taste in music, I probably wouldn’t have gotten the role.

Oh also, worth mentioning that the role was on the Creative Team at Twitch. Fresh out of university, with no experience at all, I got to write the code that ran Twitch’s marathons, including Power Rangers, Yu-Gi-Oh!, and the viral Bob Ross marathon, which still runs today.

I stayed at Twitch for four years, hopping from team to team, building dozens of projects I’m proud of to this day. It’s hard to put into words how much I was able to learn here. One lesson stuck out — I learned I could build some pretty damn good stuff.

The First Resignation

Realizing that I had outgrown my role was a long, painful process. My last year was particularly rough. I saw so many points that hurt both creators and users. I wanted to destroy, rebuild, and improve everywhere I could. My ambitions helped me get some pretty cool stuff out, but I was getting tired. My work was no longer fulfilling. My time wasn’t spent building; it was spent convincing people that we needed to build and ship the right things.

My desire to build overpowered the comfort I got from my role at Twitch. One of my favorite sites from my childhood, Turntable, announced they were coming back and had begun hiring. The opportunity to build was being dangled in front of me, and I had to bite. I left Twitch on the last Friday in January 2021 and started at TTFM Labs the following Monday.

Startup Life

Going from a 2,000-person company to a two-person company is, uh, quite an adjustment.

My initial role was to build, of all things, the Android app. We had a working prototype within a week. Three days later, we had ported to iOS successfully. The web app (written by contractors at the time) was the next target. One month in, I’d managed to create the TTFM client on all platforms.

I was building again. Faster than ever. I was hooked.

We started hiring. I helped make a great team. We were shipping at an alarming rate, recreating the music sharing experience I missed from my high school days.

The feeling of “solving problems from scratch” motivated me to build like I had never built before. I wish I could say this continued past the first two months.

Losing Focus

It didn’t take long to run into some familiar red tape. Our backend was labeled by leadership as “do not touch.” All of the outages, bugs, and other weird behaviors became client-side problems we had to solve in increasingly obtuse ways.

In a short time, my role transitioned away from “solving technical problems while improving the product.” I was back in the bureaucracy. The foundation we were building on was weak, and we were being asked to build higher. I pushed for us to refocus and reinforce what we had before our tech collapsed under the weight.

It took a while, but they listened. We replaced our monstrous 7,000 LOC socket server with a minimal, maintainable implementation under 350 LOC. This was a huge win, but I had become disillusioned by the process.

I had spent almost half a year justifying code that took under 10 hours to write.

Finding My Drive Again

My work was no longer providing the fulfillment I had quit Twitch to pursue. I needed to build.

I spent a week putting too much time into a web game about Dogecoin. It was the most fulfillment I had felt in months. It also reinforced the big lesson from Twitch: I can go from concept to product pretty damn quick.

Speaking of Twitch, I was still a regular user. The problems I wanted to solve hadn’t gone anywhere. From the inside, creator pain points were rarely considered beyond dinner discussions. From the outside, those same problems stuck out like weeds, begging to be cut.

Multi-person content seemed particularly hard to create, so I started a new project — Round. In a few days, I had something usable. A week later, it was borderline useful. My energy was back.

Electrified as ever, I started showing some friends. Unlike previous endeavors, my excitement was reflected. They all wanted to use it as soon as possible.

The Second Resignation

When I moved from Twitch to TTFM, I took a third of my previous compensation in hopes of finding fulfillment in my work. To put it bluntly, Round is the most fulfilling work I’ve done in a while. I want to ride this wave.

I’m leaving TTFM Labs to start my own company.

Which is incredibly dumb.

But I can’t imagine doing anything else.

Quitting your job to start a streaming company is dumb. Missing this opportunity feels way dumber.

T3 Tools

I want to make tools that inspire a sense of craftsmanship. I am starting T3 Tools to do just that.

We have a lot to build. Round is just the start. If all goes well, we’ll be able to build the toolbox that powers the future of live content creation.

I’m so hyped. I can’t wait to share more about T3 soon.

If you’re also excited about live creator tools, modern dev practices and patterns, or building in general - hit me up. We’ll be hiring soon :)

Using Vite On Vercel (Outdated)

Vite + Vercel

UPDATE (7/13/21)

Vite is officially supported on Vercel now. That means the rest of this post is out of date and should be ignored.

For history’s sake…


I like fast, simple dev environments. Vite has quickly become my go-to build tool for any new single page app project. Vercel is my host of choice, greatly simplifying the deployment experience for both static web apps and associated APIs.

I’ve been loving Vercel’s serverless function implementation, which enables quick deploys of lambdas by adding JS (or TS) files to the /api directory in your repo. You can even run these locally with the Vercel CLI.

Sadly, Vite is not quite as drop-in a solution on Vercel as other build tools (Next.js, Gatsby, Nuxt, etc). After a good bit of hacking, I have managed to get everything working consistently enough that I felt obligated to share. Here’s a rough how-to on the steps to get a fresh Vite project running smoothly with Vercel’s builds, deploys, and CLI.

Step 1: Init and push

Start a fresh vite project with npm init @vitejs/app. If you prefer Yarn, follow along here.

I’ll be initializing a fresh React and TypeScript project, but these instructions should work regardless of your framework or choice between JS and TS.

Once you’ve initialized the project, make a fresh GitHub repo, cd into the dir, npm install, and push it up.

Rough bash:

npm install
git init
git add -A
git commit -m "init"
git branch -M main
git remote add origin !! YOUR REPO URL HERE !!
git push -u origin main

Step 2: Deploy to Vercel

If you are making a single page app (you likely are if using React or Vue), you will need to do a little more config.

By default, Vercel tries to resolve all requests to a file at the path. Works great in Next. Not great for SPAs. To enable non-root routes, you will have to make a vercel.json config file that redirects to the root index.html.


{
  "rewrites": [{ "source": "/(.*)", "destination": "/index.html" }]
}

Once this is added, you can go to the Vercel dashboard and create a new project. For framework, select “other”. For “Output directory”, override the default with dist.

Vercel Config

Click “deploy” and you should be live in no time!

If you do not plan on using Vercel’s serverless functions or CLI, you can stop here.

Vite’s built output is a simple static web app, which Vercel is more than equipped to handle. The Vite dev server has a few more weird quirks that you will have to resolve before it will play nice with Vercel’s CLI.

Step 3: Wrangling the CLIs

Vercel’s CLI can be installed with a quick npm i -g vercel (more info here).

To assure our changes work, I will also be creating a simple /api/hello-world endpoint to confirm the local dev environment is working.

I feel obligated to inform you that HERE BE DRAGONS. There are some really weird behaviors in how the Vercel CLI interacts with the Vite dev server. I’ve managed to work around most of these issues, as long as this PR gets merged.

First, we have to modify our Vercel project settings once more to point it at a “safer dev command”. If you’re thinking of modifying the "dev": "vite" key in your package.json, do not do this. It will break. I have no idea why.

Toggle “override” for “DEVELOPMENT COMMAND” and set it to npm run {vercel-special-command-name}.

Vercel dev special config

The following is weird enough that I stubbed out a commit with all the related changes to make it easier to apply to your project.

Once we have told Vercel about this special command, we have to create it. Vercel CLI uses the --port argument for…something. 🤷‍♂️

Add the following scripts in package.json:

"vercel-dev-helper": "vite --port $PORT",
"vdev": "vercel dev --local-config ./vercel-dev.json"

You may have noticed that the vercel dev command is pointing to a unique local config. This is because the SPA rewrite in our vercel.json does not work with vite’s dev server.

Easiest fix is to create a vercel-dev.json with a single {} inside to undo that config :)

Now npm run vdev and you should be good to go!

Wrap up

Assuming my fix for import pathing gets merged, you should now be good to go! Compared to Create React App, Next, and other build tools, this much config may not seem worthwhile. But man, Vite is fast and simple as heck, and I’ll be damned if I have to give it up over a few CLI incompatibilities.

In the future, I’d hope these changes are integrated into Vercel’s tools, and that we’ll see a Vite option in the “frameworks” dropdown :)

Thank you for reading! GitHub repo here for those interested in the full source.

AirPods Max Review

Cat wearing airpods max

No Magic Here

I’ll admit that I went in hopeful. The AirPods Pro more than impressed me. They’re my “everyday headphone”, which hurts to say as an audiophile, but also speaks to their quality.

These things, however. These are getting returned.

That’s not to say they’re all bad or anything. I can see a really good Version Two in the future. But I did not purchase a Version Two. Sadly, what we have right now in the AirPods Max is a disappointment, and I cannot recommend them.


Build

Some parts of the AirPods Max are best in class. The pad material is fantastic. The all-metal framing and body make my HD800s feel cheap.

I think the top band is really nice. Apparently that’s controversial? idk. I like the mesh a lot. I’m concerned it is not replaceable, which is scary at this price, but it is showing no wear and should last.

The cups pivot “out”, preventing them from sitting flat around your neck. These are meant to be on your head or on a table, not “around”.

Volume knob is fine. Surprised the copy-paste from the Watch went so well (seriously these wheels feel identical). Not much to note here.

The “cable” port (lightning, for charge + audio passthrough) is on the right. Let me emphasize this. THE RIGHT SIDE. The ENTIRE AUDIO INDUSTRY has standardized putting ports on the left for DECADES. I’m more pissed about this than the headphone jack removal tbh.

I only have two gripes with the build of the headphones themselves, both small. The cups touch when the headphones are sitting “flat”, which is scary metal-on-metal contact. No scratches thus far 🤞

The other issue is the button on the top right (the noise cancellation toggle). It’s placed exactly where I grab to adjust their position on my head, causing a lot of accidental presses. Easy enough to fix, but so far I’ve only ever triggered it accidentally.


Comfort

I did not expect something this heavy to feel so good on my head. These are very wearable. I tend to complain a lot about headphone wearability, so I’m surprised I had no issues wearing these for 4+ hours at a time.


The Case

This deserves its own section. I am truly floored at how awful this thing is.

First is how it looks. The thing is a crap magnet. It hasn’t left my desk, and it’s still managed to pick up a ton of cat hair, fingerprints, and various gunk. Even when clean, it looks awful.

Using the case is, at best, unpleasant and inconvenient. At worst, it’s truly aggravating. I’ve yet to find a motion that elegantly removes the Max from the case without both cups violently slamming into each other. The top magnet flap thing is hilariously flimsy.

The charge port cut-out is so bad it’s memeworthy. The alignment shifts based on band size. I honestly don’t know how this passed Apple QA.

Seriously, this case is a joke, and Apple should apologize and ship all early adopters a better case. If these headphones were good, I would struggle to notice because of how much this awful case colored my impressions. Do better, Apple.


Sound

Okay, time for the important part.

Due to…surprising changes in sound characteristics when switching between modes (noise cancellation on/off, transparency mode, and wired vs wireless), I chose to review the sound exclusively in the “default” mode (noise cancelling on, wireless). As this is their intended use case, I think it is a fair representation of what they have to offer. If you intend to regularly use transparency mode or wired connectivity, I, uh…wish you luck with that.

All testing was done using Tidal, master quality where available. I used a lot of different headphones as “reference”, but relied most on my AirPods Pro, modded HD800s, Ether C, and SHP9500.



Bass

Clear, surprisingly deep, and quick, but bloated in the sub range. Apple pulled a lot of sub bass out of these. It’s impressive in some songs, unintentionally overpowering in others.

The flaws in the bass aren’t as simple as the Sony XM1000’s EQ nightmare. There’s something deeper going on here, pulling really low bass into an audible range it shouldn’t be part of. I noted In Degrees by Foals and Good News by Mac Miller as examples of songs where the sub takes up way more space than it should.

I don’t want to be all negative. The sub bass presence is fantastic on a lot of stuff, such as bass-y electronic music (Feelin by DJ Rashad stood out to me.)

My frustration with the bass is in how much it hurts the versatility of the headphones. It can come out of seemingly nowhere and take up way too much “space”. Thankfully, as we will get into next, it doesn’t seem to interfere with vocals or treble much.


Vocals

Vocals are, imo, the shining point of these headphones. I struggled to find tracks that drowned out the vocals at all. Regardless of “how much sound” was going on, these always managed to maintain a clear, centered vocal image.

The only catch is a quirk in the treble, which I will get into next.


Treble

Treble detail is impressive, but it comes at the cost of sharpness. There’s a wide range in the treble that’s way, way too sharp for a pleasant experience. They remind me of my HD800s before I modded them, but even sharper at times.

There are a number of tracks that hit this sharp range. Corn Maze by Aesop Rock & Tobacco was genuinely unpleasant to listen to for this reason. Peach, Plum, Pear by Joanna Newsom sounds great until the “chorus” at 1:38, where a bit of static on the recording hits like a dagger.

I want to emphasize that this is a thing you’re gonna have to deal with. There’s no genre or style of music that avoids this sharp range entirely. Thankfully, of all the flaws in the headphones, this feels the most like a “bug”, and I think it can be addressed with software.

Imaging and Sound Stage

The imaging of these things is so, so, so strange. On tracks meant to be wide, it’s shockingly immersive. I did not try any of the “spatial audio” stuff, but I’m sure this works great for that.

That said, these are closed back headphones. Narrow ones at that. There’s only so much that can be done to make a “wide” sound, and these pull out all the tricks. Hard stereo cut. Boosted sub bass. Some weird dynamic range and stereo “compression” that’s hard to put into words.

These achieve a wide space, and they don’t lose vocals in the process, but man, they do some weird shit to achieve it.

The quirks this causes are painful on some tracks, such as Love Letters by Metronomy. The intro is a chill piano + horns ballad, which transitions into a louder, more dynamic, wider “full band” around a minute in. No headphone in my collection handles this transition worse. The effect of the band coming in is lost entirely, the “space” feels no more full. This isn’t because the band doesn’t sound overwhelmingly large - I promise it still does - more that the quieter intro was WAY TOO DAMN WIDE.

Sorry, I’m a little salty these headphones made even Metronomy sound bad.

Final thoughts

There are good things here. There are good sounds here. But there are too many overwhelming flaws for a headphone in this price range.

I’m of the belief that something at this price point has to justify the cost with versatility or unique ability, and the AirPods Max offer neither. There are songs with impressive range, quality bass, and well-tuned vocals. There are just as many songs that have their “space” destroyed, that live in the sharp treble range, or that get blown out by the over-present sub bass.

Unless you exclusively listen to EDM with trimmed up highs, these are not versatile enough to be your primary headphone. At $550, the lack of flexibility is insulting, especially when compared to a headphone as forgiving as the AirPods Pro.

The only “party trick” these have that my other headphones don’t is a bit more width (for a closed back) and a lot more sub bass. Although cool at times, neither of these qualities comes close to justifying the cost, much less a spot in my collection.

So…yeah. Don’t bother with the AirPods Max. Next revision has a lot of potential.


Alternatives

If you want a good portable Bluetooth headphone, the AirPods Pro are still the best on the market.

If you really want wireless over-ears, grab some refurbished Sony XM1000’s for under $200. These have qualities I vastly prefer over the Sonys, but not for a $350 price increase.

If you just want some good over-ear headphones for your desk and don’t mind a cable, the Philips SHP9500 is still my favorite headphone. Surprisingly neutral, incredibly comfy, an easy recommendation.

Hello World

This is long, long overdue.

I’m Theo. I am nerdy about a lot of stuff. I want to start writing more about the things I’m nerdy about. I made this blog to host the things I write.

This blog is also built on things I’m nerdy about. My personal site was previously a trim Gatsby app, shipping ~200kb down the pipe. I’m way too proud that this site, blog and all, is under 100kb.

tl;dr on the tech - Next.js rewrite deployed on Netlify

It’s nothing too fancy. Still pumped about the Lighthouse score tho

I’ll likely write more on the tech itself in the future - but for now, know I’m loving this stack and highly recommend it.

Next post will be the AirPods Max review. Follow me on Twitter to see when that’s out!

Thanks for stopping by 🙂