Changing the interview loop

AI-native assessments
for PostHog

We built a custom OpenRound for PostHog, showing what an AI-native interview would look like at scale.

Note: This is not an official PostHog assignment, just a demonstration we put together to show what an AI-native technical interview could look like for their engineering loop.

Custom assessment

Built for PostHog

An AI-native assessment built around the kind of engineering work that PostHog actually does, focused on the skills that predict day-one performance.

openround.ai · posthog assessment
README.md
Custom for PostHog · Forward-Deployed Engineer · Hard

Diagnose why a customer's funnel rate disagrees with PostHog's

## The problem

A customer's PostHog funnel reports ~38% conversion; their internal dashboard says ~62%. Diagnose the discrepancy, fix the pipeline, and ship a HogQL query whose number you can defend.
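Discrepancies like this often come down to counting different things: one side counts events, the other counts unique users. A minimal sketch (hypothetical data, not part of the assessment repo) of how the same event log can yield two defensible-looking numbers:

```python
# Hypothetical event log: (user_id, step). Note u2 fired "signup" twice.
events = [
    ("u1", "signup"), ("u1", "purchase"),
    ("u2", "signup"), ("u2", "signup"),
    ("u3", "signup"),
    ("u4", "signup"), ("u4", "purchase"),
]

# Event-based rate: purchase events / signup events.
# Duplicate signups inflate the denominator.
signup_events = sum(1 for _, step in events if step == "signup")
purchase_events = sum(1 for _, step in events if step == "purchase")
event_rate = purchase_events / signup_events  # 2 / 5 = 0.40

# User-based rate: unique users who purchased / unique users who signed up.
signed_up = {user for user, step in events if step == "signup"}
purchased = {user for user, step in events if step == "purchase"}
user_rate = len(purchased & signed_up) / len(signed_up)  # 2 / 4 = 0.50

print(f"event-based: {event_rate:.0%}, user-based: {user_rate:.0%}")
```

Both queries are "correct" for what they measure; defending a number means knowing which definition the customer's dashboard uses.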

Time: 60 min
Stack: Python
Format: Real codebase
AI access: Full
What you'll touch: python · posthog · hogql · analytics · data-quality · debugging

What is OpenRound

The interview, redesigned for AI-native engineers

One 60–90 minute task on a real codebase, with complete AI access. No LeetCode, no take-homes, no technical screens. See how a candidate ships with AI.

Real engineering problems

Every assessment is a real engineering task on a real repo, the kind of work the candidate would actually do in their first week on the job.

AI-native by default

Candidates use Claude Code, Cursor, or whatever they prefer, and we capture how they prompt, verify, debug, and ship along the way.

Better signal in less time

A single OpenRound replaces both a take-home and a technical screen, and gives you a calibrated rubric across foundations, agency, and taste.

Hiring process

Your process, with OpenRound in the loop

OpenRound slots into your existing loop and replaces the slowest, lowest-signal parts, which usually means the take-home and the first technical screen.

Public hiring data on PostHog is sparse and changes often, so the left column is our best guess at a typical loop rather than a verified process.

Typical loop

A standard PostHog interview

  1. Recruiter screen
    30 minute call covering background, motivation, and basic fit.
  2. Technical phone screen
    Live coding round, typically LeetCode-style in a shared editor.
  3. Second technical round
    Either another live coding round or a multi-hour take-home.
  4. On-site or virtual loop
    3 to 4 rounds covering system design, more coding, and behavioral interviews.
  5. Final and offer
    A closing conversation with the founder or hiring lead.

With OpenRound

A leaner, higher-signal loop

  1. Informal chat / Recruiter screen
    Same as before, a quick check on background and motivation.
  2. OpenRound assessment
    60 to 90 minutes on a real codebase with full AI access, which replaces both the take-home and the first technical screen.
  3. 1 hour onsite or remote code discussion
    A live walkthrough of the OpenRound submission with the hiring team, covering decisions, tradeoffs, and follow-up questions.
  4. Founder or final round
    A closing conversation with the founder or hiring lead, with a decision in days rather than weeks.

Other assessments

More from the OpenRound library

Real engineering problems we've built for other AI-native teams. We can adapt any of them for you, or use one as the starting point for something new tailored to your stack.

Forward-Deployed Engineer

Snorkel AI — Refactor a weak-label notebook into a clean LF → LabelModel → eval pipeline

An ML lead at a B2B SaaS support team needs their notebook-export weak-labeling script turned into a real pipeline they can run in CI, plug an LLM-LF into, and trust as label distributions evolve.

weak-supervision · labeling-functions · label-model
Forward-Deployed Engineer

LiveKit — Ship a voice agent that handles dental appointment reminders

Inherit a partially-built outbound appointment-reminder voice agent. Ship the pilot blockers from the customer's brief without breaking the existing conventions.

livekit · voice-agents · realtime
Forward-Deployed Engineer

10a Labs — Build a threat-evidence aggregator with mixed-signal fusion

Fuse mixed-signal intel (scraped marketplace data, OSINT, customer telemetry) into auditable per-entity case decisions for a frontier AI lab customer.

python · trust-and-safety · threat-intel
Forward-Deployed Engineer

Modal Labs — Add batch processing, retries, and a status endpoint to a Modal app

Extend a working-but-naive Modal app into a batch-capable, fault-isolated, observable pipeline without breaking the existing single-document contract.

modal · python · serverless
Forward-Deployed Engineer

Picogrid — Deduplicate sensor-fusion alerts across radar, EO/IR, and RF feeds

A perimeter security team is drowning in duplicate alerts from radar, EO/IR, and RF sensors. Build the fusion logic that turns three messy event streams into one row per real entity, with confidence.

defense · sensor-fusion · dedup
Forward-Deployed Engineer

Replit — Extend an agent harness to ship apps with budgets, validation, and snapshotting

Northwind ops keeps shipping broken Replit Agent apps. Take a thin planner-and-write prototype and turn it into a harness that actually runs, recovers, budgets, and records what it touched.

python · agents · llm
Browse all assessments

Playground for
AI-native engineers.

Run your first OpenRound and see the difference in signal.