Applied AI Engineer

San Francisco

Wordware

A collaborative prompt engineering IDE


⚠️ Please read first
  • This is a full-time, in-person role based in San Francisco (Presidio) - we work from the office 5 days a week.

  • You must be based in the Bay Area or willing to relocate before starting.

  • We require US work authorization but are open to O-1 visa sponsorship for truly exceptional candidates.

About Wordware

Wordware is an IDE for building AI agents using natural language.

It looks and feels like Notion, but lets you design, test, and deploy AI systems in real time - without writing code.

Our mission is to bring structure and joy to human–AI collaboration.

We’re building a generational company that empowers the next billion knowledge workers to create with AI - not by writing code, but by expressing intent.

We’re backed by Spark Capital, Felicis, and Y Combinator ($30M seed round - the largest in YC history).

We work hard, move fast, and don’t take ourselves too seriously. It’s intense, but it’s also fun - at Wordware, you’ll do the best work of your life alongside people you genuinely like.

What You’ll Do

As an Applied AI Engineer, you’ll be responsible for building, refining, and scaling the agent systems inside Wordware — from architecture to evals to deployment.

This isn’t a research role. We care about what works in production: fast response times, predictable behavior, traceability, and uptime.

You’ll work across the stack — with infra, frontend, and product — to make sure the agents users build inside Wordware are robust, useful, and usable.

A few examples of what you might work on (rough code sketches of each follow this list):

  • Implement multi-step, tool-using agents that hit real APIs and handle retries, auth, timeouts, and edge cases.

  • Build RAG pipelines that support grounded answers from structured and unstructured sources.

  • Design agent memory systems that persist relevant state across runs — e.g. scratchpads, summary buffers, embedding stores.

  • Add determinism + replay to agents so users can trace and debug behaviors step by step.

  • Own and evolve our eval framework — both automated checks and human-in-the-loop scoring.

  • ${your ideas}.
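
To give a flavor of the first item, here's a minimal sketch of a tool-calling loop with retries and timeouts. `call_llm`, `fetch_weather`, and the endpoint are illustrative placeholders, not Wordware's actual APIs:

```python
# A sketch of a multi-step, tool-calling loop with retries, timeouts, and a step cap.
import time
import requests

MAX_RETRIES = 3
TIMEOUT_S = 10

def fetch_weather(city: str) -> dict:
    """Example tool: calls a real HTTP API with a timeout and exponential backoff."""
    for attempt in range(MAX_RETRIES):
        try:
            resp = requests.get(
                "https://api.example.com/weather",  # placeholder endpoint
                params={"city": city},
                timeout=TIMEOUT_S,
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == MAX_RETRIES - 1:
                raise
            time.sleep(2 ** attempt)  # back off before retrying

TOOLS = {"fetch_weather": fetch_weather}

def run_agent(task: str, call_llm) -> str:
    """Loop until the model returns an answer or the step cap is hit."""
    history = [{"role": "user", "content": task}]
    for _ in range(8):  # hard cap to avoid runaway loops
        step = call_llm(history)  # expected: {"tool": ..., "args": {...}} or {"answer": ...}
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"role": "tool", "content": str(result)})
    return "Stopped: step limit reached."
```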
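
For the RAG item, a bare-bones pipeline might look like the sketch below: chunk, embed, retrieve top-k by cosine similarity, and ground the prompt. `embed` and `call_llm` stand in for whatever model provider you'd use, and a vector DB would replace the in-memory index in production.

```python
# A sketch of a grounded-answer RAG pipeline with an in-memory index.
from math import sqrt

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Fixed-size character chunks with overlap; real pipelines often chunk by structure."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)) + 1e-9)

def build_index(docs: list[str], embed) -> list[tuple[list[float], str]]:
    """Embed every chunk of every document into a flat list of (vector, text) pairs."""
    return [(embed(c), c) for d in docs for c in chunk(d)]

def answer(question: str, index, embed, call_llm) -> str:
    """Retrieve the top chunks for the question and ask the model to answer from them only."""
    q_vec = embed(question)
    top = sorted(index, key=lambda item: cosine(q_vec, item[0]), reverse=True)[:4]
    context = "\n\n".join(c for _, c in top)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```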
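
For agent memory, here's a rough sketch of state that persists across runs: a scratchpad plus a rolling summary buffer stored as JSON. The field names and the `summarize` callable are assumptions for illustration.

```python
# A sketch of persistent agent memory: scratchpad notes compressed into a summary.
import json
import os

class Memory:
    def __init__(self, path: str):
        self.path = path
        self.state = {"scratchpad": [], "summary": ""}
        if os.path.exists(path):
            with open(path) as f:
                self.state = json.load(f)

    def note(self, text: str, summarize) -> None:
        """Append to the scratchpad; compress into the summary when it gets long."""
        self.state["scratchpad"].append(text)
        if len(self.state["scratchpad"]) > 20:
            self.state["summary"] = summarize(self.state["summary"], self.state["scratchpad"])
            self.state["scratchpad"] = []

    def save(self) -> None:
        """Persist the current state so the next run picks up where this one left off."""
        with open(self.path, "w") as f:
            json.dump(self.state, f)
```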
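
For determinism and replay, the core idea is to record every step of a run so it can be stepped through later. The JSONL layout and field names below are illustrative, not our actual trace format.

```python
# A sketch of run tracing for step-by-step replay and debugging.
import json

def record_step(trace_path: str, step: dict) -> None:
    """Append one step (inputs, output, timing) to the run's JSONL trace."""
    with open(trace_path, "a") as f:
        f.write(json.dumps(step) + "\n")

def load_trace(trace_path: str) -> list[dict]:
    """Read the recorded steps back in order."""
    with open(trace_path) as f:
        return [json.loads(line) for line in f]

def replay(trace_path: str) -> None:
    """Step through a recorded run deterministically, printing each step."""
    for i, step in enumerate(load_trace(trace_path)):
        print(f"step {i}: {step['kind']} -> {step['output']!r}")
```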
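
And for evals, the automated half can start as simply as running the agent over a fixture set and computing a pass rate, with human-in-the-loop scoring layered on top. The case format here is an assumption.

```python
# A sketch of an automated eval: run the agent over fixtures and score outputs.
def run_evals(agent, cases: list[dict]) -> float:
    """Each case: {'input': str, 'expect': substring}. Returns the pass rate."""
    passed = sum(
        1 for case in cases
        if case["expect"].lower() in agent(case["input"]).lower()
    )
    return passed / len(cases)
```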

Who You Are

Minimum

  • 3+ years of engineering experience, including time shipping production software.

  • You've built and deployed agent-like systems — multi-step LLM pipelines, tool-using bots, scripted assistants, or similar.

  • Hands-on experience with:

    • RAG pipelines (e.g. embeddings, vector DBs, chunking strategies)

    • Agent memory systems (e.g. scratchpads, history compression, summarization)

    • Tool use and orchestration (e.g. calling real APIs, using plugins, auth flows)

    • Evaluation — success metrics, regression testing, and improving agent behavior over time

  • You write production-grade code and can work across systems without needing a spec.

  • You thrive in fast-paced, product-first environments where the goal is shipping.

Bonus (not required)

  • Experience with frameworks like LangChain, CrewAI, or DSPy — or strong opinions about why you don’t use them.

  • Shipped agents that are live in the wild — used by customers, not just internal demos.

  • Familiarity with LLM ops, tracing, observability, and failure handling.

  • You’ve been a founder or early engineer and care deeply about product quality.

The Process

We keep our process simple. Exceptional candidates go from first touch to offer within 2 weeks.

  1. Application

    Submit your resume and answer a few quick questions. If it looks like a fit, we’ll ask for a 1-minute Loom video: tell us who you are and why you’re excited about Wordware.

  2. 15-minute intro call

    Quick check to align on location, motivation, and logistics. If it’s a go, we move fast from here.

  3. 45-minute technical interview

    You’ll build a small full-stack app. We’re looking for fluency, speed, and product sense.

  4. System design interview

    A deep dive into how you think and architect systems. We’ll walk through a real Wordware problem together.

  5. Final conversation

    A quick vibe check - we'll answer your questions and scope out the work trial.

  6. Work trial

    Paid, in-person, and real — typically 3 days to 2 weeks. You’ll work on something meaningful with us.

