LMArena is seeking a Senior Product Security Engineer to lead the strategy, design, and hands-on implementation of systems that protect the integrity of our platform and the trust of our community. You’ll work across product, infrastructure, and data pipelines to proactively identify risks, embed security into core features, and build scalable defenses against abuse, fraud, and adversarial behavior.
This is a builder role: you’ll not only set technical direction but also write the code, build the tools, and design the frameworks that make security and trust part of our product’s DNA. Your work will directly influence how the world’s top AI labs, developers, and millions of users experience LMArena, ensuring we remain resilient against real-world attacks and evolving threats.
Own the product security vision for LMArena, ensuring security and trust are core to every stage of our product lifecycle.
Design and implement platform-wide security features, including Sybil resistance, bot detection, reputation systems, and anti-abuse primitives.
Lead threat modeling and security architecture reviews for new and existing product features.
Collaborate with infrastructure and product engineering to design secure APIs, data flows, and identity systems that scale.
Improve developer velocity by creating secure-by-default frameworks and tooling for internal teams.
Partner with incident response to quickly assess, contain, and remediate security events, and lead deep postmortems to improve defenses.
Stay ahead of the curve by monitoring emerging attack techniques and applying cutting-edge security research to our platform.
Mentor engineers across the company on secure coding practices, architecture trade-offs, and operational security.
Created by researchers from UC Berkeley’s SkyLab, LMArena is an open platform where anyone can access, explore, and interact with the world’s leading AI models. By comparing them side-by-side and casting votes, the community shapes a public leaderboard, making AI progress more transparent and grounded in real-world usage.
Trusted by Google, OpenAI, Meta, xAI, and more, LMArena is rapidly becoming essential infrastructure for human-centered AI evaluation at scale. With over one million monthly users and growing developer adoption, your work will influence the next generation of safe, aligned AI systems.
8+ years of experience in software or security engineering, with staff-level scope securing large-scale, user-facing platforms.
Proven track record designing and implementing systems to detect, mitigate, and prevent adversarial behavior (bots, Sybil attacks, automated abuse).
Strong experience with threat modeling, secure architecture design, and risk assessment.
Hands-on experience building security features into production systems at scale (millions of DAU / billions of requests).
Proficiency in backend development (Node.js, TypeScript, Python, or Go) and willingness to work across the stack when needed.
Strong knowledge of distributed systems security, identity, and authentication mechanisms.
Excellent communication skills, with the ability to build alignment across engineering, product, and leadership teams.
Experience in adversarial ML, trust & safety systems, or securing voting/reputation platforms.
Familiarity with advanced detection methods, user fingerprinting, or behavioral analytics.
Contributions to open-source security tools or research.
Background in securing real-time, interactive platforms at scale.
Experience building real-time detection systems and post-processing pipelines to identify attacks on large-scale systems.