
R&D Engineering, Principal Engineer

Bengaluru, Karnataka, India
Engineering
Employee
Apply

Overview

Synopsys software engineers are key enablers in the world of Electronic Design Automation (EDA), developing and maintaining software used in chip design, verification, and manufacturing. They work on assignments like designing, developing, and troubleshooting software, leveraging state-of-the-art technologies like AI/ML, GenAI, and Cloud. Their critical contributions enable EDA designers worldwide to extend the frontiers of semiconductors and chip development.

Job Description

Date Posted: 05/04/2026
Category: Engineering
Hire Type: Employee
Job ID: 17222
Remote Eligible: No

We Are

Synopsys is the leader in engineering solutions from silicon to systems, enabling customers to rapidly innovate AI-powered products. We deliver industry-leading silicon design, IP, simulation and analysis solutions, and design services. We partner closely with our customers across a wide range of industries to maximize their R&D capability and productivity, powering innovation today that ignites the ingenuity of tomorrow.


You Are

You have spent years at the intersection of AI systems engineering and infrastructure, where “good enough” is never good enough because your tools sit between prototype and product. You have an eye for where agent behavior can drift and a knack for building the kind of test harnesses that catch it before it lands in a release. You love designing frameworks that let developers move faster without breaking things—your idea of success is when your reference dataset and automated checks become a non-negotiable part of the team’s workflow. You look at a new calibration phase and immediately see how to scale your evaluation logic, not just patch it. You stay current on how the best labs are evaluating LLMs, agents, and non-deterministic systems, and you want to bring those patterns into a real product pipeline. You enjoy making yourself obsolete by building systems that work reliably on autopilot, but you’re the first to spot when a test is missing or a metric isn’t telling the whole story. You think less about “shipping code” and more about “shipping confidence.”


What You'll Be Doing

  • Designing and developing a robust Python-based evaluation infrastructure for agentic calibration R&D workflows, used by all calibration developers before code merges
  • Owning and curating the golden reference dataset that defines correct agentic calibration behavior across evolving codebases and model versions
  • Engineering automated regression, benchmark, and quality gate systems that integrate directly into the CI/CD pipeline (GitLab or Jenkins)
  • Implementing state-of-the-art agent evaluation methods, including execution tracing, LLM-as-judge, and automated behavioral drift detection
  • Ensuring every change to agent code, prompts, or models triggers a full evaluation sweep—no manual steps, no skipped checks
  • Validating that agent failures are caught early and fail in controlled, predictable ways
  • Making the evaluation framework extensible so it adapts across calibration phases (mobility, junction, full flow, split calibration) without re-architecture


The Impact You Will Have

  • Every calibration R&D developer will have a reliable, automated path to validate their work before it ever reaches production
  • Behavioral drift, regressions, and silent failures are surfaced well before release, preventing issues from leaking downstream
  • Agent quality and readiness become trackable with real metrics, supporting confident, data-driven go/no-go decisions
  • The evaluation pipeline scales easily as calibration workflows evolve, supporting rapid R&D without sacrificing rigor
  • You will directly enable faster, safer productization of agentic calibration features, raising the bar for reliability
  • Your frameworks will become the backbone of agentic calibration QA—if it doesn’t pass your checks, it doesn’t ship
  • You’ll help foster a culture where quality gates are seen as enablers, not blockers, making every engineer’s work stronger


What You'll Need

  • 10+ years of relevant experience
  • Deep proficiency in Python, with hands-on experience building production-grade evaluation or test automation infrastructure
  • Prior experience designing benchmarking, regression, or quality gate systems for non-deterministic AI/ML workflows
  • Proven track record integrating automated test suites and dashboards into CI/CD pipelines (GitLab, Jenkins, or similar)
  • Familiarity with modern agent evaluation approaches: execution tracing, LLM-as-judge, golden dataset management, and behavioral drift detection
  • Comfort working in R&D environments where requirements shift and infrastructure must scale alongside evolving workflows
  • Experience with data management and dataset curation for machine learning or agent evaluation is a strong plus
  • Exposure to calibration, agentic workflows, or semiconductor EDA environments is valued but not required


Who You Are

  • You see evaluation frameworks as products, not afterthoughts—what you build is used and trusted by everyone, every day
  • You can explain to a developer, “Here’s why your agent failed this check,” and suggest how to fix it, without jargon or blame
  • You spot weak spots in a test plan and aren’t shy about pushing for stronger coverage or better metrics
  • You thrive on making complex validation logic both robust and invisible—developers just know that if it passes, it’s solid
  • You keep up with the latest in AI system evaluation, and you’re eager to bring those ideas into real-world workflows
  • You never treat “it worked last time” as a guarantee—it’s always about repeatability, reliability, and proof


The Team You'll Be Part Of

Your recruiter will share more about the team structure and mission during the interview process.


Rewards and Benefits

We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.

At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.


Benefits

At Synopsys, innovation is driven by our incredible team around the world. We feel honored to work alongside such talented and passionate individuals who choose to make a difference here every day. We're proud to provide the comprehensive benefits and rewards that our team truly deserves.


Hiring Journey at Synopsys

Apply

As an applicant, your resume, skills, and experience are reviewed for consideration.

Phone Screen

Once your resume has been selected, a recruiter and/or hiring manager will reach out to learn more about you and share more about the role.

Interview

You will be invited to meet with the hiring team to assess your qualifications for the role. Our interviews are held either in person or via Zoom.

Offer

Congratulations! When you have been selected for the role, your recruiter will reach out to make you a verbal offer (a written offer will follow your conversation), and we hope you accept!

Onboarding

There will be some steps you need to take before you start to ensure a smooth first day, including new hire documentation.

Welcome!

Once you’ve joined, your manager, team, and a peer buddy will help you get acclimated. Over the next few weeks, you’ll be invited to join activities and training to help you ramp up for a successful future at Synopsys!
