At DevRev, we’re building the future of work with Computer – your AI teammate.

  • Computer is not just another tool. It’s built on the belief that the future of work should be about genuine human connection and collaboration – not piling on more apps. Computer is the best kind of teammate: it amplifies your strengths, takes repetition and frustration out of your day, and gives you more time and energy to do your best work.
  • How?

Extensions for your teams and customers

Computer doesn’t make you choose between new software and old. Its AI-native platform lets you extend existing tools with sophisticated apps and agents, so your teams – and your customers – can take action seamlessly. These agents work alongside you: updating workflows, coordinating across teams, and syncing back to your systems.

  • This isn’t just software. Computer brings people back together, breaking down silos and ushering in the future of teamwork, through human-AI collaboration. Stop managing software. Stop wasting time. Start solving bigger problems, building better products, and making your customers happier.
  • We call this Team Intelligence. It’s why DevRev exists.
  • Trusted by global companies across multiple industries, DevRev is backed by Khosla Ventures and Mayfield, with $150M+ raised. We are 650+ people across eight global offices.

What You’ll Do

  • Design and execute comprehensive performance test strategies — Load, Stress, Soak, Spike, Scalability, and Endurance testing.
  • Develop and maintain performance scripts using tools such as JMeter, Gatling, LoadRunner, Locust, or k6.
  • Simulate realistic user traffic and workload models for distributed systems.
  • Perform root cause analysis across application, API, database, and infrastructure layers.
  • Define and maintain performance baselines, SLAs, and SLOs.
  • Integrate performance tests into CI/CD pipelines for continuous validation.

  • Build AI-driven performance analysis frameworks using pattern recognition and anomaly detection.
  • Develop custom test agents/orchestrators using MCPs to simulate large-scale, multi-node workloads.
  • Implement self-healing test systems that adapt dynamically to environment changes.
  • Use ML models to predict performance degradation and proactively optimize systems.
  • Automate root cause detection with AI-assisted observability insights.

  • Use observability tools (Grafana, Prometheus, Datadog, New Relic, AppDynamics) to monitor and analyze performance metrics.
  • Create visual dashboards to communicate trends and optimization opportunities.
  • Collaborate with SRE and development teams for end-to-end performance tuning.

  • Partner with engineering, QA, and platform teams early in the SDLC to define performance goals.
  • Conduct post-release reviews and contribute to testing standards and best practices.
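
The scripting and traffic-simulation responsibilities above can be sketched as a minimal, self-contained load generator. This is an illustration only: `hit_endpoint` is a hypothetical stand-in for a real HTTP request (a production script would use k6, Locust, JMeter, or similar against an actual service), and the user counts and timings are invented.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def hit_endpoint() -> float:
    """Hypothetical stand-in for one HTTP request.

    A real load script would issue a network call here; we simulate a
    service time so the example is self-contained.
    """
    service_time = random.uniform(0.001, 0.005)
    time.sleep(service_time)
    return service_time


def virtual_user(requests_per_user: int) -> list[float]:
    """One simulated user issuing a fixed number of sequential requests."""
    return [hit_endpoint() for _ in range(requests_per_user)]


def run_load_test(users: int = 20, requests_per_user: int = 10) -> dict:
    """Run `users` concurrent virtual users and aggregate latency stats."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = pool.map(virtual_user, [requests_per_user] * users)
        latencies = [lat for per_user in results for lat in per_user]
    elapsed = time.perf_counter() - start
    return {
        "requests": len(latencies),
        "throughput_rps": len(latencies) / elapsed,
        "mean_latency_s": statistics.mean(latencies),
    }


if __name__ == "__main__":
    print(run_load_test())
```

Each virtual user maps to one worker thread here; real tools replace threads with event loops or distributed worker nodes to reach higher concurrency.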

What You’ll Bring

  • 5+ years of experience in performance testing for large-scale distributed systems.
  • Strong programming skills in Python, Go, or JavaScript/TypeScript.
  • Hands-on experience with modern testing tools (JMeter, Gatling, Locust, k6, LoadRunner).
  • Expertise in performance metrics — latency, throughput, error rate, concurrency, resource utilization.
  • Experience integrating with CI/CD (Jenkins, GitHub Actions, Azure DevOps).
  • Deep understanding of microservices, containers (Docker, Kubernetes), and distributed architectures.
  • Skilled in analyzing logs, metrics, and traces for performance bottlenecks.
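
As a small illustration of the metrics listed above, raw latency samples can be reduced to the usual p50/p95/p99 and error-rate summary with the standard library alone; the function name and sample data are invented for the example.

```python
import statistics


def summarize_latencies(samples_ms: list[float], errors: int = 0) -> dict:
    """Summarize raw latency samples into common reporting metrics.

    Returns p50/p95/p99 latency plus error rate over total requests.
    """
    # statistics.quantiles with n=100 yields the 1st..99th percentile
    # cut points, so index 49 is p50, index 94 is p95, index 98 is p99.
    percentiles = statistics.quantiles(samples_ms, n=100)
    total = len(samples_ms) + errors
    return {
        "p50_ms": percentiles[49],
        "p95_ms": percentiles[94],
        "p99_ms": percentiles[98],
        "error_rate": errors / total,
    }


if __name__ == "__main__":
    # Synthetic samples (1..100 ms), strictly for illustration.
    samples = [float(i) for i in range(1, 101)]
    print(summarize_latencies(samples, errors=5))
```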

Bonus Points For

  • Experience with AI-assisted testing, anomaly detection, or AIOps.
  • Familiarity with chaos engineering tools (Gremlin, LitmusChaos).
  • Exposure to AWS, Azure, or GCP environments.
  • Database performance tuning and caching strategies.
  • Contributions to open-source testing or AI performance frameworks.
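
On the caching-strategies point, the simplest in-process strategy is memoization; this sketch uses Python's built-in `functools.lru_cache`, with `expensive_lookup` as a hypothetical stand-in for a slow query.

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def expensive_lookup(key: str) -> str:
    """Hypothetical expensive query (e.g. a slow database read).

    lru_cache memoizes the result per key, so repeated lookups for a
    hot key never hit the backing store again until eviction.
    """
    # In a real system this would be the costly call being cached.
    return key.upper()


if __name__ == "__main__":
    expensive_lookup("user:42")
    expensive_lookup("user:42")  # second call is served from the cache
    # cache_info() reports hits/misses, useful when tuning maxsize.
    print(expensive_lookup.cache_info())
```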

Culture

  • The foundation of DevRev is its culture: our commitment to those who are hungry, humble, honest, and who act with heart. Our vision is to help build the earth’s most customer-centric companies. Our mission is to leverage design, data engineering, and machine intelligence to empower engineers to embrace their customers.
  • That is DevRev!

Job Summary

Company: DevRev
Location: Cebu, Philippines
Type: Full-Time
Level: Mid-level
Domain: AI / Data Science