AI Infrastructure & Reliability Engineer

Mosaic.tech

Software Engineering, Other Engineering, Data Science

Israel

Posted on Apr 24, 2026
About Us
HiBob helps modern, mid-size businesses transform the way they manage people, giving HR and managers all they need to connect, engage, develop, and retain top talent. Since 2015, we’ve achieved consecutive triple-digit year-over-year growth, all backed by our amazing team of Bobbers from across the globe, making us the HRIS of choice for over 5,500 midsize and multinational companies and over 1 million users.
Our HR platform is intuitive, data-driven, and built for the way people work today: globally, remotely, and collaboratively.
What this role is really about

You’ll join a 3-person platform team within our Business Technology group, owning the internal infrastructure that our AI platform and its users depend on. This isn’t a product engineering role, and it isn’t ticket work or babysitting pipelines someone else built. You’re building and operating the internal foundation that the company runs on.

The work covers the full stack of platform engineering: core cloud infrastructure (AWS, Kubernetes, IaC), CI/CD pipelines, AI-driven infrastructure components, and the SRE and observability practice that keeps it all honest - metrics, alerting, incident response, and reliability standards. As our AI capabilities grow, so does the complexity underneath them, and staying ahead of that is central to the role. If you treat infrastructure as a product - reusable, automated, observable, and built to last - this is your kind of role.
Requirements
  • 2-4 years of hands-on DevOps, SRE, or infrastructure engineering experience in production SaaS environments.
  • Strong AWS experience: multi-account architecture, cross-account IAM, serverless and event-driven services (Lambda, SQS, SNS, EventBridge), and EKS cluster management.
  • Proven Kubernetes experience in production, including cross-account migrations and stateful workload management.
  • Proficiency with Terraform - repository structure design, module architecture, and CI/CD pipeline implementation.
  • Hands-on experience building and maintaining GitHub Actions pipelines for end-to-end CI/CD workflows.
  • Working Python proficiency for scripting, internal tooling, and workflow automation.
  • Practical experience implementing observability stacks from scratch: metrics, logging, distributed tracing, and alerting.
  • Experience owning reliability practices: SLOs, incident response, and postmortem culture.
Nice to have
  • Hands-on experience operating LLM APIs in production: rate-limit and quota management, cost attribution per team/model, latency monitoring, and resilience patterns (retries, fallbacks, circuit breakers).
  • FinOps experience across cloud, AI, and observability spend.
  • Experience introducing self-healing or auto-remediation patterns in production.
What you'll be doing
  • DevOps & AI-Driven Infrastructure - own CI/CD, deployment processes, and release reliability. Build and operate cloud infrastructure that is automated, intelligent, and continuously self-improving - not just managed.
    • Design and build our Terraform repository and IaC pipeline from scratch - AI-assisted generation, drift detection, and policy enforcement built in.
    • Build AI-driven GitHub Actions pipelines - automated code review, risk assessment, and intelligent deployment decisions.
    • Manage Kubernetes workloads across AWS accounts - zero downtime, fully automated, nothing left behind.
    • Embed AI into the operational layer - proactive drift detection, automated remediation, and intelligent scaling toward a self-healing runtime.
  • Reliability & SRE - improve uptime, resilience, and incident response.
    • Define and enforce SLOs/SLIs, error budgets, and on-call practices.
    • Lead incident response, postmortems, and systemic reliability improvements.
    • Own AI-specific reliability: model latency SLOs, token quota monitoring, rate limit handling, fallback and retry strategies, and cost-per-request alerting.
  • Observability & Telemetry - increase visibility, reduce noise, improve troubleshooting.
    • Establish and continuously evolve the observability stack: metrics, logs, distributed tracing, and alerting tuned for both application and AI workloads.
  • AI / LLM Operations - bring AI systems to production and operate them at scale, with a focus on reliability, performance, and trust.
    • Own the AI infrastructure layer: rate limits, quota management, latency SLOs, and fallback strategies (retries, circuit breakers).
    • Operate LLM APIs in production with resilience and cost attribution per team/model.
  • FinOps & Cost Optimization - optimize AI, infra, and logging costs at scale.
    • Build cost visibility and guardrails across AWS, LLM usage, and observability pipelines.
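The cost-attribution and guardrail work in the list above can be sketched as follows. This is a hypothetical illustration: the price table, team names, and budget figures are invented for the example, not drawn from the posting or any provider's real pricing.

```python
from collections import defaultdict

# Hypothetical sketch: per-team/per-model LLM cost attribution with a
# simple budget guardrail. Prices and records below are illustrative.

PRICE_PER_1K_TOKENS = {"large-model": 0.03, "small-model": 0.002}  # assumed prices

def attribute_costs(usage_records):
    """usage_records: iterable of (team, model, tokens) tuples.

    Returns spend keyed by (team, model), which supports both per-team
    budgets and per-model cost breakdowns.
    """
    costs = defaultdict(float)
    for team, model, tokens in usage_records:
        costs[(team, model)] += tokens / 1000 * PRICE_PER_1K_TOKENS[model]
    return dict(costs)

def over_budget(costs, budgets):
    """Return teams whose total spend exceeds their budget."""
    spend = defaultdict(float)
    for (team, _model), cost in costs.items():
        spend[team] += cost
    return {team: s for team, s in spend.items()
            if s > budgets.get(team, float("inf"))}
```

In practice the same shape extends to AWS and observability spend by swapping the price table for billing-export data and tagging records at ingestion time.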