We're looking for a Senior Backend Engineer to grow our team and to keep a healthy balance of senior and junior engineers, where there's room for everyone to improve their core skills while also getting better at mentoring and leadership. The Senior Backend Engineer role is part of our backend team, which is responsible for building, running, and maintaining our services: Fit Predictor, Style Finder, and Outfit Maker.


What you'll be working on

  • Build and run backend services written in Java that are part of a simple, straightforward architecture
  • Participate in architecture discussions and contribute to technical and architectural decision-making
  • Collaborate on and implement algorithms designed and documented by our data scientists
  • Improve your own and your fellow developers' skills through design kick-offs, pairing sessions, and code reviews


Who you'll be working with

  • A small, smart, and highly capable team that builds and runs their services themselves
  • A world-class QA team that understands our system better than anyone else
  • A product team that makes decisions based on usability tests and usage data
  • An R&D team that comes up with novel ideas they hope we can implement

The majority of our team is located in Budapest, Hungary, but you'll be able to work remotely anywhere in the EU. #LI-remote


As a senior member of a small team, you will provide guidance and mentorship to junior engineers and serve as a role model for best practices. You should also be able to work independently and take ownership of your projects, seeing them through from start to finish. You should already be experienced in some of the areas below so you can be productive from day one; everything else is fair game, and you'll have the opportunity to learn on the job.



What we're looking for

  • At least 5 years' experience in backend development roles
  • Hands-on experience building scalable and highly available web applications using Java
  • Hands-on experience with RDBMSs, including data modeling and schema design
  • A strong sense of clean code and good coding practices
  • A strong testing culture

Bonus points

  • You have used cloud services (AWS, GCP, or Azure)
  • You have used messaging systems to communicate between services (Kafka, RabbitMQ, JMS, etc.)
  • You have experience with Python
  • You have experience with distributed computing (Hadoop, Spark)
  • You have mentored colleagues and helped them improve their skills
  • You are active on GitHub and have contributed to open-source projects


If the role sounds interesting, apply now and get to know us during the interviews. You can read more about our hiring process on Glassdoor.

Tech Stack

At Secret Sauce, we use the technologies and tools that we believe are right for the job at the time. We're not afraid to replace a technology or rewrite a service when growing experience and a better understanding of the domain show us that we made the wrong choice. We embrace change and work in a fast-paced environment, which means the stack we work with is always the one we currently believe is best. That makes us quite happy.

Our backend system consists of independent services, built in Java and Ruby, that communicate asynchronously through Kafka. We use Avro and a Schema Registry to enforce these interfaces. All our services are packaged with Docker and deployed to our infrastructure in AWS using Kubernetes. Our infrastructure is immutable: we build AMIs with Packer and roll them out with Terraform. We don't have a "DevOps" or Ops team; we consider running services in a cloud environment part of the software engineering role.
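To give a flavor of what these enforced interfaces look like, here is a minimal sketch of an Avro schema for a hypothetical event; the record name, namespace, and fields are invented for illustration and are not taken from our actual registry:

```json
{
  "type": "record",
  "name": "SizeRecommended",
  "namespace": "com.secretsauce.events",
  "doc": "Hypothetical example event: a size recommendation shown to a shopper",
  "fields": [
    {"name": "userId", "type": "string"},
    {"name": "productId", "type": "string"},
    {"name": "recommendedSize", "type": "string"},
    {"name": "occurredAt",
     "type": {"type": "long", "logicalType": "timestamp-millis"}}
  ]
}
```

Registering schemas like this in the Schema Registry is what lets producers and consumers evolve independently while the registry checks that each change stays compatible.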

The services we provide to our retail partners are integrated into their existing websites; we provide a single JavaScript library that they can use to unlock all of our products. Analytics, A/B testing, error reporting, and real-user monitoring are built in and available to Fit Predictor, Style Finder, and our future services. The services themselves are built using ES6, React/Flux, and modern JavaScript tooling.

Our data team loves Spark and uses it to process large datasets that we receive from our partners and that we produce ourselves. We don't run a persistent cluster; we process and move data between different data stores: S3, Kafka, PostgreSQL, and Snowflake are all part of the equation and are used where they make the most sense. We rely on Databricks to manage our Spark clusters and use Apache Airflow to orchestrate tasks and to monitor, schedule, and retry jobs.

We started out as a small development team using Ruby and Rails. We ended up with our current architecture and tech stack not because we use technology for technology's sake, but because we believe they are the right choice with the right trade-offs for our expertise, needs, and size.