Are you passionate about AI? 🤖
At Satori Analytics, we aim to change the world one algorithm at a time by bringing clarity to global brands through Data & AI. From cloud-based ecosystems for fintech to predictive models for airlines, our cutting-edge solutions cover the entire data lifecycle—from ingestion to AI applications.
As a fast-growing scale-up, our team of 100+ tech specialists—including Data Engineers, Data Scientists, and more—delivers innovative analytics solutions across industries like FMCG, retail, manufacturing, and FSI. Join us as we lead the data revolution in South-Eastern Europe and beyond!
What Your Day Might Look Like:
- Architect and build scalable, production-grade GenAI systems and services.
- Design and implement Retrieval-Augmented Generation (RAG) pipelines end-to-end.
- Integrate and orchestrate LLMs (OpenAI, Anthropic, Google, or open-source models) in real-world applications.
- Build internal abstractions, SDKs, and reusable components for GenAI capabilities.
- Implement evaluation pipelines, guardrails, and monitoring for hallucination detection, drift, and quality control.
- Optimize inference performance, cost, token usage, and response latency.
- Design safe, reliable multi-step agent workflows and tool integrations.
- Lead the transition from experimentation to hardened production systems.
- Mentor engineers and establish best practices for GenAI engineering.
- Line-manage 1-2 bright junior engineers.
Requirements
Your Superpowers 🚀:
- MSc in Computer Science, Engineering, or related STEM field (PhD is a plus).
- 4+ years of professional software engineering experience, with 1+ years building GenAI/LLM-powered systems in production.
- Strong backend engineering expertise: clean architecture, system design, SOLID principles, testing, and CI/CD.
- Deep experience building production-grade LLM applications (not just prototypes or notebooks).
- Expert-level Python skills, including async programming, typing, packaging, and performance optimization.
- Experience designing scalable APIs and AI microservices (FastAPI or similar frameworks).
- Strong understanding of LLM system patterns: RAG, tool calling, agents, prompt orchestration, memory, and evaluation pipelines.
- Experience managing latency, cost, reliability, and fallback strategies in LLM-powered systems.
- Hands-on experience with cloud platforms (AWS or Azure), Docker, CI/CD pipelines, and infrastructure-as-code.
- Experience implementing monitoring, logging, tracing, and evaluation frameworks for GenAI systems.
- Experience with vector databases (Azure AI Search, PostgreSQL pgvector, Pinecone, FAISS, Weaviate, Qdrant).
- Strong communication skills and ability to lead technical discussions and mentor engineers.
Bonus Points for:
- Deep experience with LLM orchestration frameworks (LangChain, LlamaIndex).
- Experience hosting or fine-tuning open-source LLMs.
- Experience with model evaluation frameworks (RAGAS, custom eval harnesses, A/B testing for prompts).
- Experience building AI platforms or internal LLM gateways.
- Familiarity with distributed systems, message queues, or event-driven architectures.
- Experience with security, governance, and responsible AI practices in production environments.
Benefits
Perks on Perks:
- Competitive salary and hybrid work model – come hang out in our Athens office, or work remotely from anywhere in the European Economic Area (EU, Switzerland, etc.) or the UK (up to 6 weeks per year).
- Training budget to level up your skills with the top tech partners in the market (Microsoft, AWS, Salesforce, Databricks, etc.) – whether it’s certifications or courses, we’ve got you covered.
- Private insurance, top-tier tech gear, and the chance to work with a stellar crew.
Ready to create some data magic with us? Hit that apply button and let’s get started.