Feel free to reach out!
<aside> 📫
</aside>
<aside> LinkedIn
</aside>
<aside> Telegram
</aside>
<aside> Twitter
</aside>
I’m a Lead AI Engineer at **Terra Quantum, Germany**.
I develop AI-powered products such as voice assistants, chatbots, and productivity tools for AI teams.
I have extensive hands-on experience with the technologies listed further down this page.
I also occasionally publish papers and articles about on-device AI (see the links below).
<aside> Google Scholar
</aside>
<aside> 📄
</aside>
<aside>
API Development: FastAPI, Express
Databases: PostgreSQL, Redis, Vector Databases
Cloud Services: Google Cloud Platform (Compute, Storage, Cloud Run), Firebase
</aside>
<aside>
Model Training: LLMs, Stable Diffusion (SD)
Model Deployment: LLMs, TTS (Text-to-Speech), STT (Speech-to-Text), SD on Google Cloud and other GPU providers
RAG Pipelines: Retrieval-Augmented Generation for enhanced LLM performance
</aside>
<aside>
Containerization & Orchestration: Docker, Kubernetes, Helm
CI/CD: GitHub Actions, Google Cloud CI/CD pipelines
Scalable Inference Systems: Cloud-based model inference on GCP and other GPU providers
</aside>