Full Stack Engineer
Source: Arbeitnow
AI Summary Powered by Gemini
Nooxit GmbH is seeking a Full Stack Engineer for a fully remote, full-time role; the company is headquartered in Berlin. The role involves building AI-powered finance automation software, with a strong emphasis on accuracy, observability, and improving AI model performance. It is an opportunity to shape a product that makes critical financial decisions for enterprise clients.
Job Description
About Nooxit

Nooxit is building the next generation of AI-powered finance process automation for medium and large enterprises. We're putting accounting on autopilot. Our software uses modern deep learning and NLP to free finance teams from repetitive document capture, manual compliance reporting, and tedious accounting tasks, helping them save time, prevent fraud, and stay compliant with local financial regulations across the globe. Founded in 2019 and investor-backed, we're a remote-first company headquartered in Berlin with additional space in the Cologne area. Our diverse team brings backgrounds in consulting, venture building, autonomous driving, space technology, and cognitive science. Our customers include international mid-market and enterprise companies across sectors.

The Role

We're looking for a Full Stack Engineer who wants to build things that matter: someone who's as comfortable designing a clean API as shipping a polished frontend feature, and who genuinely cares about whether the AI behind it is actually getting things right. Our product makes critical financial decisions for enterprise customers, so accuracy isn't a nice-to-have; it's everything. We want someone who's obsessed with observability: if the AI extracts the wrong line item, misclassifies a transaction, or drifts in confidence over time, you want to know about it before anyone else does. You'll build the systems that measure, surface, and improve model performance, from evaluation pipelines and accuracy dashboards to real-time alerting on prediction quality. You'll work across the stack to develop, improve, and scale cloud-native applications while ensuring we always have a clear, data-driven picture of how our AI is performing in the wild. You'll have real ownership, creative freedom, and the opportunity to shape both the product and the engineering culture as we grow.
Tasks

What You'll Do

- Design and build highly available, scalable cloud applications end to end
- Develop new features and microservices within our existing infrastructure
- Build and own observability infrastructure for our AI systems: accuracy dashboards, performance monitoring, anomaly detection, and alerting on prediction quality
- Design and maintain evaluation pipelines that continuously measure model accuracy across document types, edge cases, and customer environments
- Define and track key performance metrics (precision, recall, confidence thresholds, drift indicators) and turn them into actionable engineering work
- Instrument our services with structured logging, tracing, and metrics collection to ensure full visibility into how our AI behaves in production
- Improve and refactor existing services for performance, reliability, and maintainability
- Write clean, testable, well-documented code and participate in thoughtful code reviews
- Collaborate closely with product, data science, and fellow engineers to ship meaningful improvements
- Contribute to engineering practices, tooling, and architecture decisions

Requirements

What We're Looking For

- Strong proficiency in Python and TypeScript
- Solid experience with Docker, Git, and cloud-based development workflows
- Familiarity with SQL databases (PostgreSQL preferred)
- Experience with, or willingness to work with, Pytest, CI/CD pipelines, Kubernetes, Terraform, and Helm
- Comfort working across backend and frontend: you don't need to be an expert in both, but you're curious and willing to learn
- Experience with testing practices and a quality-first mindset
- A genuine passion for AI accuracy and observability: you're the kind of person who wants to understand why a model got something wrong, builds a dashboard to track it, and sets up an alert so it doesn't happen silently again
- Comfort with metrics and data analysis: you think in terms of precision, recall, error rates, and confidence distributions, not just "it seems to work"
- Proactive, communicative, and comfortable working autonomously in a remote team
- Ability to work within ±2 hours of Central European Time

Bonus Points

- Previous experience in a startup or fast-moving environment
- Hands-on experience building evaluation pipelines, accuracy benchmarks, or monitoring for ML/AI systems in production
- Experience with observability tools and practices (e.g., Prometheus, Grafana, Datadog, OpenTelemetry, structured logging, distributed tracing)
- Familiarity with ML metrics concepts: confusion matrices, F1 scores, calibration curves, data drift detection, A/B testing
- Experience working at the intersection of engineering and data science, helping translate model quality into measurable product outcomes
- Interest in AI/ML applications or fintech
- Experience mentoring other engineers or helping shape engineering processes

Benefits

What We Offer

- Fully remote: work from anywhere in the world
- Flexible working hours: we care about output, not clocked hours
- Competitive salary with room to grow as the company scales
- An open, diverse, and international team that values collaboration and curiosity
- The chance to work with renowned enterprise customers and help build what could become the next finance automation category leader
- Real ownership and influence over the product and engineering direction

How to Apply

Send us your CV; that's all we need. If you have a GitHub profile or portfolio, feel free to include it. No cover letter required. Questions? Reach out to us anytime. We look forward to hearing from you.
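To make the metric vocabulary in this posting concrete (precision, recall, confidence drift), here is a minimal sketch in Python, one of the languages the role lists. Everything in it is hypothetical: the `Prediction` schema, the function names, and the 0.05 drift threshold are invented for illustration and are not from Nooxit's codebase.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Prediction:
    """One logged model decision (hypothetical schema)."""
    predicted: str     # label the model assigned, e.g. an expense category
    actual: str        # ground-truth label from human review
    confidence: float  # model-reported confidence in [0, 1]


def precision_recall(preds: list[Prediction], label: str) -> tuple[float, float]:
    """Per-label precision and recall over a batch of reviewed predictions."""
    tp = sum(p.predicted == label and p.actual == label for p in preds)
    fp = sum(p.predicted == label and p.actual != label for p in preds)
    fn = sum(p.predicted != label and p.actual == label for p in preds)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


def confidence_drifted(baseline: list[float], recent: list[float],
                       max_drop: float = 0.05) -> bool:
    """Crude drift indicator: has mean confidence dropped past a threshold?"""
    return mean(baseline) - mean(recent) > max_drop
```

A production evaluation pipeline of the kind the role describes would feed these from logged predictions and wire the drift flag into an alerting system rather than hard-code a threshold.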