Requirements
- 3+ years of experience in data engineering or in a similar role
- Extensive experience with Apache Spark using Python for large-scale data processing
- Familiarity with Databricks or similar cloud data platforms (such as Snowflake) would be a significant advantage
- Experience with AWS, Azure, or other cloud platforms is a strong plus
- Hands-on expertise with Git for version control and collaboration
- Solid knowledge of SQL for querying and managing relational databases
What You'll Do
- Enable efficient data access by creating and maintaining data pipelines
- Collaborate with ML engineers to design and maintain automation for machine learning training, quality assessment, and model release processes
- Build data infrastructure that turns large volumes of raw data into resources for analytics, hypothesis testing, and company metrics
- Identify, design, and implement improvements to internal processes that optimize data delivery and automate manual work
- Design new patterns for building data models, improve existing ones, and implement the necessary modifications
Benefits
- Opportunity to join our Employee Stock Options program
- Opportunity to help scale a unique product
- Various bonus systems: performance-based, referral, additional paid leave, personal learning budget
- Paid volunteering opportunities
- Work location of your choice: office, remote, or a work-and-travel arrangement
- Rapid personal and professional growth, supported by well-defined feedback and promotion processes
