Washington, DC • North America
Remote
Senior
Full Time
about 1 year ago
💰$200,000 - $260,000
TS/SCI clearance required
Forward Deployed Engineer
AWS • Python • Apache Airflow • Apache Spark • Kubernetes • Terraform • CloudFormation • Data Engineering
Requirements
- Bachelor's degree (or equivalent) in Computer Science, Engineering, or a related field
- 4+ years of hands-on experience building and deploying data pipelines in Python
- Proven expertise with Apache Airflow (DAG development, scheduler tuning, custom operators)
- Strong knowledge of Apache Spark (Spark SQL, DataFrames, performance tuning)
- Deep SQL skills: able to optimize complex queries (window functions, CTEs) against large datasets
- Professional experience deploying cloud-native architectures on AWS, including services such as S3, EMR, EKS, IAM, and Redshift
- Familiarity with secure cloud environments and experience implementing FedRAMP/FISMA controls
- Experience deploying applications and data workflows on Kubernetes, preferably EKS
- Infrastructure-as-Code proficiency with Terraform or CloudFormation
- Skilled in GitOps and CI/CD practices using Jenkins, GitLab CI, or similar tools
- Excellent verbal and written communication skills; able to interface confidently with both technical and non-technical stakeholders
- Willingness and ability to travel up to 25% to client sites as needed
- Active TS/SCI clearance required (polygraph strongly preferred)
What You'll Do
- Partner directly with mission-focused customers to design and deploy secure, scalable cloud-based data lakehouse solutions on AWS
- Own and deliver production-ready ETL/ELT pipelines, built with Python, Apache Airflow, Spark, and SQL and optimized for petabyte-scale workloads
- Containerize and deploy services on Kubernetes (EKS), using Terraform or CloudFormation for Infrastructure-as-Code and repeatable environments
- Design integrations that ingest data from message buses, APIs, and relational databases, embedding real-time analytics capabilities into client workflows
- Actively participate in all phases of the software development lifecycle: requirements gathering, architecture, implementation, testing, and secure deployment
- Implement observability solutions (e.g., Prometheus, Datadog, New Relic) to uphold SLAs and drive continuous improvement
- Support mission-critical systems in production environments, resolving incidents alongside customer operations teams
Benefits
- PTO
- Holidays
- Parental leave
- Equity plan eligibility
