Leverage expertise in structured and unstructured data to perform data engineering on cutting-edge projects using Big Data tools. Architect data systems and stand up data platforms, build ETL pipelines, write custom code, interface with data stores, perform data ingestion, and build data models. Assess, design, build, and maintain scalable data platforms using the latest Big Data tools. Perform analytical exploration and examination of data from multiple sources. Work in a fast-paced, Scrum-based Agile environment alongside a multi-disciplinary team of analysts, data engineers, data scientists, developers, and data consumers pushing the envelope of leading-edge Big Data implementations.
- 2+ years of experience with developing ETL pipelines and data manipulation scripts
- 2+ years of experience with SQL and modern relational databases, including MySQL or PostgreSQL
- Experience working with enterprise and production systems
- Ability to learn technical concepts and communicate with multiple functional groups
- Active Secret clearance
- BA or BS degree
- 2+ years of experience with Big Data systems, including Hadoop, HDFS, Hive, or Cloudera
- Experience with Agile software development
- Experience with Big Data ETL tools, including StreamSets and NiFi
- Experience with AWS cloud technologies
- Experience with Lucene-based search engines, including Elasticsearch or Solr
- Positive, can-do attitude and drive to solve the challenges of tomorrow
- Hortonworks, Cloudera, or other Big Data certification
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information; Secret clearance is required.
We’re an EOE that empowers our people—no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, veteran status, or other protected characteristic—to fearlessly drive change.