We are looking for a Big Data Developer with expertise in implementing complex, multi-tier applications in a cloud-based environment. You will primarily work on collecting, storing, processing, transforming, and transporting large-scale data sets using the Big Data technology stack: Hadoop, MapReduce (M/R), HDFS, Spark, SQL, and Hive.
- 5+ years’ experience implementing complex, multi-tier applications
- Bachelor’s/Master’s degree in Computer Science or Engineering
- Programming experience in R (Python is a plus)
- Experience with Azure environment
- Experience writing complex SQL queries and building ETL processes
- Experience with Big Data stack of technologies, including Hadoop, HDFS, Hive
- Experience working with large data volumes, including processing, transforming, and transporting large-scale data using the Big Data stack: MapReduce, Hive SQL, Spark, etc.
- Working knowledge of Puppet
- Excellent analytical and troubleshooting skills
- Experience working in an Agile/Scrum environment
- Build and release experience (CI/CD)
- Exposure to scheduling, automation, and orchestration software (Control-M, CloudFormation, Puppet)