As a DevOps Engineer, you will be responsible for setting up, shaping, administering, and testing applications as part of our project delivery team (specifically on Hadoop platforms). You will join a team of DevOps engineers focused on the day-to-day tasks of managing and maintaining datacenter environments, and you will be hands-on with the CI/CD process and with monitoring application servers. Candidates must be comfortable working in an agile environment.
Provide infrastructure and support for software developers to rapidly iterate on their products and services and deliver high-quality results. This includes infrastructure for automated builds and testing, continuous integration, software releases, and system deployment.
Automate development and testing processes through the CI/CD pipeline (GitFlow, Jenkins, SonarQube, Checkmarx, Puppet, Terraform, etc.)
Develop and configure tools for more productive front-end operations (build tooling, deployments, tests, error/log monitoring, and application speed and stability)
Install and configure Hadoop clusters (experience in Cloudera is a plus)
Manage Hadoop, Sqoop, and Spark cluster environments
Broad understanding of tools and technologies:
source control, continuous integration, infrastructure automation, deployment automation, container concepts, orchestration and cloud
Ensure proper resource utilization between the different development teams and processes
Design and implement a toolset that simplifies provisioning and support of a large cluster environment
Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
Apply proper architecture guidelines to ensure highly available services
Review performance stats and query execution/explain plans; recommend changes for tuning
Create and maintain detailed, up-to-date technical documentation
Manage and maintain multiple environments to ensure proper setup, configuration, and availability for each project as scheduled
Solve live performance and stability issues and prevent recurrence
Bachelor's Degree in Computer Science or equivalent work experience
6+ years' software engineering / DevOps experience
6+ years' experience architecting, administering, configuring, installing, and maintaining open-source big data applications, with focused experience in the MapR distribution
Experience utilizing and implementing Kafka with ZooKeeper and brokers
Must be hands-on with Apache/Confluent Kafka, Hadoop, and the Apache stack
Expertise in administration of Hive, Drill, HBase, Spark, and Sqoop
Experience setting up Kerberos principals and testing HDFS, Hive, Impala, and Spark access for new users
Strong knowledge of scripting and automation tools and strategies, e.g., Shell, Python, PowerShell
Experience overseeing web application installations, upgrades, and deployments, as well as the servers/systems that support hosted web applications
Experience installing/upgrading applications and deploying applications/solutions
Work with Jenkins and CI tools to automate software delivery (build, test, deploy)
UNIX/Linux system administration experience
Experience with performance tuning of Cloudera clusters, YARN & Spark
Experience with modern application infrastructure methodologies such as Ansible and Kubernetes deployments
Healthcare IT experience is a plus.
This role can be worked remotely from home.
CareCentrix maintains a drug-free workplace.
We are an equal opportunity employer. Employment selection and related decisions are made without regard to age, race, color, national origin, religion, sex, disability, sexual orientation, gender identity, status as a qualified disabled veteran or qualified veteran of the Vietnam era, or any other category protected by federal or state law.