Full Job Description
Duties: Gather requirements and analyze data sources. Build scalable, distributed data solutions using Hadoop and Spark. Analyze Hadoop clusters and other Big Data analytics tools, including Pig, Hive, the HBase database, and Sqoop. Develop real-time analytics using Spark Streaming and Scala (see the illustrative sketch below). Assist with project development to ensure projects are executed in a timely manner. Coordinate end-to-end delivery of modules. Design and document process flows.
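For illustration only, the following is a minimal sketch of the kind of real-time Spark Streaming job in Scala that these duties describe; the application name, socket source, host, port, and batch interval are hypothetical placeholders, not details from this posting.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object RealTimeAnalyticsSketch {
      def main(args: Array[String]): Unit = {
        // Hypothetical local configuration; a production job would run on a cluster.
        val conf = new SparkConf().setAppName("RealTimeAnalytics").setMaster("local[2]")
        val ssc = new StreamingContext(conf, Seconds(10)) // assumed 10-second micro-batches

        // Hypothetical socket source; real pipelines typically read from Kafka or similar.
        val lines = ssc.socketTextStream("localhost", 9999)

        // Count words in each micro-batch and print the results.
        val counts = lines.flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }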
Minimum education and experience required: This position requires a Bachelor’s degree in Computer Engineering, Computer Science, Information Technology, or a related field of study, plus seven (7) years of experience in the job offered or seven (7) years of experience as a Programmer Analyst, Consultant, Lead Engineer, or related occupation.
Skills Required: This position requires seven (7) years of experience with the following skills: Hadoop; Hive; Spark; Java; Python; SQL; and Unix.