Full Job Description
Join the thousands of innovators, advocates and forces who are making an impact every day at one of the biggest footwear brands in the world. Whether you love to connect with consumers on the retail floor or want to drive our award-winning powerhouse in new directions, the SKECHERS team is the place to be. Learn more about our brand at skx.com.
We are looking for a Data Engineer with both conceptual and hands-on experience processing structured, semi-structured, and complex data across RDBMS and NoSQL data stores. As a member of our Data Services team, you will join a service group responsible for the continuing expansion of the organization's data processing projects. The ideal candidate is enthusiastic about the full spectrum of data development, including data transport, data processing, and data warehouse/ETL integration, and is a quick-learning self-starter. This is a demanding role that requires hands-on experience developing data processing solutions deployed on Linux. You will be responsible for day-to-day operations as well as new development. We are seeking a candidate with strong software development life cycle skills. This position includes 24x7 production support.
Design, implement and deliver successful data solutions.
Implement defined data pipeline requirements for the underlying data lake, data warehouse and data marts.
Participate in the design and implementation of the full data services cycle, from data ingestion and processing through ETL to data delivery for reporting.
Identify, troubleshoot and resolve production data integrity and performance issues.
Design, develop and support various data platform applications.
Design and develop applications to process large amounts of critical information in batch and near real-time to power business insights.
Experience with managed services for data ingestion and processing, with hands-on experience working in an AWS environment
A solid understanding of NoSQL data stores and extensive experience working with SQL
Proven experience with distributed systems driving large-scale data processing and analytics
Experience working with SaaS data warehouses such as Snowflake (preferred) or Redshift
Working experience with CDC tools such as Qlik Replicate or Debezium
Experience with Linux ksh/bash scripting
Experience working with an ETL toolset such as Talend
Comfortable programming in Python, Java, Scala, or similar languages
Experience with DevOps processes and tools such as GitHub, Jenkins, and JFrog
Experience with Apache Kafka and the Confluent Platform
Experience with the following data processing technologies: Spark, Kafka, Kinesis
Experience with Presto, Hive, Impala, or a similar SQL-based engine for big data
3+ years of experience defining, designing and delivering data pipelines and solutions
3+ years of experience working with Linux-based operating systems
3+ years of relevant experience developing and integrating frameworks and database technologies that support highly scalable data processing
2+ years of programming experience with Python or Java
At least 2 years of experience working within cloud environments, preferably AWS
2+ years of experience with cloud-based data warehousing systems (e.g., Snowflake or Redshift)
Proficient in any flavor of SQL
Demonstrable ability in data modeling, ETL development, data warehousing, and batch and real-time data processing
Demonstrable experience with Stream Processing and workload management for data transformation, augmentation, analysis, etc.
B.S. in Computer Science, Computer Information Systems, Engineering, or another technical field, or equivalent work experience
While performing the duties of this job, the employee is regularly required to stand; use hands to finger, handle, or feel; and talk or hear. The employee frequently is required to walk, sit, reach with hands and arms, stoop, and kneel. The employee is occasionally required to sit for long periods of time.
All your information will be kept confidential according to EEO guidelines.