Specific information related to the position is outlined below. To apply, click on the button above. You will be required to create an account (or sign in with an existing account). Your account will provide you access to your application information. Need Help?
Should you have a disability and need assistance with the application process, please request a reasonable accommodation by emailing BB&T Accessibility or by calling 866-362-6451. This email inbox is monitored for reasonable accommodation requests only. Any other correspondence will not receive a response.
Regular or Temporary:
Language Fluency: English (Required)
1st shift (United States of America)
Please review the following job description:
The Enterprise Data Office within the Data and Technology Services area of BB&T is creating a culture of technical excellence in order to empower our business units and customers. We are looking for a senior-level data developer: a passionate technologist with a talent for translating business needs into logical solutions and services that scale. To be successful in this role, you will need exceptional technical leadership abilities, a very strong background in working with Big Data, strong experience developing software and data ingestion and integration services, deep knowledge across a broad range of technologies and frameworks, an exceptional ability to communicate effectively, and strong motivation to grow your skills and career.
Primary job responsibilities are to lead, design, and develop data integration programs and processes to support projects and Data Management demand. Through design, development, testing, and code review, this position is responsible for ensuring that technical designs and implementations are consistent with requirements and data integration best practices, and that they adhere to architecture standards and processes. The individual will work with other members of the Data Management team to ensure that the end-to-end data integration implementation meets requirements and the solution design, across a broad range of line-of-business needs and enterprise data domains, using best practices in ETL/ELT processes, SQL, messaging, streaming, scripting, and a combination of technologies. The data developer often works as a dedicated member of delivery teams, focused on providing solutions as ready-to-use services for analytics groups and data scientists who interrogate information for predictive analytics, machine learning, and data mining purposes. In many cases, the data developer also works with business units and departments to cleanse, consolidate, and prepare data for use in individual analytics applications for business analysts, leadership groups, and other end users, to aid in ongoing operational insights.
An independent and self-motivated lead Data Developer must be versed in broad approaches to data processing and applications, and will develop components and applications by studying operations and by designing and developing reusable services and solutions that support the automated ingestion, profiling, and analysis of structured and unstructured data.
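The extract-transform-load workflow described above can be illustrated with a minimal sketch. This is not BB&T code; the record fields, table name, and cleansing rules are all hypothetical, and SQLite stands in for a real warehouse target.

```python
# Minimal ETL sketch: extract raw records, cleanse/transform them,
# and load the result into a local SQLite table. All names here
# (customers, balance, etc.) are illustrative, not from the posting.
import sqlite3

def extract():
    # In practice this step would pull from a database, web API, or file feed.
    return [
        {"id": "1", "name": " alice ", "balance": "100.50"},
        {"id": "2", "name": "BOB", "balance": "n/a"},  # dirty value
        {"id": "3", "name": "Carol", "balance": "42.00"},
    ]

def transform(records):
    # Cleanse: trim and normalize names, drop rows with unparseable balances.
    clean = []
    for r in records:
        try:
            balance = float(r["balance"])
        except ValueError:
            continue  # a real pipeline would quarantine rejected rows
        clean.append((int(r["id"]), r["name"].strip().title(), balance))
    return clean

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers (id INTEGER, name TEXT, balance REAL)"
    )
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # 2 clean rows
```

At production scale the same extract/transform/load separation holds, with the in-memory steps replaced by distributed equivalents (e.g. Spark jobs writing to a data lake).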
Essential Duties and Required Skills:
Following is a summary of the essential functions for this job. Other duties may be performed, both major and minor, which are not mentioned below. Specific activities may change from time to time.
Minimum of six years of related experience
Design and implement data ingestion and integration techniques for real-time and batch processes for a variety of sources into Hadoop ecosystems, HDFS clusters, and conventional persistence environments
Understanding of Big Data and conventional data components and tools, and knowledge of the agile development cycle
Experience working in a data intensive role including the extraction of data (DB/web/API/etc.), transformation and loading (ETL)
Familiarity with conventional transformation and warehousing platforms, including Informatica and Netezza
Experience with data cleansing/preparation on the Hadoop/Apache Spark ecosystem (MapReduce, Hive, HBase, Spark SQL)
Experience with distributed streaming tools such as Apache Kafka
Visualize and report data findings creatively in a variety of visual formats that provide insights into the organization
Knowledge of data, master data and metadata related standards, processes and technology
Contribute to the definition and documentation of architecture roadmaps and development standards
Drive use case analysis and solution design around activities focused on determining how to best meet customer requirements within the tools and platforms of the ecosystem
Ensure scalability and high availability, fault tolerance, and elasticity of solutions and services in the ecosystem
Architect and develop ELT and ETL solutions focused on moving data from highly diverse data landscapes into centralized data lakes and warehouses; also architect solutions to acquire, ingest, and curate semi-structured and unstructured data sources, such as sensors, machine logs, click streams, etc.
Serve as an expert in efficient ETL, data quality, and data consolidation
Stay current with vendor/product roadmaps
Fluent with functional, imperative and object-oriented languages and methodologies.
Experience with SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, HBase, Redis) databases is expected.
Proficiency with various operating systems (Linux/Windows)
Experience with Big Data approaches and technologies including Hadoop, Cloudera utilities, Spark, Kafka, Hive, and Oozie (experience with AngularJS/HTML5/Node.js is a big plus).
Experience implementing and consuming large-scale web services (RESTful APIs)
Has led, or been directly involved with, the investigation and resolution of complex data, system, and software issues requiring solution design.
Experience in data management best practices, real-time and batch data integration, and data rationalization.
Ability to prioritize well, communicate clearly, have a consistent track record of commitment and accountability for delivery, as well as excellent software engineering and troubleshooting skills.
Must be able to work across multiple phases of the project (e.g. initiation, planning, requirements, design, etc.) and manage multiple responsibilities.
Understand database performance factors and trends pertaining to very large database design and collaborate with DBAs to implement mitigating physical modeling solutions; provide data structures optimized for information entry and retrieval
Adopt quality assurance practices to include: following an appropriate modeling methodology, helping to establish department standards and procedures, reviewing and critiquing data models produced by others, participating in walkthroughs and audits, ensuring appropriate documentation is produced at all points in the process.
Pursue continuous improvements based on lessons learned and industry best practices.
Understand the goals and risks associated with the business and technical requirements, and offer counsel on risk mitigation and the alignment of data solution with objectives.
Demonstrate a team orientation by working closely and effectively with business partners, technology teams and outside services.
Ability to derive technical specifications from business requirements and express complex technical concepts in terms that are understandable to multi-disciplinary teams, in both verbal and written form
Ability to apply systems thinking for solutions by considering broad potential alternatives and impact areas.
Ability to travel as needed, occasionally overnight.
Bachelor's degree (Master's preferred), or equivalent experience
Knowledge of, and experience working in, DevOps environments is desirable.
Previous experience in the financial services industry.
Experience with performance tuning and documenting changes.
Exposure to container technologies (Docker or similar) and orchestration is a plus.
Experience with metadata capture, management, and platforms.
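Several of the skills listed above involve streaming ingestion (e.g. Kafka) feeding batch-oriented storage such as HDFS or Hive. A common pattern there is micro-batching: grouping a continuous event stream into fixed-size batches before each write. The sketch below illustrates that pattern in plain Python; the generator stands in for an actual Kafka consumer, and all names are hypothetical.

```python
# Micro-batching sketch: group a stream of events into fixed-size
# batches before writing, the pattern a Kafka -> Spark/HDFS ingestion
# job typically follows. Pure Python stands in for the Kafka consumer.
from itertools import islice

def event_stream():
    # Stand-in for a Kafka consumer poll loop yielding messages.
    for i in range(10):
        yield {"offset": i, "value": i * i}

def micro_batches(stream, batch_size):
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch  # in a real job, each batch would be written to HDFS/Hive

sizes = [len(b) for b in micro_batches(event_stream(), 4)]
print(sizes)  # [4, 4, 2]
```

Batch size here trades latency against write efficiency: larger batches mean fewer, bigger files in the lake but slower availability of fresh data.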
#BigData #Developer #ELT