Lead Information Architect

JP Morgan Chase - Wilmington, DE

Full-time | Estimated: $130,000 - $180,000 a year
Chase Consumer and Community Banking (CCB) serves more than 65 million consumers and 4 million small businesses with a broad range of financial services, including personal banking, small business lending, mortgages, credit cards, payments, auto finance and investment advice. The cost, risk, quality, timeliness and regulatory demands on data and information continue to increase, elevating the need to architect, govern and bring transparency to information and its integration into business processes, guided by clear policies, standards and principles (formal and informal).

The ideal candidate for this role will develop strong partnerships with the CCB Data Management/Data Office, Information Architecture and Application Delivery communities to ensure that information architecture and data management align with firm-wide standards and direction, and that all platforms and applications are designed and operated according to clear, well-understood policies, standards, principles, patterns and constraints. Strengthening understanding of the existing data portfolio, at rest and in motion, and establishing new techniques, mandates and automation will be critical to maintaining control at scale as CCB transforms the organization and transitions to the New Banking Architecture and Hybrid Cloud strategies.

In this role, the CCB Information Architect will play an integral role in Firm Data Management initiatives that provide increased understanding and transparency of the CCB data estate:
CCB Data Landscape – Cataloging and classifying the CCB data estate, for data in place and data in motion, using the firm taxonomy, while elevating gaps and proposing solutions
Firm Data Standards – Driving adoption of Firm Data Management Standards across the application estate, including centralizing the collection of evidence artifacts (LDM/PDM models, sourcing and retention)
Data Rationalization and Optimization – Prioritizing data discovery and rationalization opportunities to eliminate redundant or overlapping data and debris, reducing cost, confusion and risk surface area

The CCB Information Architect should be able to work across transactional/operational and analytic/big-data environments to ensure that data-centric SDLC artifacts are complete and evidenced. The ideal candidate brings big-picture thinking beyond data modeling, spanning data management capabilities, data governance, modern warehousing/reservoirs and infrastructure patterns, and takes a progressive approach to strategy, standards and enablement with an automation and “to code” mindset.

Key Responsibilities:
CCB Data Landscape
Actively drive the cataloging and classification of data in place and data in motion across the CCB application estate, including authoritative-source designation and sourcing validation (a minimal catalog sketch follows this list)
Actively participate in the firm taxonomy (concepts/qualifiers), elevating gaps where data cannot be cataloged at the grain required to understand and consume the right data
Capture whether applications have data models
Collaborate on the coarse-grained metadata graph used to visualize and validate the data landscape
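
To make the cataloging work concrete, below is a minimal sketch of what an entry for data in place and data in motion might look like, assuming a simple Python representation. The taxonomy concepts, field names and application names are illustrative assumptions, not the firm's actual taxonomy or tooling.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from enum import Enum

# Illustrative taxonomy terms only; the firm's concept/qualifier
# taxonomy is assumed here, not reproduced.
class Concept(Enum):
    CUSTOMER = "customer"
    ACCOUNT = "account"
    TRANSACTION = "transaction"

class DataState(Enum):
    IN_PLACE = "in_place"    # data at rest in a store
    IN_MOTION = "in_motion"  # data flowing between systems

@dataclass
class CatalogEntry:
    application: str
    dataset: str
    state: DataState
    concepts: list[Concept] = field(default_factory=list)
    authoritative: bool = False      # designated authoritative source?
    has_data_model: bool = False     # does the owner publish an LDM/PDM?
    taxonomy_gap: str | None = None  # noted when the taxonomy lacks the needed grain

def taxonomy_gaps(catalog: list[CatalogEntry]) -> list[CatalogEntry]:
    """Elevate entries that could not be classified to the required grain."""
    return [e for e in catalog if e.taxonomy_gap or not e.concepts]

catalog = [
    CatalogEntry("card-servicing", "cardholder_profile", DataState.IN_PLACE,
                 [Concept.CUSTOMER], authoritative=True, has_data_model=True),
    CatalogEntry("payments-gateway", "auth_events", DataState.IN_MOTION,
                 taxonomy_gap="no qualifier for pre-settlement authorizations"),
]
for entry in taxonomy_gaps(catalog):
    print(entry.application, entry.dataset, "->", entry.taxonomy_gap)
```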

Firm Data Management Standards Compliance
Ensure that new applications, or applications moving to new infrastructure, have committed plans to define the data they create or migrate, including logical and physical models (a minimal evidence check is sketched after this list)
Assist in defining and delivering education and training to fan out understanding of standards and compliance
Maintain updates in the Firm Data Fit tool
Support prioritized Firm Critical Business Initiatives (CCAR)
Align CCB and Firm data modeling standards and implement them in data model templates, governance and metadata publishing processes; collaborate on processes and tools to facilitate migration
Champion metadata coverage, capture, health and use
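
As a rough illustration of the evidence-collection idea above, the sketch below checks a hypothetical registry of collected evidence against the artifact types the standards name (LDM/PDM models, sourcing, retention). The registry structure and application names are invented for the example.

```python
# Artifact types drawn from the standards above; the registry of
# collected evidence is a hypothetical stand-in for real tooling.
REQUIRED_ARTIFACTS = {"ldm", "pdm", "sourcing_lineage", "retention_schedule"}

def compliance_gaps(evidence: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per application, the required artifacts still missing."""
    return {app: REQUIRED_ARTIFACTS - have
            for app, have in evidence.items()
            if REQUIRED_ARTIFACTS - have}

evidence = {
    "deposits-core": {"ldm", "pdm", "sourcing_lineage", "retention_schedule"},
    "auto-finance-ui": {"pdm"},  # no logical model, sourcing or retention yet
}
for app, missing in compliance_gaps(evidence).items():
    print(app, "is missing:", ", ".join(sorted(missing)))
```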
Software Delivery Consistency and Governance Automation
Be a thought leader on domain models and dictionaries to increase the consistency of data naming, definitions and types, and the velocity of our software engineers (software delivery – CI/CD); a naming-lint sketch follows this list
Participate in defining the approach and foundations to increasingly automate portfolio governance in an autonomous, event-driven application environment
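
One way a domain dictionary can raise naming consistency in CI/CD is as a lint step over physical names. The sketch below is a minimal assumed form; the dictionary contents and approved tokens are placeholders, not a firm standard, and a real check would source them from the metadata repository.

```python
import re

# Hypothetical domain dictionary: business terms and their canonical
# physical abbreviations, hard-coded here purely for illustration.
DOMAIN_DICTIONARY = {
    "customer": "cust",
    "account": "acct",
    "transaction": "txn",
}
APPROVED_TOKENS = set(DOMAIN_DICTIONARY) | set(DOMAIN_DICTIONARY.values()) | {
    "id", "dt", "amt", "cd", "nm",  # common class words (assumed)
}

def lint_column_name(name: str) -> list[str]:
    """Flag tokens in a snake_case column name that are not in the dictionary."""
    if not re.fullmatch(r"[a-z][a-z0-9_]*", name):
        return [f"{name}: not lower snake_case"]
    bad = [tok for tok in name.split("_") if tok not in APPROVED_TOKENS]
    return [f"{name}: unapproved token '{t}'" for t in bad]

# Could run as a CI step over DDL or model exports and fail the build on findings.
for col in ["cust_acct_id", "customerNumber", "txn_amt", "acct_balnce"]:
    print(lint_column_name(col) or f"{col}: ok")
```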

Data Rationalization and Optimization
Support prioritized tranches of work to eliminate obsolete, diminished-value, redundant or over-retained data, and use virtualization and lifecycle management to optimize storage costs (a minimal flagging sketch follows).
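
As a sketch of how such a tranche might be prioritized, the example below flags datasets that look redundant, stale or over-retained in a hypothetical inventory. The record shape, thresholds and dataset names are assumptions for illustration; in practice these facts would come from the catalog and storage scans.

```python
from datetime import date, timedelta

# Invented inventory records standing in for catalog/storage-scan output.
inventory = [
    {"dataset": "stmt_archive_v1", "last_read": date(2019, 3, 1),
     "retention_days": 2555, "age_days": 4000, "duplicate_of": None},
    {"dataset": "cust_extract_copy", "last_read": date(2024, 1, 5),
     "retention_days": 365, "age_days": 200, "duplicate_of": "cust_extract"},
]

def rationalization_candidates(inv, today, stale_after=timedelta(days=730)):
    """Flag datasets that are duplicates, unread for ~2 years, or past retention."""
    flagged = []
    for d in inv:
        reasons = []
        if d["duplicate_of"]:
            reasons.append(f"redundant copy of {d['duplicate_of']}")
        if today - d["last_read"] > stale_after:
            reasons.append("no reads within stale window")
        if d["age_days"] > d["retention_days"]:
            reasons.append("held beyond retention schedule")
        if reasons:
            flagged.append((d["dataset"], reasons))
    return flagged

for name, reasons in rationalization_candidates(inventory, today=date(2024, 6, 1)):
    print(name, "->", "; ".join(reasons))
```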

Qualifications:
A minimum of 10 years of IT experience, with 8+ years in Information Architecture / Data Modeling in Transactional, Data Warehousing, Business Intelligence or Master & Reference Data Management
Bachelor's degree in Science, Business or Arts is required; a Master's degree in Computer Science is preferred.
Strong leadership, partnership and communication skills, tailored to the audience’s level of knowledge.
Strong understanding of the financial services vertical, with a focus on Retail Banking
Expert in conceptual, logical and physical data modeling using tools such as Erwin
Experience with data management and data governance, with a passion for automation in service of consistency and compliance
Experience with enterprise metadata, reference data and data quality, including analysis and framing of the facts
Excellent influencing and consultative skills, including business and technology interactions

Additional Desired Qualifications (candidates are not expected to have all of these)
Experience in industry research and leveraging industry data models, standards, patterns and open source products, as appropriate, to increase consistency in data definition and developer velocity
Experience performing ‘data forensics’ to discover and infer what data is managed, and to strategize on data rationalization opportunities, given the facts, across the business data and storage infrastructure
Logical and physical database modeling skills (e.g., Oracle, Teradata and DB2) and an understanding of dimensional modeling
Experience with highly distributed databases and event streaming, in both private and public clouds
Experience in data profiling and interpreting results to support data model tuning and data quality (a minimal profiling sketch follows this list)
Experience with metadata repository products, including evaluating and positioning them for the future
Exposure to Agile and data management practices
Understanding of emerging data persistence and processing paradigms, including Hadoop, Cassandra, and other NoSQL databases
Experience implementing data pipelines in big data technologies such as Hadoop, Kafka, Spark, Redshift
Demonstrated passion for hands-on work in the metadata and data engineering space, with an eye toward increasingly automating governance practices and maintaining a live sense of the data
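
For reference, a minimal profiling pass of the kind mentioned above might compute per-column row counts, null rates, distinct counts and top values. The toy rows and column name below are invented; a real pass would run over catalog-registered extracts or tables.

```python
from collections import Counter

def profile_column(rows, column):
    """Basic profile: row count, null rate, distinct count, top values."""
    values = [r.get(column) for r in rows]
    non_null = [v for v in values if v not in (None, "", "NULL")]
    counts = Counter(non_null)
    return {
        "rows": len(values),
        "null_rate": round(1 - len(non_null) / len(values), 3) if values else 0.0,
        "distinct": len(counts),
        "top_values": counts.most_common(3),
    }

# Toy rows standing in for a real extract; a null-heavy, low-cardinality
# status column like this often signals a modeling or quality issue.
rows = [{"acct_status_cd": v} for v in ["A", "A", "C", "", "A", None, "C"]]
print(profile_column(rows, "acct_status_cd"))
# {'rows': 7, 'null_rate': 0.286, 'distinct': 2, 'top_values': [('A', 3), ('C', 2)]}
```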