During the application process, ensure your contact information (email and phone number) is up to date, and upload your current resume when submitting your application. To participate in some selection activities you will need to respond to an invitation, which can be sent by either email or text message. To receive text message invitations, your profile must include a mobile phone number designated as “Personal Cell” or “Cellular” in the contact information of your application.
At Wells Fargo, we want to satisfy our customers’ financial needs and help them succeed financially. We’re looking for talented people who will put our customers at the center of everything we do. Join our diverse and inclusive team where you’ll feel valued and inspired to contribute your unique skills and experience.
Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you.
Wells Fargo Technology sets IT strategy; enhances the design, development, and operations of our systems; optimizes the Wells Fargo infrastructure footprint; provides information security; and enables continuous banking access through in-store, online, ATM, and other channels to Wells Fargo’s more than 70 million global customers.
The Wells Fargo Model Risk Technology team, within the Enterprise Risk and Finance Technology organization, is looking for a highly motivated and experienced Platform Engineer with a quantitative and machine learning background. This role supports the Model Validation platform, which is used to validate all models (statistical, AI/ML, NLP, and others) developed by Model Development COEs across the enterprise. Primary responsibilities involve working with data scientists within the Model Validation group to help with data sourcing and managing a platform built on technologies such as Hortonworks, IBM Spectrum Conductor for Spark, Watson Machine Learning Accelerator, GPU clusters, Python, R, Scala, and Anaconda.
- Serve as primary support for the Model Validation platform.
- Troubleshoot issues and handle patching, upgrades, and monitoring of the platform.
- Manage IBM Spectrum Conductor for Spark and provision custom virtual environments.
- Partner with validation and development teams to streamline model validation processes.
- Support the business on validation efforts; troubleshoot and provide guidance to validators on best practices and optimization techniques.
- Partner with LOB validation/development teams to deliver technology solutions that address complex business requirements.
- Install Python, R, Anaconda, and H2O packages.
- Create custom conda environments and install extensions/kernels within JupyterLab.
- Troubleshoot issues with conda environments and with compiling packages from source.
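As an illustration of the conda-environment duties above, a typical workflow creates an isolated environment and registers it as a JupyterLab kernel. This is a minimal sketch; the environment name, Python version, and package list are hypothetical examples, not requirements from this posting:

```shell
# Create a custom conda environment for validation work
# (name "mv-env" and the package list are illustrative assumptions)
conda create -y -n mv-env python=3.10 numpy pandas

# Activate the environment and register it as a Jupyter kernel
# so it appears in the JupyterLab launcher
conda activate mv-env
pip install ipykernel
python -m ipykernel install --user --name mv-env --display-name "Model Validation (mv-env)"

# Confirm the kernel was registered
jupyter kernelspec list
```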
7+ years of application development and implementation experience
7+ years of experience delivering complex enterprise wide information technology solutions
3+ years of experience with Hadoop ecosystem tools for real-time and batch data ingestion, processing, and provisioning, such as Apache Flume, Apache Kafka, Apache Sqoop, Apache Flink, Apache Spark, or Apache Storm
3+ years of development experience with languages such as Python, Java, Scala, or R
3+ years of experience with installing and managing Python, R, and Anaconda packages, including building conda packages from source
Good verbal, written, and interpersonal communication skills
Knowledge and understanding of analytical methods used in statistical analysis, modeling, and reporting
Advanced problem solving and technical troubleshooting capabilities
Knowledge and understanding of building and executing a technical roadmap
Ability to work effectively in a virtual environment where key team members and partners are in various time zones and locations
Knowledge and understanding of project management methodologies used in waterfall or Agile development projects
Other Desired Qualifications
- 5+ years of experience with technologies such as Python, R, and Scala, and with ML/AI tools such as JupyterLab, IBM Watson Machine Learning Accelerator, and IBM Spectrum Conductor for Spark
- 3+ years of data engineering experience with Hortonworks or Cloudera Hadoop, Hive, Ranger, and Knox
- 2+ years of experience with GPU computing (CUDA), IBM Spectrum Conductor for Spark, or other job scheduling and resource management tools
- 2+ years of DevOps experience (Git, Jenkins, Artifactory, Nexus, Puppet)
- 2+ years of Unix administration and of packaging scientific applications in Anaconda environments with package managers such as conda, npm, yarn, or pacman
- In-depth knowledge of HDFS, Hive, and Spark in hybrid CPU/GPU environments, with PySpark, SparkR/sparklyr, and H2O Sparkling Water
- Experience with IBM Spectrum Conductor for Spark (Spark on EGO), BigDL, or Intel's Analytics Zoo is a big plus
- Excellent understanding of microservice architectures, with an emphasis on data-intensive applications