Responsible for working independently and with a team to architect, design, implement, and manage data pipelines and data-driven applications on modern data platforms, both on-premises and in the cloud (Azure PaaS, AWS EKS).
Design, implement, and deliver large-scale enterprise applications using open-source big data solutions such as Apache Hadoop, Apache Spark, Kafka, and Elasticsearch.
Implement data pipelines and data-driven applications in C# and Python on distributed computing frameworks such as AWS EKS, Presto, AWS Glue, and Apache Spark.
Partner with cross-functional platform teams to build Python- and C#-based reconciliation tools that streamline fund reporting and support asset managers.
Create and maintain optimal data pipeline architecture.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Big Data technologies.
Execute and automate extract, transform, and load (ETL) operations on large datasets using big data tools such as Spark, Sqoop, and MapReduce.
Design and develop data structures that support high-performing, scalable analytic applications on one or more of the following databases: Hive, Impala, HBase, Apache Cassandra, Vertica, or MongoDB.
Lead the enhancement of existing infrastructure and internalize the latest innovations in technologies such as PySpark, SQL, C#, and Scala.
Leverage automation, cognitive computing, and science-based techniques to manage data, predict scenarios, and prescribe actions.
Design and build data services that handle big data (>90 PB) at low latency for a variety of use cases spanning near-real-time analytics and machine intelligence, using both stream and batch processing frameworks on Hadoop ecosystem technologies (e.g., YARN, HDFS, Presto, Spark, Flink, Beam).
Work closely with data science teams to integrate data and algorithms into data lake systems, automate machine learning workflows, and assist with data infrastructure needs.
Harness the power of data, machine learning, and modern technology to build cutting-edge exception-prediction and income-anomaly-detection techniques and algorithms that enhance and scale fund reporting.
Design and implement reporting and visualization for unstructured and structured data sets using visualization tools such as Tableau, Zoomdata, and Qlik.
Review existing computer systems to determine compatibility with projected or identified needs, and research and select appropriate frameworks, ensuring forward compatibility of existing systems.
Requirements
Requires a Bachelor’s degree, or foreign equivalent, in Technology, Civil Engineering, or a directly related field, plus five (5) years of related professional experience.
Must have five (5) years of experience in the following: Coding in C#, Python, and SQL, with experience working with large data sets and distributed computing (MapReduce, Hadoop, Hive, Pig, Apache Spark, etc.);
Web Services, XML, and SOAP;
Hands-on programming experience;
Gathering and documenting requirements, analysis, and specifications;
Writing client server software and web solutions in an enterprise environment;
Interpreting business requirements and effectively implementing them in a software solution;
Taking direction and mentorship from senior developers and leadership;
Translating business requirements into logical and physical file structure design;
Building and testing Spark/MapReduce code in a rapid, iterative manner;
Data integration in traditional and Hadoop environments;
Designing and implementing reporting and visualization for unstructured and structured data sets;
Understanding the benefits of data warehousing, data architecture, data quality processes, data warehouse design and implementation, table structures, fact and dimension tables, logical and physical database design, data modeling, reporting process metadata, and ETL processes;
Using Hadoop scripting to process petabytes of data;
Unix/Linux shell scripting or similar programming/scripting knowledge.
Must have three (3) years of experience in the following: Application architectural design.