Data Architect focusing on strategic data asset management and the development of advanced analytics capabilities. Collaborating with business and technology teams to enhance data strategy and architecture.
Responsibilities
Developing and optimizing database models and overall data architecture to store and retrieve company information efficiently.
Collaborating with business leaders and teams to translate information requirements into data-centric solutions, including databases, data warehouses, and data streams.
Designing conceptual and logical data models and flowcharts, defining database structures, and creating procedures to ensure data accuracy and accessibility.
Focusing on improving data quality, accessibility, and security across all systems and platforms.
Installing and configuring information systems, migrating data from legacy systems to new solutions, and improving system performance through testing, troubleshooting, and integrating new elements.
Working closely with product managers, product owners, software developers, and other IT teams to implement data-related solutions and realize end-to-end data strategies.
Evaluating and recommending new technologies to enhance data management and supporting data analytics, AI solutions, and business intelligence projects.
Ensuring that industry-accepted data architecture principles and standards are adopted and followed for data modelling, stored procedures, replication, regulatory compliance, and security.
Requirements
Bachelor’s degree in Computer Science, Information Systems, Data Management, or a related field.
8-12 years of experience in data architecture, data engineering, or database development.
Expertise in database technologies (SQL, NoSQL), data mining, and programming languages such as Python, Scala, and Java.
Proficiency in data modelling (Canonical Data Model, Semantic Data Model) and schema design.
Familiarity with ETL (Extract, Transform, Load) tools and processes for moving and transforming data efficiently between systems.
Hands-on experience with cloud data platforms (GCP/Azure/AWS) and modern data processing tools.
Solid understanding of data governance, data quality, metadata management, master data management (MDM), reference data management, and transactional data management.
Experience with data lakehouse architectures, data mesh, or data fabric.
Familiarity with structured and unstructured data and with big data tools (Hadoop/Spark, Kafka, Databricks, BigQuery).
Experience supporting AI/ML workloads with high-quality data structures.
Strong analytical and problem-solving abilities.
Excellent communication and stakeholder management skills.
Ability to create clear architecture artifacts and documentation.
Adaptability and a commitment to continuous learning to stay updated with evolving technologies.
Master’s degree or relevant certifications (GCP Data Engineer, Azure or AWS Data Architect, TOGAF).
Exposure to GCP data services (e.g., Postgres, BigQuery).