- Location: Jatujak, Bangkok.
- Salary: up to 120K Baht, depending on experience.
- Skills: ETL, data pipelines, big data, and data concepts.
Responsibilities:
- Data Pipeline Development: Design, build, and maintain robust and scalable data pipelines to extract, transform, and load data from various sources. Implement efficient data integration processes to ensure data quality and consistency.
- ETL (Extract, Transform, Load): Develop and optimize ETL processes to efficiently extract data from different systems, transform it into the desired format, and load it into data storage and analytical systems (a minimal sketch of this pattern follows this list).
- Big Data Technologies: Utilize your knowledge of big data technologies, such as Hadoop, Spark, and Hive, to process and analyze large volumes of structured and unstructured data. Design and implement solutions that leverage the power of distributed computing.
- Data Modeling and Architecture: Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and design effective data models and architecture. Ensure data integrity, performance, and scalability in the design of data storage and retrieval systems.
- Data Quality and Governance: Implement data quality checks and data governance processes to ensure data accuracy, completeness, and consistency. Define and enforce data quality standards and best practices.
- Performance Optimization: Identify performance bottlenecks in data processing and storage systems. Optimize data pipelines and queries to improve overall system performance and reduce processing time.
- Data Security: Implement appropriate security measures to protect sensitive data and ensure compliance with data privacy regulations. Monitor and mitigate data security risks.
- Collaboration and Documentation: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and provide efficient data solutions. Document data pipelines, processes, and system configurations for knowledge sharing and future reference.
- Continuous Learning and Innovation: Stay updated with the latest advancements in data engineering, big data technologies, and industry best practices. Explore new tools and techniques to improve data processing efficiency and effectiveness.
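To illustrate the kind of pipeline work described above, here is a minimal extract-transform-load sketch using PySpark, one of the Spark workflows this role involves. The paths, bucket names, and column names are hypothetical and serve only to show the shape of the flow, not a definitive implementation:

```python
# A minimal ETL sketch in PySpark. All paths and column names below
# are hypothetical examples, not references to any real system.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw order data exported from a source system.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://raw-bucket/orders/")
)

# Transform: apply basic data quality rules (drop rows missing the key),
# normalize a string column, and derive a date column for partitioning.
clean = (
    raw.dropna(subset=["order_id"])
    .withColumn("country", F.upper(F.trim(F.col("country"))))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write the curated table as partitioned Parquet for analytics.
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://curated-bucket/orders/")
)

spark.stop()
```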
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum of 3 years of hands-on experience as a Data Engineer, designing and implementing data pipelines and infrastructure.
- Strong knowledge of ETL processes and tools for data integration, transformation, and loading.
- Proficiency in big data technologies such as Hadoop, Spark, and Hive, and experience with distributed computing frameworks.
- Solid understanding of data modeling concepts and data architecture principles.
- Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud Platform).
- Strong programming skills in languages such as Python, Java, or Scala.
- Experience with relational and NoSQL databases and data warehousing concepts.
- Understanding of data governance, data quality, and data security practices.
- Excellent problem-solving and analytical skills with a keen attention to detail.
- Effective communication and collaboration skills to work with cross-functional teams and stakeholders.