- Design end-to-end data architecture using Azure Databricks, Delta Lake, and related Azure services.
- Lead implementation of scalable ETL/ELT pipelines using PySpark and Databricks Notebooks.
- Define data lakehouse best practices, security models, and governance policies.
- Collaborate with data engineers, analysts, and business stakeholders to deliver insights.
- Optimize performance and cost of distributed workloads on Spark clusters.
- Implement CI/CD and automation using Databricks Asset Bundles and GitHub.
- Guide migration of legacy data platforms to modern Azure lakehouse solutions.
- Provide architectural mentorship and conduct code/design reviews.
- Stay updated on new features in Databricks, Azure Synapse, and Fabric.
- Ensure compliance with enterprise data policies and regulatory standards.
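As an illustration of the Databricks Asset Bundles CI/CD responsibility above, a minimal bundle configuration might look like the following sketch. All names, the workspace URL, and the notebook path are hypothetical placeholders, not values from this document:

```yaml
# databricks.yml — minimal Asset Bundle sketch (all names and paths are hypothetical)
bundle:
  name: lakehouse-etl                  # hypothetical bundle name

targets:
  dev:
    mode: development
    workspace:
      host: https://adb-0000000000000000.0.azuredatabricks.net  # placeholder workspace URL

resources:
  jobs:
    nightly_etl:
      name: nightly-etl                # hypothetical job name
      tasks:
        - task_key: bronze_to_silver
          notebook_task:
            notebook_path: ../notebooks/bronze_to_silver  # placeholder notebook path
          new_cluster:
            spark_version: 13.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 2
```

A bundle like this is typically deployed with `databricks bundle deploy -t dev`, which a GitHub Actions workflow can invoke on each merge to automate releases.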