Software Engineer, VIDEO Platform Analytics

Comcast Denver, CO

About the Job

Business Unit:

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Software engineering and data analysis skills, combined with the demands of a high-volume, highly visible analytics platform, make this an exciting challenge for the right candidate.

Are you passionate about digital media, entertainment, and software services? Do you like big challenges and working within a highly motivated team environment?

As a software engineer on the Video Platform Analytics (VPA) team, you will research, develop, support, and deploy solutions within the Hadoop ecosystem and real-time distributed computing architectures. You will also employ your skills to deliver insights into customer and network behavior on a rapidly growing video-over-IP platform. The VPA team is a small, fast-moving team of talented engineers innovating in end-to-end video delivery. We are a team that thrives on big challenges, results, quality, and agility.

Who does the big data software engineer work with?

VPA software engineering is a diverse collection of professionals who work with a variety of teams: other software engineering teams whose software integrates with analytics services, service delivery engineers who support our product, testers, operational stakeholders with all manner of information needs, and executives who rely on big data for ad hoc reports and analytical dashboards. We are often called upon in a pinch to provide the answer to a question that nobody else can.

What are some interesting problems you'll be working on?

Develop systems capable of processing billions of events per day, providing both a real-time and a historical view into the operation of our video-over-IP systems. Design data collection and enrichment system components for scale and reliability. Work on high-performance in-memory real-time data stores and a massive historical data store built on Hadoop.
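To make the real-time side of this concrete, here is a toy sketch of windowed per-key event counting, the kind of aggregation that underpins a live operational view. This is purely illustrative and not Comcast's actual pipeline; in production this role would be played by a stream processor such as Spark over Kafka, and the class and event names here are invented for the example.

```python
from collections import deque, Counter
import time

class WindowedEventCounter:
    """Toy sliding-window counter: per-key event counts over the last N seconds.

    Illustrative only -- a real deployment would use a distributed stream
    processor (e.g. Spark Streaming consuming from Kafka) rather than a
    single in-process structure like this.
    """

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()    # (timestamp, key) pairs, oldest first
        self.counts = Counter()  # live per-key counts within the window

    def record(self, key, ts=None):
        ts = time.time() if ts is None else ts
        self._evict(ts)
        self.events.append((ts, key))
        self.counts[key] += 1

    def count(self, key, now=None):
        self._evict(time.time() if now is None else now)
        return self.counts[key]

    def _evict(self, now):
        # Drop events that have fallen out of the window.
        while self.events and self.events[0][0] <= now - self.window:
            _, key = self.events.popleft()
            self.counts[key] -= 1
            if self.counts[key] == 0:
                del self.counts[key]

# Usage (timestamps injected for determinism; "play_start" is a made-up event):
c = WindowedEventCounter(window_seconds=60)
c.record("play_start", ts=0)
c.record("play_start", ts=30)
print(c.count("play_start", now=45))  # → 2 (both events inside the window)
print(c.count("play_start", now=70))  # → 1 (event at ts=0 has aged out)
```

The same idea scales out in a stream processor by partitioning events by key (e.g. via Kafka topic partitions) and keeping each partition's window state local to one worker.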

Optimize metrics gathering and reporting for performance, using the best open source or vendor tool for the job. Build rich visualizations that tell a compelling story with data. Provide operational reports and dashboards to enable day-to-day business decisions.

Where can you make an impact?

The VIDEO organization is building the core components needed to drive the next generation of television. Running this infrastructure, identifying trouble spots, and optimizing the overall user experience is a challenge that can only be met with a robust big data architecture capable of providing insights that would otherwise be drowned in a sea of data.

Success in this role is best enabled by a broad mix of skills and interests ranging from distributed systems software engineering prowess to the multidisciplinary field of data analysis/data science.

Responsibilities:

  • Develop solutions to Big Data problems using software from the Hadoop/Spark ecosystem

  • Develop solutions for real-time and offline event collection from various systems

  • Develop, maintain, and perform analysis within a real-time architecture supporting large amounts of data from various sources

  • Analyze massive amounts of data and help drive prototype ideas for new tools and data products

  • Design, build and support APIs and services that are exposed to other internal teams

  • Employ rigorous continuous delivery practices managed under an agile software development approach

  • Ensure a quality transition to production and solid production operation of the software

Here are some of the specific technologies we use:

  • Spark

  • Kafka

  • Hadoop

  • Flume

  • Storm

  • MemSQL

  • Java

  • Scala

  • SBT

  • Maven

  • Git

  • Jenkins

  • Splunk/Hunk

  • Circonus

  • Apache Pig

  • Unix/Linux

  • Redis/ElastiCache

  • AWS suite of applications

Skills & Requirements

  • 8+ years programming experience

  • Bachelor's or Master's in Computer Science or a related discipline

  • Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

  • 8+ years of JVM based programming as well as experience in code optimization and high-performance computing

  • Knowledge of Big Data related technologies and open source frameworks

  • Good current knowledge of Unix/Linux environments

  • Test-driven development/test automation, continuous integration, and deployment automation

  • Enjoy working with data: data analysis, data quality, reporting, and visualization

  • Good communicator, able to analyze complex issues and technologies and articulate them clearly and engagingly

  • Great design and problem solving skills, with a strong bias for architecting at scale

  • Adaptable, proactive and willing to take ownership

  • Keen attention to detail and high level of commitment

  • Comfortable working in a fast-paced agile environment. Requirements change quickly, and our team needs to constantly adapt to moving targets

Nice to haves:

  • Collection, transformation and enrichment with computing frameworks such as Spark

  • Messaging middleware or distributed queuing technologies such as Kafka

  • Understanding of and/or experience with serialization frameworks such as Thrift, Avro, Google Protocol Buffers, and Kryo preferred

  • Understanding of container technologies (Docker/Kubernetes)

  • MapReduce experience in Hadoop utilizing Pig, Hive, or other query/scripting technology

  • Distributed (HBase or Cassandra or equivalent) or NoSQL (e.g. Mongo) database experience

  • Scripting tools such as Python

  • Git, Maven, Jenkins, Nexus

  • Good understanding of any of the following: advanced mathematics, statistics, or probability

Comcast is an EOE/Veterans/Disabled/LGBT employer