The importance of data to business decisions, strategy, and behavior has grown dramatically in recent years. Predictive analytics, data mining, and machine learning give us new methods for analyzing massive data sets, and companies place real value on individuals who can understand and manipulate large data sets to produce informative outcomes.
Pivotal issues in mining massive data sets range from handling huge document databases and unbounded streams of data to mining large social networks and web graphs. An emphasis will be on MapReduce as a tool for creating parallel algorithms that can process very large amounts of data.
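To give a flavor of the MapReduce style of parallelism mentioned above, the sketch below simulates the classic word-count example in plain Python. The map, shuffle, and reduce phases are written as ordinary functions purely for illustration; a real framework such as Hadoop or Spark would distribute these steps across many machines.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle step: group all values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big ideas", "data mining at scale"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts["big"] == 2 and counts["data"] == 2
```

The key point is that the map and reduce functions are independent per input record and per key, which is what lets a framework run them in parallel over very large inputs.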
Practical hands-on experience entails designing algorithms for analyzing very large amounts of data and learning existing data mining and machine learning algorithms. Case studies provide first-hand insight into how big data problems and their solutions allow companies like Google to succeed in the market.
- Jeffrey Ullman, Stanford W. Ascherman Professor Emeritus, Engineering
- Big data systems like Hadoop, Spark and Hive
- Link analysis such as PageRank, spam detection and hubs-and-authorities
- Similarity search such as locality-sensitive hashing and random hyperplanes
- Stream data processing
- Algorithms for large-scale mining
- Large-scale machine learning
- Submodular function optimization
- Computational advertising
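As a small illustration of one topic from the list above, link analysis with PageRank can be sketched as a power iteration over a tiny web graph. The graph, damping factor, and iteration count below are illustrative assumptions, not part of the course materials.

```python
def pagerank(links, beta=0.85, iterations=50):
    """Power iteration on a small web graph given as {page: [outlinks]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a share of the (1 - beta) teleport probability.
        new = {p: (1 - beta) / n for p in pages}
        for page, outs in links.items():
            if outs:
                # A page splits its current rank evenly among its outlinks.
                share = beta * rank[page] / len(outs)
                for dest in outs:
                    new[dest] += share
            else:
                # Dangling node: spread its rank over all pages.
                for p in pages:
                    new[p] += beta * rank[page] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" is linked from both "a" and "b", so it ends up with the highest rank
```

In the course setting the same iteration would be expressed as a MapReduce or Spark job so it scales to web-sized graphs rather than a three-node example.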
Units: 3.0 - 4.0
Students enrolling under the non-degree option are required to take the course for 4.0 units.