The Machine Learning Podcast

Tobias Macey

This show goes behind the scenes of the tools, techniques, and applications of machine learning. Model training, feature engineering, running in production, career development... Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
Technology

Episodes

Strategies For Building A Product Using LLMs At DataChat
03-03-2024
Strategies For Building A Product Using LLMs At DataChat
Summary Large Language Models (LLMs) have rapidly captured the attention of the world with their impressive capabilities. Unfortunately, they are often unpredictable and unreliable. This makes building a product based on their capabilities a unique challenge. Jignesh Patel is building DataChat to bring the capabilities of LLMs to organizational analytics, allowing anyone to have conversations with their business data. In this episode he shares the methods that he is using to build a product on top of this constantly shifting set of technologies. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Jignesh Patel about working with LLMs; understanding how they work and how to build your own Interview Introduction How did you get involved in machine learning? Can you start by sharing some of the ways that you are working with LLMs currently? What are the business challenges involved in building a product on top of an LLM model that you don't own or control? In the current age of business, your data is often your strategic advantage. How do you avoid losing control of, or leaking that data while interfacing with a hosted LLM API? What are the technical difficulties related to using an LLM as a core element of a product when they are largely a black box? What are some strategies for gaining visibility into the inner workings or decision making rules for these models? What are the factors, whether technical or organizational, that might motivate you to build your own LLM for a business or product? Can you unpack what it means to "build your own" when it comes to an LLM? In your work at DataChat, how has the progression of sophistication in LLM technology impacted your own product strategy? What are the most interesting, innovative, or unexpected ways that you have seen LLMs/DataChat used? What are the most interesting, unexpected, or challenging lessons that you have learned while working with LLMs? When is an LLM the wrong choice? What do you have planned for the future of DataChat? Contact Info Website (https://jigneshpatel.org/) LinkedIn (https://www.linkedin.com/in/jigneshmpatel/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. 
Links DataChat (https://datachat.ai/) CMU == Carnegie Mellon University (https://www.cmu.edu/) SVM == Support Vector Machine (https://en.wikipedia.org/wiki/Support_vector_machine) Generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intelligence) Genomics (https://en.wikipedia.org/wiki/Genomics) Proteomics (https://en.wikipedia.org/wiki/Proteomics) Parquet (https://parquet.apache.org/) OpenAI Codex (https://openai.com/blog/openai-codex) LLama (https://en.wikipedia.org/wiki/LLaMA) Mistral (https://mistral.ai/) Google Vertex (https://cloud.google.com/vertex-ai) Langchain (https://www.langchain.com/) Retrieval Augmented Generation (https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/) Prompt Engineering (https://en.wikipedia.org/wiki/Prompt_engineering) Ensemble Learning (https://en.wikipedia.org/wiki/Ensemble_learning) XGBoost (https://xgboost.readthedocs.io/en/stable/) Catboost (https://catboost.ai/) Linear Regression (https://en.wikipedia.org/wiki/Linear_regression) COGS == Cost Of Goods Sold (https://www.investopedia.com/terms/c/cogs.asp) Bruce Schneier - AI And Trust (https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
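The links above mention retrieval augmented generation (RAG) and prompt engineering, two of the techniques commonly used to ground an LLM-backed analytics product in an organization's own data. The following is a rough, self-contained sketch of the RAG idea only, not DataChat's implementation; the embed function is a hypothetical stand-in for a real embedding model.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=256)

def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the question and keep the top k."""
    q = embed(question)
    scored = []
    for doc in documents:
        d = embed(doc)
        score = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, doc))
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n\n".join(retrieve(question, documents))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

Stuffing retrieved context into the prompt is one of the more common ways to keep answers tied to your own data when the underlying model is a black box you don't control.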
Improve The Success Rate Of Your Machine Learning Projects With bizML
18-02-2024
Improve The Success Rate Of Your Machine Learning Projects With bizML
Summary Machine learning is a powerful set of technologies, holding the potential to dramatically transform businesses across industries. Unfortunately, ML projects often fail to achieve their intended goals. This failure is due to a lack of collaboration and investment across technological and organizational boundaries. To help improve the success rate of machine learning projects, Eric Siegel developed the six-step bizML framework, outlining the steps needed to ensure that everyone understands the whole process of ML deployment. In this episode he shares the principles and promise of that framework and his motivation for encapsulating it in his book "The AI Playbook". Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Eric Siegel about how the bizML approach can help improve the success rate of your ML projects Interview Introduction How did you get involved in machine learning? Can you describe what bizML is and the story behind it? What are the key aspects of this approach that are different from the "industry standard" lifecycle of an ML project? What are the elements of your personal experience as an ML consultant that helped you develop the tenets of bizML? Who are the personas that need to be involved in an ML project to increase the likelihood of success? Who do you find to be best suited to "own" or "lead" the process? What are the organizational patterns that might hinder the work of delivering on the goals of an ML initiative? What are some of the misconceptions about the work involved in/capabilities of an ML model that you commonly encounter? What is your main goal in writing your book "The AI Playbook"? What are the most interesting, innovative, or unexpected ways that you have seen the bizML process in action? What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML projects and developing the bizML framework? When is bizML the wrong choice? What are the future developments in organizational and technical approaches to ML that will improve the success rate of AI projects? Contact Info LinkedIn (https://www.linkedin.com/in/predictiveanalytics/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links The AI Playbook (https://www.machinelearningkeynote.com/the-ai-playbook): Mastering the Rare Art of Machine Learning Deployment by Eric Siegel Predictive Analytics (https://www.machinelearningkeynote.com/predictive-analytics): The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel Columbia University (https://www.columbia.edu/) Machine Learning Week Conference (https://machinelearningweek.com/) Generative AI World (https://generativeaiworld.events/) Machine Learning Leadership and Practice Course (https://www.predictiveanalyticsworld.com/machinelearningweek/workshops/machine-learning-course/) Rexer Analytics (https://www.rexeranalytics.com/) KD Nuggets (https://www.kdnuggets.com/) CRISP-DM (https://en.wikipedia.org/wiki/Cross-industry_standard_process_for_data_mining) Random Forest (https://en.wikipedia.org/wiki/Random_forest) Gradient Descent (https://en.wikipedia.org/wiki/Gradient_descent) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Using Generative AI To Accelerate Feature Engineering At FeatureByte
11-02-2024
Using Generative AI To Accelerate Feature Engineering At FeatureByte
Summary One of the most time-consuming aspects of building a machine learning model is feature engineering. Generative AI offers the possibility of accelerating the discovery and creation of feature pipelines. In this episode Colin Priest explains how FeatureByte is applying generative AI models to the challenge of building and maintaining machine learning pipelines. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Colin Priest about applying generative AI to the task of building and deploying AI pipelines Interview Introduction How did you get involved in machine learning? Can you start by giving the 30,000-foot view of the steps involved in an AI pipeline? Understand the problem Feature ideation Feature engineering Experiment Optimize Productionize What are the stages of that process that are prone to repetition? What are the ways that teams typically try to automate those steps? What are the features of generative AI models that can be brought to bear on the design stage of an AI pipeline? What are the validation/verification processes that engineers need to apply to the generated suggestions? What are the opportunities/limitations for unit/integration style tests? What are the elements of developer experience that need to be addressed to make the gen AI capabilities an enhancement instead of a distraction? What are the interfaces through which the AI functionality can/should be exposed? What are the aspects of pipeline and model deployment that can benefit from generative AI functionality? What are the potential risk factors that need to be considered when evaluating the application of this functionality? What are the most interesting, innovative, or unexpected ways that you have seen generative AI used in the development and maintenance of AI pipelines? What are the most interesting, unexpected, or challenging lessons that you have learned while working on the application of generative AI to the ML workflow? When is generative AI the wrong choice? What do you have planned for the future of FeatureByte's AI copilot capabilities? Contact Info LinkedIn (https://www.linkedin.com/in/colinpriest/?originalSubdomain=sg) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links FeatureByte (https://featurebyte.com/) Generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intelligence) The Art of War (https://en.wikipedia.org/wiki/The_Art_of_War) OCR == Optical Character Recognition (https://en.wikipedia.org/wiki/Optical_character_recognition) Genetic Algorithm (https://en.wikipedia.org/wiki/Genetic_algorithm) Semantic Layer (https://en.wikipedia.org/wiki/Semantic_layer) Prompt Engineering (https://en.wikipedia.org/wiki/Prompt_engineering) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
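To make the feature engineering discussion above concrete, here is a minimal sketch of the kind of rolling aggregate feature a generative assistant might propose and a human would then review. The table, column names, and window size are hypothetical, and this is not FeatureByte's API.

```python
import pandas as pd

# Hypothetical event-level data: one row per customer transaction.
events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "event_time": pd.to_datetime([
        "2024-01-01", "2024-01-05", "2024-01-20", "2024-01-02", "2024-01-15"
    ]),
    "amount": [20.0, 35.0, 15.0, 100.0, 80.0],
})

def aggregate_features(df: pd.DataFrame, window: str = "14D") -> pd.DataFrame:
    """Derive per-customer rolling aggregates (count, sum, mean) over a trailing
    time window -- a typical candidate feature for review before productionizing."""
    df = df.sort_values("event_time").set_index("event_time")
    rolled = df.groupby("customer_id")["amount"].rolling(window)
    feats = rolled.agg(["count", "sum", "mean"]).add_prefix(f"amount_{window}_")
    return feats.reset_index()

print(aggregate_features(events))
```

The validation question raised in the interview applies directly here: a generated feature like this still needs checks for leakage, correct windowing, and stability before it enters a production pipeline.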
Learn And Automate Critical Business Workflows With 8Flow
28-01-2024
Learn And Automate Critical Business Workflows With 8Flow
Summary Every business develops their own specific workflows to address their internal organizational needs. Not all of them are properly documented, or even visible. Workflow automation tools have tried to reduce the manual burden involved, but they are rigid and require substantial investment of time to discover and develop the routines. Boaz Hecht co-founded 8Flow to iteratively discover and automate pieces of workflows, bringing visibility and collaboration to the internal organizational processes that keep the business running. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Boaz Hecht about using AI to automate customer support at 8Flow Interview Introduction How did you get involved in machine learning? Can you describe what 8Flow is and the story behind it? How does 8Flow compare to RPA tools that companies are using today? What are the opportunities for augmenting or integrating with RPA frameworks? What are the key selling points for the solution that you are building? (does AI sell? Or is it about the realized savings?) What are the sources of signal that you are relying on to build model features? Given the heterogeneity in tools and processes across customers, what are the common focal points that let you address the widest possible range of functionality? Can you describe how 8Flow is implemented? How have the design and goals evolved since you first started working on it? What are the model categories that are most relevant for process automation in your product? How have you approached the design and implementation of your MLOps workflow? (model training, deployment, monitoring, versioning, etc.) What are the open questions around product focus and system design that you are still grappling with? Given the relative recency of ML/AI as a profession and the massive growth in attention and activity, how are you addressing the challenge of obtaining and maximizing human talent? What are the most interesting, innovative, or unexpected ways that you have seen 8Flow used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on 8Flow? When is 8Flow the wrong choice? What do you have planned for the future of 8Flow? Contact Info LinkedIn (https://www.linkedin.com/in/boazhecht/) Personal Website (https://boaz.org/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. 
Links 8Flow (https://8flow.ai/) Robotic Process Automation (https://en.wikipedia.org/wiki/Robotic_process_automation) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Considering The Ethical Responsibilities Of ML And AI Engineers
28-01-2024
Considering The Ethical Responsibilities Of ML And AI Engineers
Summary Machine learning and AI applications hold the promise of drastically impacting every aspect of modern life. With that potential for profound change comes a responsibility for the creators of the technology to account for the ramifications of their work. In this episode Nicholas Cifuentes-Goodbody guides us through the minefields of social, technical, and ethical considerations that are necessary to ensure that this next generation of technical and economic systems are equitable and beneficial for the people that they impact. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Nicholas Cifuentes-Goodbody about the different elements of the machine learning workflow where ethics need to be considered Interview Introduction How did you get involved in machine learning? To start with, who is responsible for addressing the ethical concerns around AI? What are the different ways that AI can have positive or negative outcomes from an ethical perspective? What is the role of practitioners/individual contributors in the identification and evaluation of ethical impacts of their work? What are some utilities that are helpful in identifying and addressing bias in training data? How can practitioners address challenges of equity and accessibility in the delivery of AI products? What are some of the options for reducing the energy consumption for training and serving AI? What are the most interesting, innovative, or unexpected ways that you have seen ML teams incorporate ethics into their work? What are the most interesting, unexpected, or challenging lessons that you have learned while working on ethical implications of ML? What are some of the resources that you recommend for people who want to invest in their knowledge and application of ethics in the realm of ML? Contact Info WorldQuant University's Applied Data Science Lab (https://www.wqu.edu/) LinkedIn (https://www.linkedin.com/in/ncgoodbody/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. 
Links UNESCO Recommendation on the Ethics of Artificial Intelligence (https://unesdoc.unesco.org/ark:/48223/pf0000381137) European Union AI Act (https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence) How machine learning helps advance access to human rights information (https://www.youtube.com/watch?v=epaowz3pI40) Disinformation, Team Jorge (https://www.haaretz.com/israel-news/security-aviation/2022-11-16/ty-article-static-ext/the-israelis-destabilizing-democracy-and-disrupting-elections-worldwide/00000186-461e-d80f-abff-6e9e08b10000) China, AI, and Human Rights (https://fsi-live.s3.us-west-1.amazonaws.com/s3fs-public/snapshot_vi-_countering_the_rise_of_digital_authoritarianism_0.pdf) How China Is Using A.I. to Profile a Minority (https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html) Weapons of Math Destruction (https://g.co/kgs/diKJwm) Fairlearn (https://fairlearn.org/) AI Fairness 360 (https://aif360.res.ibm.com/) Allen Institute for AI NYT (https://www.nytimes.com/2023/10/19/technology/allen-institute-open-source-ai.html) Allen Institute for AI (https://allenai.org/) Transformers (https://huggingface.co/docs/transformers/index) AI4ALL (https://ai-4-all.org/) WorldQuant University (https://wqu.edu/) How to Make Generative AI Greener (https://hbr.org/2023/07/how-to-make-generative-ai-greener) Machine Learning Emissions Calculator (https://mlco2.github.io/impact/#compute) Practicing Trustworthy Machine Learning (https://learning.oreilly.com/library/view/practicing-trustworthy-machine/9781098120269/) Energy and Policy Considerations for Deep Learning (https://arxiv.org/abs/1906.02243) Natural Language Processing (https://en.wikipedia.org/wiki/Natural_language_processing) Trolley Problem (https://en.wikipedia.org/wiki/Trolley_problem) Protected Classes (https://en.wikipedia.org/wiki/Protected_group) fairlearn (https://fairlearn.org/) (scikit-learn) BERT Model (https://en.wikipedia.org/wiki/BERT_(language_model)) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
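One concrete example of the bias-measurement utilities referenced above (Fairlearn, AI Fairness 360) is a demographic parity check. The sketch below computes it from scratch with NumPy on a toy, made-up set of predictions; in practice you would reach for the maintained implementations in those libraries.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference between the highest and lowest positive-prediction rate across
    groups; 0.0 means every group receives positive predictions at the same rate."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Toy example: hypothetical binary classifier outputs and a sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.75 vs 0.25 -> 0.5
```

A single number like this is only a starting point; which fairness metric is appropriate depends on the application and on who bears the cost of errors.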
Build Intelligent Applications Faster With RelationalAI
31-12-2023
Build Intelligent Applications Faster With RelationalAI
Summary Building machine learning systems and other intelligent applications is a complex undertaking. This often requires retrieving data from a warehouse engine, adding an extra barrier to every workflow. The RelationalAI engine was built as a co-processor for your data warehouse that adds a greater degree of flexibility in the representation and analysis of the underlying information, simplifying the work involved. In this episode CEO Molham Aref explains how RelationalAI is designed, the capabilities that it adds to your data clouds, and how you can start using it to build more sophisticated applications on your data. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Molham Aref about RelationalAI and the principles behind it for powering intelligent applications Interview Introduction How did you get involved in machine learning? Can you describe what RelationalAI is and the story behind it? On your site you call your product an "AI Co-processor". Can you explain what you mean by that phrase? What are the primary use cases that you address with the RelationalAI product? What are the types of solutions that teams might build to address those problems in the absence of something like the RelationalAI engine? Can you describe the system design of RelationalAI? How have the design and goals of the platform changed since you first started working on it? For someone who is using RelationalAI to address a business need, what does the onboarding and implementation workflow look like? What is your design philosophy for identifying the balance between automating the implementation of certain categories of application (e.g. NER) vs. providing building blocks and letting teams assemble them on their own? What are the data modeling paradigms that teams should be aware of to make the best use of the RKGS platform and Rel language? What are the aspects of customer education that you find yourself spending the most time on? What are some of the most under-utilized or misunderstood capabilities of the RelationalAI platform that you think deserve more attention? What are the most interesting, innovative, or unexpected ways that you have seen the RelationalAI product used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on RelationalAI? When is RelationalAI the wrong choice? What do you have planned for the future of RelationalAI? Contact Info LinkedIn (https://www.linkedin.com/in/molham/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story.
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. Links RelationalAI (https://relational.ai/) Snowflake (https://www.snowflake.com/en/) AI Winter (https://en.wikipedia.org/wiki/AI_winter) BigQuery (https://cloud.google.com/bigquery) Gradient Descent (https://en.wikipedia.org/wiki/Gradient_descent) B-Tree (https://en.wikipedia.org/wiki/B-tree) Navigational Database (https://en.wikipedia.org/wiki/Navigational_database) Hadoop (https://hadoop.apache.org/) Teradata (https://www.teradata.com/) Worst Case Optimal Join (https://relational.ai/blog/worst-case-optimal-join-algorithms-techniques-results-and-open-problems) Semantic Query Optimization (https://relational.ai/blog/semantic-optimizer) Relational Algebra (https://en.wikipedia.org/wiki/Relational_algebra) HyperGraph (https://en.wikipedia.org/wiki/Hypergraph) Linear Algebra (https://en.wikipedia.org/wiki/Linear_algebra) Vector Database (https://en.wikipedia.org/wiki/Vector_database) Pathway (https://pathway.com/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/pathway-database-that-thinks-episode-334/) Pinecone (https://www.pinecone.io/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/pinecone-vector-database-similarity-search-episode-189/) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Building Better AI While Preserving User Privacy With TripleBlind
22-11-2023
Building Better AI While Preserving User Privacy With TripleBlind
Summary Machine learning and generative AI systems have produced truly impressive capabilities. Unfortunately, many of these applications are not designed with the privacy of end-users in mind. TripleBlind is a platform focused on embedding privacy preserving techniques in the machine learning process to produce more user-friendly AI products. In this episode Gharib Gharibi explains how the current generation of applications can be susceptible to leaking user data and how to counteract those trends. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Gharib Gharibi about the challenges of bias and data privacy in generative AI models Interview Introduction How did you get involved in machine learning? Generative AI has been gaining a lot of attention and speculation about its impact. What are some of the risks that these capabilities pose? What are the main contributing factors to their existing shortcomings? What are some of the subtle ways that bias in the source data can manifest? In addition to inaccurate results, there is also a question of how user interactions might be re-purposed and potential impacts on data and personal privacy. What are the main sources of risk? With the massive attention that generative AI has created and the perspectives that are being shaped by it, how do you see that impacting the general perception of other implementations of AI/ML? How can ML practitioners improve and convey the trustworthiness of their models to end users? What are the risks for the industry if generative models fall out of favor with the public? How does your work at Tripleblind help to encourage a conscientious approach to AI? What are the most interesting, innovative, or unexpected ways that you have seen data privacy addressed in AI applications? What are the most interesting, unexpected, or challenging lessons that you have learned while working on privacy in AI? When is TripleBlind the wrong choice? What do you have planned for the future of TripleBlind? Contact Info LinkedIn (https://www.linkedin.com/in/ggharibi/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. 
Links TripleBlind (https://tripleblind.ai/) ImageNet (https://scholar.google.com/citations?view_op=view_citation&hl=en&user=JicYPdAAAAAJ&citation_for_view=JicYPdAAAAAJ:VN7nJs4JPk0C) Geoffrey Hinton Paper BERT (https://en.wikipedia.org/wiki/BERT_(language_model)) language model Generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intelligence) GPT == Generative Pre-trained Transformer (https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) HIPAA Safe Harbor Rules (https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html) Federated Learning (https://en.wikipedia.org/wiki/Federated_learning) Differential Privacy (https://en.wikipedia.org/wiki/Differential_privacy) Homomorphic Encryption (https://en.wikipedia.org/wiki/Homomorphic_encryption) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
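Differential privacy, linked above, is one of the techniques discussed for limiting what an individual record can leak from a trained system. A minimal sketch of the Laplace mechanism for a counting query looks like the following; the numbers are made up, and production systems additionally track the privacy budget across many queries.

```python
from typing import Optional
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: Optional[np.random.Generator] = None) -> float:
    """Release a noisy version of a query result. The noise scale sensitivity/epsilon
    is the classic Laplace mechanism: smaller epsilon means stronger privacy and
    noisier answers."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count of 42 patients matching some criterion.
# Counting queries have sensitivity 1 (one person changes the count by at most 1).
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count, 2))
```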
Enhancing The Abilities Of Software Engineers With Generative AI At Tabnine
13-11-2023
Enhancing The Abilities Of Software Engineers With Generative AI At Tabnine
Summary Software development involves an interesting balance of creativity and repetition of patterns. Generative AI has accelerated the ability of developer tools to provide useful suggestions that speed up the work of engineers. Tabnine is one of the main platforms offering an AI powered assistant for software engineers. In this episode Eran Yahav shares the journey that he has taken in building this product and the ways that it enhances the ability of humans to get their work done, and when the humans have to adapt to the tool. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Eran Yahav about building an AI powered developer assistant at Tabnine Interview Introduction How did you get involved in machine learning? Can you describe what Tabnine is and the story behind it? What are the individual and organizational motivations for using AI to generate code? What are the real-world limitations of generative AI for creating software? (e.g. size/complexity of the outputs, naming conventions, etc.) What are the elements of skepticism/oversight that developers need to exercise while using a system like Tabnine? What are some of the primary ways that developers interact with Tabnine during their development workflow? Are there any particular styles of software for which an AI is more appropriate/capable? (e.g. webapps vs. data pipelines vs. exploratory analysis, etc.) For natural languages there is a strong bias toward English in the current generation of LLMs. How does that translate into computer languages? (e.g. Python, Java, C++, etc.) Can you describe the structure and implementation of Tabnine? Do you rely primarily on a single core model, or do you have multiple models with subspecialization? How have the design and goals of the product changed since you first started working on it? What are the biggest challenges in building a custom LLM for code? What are the opportunities for specialization of the model architecture given the highly structured nature of the problem domain? For users of Tabnine, how do you assess/monitor the accuracy of recommendations? What are the feedback and reinforcement mechanisms for the model(s)? What are the most interesting, innovative, or unexpected ways that you have seen Tabnine's LLM powered coding assistant used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI assisted development at Tabnine? When is an AI developer assistant the wrong choice? What do you have planned for the future of Tabnine? Contact Info LinkedIn (https://www.linkedin.com/in/eranyahav/?originalSubdomain=il) Website (https://csaws.cs.technion.ac.il/~yahave/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! 
Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. Links TabNine (https://www.tabnine.com/) Technion University (https://www.technion.ac.il/en/home-2/) Program Synthesis (https://en.wikipedia.org/wiki/Program_synthesis) Context Stuffing (http://gptprompts.wikidot.com/context-stuffing) Elixir (https://elixir-lang.org/) Dependency Injection (https://en.wikipedia.org/wiki/Dependency_injection) COBOL (https://en.wikipedia.org/wiki/COBOL) Verilog (https://en.wikipedia.org/wiki/Verilog) MidJourney (https://www.midjourney.com/home) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Validating Machine Learning Systems For Safety Critical Applications With Ketryx
08-11-2023
Validating Machine Learning Systems For Safety Critical Applications With Ketryx
Summary Software systems power much of the modern world. For applications that impact the safety and well-being of people there is an extra set of precautions that need to be addressed before deploying to production. If machine learning and AI are part of that application then there is a greater need to validate the proper functionality of the models. In this episode Erez Kaminski shares the work that he is doing at Ketryx to make that validation easier to implement and incorporate into the ongoing maintenance of software and machine learning products. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Erez Kaminski about using machine learning in safety critical and highly regulated medical applications Interview Introduction How did you get involved in machine learning? Can you start by describing some of the regulatory burdens placed on ML teams who are building solutions for medical applications? How do these requirements impact the development and validation processes of model design and development? What are some examples of the procedural and record-keeping aspects of the machine learning workflow that are required for FDA compliance? What are the opportunities for automating pieces of that overhead? Can you describe what you are doing at Ketryx to streamline the development/training/deployment of ML/AI applications for medical use cases? What are the ideas/assumptions that you had at the start of Ketryx that have been challenged/updated as you work with customers? What are the most interesting, innovative, or unexpected ways that you have seen ML used in medical applications? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Ketryx? When is Ketryx the wrong choice? What do you have planned for the future of Ketryx? Contact Info Email (mailto:info@ketryx.com) LinkedIn (https://www.linkedin.com/in/erezkaminski/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers. 
Links Ketryx (https://www.ketryx.com/) Wolfram Alpha (https://www.wolframalpha.com/) Mathematica (https://www.wolfram.com/mathematica/) Tensorflow (https://www.tensorflow.org/) SBOM == Software Bill Of Materials (https://www.cisa.gov/sbom) Air-gapped Systems (https://en.wikipedia.org/wiki/Air_gap_(networking)) AlexNet (https://en.wikipedia.org/wiki/AlexNet) Shapley Values (https://c3.ai/glossary/data-science/shapley-values/) SHAP (https://github.com/shap/shap) Podcast.__init__ Episode (https://www.pythonpodcast.com/shap-explainable-machine-learning-episode-335/) Bayesian Statistics (https://en.wikipedia.org/wiki/Bayesian_inference) Causal Modeling (https://en.wikipedia.org/wiki/Causal_inference) Prophet (https://facebook.github.io/prophet/) FDA Principles Of Software Validation (https://www.fda.gov/regulatory-information/search-fda-guidance-documents/general-principles-software-validation) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
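The Shapley value and SHAP links above relate to the kind of explainability evidence that regulated deployments often require. As a generic illustration only (not Ketryx's workflow; it assumes the shap, xgboost, and scikit-learn packages are installed), per-prediction feature attributions for a tree model can be produced like this:

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Fit a small tree ensemble on a public dataset standing in for clinical features.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# TreeExplainer computes Shapley value attributions for tree ensembles,
# giving a per-feature contribution for every individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values.shape)  # one attribution per feature for each of the 5 rows
```

Attributions like these are artifacts that can be versioned and reviewed alongside the model as part of a validation record.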
Applying Declarative ML Techniques To Large Language Models For Better Results
24-10-2023
Applying Declarative ML Techniques To Large Language Models For Better Results
Summary Large language models have gained a substantial amount of attention in the area of AI and machine learning. While they are impressive, there are many applications where they are not the best option. In this episode Piero Molino explains how declarative ML approaches allow you to make the best use of the available tools across use cases and data formats. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Piero Molino about the application of declarative ML in a world being dominated by large language models Interview Introduction How did you get involved in machine learning? Can you start by summarizing your perspective on the effect that LLMs are having on the AI/ML industry? In a world where LLMs are being applied to a growing variety of use cases, what are the capabilities that they still lack? How does declarative ML help to address those shortcomings? The majority of current hype is about commercial models (e.g. GPT-4). Can you summarize the current state of the ecosystem for open source LLMs? For teams who are investing in ML/AI capabilities, what are the sources of platform risk for LLMs? What are the comparative benefits of using a declarative ML approach? What are the most interesting, innovative, or unexpected ways that you have seen LLMs used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on declarative ML in the age of LLMs? When is an LLM the wrong choice? What do you have planned for the future of declarative ML and Predibase? Contact Info LinkedIn (https://www.linkedin.com/in/pieromolino/?locale=en_US) Website (https://w4nderlu.st/) Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? 
Links Predibase (https://predibase.com/) Podcast Episode (https://www.themachinelearningpodcast.com/predibase-declarative-machine-learning-episode-4) Ludwig (https://ludwig.ai/latest/) Podcast.__init__ Episode (https://www.pythonpodcast.com/ludwig-horovod-distributed-declarative-deep-learning-episode-341/) Recommender Systems (https://en.wikipedia.org/wiki/Recommender_system) Information Retrieval (https://en.wikipedia.org/wiki/Information_retrieval) Vector Database (https://thenewstack.io/what-is-a-real-vector-database/) Transformer Model (https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)) BERT (https://en.wikipedia.org/wiki/BERT_(language_model)) Context Windows (https://www.linkedin.com/pulse/whats-context-window-anyway-caitie-doogan-phd/) LLAMA (https://en.wikipedia.org/wiki/LLaMA) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
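For a sense of what "declarative ML" means in practice, Ludwig (linked above) expresses a model as a configuration of input and output features rather than hand-written training code. The sketch below follows Ludwig's documented configuration style, but the dataset, column names, and exact return values are illustrative and should be checked against the version you use.

```python
from ludwig.api import LudwigModel

# Declarative configuration: describe the columns and their types, not the training loop.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},
        {"name": "product_category", "type": "category"},
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},
    ],
}

model = LudwigModel(config)
# Train on a (hypothetical) CSV file; Ludwig handles preprocessing, model assembly,
# and the training loop from the declaration above.
train_stats, _, output_dir = model.train(dataset="reviews.csv")
```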
Surveying The Landscape Of AI and ML From An Investor's Perspective
15-10-2023
Surveying The Landscape Of AI and ML From An Investor's Perspective
Summary Artificial Intelligence is experiencing a renaissance in the wake of breakthrough natural language models. With new businesses sprouting up to address the various needs of ML and AI teams across the industry, it is a constant challenge to stay informed. Matt Turck has been compiling a report on the state of ML, AI, and Data for his work at FirstMark Capital. In this episode he shares his findings on the ML and AI landscape and the interesting trends that are developing. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. As more people start using AI for projects, two things are clear: It’s a rapidly advancing field, but it’s tough to navigate. How can you get the best results for your use case? Instead of being subjected to a bunch of buzzword bingo, hear directly from pioneers in the developer and data science space on how they use graph tech to build AI-powered apps. Attend the dev and ML talks at NODES 2023, a free online conference on October 26 featuring some of the brightest minds in tech. Check out the agenda and register today at Neo4j.com/NODES (https://Neo4j.com/NODES). Your host is Tobias Macey and today I'm interviewing Matt Turck about his work on the MAD (ML, AI, and Data) landscape and the insights he has gained on the ML ecosystem Interview Introduction How did you get involved in machine learning? Can you describe what the MAD landscape project is and the story behind it? What are the major changes in the ML ecosystem that you have seen since you first started compiling the landscape? How have the developments in consumer-grade AI in recent years changed the business opportunities for ML/AI? What are the coarse divisions that you see as the boundaries that define the different categories for ML/AI in the landscape? For ML infrastructure products/companies, what are the biggest challenges that they face in engineering and customer acquisition? What are some of the challenges in building momentum for startups in AI (existing moats around data access, talent acquisition, etc.)? For products/companies that have ML/AI as their core offering, what are some strategies that they use to compete with "big tech" companies that already have a large corpus of data? What do you see as the societal vs. business importance of open source models as AI becomes more integrated into consumer facing products? What are the most interesting, innovative, or unexpected ways that you have seen ML/AI used in business and social contexts? What are the most interesting, unexpected, or challenging lessons that you have learned while working on the ML/AI elements of the MAD landscape? When is ML/AI the wrong choice for businesses? What are the areas of ML/AI that you are paying closest attention to in your own work? Contact Info Website (https://mattturck.com/) @mattturck (https://twitter.com/mattturck) on Twitter Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers Links MAD Landscape (https://mad.firstmark.com/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/mad-landscape-2023-data-infrastructure-episode-369) First Mark Capital (https://firstmark.com/) Bayesian Techniques (https://en.wikipedia.org/wiki/Bayesian_inference) Hadoop (https://hadoop.apache.org/) ChatGPT (https://chat.openai.com/) AutoGPT (https://news.agpt.co/) Dataiku (https://www.dataiku.com/) Generative AI (https://generativeai.net/) Databricks (https://www.databricks.com/) MLOps (https://ml-ops.org/) OpenAI (https://openai.com/) Anthropic (https://www.anthropic.com/) DeepMind (https://www.deepmind.com/) BloombergGPT (https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/) HuggingFace (https://huggingface.co/) Jexi (https://www.imdb.com/title/tt9354944/) Movie "Her" (https://www.imdb.com/title/tt1798709/?ref_=fn_al_tt_1) Movie Synthesia (https://www.synthesia.io/) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Applying Federated Machine Learning To Sensitive Healthcare Data At Rhino Health
11-09-2023
Applying Federated Machine Learning To Sensitive Healthcare Data At Rhino Health
Summary A core challenge of machine learning systems is getting access to quality data. This often means centralizing information in a single system, but that is impractical in highly regulated industries, such as healthcare. To address this hurdle Rhino Health is building a platform for federated learning on health data, so that everyone can maintain data privacy while benefiting from AI capabilities. In this episode Ittai Dayan explains the barriers to ML in healthcare and how they have designed the Rhino platform to overcome them. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Ittai Dayan about using federated learning at Rhino Health to bring AI capabilities to the tightly regulated healthcare industry Interview Introduction How did you get involved in machine learning? Can you describe what Rhino Health is and the story behind it? What is federated learning and what are the trade-offs that it introduces? What are the benefits to healthcare and pharmaceutical organizations from using federated learning? What are some of the challenges that you face in validating that patient data is properly de-identified in the federated models? Can you describe what the Rhino Health platform offers and how it is implemented? How have the design and goals of the system changed since you started working on it? What are the technological capabilities that are needed for an organization to be able to start using Rhino Health to gain insights into their patient and clinical data? How have you approached the design of your product to reduce the effort to onboard new customers and solutions? What are some examples of the types of automation that you are able to provide to your customers? (e.g. medical diagnosis, radiology review, health outcome predictions, etc.) What are the ethical and regulatory challenges that you have had to address in the development of your platform? What are the most interesting, innovative, or unexpected ways that you have seen Rhino Health used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Rhino Health? When is Rhino Health the wrong choice? What do you have planned for the future of Rhino Health? Contact Info LinkedIn (https://www.linkedin.com/in/ittai-dayan/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story.
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers Links Rhino Health (https://www.rhinohealth.com/) Federated Learning (https://en.wikipedia.org/wiki/Federated_learning) Nvidia Clara (https://www.nvidia.com/en-us/clara/) Nvidia DGX (https://www.nvidia.com/en-us/data-center/dgx-platform/) Melloddy (https://www.melloddy.eu/) Flair NLP (https://github.com/flairNLP/flair) The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
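To make the federated learning idea above concrete, here is a from-scratch sketch of federated averaging (FedAvg) on a toy linear model: each site trains on its own data and only model weights travel to the coordinator. This is illustrative only and not how the Rhino Health platform is implemented.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One site's training step: plain gradient descent on a linear model,
    using only that site's data (which never leaves the site)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w: np.ndarray, sites: list) -> np.ndarray:
    """One FedAvg round: each site trains locally, the coordinator averages
    the returned weights, weighted by each site's sample count."""
    updates = [local_update(global_w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (40, 60, 25):  # three hospitals with different amounts of data
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_average(w, sites)
print(w)  # approaches [2, -1] without pooling any raw data
```

Real deployments layer de-identification checks, secure aggregation, and governance on top of this basic loop, which is where much of the platform work discussed in the episode lives.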
Using Machine Learning To Keep An Eye On The Planet
17-06-2023
Using Machine Learning To Keep An Eye On The Planet
Summary Satellite imagery has given us a new perspective on our world, but it is limited by the field of view for the cameras. Synthetic Aperture Radar (SAR) allows for collecting images through clouds and in the dark, giving us a more consistent means of collecting data. In order to identify interesting details in such a vast amount of data it is necessary to use the power of machine learning. ICEYE has a fleet of satellites continuously collecting information about our planet. In this episode Tapio Friberg shares how they are applying ML to that data set to provide useful insights about fires, floods, and other terrestrial phenomena. Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Tapio Friberg about building machine learning applications on top of SAR (Synthetic Aperture Radar) data to generate insights about our planet Interview Introduction How did you get involved in machine learning? Can you describe what ICEYE is and the story behind it? What are some of the applications of ML at ICEYE? What are some of the ways that SAR data poses a unique challenge to ML applications? What are some of the elements of the ML workflow that you are able to use "off the shelf" and where are the areas that you have had to build custom solutions? Can you share the structure of your engineering team and the role that the ML function plays in the larger organization? What does the end-to-end workflow for your ML model development and deployment look like? What are the operational requirements for your models? (e.g. batch execution, real-time, interactive inference, etc.) In the model definitions, what are the elements of the source domain that create the largest challenges? (e.g. noise from backscatter, variance in resolution, etc.) Once you have an output from an ML model how do you manage mapping between data domains to reflect insights from SAR sources onto a human understandable representation? Given that SAR data and earth imaging is still a very niche domain, how does that influence your ability to hire for open positions and the ways that you think about your contributions to the overall ML ecosystem? How can your work on using SAR as a representation of physical attributes help to improve capabilities in e.g. LIDAR, computer vision, etc.? What are the most interesting, innovative, or unexpected ways that you have seen ICEYE and SAR data used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML for SAR data? What do you have planned for the future of ML applications at ICEYE? Contact Info LinkedIn (https://www.linkedin.com/in/tapio-friberg-319212235/) Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ () covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com)) with your story. 
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links ICEYE (https://www.iceye.com/) SAR == Synthetic Aperture Radar (https://en.wikipedia.org/wiki/Synthetic-aperture_radar) Transfer Learning (https://en.wikipedia.org/wiki/Transfer_learning)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
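The links above mention transfer learning, which is the usual way to get traction on specialized imagery like SAR when labeled examples are scarce. The following is a minimal, hypothetical sketch (not ICEYE's pipeline): it adapts a pretrained ResNet from torchvision to single-channel SAR chips for an illustrative flood/no-flood label, freezing the pretrained backbone and training only the replaced layers.

import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and adapt the network to 1-channel SAR backscatter.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # illustrative classes: flood / no-flood

# Freeze everything except the newly initialized input and output layers.
for name, param in model.named_parameters():
    if not (name.startswith("conv1") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

chips = torch.randn(8, 1, 224, 224)   # stand-in for normalized SAR image chips
labels = torch.randint(0, 2, (8,))    # stand-in labels
optimizer.zero_grad()
loss = loss_fn(model(chips), labels)
loss.backward()
optimizer.step()

Only the two new layers are updated here; in practice you would unfreeze more of the backbone as labeled SAR data accumulates.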
The Role Of Model Development In Machine Learning Systems
29-05-2023
The Role Of Model Development In Machine Learning Systems
Summary The focus of machine learning projects has long been the model that is built in the process. As AI-powered applications grow in popularity and power, the model is just the beginning. In this episode Josh Tobin shares his experience from his time as a machine learning researcher up to his current work as a founder at Gantry, and the shift in focus from model development to machine learning systems.
Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Josh Tobin about the state of industry best practices for designing and building ML models
Interview Introduction How did you get involved in machine learning? Can you start by describing what a "traditional" process for building a model looks like? What are the forces that shaped those "best practices"? What are some of the practices that are still necessary/useful and what is becoming outdated? What are the changes in the ecosystem (tooling, research, communal knowledge, etc.) that are forcing teams to reconsider how they think about modeling? What are the most critical practices/capabilities for teams who are building services powered by ML/AI? What systems do they need to support them in those efforts? Can you describe what you are building at Gantry and how it aids in the process of developing/deploying/maintaining models with "modern" workflows? What are the most challenging aspects of building a platform that supports ML teams in their workflows? What are the most interesting, innovative, or unexpected ways that you have seen teams approach model development/validation? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Gantry? When is Gantry the wrong choice? What are some of the resources that you find most helpful to stay apprised of how modeling and ML practices are evolving?
Contact Info LinkedIn (https://www.linkedin.com/in/josh-tobin-4b3b10a9/) Website (http://josh-tobin.com/)
Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story.
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links Gantry (https://gantry.io/) Full Stack Deep Learning (https://fullstackdeeplearning.com/) OpenAI (https://openai.com/) Kaggle (https://www.kaggle.com/) NeurIPS == Neural Information Processing Systems Conference (https://nips.cc/) Caffe (https://caffe.berkeleyvision.org/) Theano (https://github.com/Theano/Theano) Deep Learning (https://en.wikipedia.org/wiki/Deep_learning) Regression Model (https://www.analyticsvidhya.com/blog/2022/01/different-types-of-regression-models/) scikit-learn (https://scikit-learn.org/) Large Language Model (https://en.wikipedia.org/wiki/Large_language_model) Foundation Models (https://en.wikipedia.org/wiki/Foundation_models) Cohere (https://cohere.com/) Federated Learning (https://en.wikipedia.org/wiki/Federated_learning) Feature Store (https://www.featurestore.org/) dbt (https://www.getdbt.com/)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Real-Time Machine Learning Has Entered The Realm Of The Possible
09-03-2023
Real-Time Machine Learning Has Entered The Realm Of The Possible
Summary Machine learning models have predominantly been built and updated in a batch modality. While this is operationally simpler, it doesn't always provide the best experience or capabilities for end users of the model. Tecton has been investing in the infrastructure and workflows that enable building and updating ML models with real-time data to allow you to react to real-world events as they happen. In this episode CTO Kevin Stumpf explores the benefits of real-time machine learning and the systems that are necessary to support the development and maintenance of those models.
Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Kevin Stumpf about the challenges and promise of real-time ML applications
Interview Introduction How did you get involved in machine learning? Can you describe what real-time ML is and some examples of where it might be applied? What are the operational and organizational requirements for being able to adopt real-time approaches for ML projects? What are some of the ways that real-time requirements influence the scale/scope/architecture of an ML model? What are some of the failure modes for real-time vs analytical or operational ML? Given the low latency between source/input data being generated or received and a prediction being generated, how does that influence susceptibility to e.g. data drift? Data quality and accuracy also become more critical. What are some of the validation strategies that teams need to consider as they move to real-time? What are the most interesting, innovative, or unexpected ways that you have seen real-time ML applied? What are the most interesting, unexpected, or challenging lessons that you have learned while working on real-time ML systems? When is real-time the wrong choice for ML? What do you have planned for the future of real-time support for ML in Tecton?
Contact Info LinkedIn (https://www.linkedin.com/in/kevinstumpf/) @kevinmstumpf (https://twitter.com/kevinmstumpf?lang=en) on Twitter
Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story.
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links Tecton (https://www.tecton.ai/) Podcast Episode (https://www.themachinelearningpodcast.com/tecton-machine-learning-feature-platform-episode-6/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/tecton-mlops-feature-store-episode-166/) Uber Michelangelo (https://www.uber.com/blog/michelangelo-machine-learning-platform/) Reinforcement Learning (https://en.wikipedia.org/wiki/Reinforcement_learning) Online Learning (https://en.wikipedia.org/wiki/Online_machine_learning) Random Forest (https://en.wikipedia.org/wiki/Random_forest) ChatGPT (https://openai.com/blog/chatgpt) XGBoost (https://xgboost.ai/) Linear Regression (https://en.wikipedia.org/wiki/Linear_regression) Train-Serve Skew (https://ploomber.io/blog/train-serve-skew/) Flink (https://flink.apache.org/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/apache-flink-with-fabian-hueske-episode-57/)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
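One of the recurring themes in real-time ML (and the reason "Train-Serve Skew" appears in the links above) is that features must be computed the same way offline and online. Here is a minimal, tool-agnostic sketch of that idea, with illustrative names rather than Tecton's actual API:

from datetime import datetime, timedelta

def recent_purchase_count(purchase_timestamps, as_of, window_hours=24):
    # Count purchases in the trailing window ending at `as_of`.
    cutoff = as_of - timedelta(hours=window_hours)
    return sum(1 for ts in purchase_timestamps if cutoff < ts <= as_of)

def build_training_row(user_history, label_time):
    # Offline path: features are computed as of each historical label timestamp.
    return {"recent_purchases": recent_purchase_count(user_history, label_time)}

def build_serving_row(user_history):
    # Online path: the same function, evaluated "now" against fresh events.
    return {"recent_purchases": recent_purchase_count(user_history, datetime.utcnow())}

Because both paths share one transformation, the model sees the same feature semantics at training time and at prediction time; a feature platform automates this sharing along with low-latency retrieval.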
How Shopify Built A Machine Learning Platform That Encourages Experimentation
02-02-2023
How Shopify Built A Machine Learning Platform That Encourages Experimentation
Summary Shopify uses machine learning to power multiple features in their platform. In order to reduce the amount of effort required to develop and deploy models they have invested in building an opinionated platform for their engineers. They have gone through multiple iterations of the platform and their most recent version is called Merlin. In this episode Isaac Vidas shares the use cases that they are optimizing for, how it integrates into the rest of their data platform, and how they have designed it to let machine learning engineers experiment freely and safely.
Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Isaac Vidas about his work on the ML platform used by Shopify
Interview Introduction How did you get involved in machine learning? Can you describe what Shopify is and some of the ways that you are using ML at Shopify? What are the challenges that you have encountered as an organization in applying ML to your business needs? Can you describe how you have designed your current technical platform for supporting ML workloads? Who are the target personas for this platform? What does the workflow look like for a given data scientist/ML engineer/etc.? What are the capabilities that you are trying to optimize for in your current platform? What are some of the previous iterations of ML infrastructure and process that you have built? What are the most useful lessons that you gathered from those previous experiences that informed your current approach? How have the capabilities of the Merlin platform influenced the ways that ML is viewed and applied across Shopify? What are the most interesting, innovative, or unexpected ways that you have seen Merlin used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Merlin? When is Merlin the wrong choice? What do you have planned for the future of Merlin?
Contact Info @kazuarous (https://twitter.com/kazuarous) on Twitter LinkedIn (https://www.linkedin.com/in/isaac-vidas/) kazuar (https://github.com/kazuar) on GitHub
Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story.
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links Shopify (https://www.shopify.com/) Shopify Merlin (https://shopify.engineering/merlin-shopify-machine-learning-platform) Vertex AI (https://cloud.google.com/vertex-ai) scikit-learn (https://scikit-learn.org/stable/) XGBoost (https://xgboost.ai/) Ray (https://docs.ray.io/en/latest/) Podcast.__init__ Episode (https://www.pythonpodcast.com/ray-distributed-computing-episode-258/) PySpark (https://spark.apache.org/docs/latest/api/python/) GPT-3 (https://en.wikipedia.org/wiki/GPT-3) ChatGPT (https://openai.com/blog/chatgpt/) Google AI (https://ai.google/) PyTorch (https://pytorch.org/) Podcast.__init__ Episode (https://www.pythonpodcast.com/pytorch-deep-learning-epsiode-202/) Dask (https://www.dask.org/) Modin (https://modin.readthedocs.io/en/stable/) Podcast.__init__ Episode (https://www.pythonpodcast.com/modin-parallel-dataframe-episode-324/) Flink (https://flink.apache.org/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/apache-flink-with-fabian-hueske-episode-57/) Feast Feature Store (https://feast.dev/) Kubernetes (https://kubernetes.io/)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
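Ray appears in the links above as one of the building blocks under a platform like Merlin. As a rough illustration of the primitive involved (generic Ray usage, not Shopify's code), remote functions let many training or preprocessing tasks run in parallel across a cluster:

import ray

ray.init()  # on a real cluster this connects to an existing Ray head node

@ray.remote
def train_one_fold(fold_id, params):
    # Placeholder for a real training routine; returns a fake metric.
    return {"fold": fold_id, "score": 1.0 / (1 + fold_id), "params": params}

params = {"learning_rate": 0.1, "max_depth": 6}
futures = [train_one_fold.remote(i, params) for i in range(4)]
print(ray.get(futures))  # blocks until all remote tasks finish
ray.shutdown()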
Applying Machine Learning To The Problem Of Bad Data At Anomalo
24-01-2023
Applying Machine Learning To The Problem Of Bad Data At Anomalo
Summary All data systems are subject to the "garbage in, garbage out" problem. For machine learning applications bad data can lead to unreliable models and unpredictable results. Anomalo is a product designed to alert on bad data by applying machine learning models to various storage and processing systems. In this episode Jeremy Stanley discusses the various challenges that are involved in building useful and reliable machine learning models with unreliable data and the interesting problems that they are solving in the process.
Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Your host is Tobias Macey and today I'm interviewing Jeremy Stanley about his work at Anomalo, applying ML to the problem of data quality monitoring
Interview Introduction How did you get involved in machine learning? Can you describe what Anomalo is and the story behind it? What are some of the ML approaches that you are using to address challenges with data quality/observability? What are some of the difficulties posed by your application of ML technologies on data sets that you don't control? How does the scale and quality of data that you are working with influence/constrain the algorithmic approaches that you are using to build and train your models? How have you implemented the infrastructure and workflows that you are using to support your ML applications? What are some of the ways that you are addressing data quality challenges in your own platform? What are the opportunities that you have for dogfooding your product? What are the most interesting, innovative, or unexpected ways that you have seen Anomalo used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Anomalo? When is Anomalo the wrong choice? What do you have planned for the future of Anomalo?
Contact Info @jeremystan (https://twitter.com/jeremystan) on Twitter LinkedIn (https://www.linkedin.com/in/jeremystanley/)
Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story. To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links Anomalo (https://www.anomalo.com/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/anomalo-data-quality-platform-episode-256/) Partial Differential Equations (https://en.wikipedia.org/wiki/Partial_differential_equation) Neural Network (https://en.wikipedia.org/wiki/Neural_network) Neural Networks For Pattern Recognition (https://amzn.to/3k0Mpv8) by Christopher M. Bishop (affiliate link) Gradient Boosted Decision Trees (https://developers.google.com/machine-learning/decision-forests/intro-to-gbdt) Shapley Values (https://christophm.github.io/interpretable-ml-book/shapley.html) Sentry (https://sentry.io) dbt (https://www.getdbt.com/) Altair (https://altair-viz.github.io/)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
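The links above point at gradient boosted decision trees and Shapley values, which suggests one common pattern for ML-driven data quality checks: train a classifier to distinguish the newest slice of a table from its history, and treat separability as a drift signal. This is a hedged, generic sketch of that pattern (not necessarily Anomalo's implementation), using scikit-learn and synthetic data:

import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
historical = rng.normal(size=(5000, 4))
today = rng.normal(size=(500, 4))
today[:, 2] += 1.5  # simulate a shift in one column of today's data

X = np.vstack([historical, today])
y = np.concatenate([np.zeros(len(historical)), np.ones(len(today))])
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = HistGradientBoostingClassifier(random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"today vs. history AUC: {auc:.2f}")  # ~0.5 means no detectable change; near 1.0 means drift

Shapley values (for example via the shap package) can then attribute a high AUC to the specific columns that changed, which is what turns a raw anomaly score into an explanation a data owner can act on.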
Build More Reliable Machine Learning Systems With The Dagster Orchestration Engine
02-12-2022
Build More Reliable Machine Learning Systems With The Dagster Orchestration Engine
Summary Building a machine learning model one time can be done in an ad-hoc manner, but if you ever want to update it and serve it in production you need a way of repeating a complex sequence of operations. Dagster is an orchestration engine that understands the data that it is manipulating so that you can move beyond coarse task-based representations of your dependencies. In this episode Sandy Ryza explains how his background in machine learning has informed his work on the Dagster project and the foundational principles that it is built on to allow for collaboration across data engineering and machine learning concerns.
Interview Introduction How did you get involved in machine learning? Can you start by sharing a definition of "orchestration" in the context of machine learning projects? What is your assessment of the state of the orchestration ecosystem as it pertains to ML? How do you handle modeling cycles and manage experiment iterations in the execution graph? How do you balance flexibility with repeatability? What are the most interesting, innovative, or unexpected ways that you have seen orchestration implemented/applied for machine learning? What are the most interesting, unexpected, or challenging lessons that you have learned while working on orchestration of ML workflows? When is Dagster the wrong choice? What do you have planned for the future of ML support in Dagster?
Contact Info LinkedIn (https://www.linkedin.com/in/sandyryza/) @s_ryz (https://twitter.com/s_ryz) on Twitter sryza (https://github.com/sryza) on GitHub
Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast (https://www.dataengineeringpodcast.com) covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.themachinelearningpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com (mailto:hosts@themachinelearningpodcast.com) with your story.
To help other people find the show please leave a review on iTunes (https://podcasts.apple.com/us/podcast/the-machine-learning-podcast/id1626358243) and tell your friends and co-workers.
Links Dagster (https://dagster.io/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/dagster-software-defined-assets-data-orchestration-episode-309/) Cloudera (https://www.cloudera.com/) Hadoop (https://hadoop.apache.org/) Apache Spark (https://spark.apache.org/) Peter Norvig (https://en.wikipedia.org/wiki/Peter_Norvig) Josh Wills (https://www.linkedin.com/in/josh-wills-13882b/) REPL == Read Eval Print Loop (https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop) RStudio (https://posit.co/) Memoization (https://en.wikipedia.org/wiki/Memoization) MLFlow (https://mlflow.org/) Kedro (https://kedro.readthedocs.io/en/stable/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/kedro-data-pipeline-episode-100/) Metaflow (https://metaflow.org/) Podcast.__init__ Episode (https://www.pythonpodcast.com/metaflow-machine-learning-operations-episode-274/) Kubeflow (https://www.kubeflow.org/) dbt (https://www.getdbt.com/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/dbt-data-analytics-episode-81/) Airbyte (https://airbyte.com/) Data Engineering Podcast Episode (https://www.dataengineeringpodcast.com/airbyte-open-source-data-integration-episode-173/)
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano (https://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Tales_Of_A_Dead_Fish/Hitmans_Lovesong/) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/)/CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
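The "software-defined assets" episode linked above is the clearest expression of what it means for an orchestrator to understand the data it manipulates. Here is a minimal sketch using Dagster's asset API (the asset names are illustrative, not from the episode):

from dagster import asset, materialize

@asset
def raw_events():
    # In practice this would pull from a warehouse or object store.
    return [{"user": i, "clicks": i % 3} for i in range(100)]

@asset
def features(raw_events):
    # Dagster wires the dependency from the parameter name matching the upstream asset.
    return [{"user": r["user"], "clicked": int(r["clicks"] > 0)} for r in raw_events]

@asset
def trained_model(features):
    # Stand-in for a real training step; returns a trivial "model".
    positive_rate = sum(f["clicked"] for f in features) / len(features)
    return {"predict_positive_rate": positive_rate}

if __name__ == "__main__":
    result = materialize([raw_events, features, trained_model])
    assert result.success

Because the graph is expressed in terms of data artifacts rather than opaque tasks, retraining a model or backfilling features becomes a question of which assets are stale, which is the collaboration point between data engineering and ML that the episode discusses.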
Solve The Cold Start Problem For Machine Learning By Letting Humans Teach The Computer With Aitomatic
28-09-2022
Solve The Cold Start Problem For Machine Learning By Letting Humans Teach The Computer With Aitomatic
Summary Machine learning is a data-hungry approach to problem solving. Unfortunately, there are a number of problems that would benefit from the automation provided by artificial intelligence capabilities that don't come with troves of data to build from. Christopher Nguyen and his team at Aitomatic are working to address the "cold start" problem for ML by letting humans generate models by sharing their expertise through natural language. In this episode he explains how that works, the various ways that we can start to layer machine learning capabilities on top of each other, as well as the risks involved in doing so without incorporating lessons learned in the growth of the software industry.
Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out! Your host is Tobias Macey and today I'm interviewing Christopher Nguyen about how to address the cold start problem for ML/AI projects
Interview Introduction How did you get involved in machine learning? Can you describe what the "cold start" or "small data" problem is and its impact on an organization's ability to invest in machine learning? What are some examples of use cases where ML is a viable solution but there is a corresponding lack of usable data? How does the model design influence the data requirements to build it? (e.g. statistical model vs. deep learning, etc.) What are the available options for addressing a lack of data for ML? What are the characteristics of a given data set that make it suitable for ML use cases? Can you describe what you are building at Aitomatic and how it helps to address the cold start problem? How have the design and goals of the product changed since you first started working on it? What are some of the education challenges that you face when working with organizations to help them understand how to think about ML/AI investment and practical limitations? What are the most interesting, innovative, or unexpected ways that you have seen Aitomatic/H1st used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Aitomatic/H1st? When is a human/knowledge driven approach to ML development the wrong choice? What do you have planned for the future of Aitomatic?
Contact Info LinkedIn, @pentagoniac on Twitter, Google Scholar
Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it!
Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links Aitomatic, Human First AI, Knowledge First World Symposium, Atari 800, Cold start problem, Scale AI, Snorkel AI (Podcast Episode), Anomaly Detection, Expert Systems, ICML == International Conference on Machine Learning, NIST == National Institute of Standards and Technology, Multi-modal Model, SVM == Support Vector Machine, TensorFlow, PyTorch (Podcast.__init__ Episode), OSS Capital, DALL-E
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
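Snorkel AI and expert systems appear in the links above, and they hint at one concrete way human knowledge can stand in for missing training data: experts write labeling rules, the rules produce noisy labels, and a model trained on those labels generalizes past them. The following is a generic weak-supervision sketch under hypothetical names, not Aitomatic's H1st API:

import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_rule(temperature_c, vibration_mm_s):
    # Hypothetical domain heuristic for flagging equipment faults.
    if temperature_c > 85 or vibration_mm_s > 7.0:
        return 1   # fault
    if temperature_c < 60 and vibration_mm_s < 2.0:
        return 0   # healthy
    return -1      # abstain: the rule is not confident

rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(70, 15, 2000), rng.normal(3, 2.5, 2000)])
weak_labels = np.array([expert_rule(t, v) for t, v in X])

# Train only where the rule produced a label, then let the model generalize to the
# ambiguous middle ground the rule abstained on.
mask = weak_labels != -1
model = LogisticRegression().fit(X[mask], weak_labels[mask])
print("predictions on abstained rows:", model.predict(X[~mask])[:10])

The model's decision boundary is only as good as the knowledge the rules encode at first, but it gives a team something to deploy and improve as real labeled data starts to arrive.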
Convert Your Unstructured Data To Embedding Vectors For More Efficient Machine Learning With Towhee
21-09-2022
Convert Your Unstructured Data To Embedding Vectors For More Efficient Machine Learning With Towhee
Summary Data is one of the core ingredients for machine learning, but the format in which it is understandable to humans is not a useful representation for models. Embedding vectors are a way to structure data in a way that is native to how models interpret and manipulate information. In this episode Frank Liu shares how the Towhee library simplifies the work of translating your unstructured data assets (e.g. images, audio, video, etc.) into embeddings that you can use efficiently for machine learning, and how it fits into your workflow for model development.
Announcements Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery. Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model's behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started! Your host is Tobias Macey and today I'm interviewing Frank Liu about how to use vector embeddings in your ML projects and how Towhee can reduce the effort involved
Interview Introduction How did you get involved in machine learning? Can you describe what Towhee is and the story behind it? What is the problem that Towhee is aimed at solving? What are the elements of generating vector embeddings that pose the greatest challenge or require the most effort? Once you have an embedding, what are some of the ways that it might be used in a machine learning project? Are there any design considerations that need to be addressed in the form that an embedding takes and how it impacts the resultant model that relies on it? (whether for training or inference) Can you describe how the Towhee framework is implemented? What are some of the interesting engineering challenges that needed to be addressed? How have the design/goals/scope of the project shifted since it began? What is the workflow for someone using Towhee in the context of an ML project? What are some of the types of optimizations that you have incorporated into Towhee? What are some of the scaling considerations that users need to be aware of as they increase the volume or complexity of data that they are processing? What are some of the ways that using Towhee impacts the way a data scientist or ML engineer approaches the design and development of their model code? What are the interfaces available for integrating with and extending Towhee? What are the most interesting, innovative, or unexpected ways that you have seen Towhee used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on Towhee? When is Towhee the wrong choice? What do you have planned for the future of Towhee?
Contact Info LinkedIn, fzliu on GitHub, Website, @frankzliu on Twitter
Parting Question From your perspective, what is the biggest barrier to adoption of machine learning today?
Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management.
Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Links Towhee, Zilliz, Milvus (Data Engineering Podcast Episode), Computer Vision, Tensor, Autoencoder, Latent Space, Diffusion Model, HSL == Hue, Saturation, Lightness, Weights and Biases
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
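For readers who have not worked with embeddings before, the core move is to take a pretrained network, drop its task-specific head, and keep the intermediate vector it produces for each input. Here is a generic PyTorch/torchvision sketch of that idea (deliberately not Towhee's own API, which the episode covers):

import torch
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier; keep the 2048-d pooled features
backbone.eval()

images = torch.randn(4, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    embeddings = backbone(images)     # shape: (4, 2048)

# Normalized vectors can be indexed in a vector database such as Milvus (linked above)
# and compared with cosine similarity for search or deduplication.
emb = torch.nn.functional.normalize(embeddings, dim=1)
print((emb @ emb.T).shape)  # (4, 4) pairwise cosine similarities

Libraries like Towhee wrap this pattern in reusable pipelines so that the same workflow covers images, audio, video, and text without hand-writing the model plumbing each time.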