In this series of liveProjects, you’ll join up with four different computer vision companies to explore computer vision models powered by the latest deep learning architectures. You’ll utilize the groundbreaking transformer architecture, which forms the driving force behind ChatGPT, to develop a series of increasingly complex models. Starting with a classifier to detect brain tumors, you'll move on to a segmentation algorithm and an object detection application capable of detecting construction vehicles and structural flaws. Finally, you’ll take on the role of an MLOps expert, implementing model deployment and explainability in the systems you’ve developed.
It’s an excellent course, exactly what I needed in my neuroscience PhD, as I am dealing with ultra-high field fMRI data, where we need to manually segment portions of the fMRI image - the layers. This liveProject series is golden for anyone interested in learning transformers and their application to different computer vision tasks, like MRI!
In this liveProject, you’ll join BrainAI’s MRI data analysis team. BrainAI needs you to develop a state-of-the-art AI module utilizing vision transformer models to detect brain tumors with exceptional accuracy. Armed with Python tools like Hugging Face Transformers, PyTorch, and more, you'll detect the presence or absence of tumors within human-brain MRI datasets. With Google Colab's GPU computing resources, you'll utilize deep learning to try and achieve a 95%+ accuracy rate.
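To give a feel for the kind of code you'll write, here is a minimal sketch of binary tumor classification with a vision transformer from Hugging Face Transformers. The checkpoint, label names, and input file are illustrative assumptions, not the project's exact configuration; in the liveProject you fine-tune your own model on the MRI dataset.

```python
# Sketch: tumor / no-tumor classification with a ViT backbone (assumed checkpoint).
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTForImageClassification

checkpoint = "google/vit-base-patch16-224"  # assumption: generic ViT backbone
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(
    checkpoint,
    num_labels=2,                               # tumor vs. no tumor
    id2label={0: "no_tumor", 1: "tumor"},
    label2id={"no_tumor": 0, "tumor": 1},
    ignore_mismatched_sizes=True,               # swap in a fresh 2-class head
)

image = Image.open("mri_slice.png").convert("RGB")  # hypothetical MRI slice
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```

The newly initialized classification head is what you train on the MRI data; the pretrained ViT encoder is reused as-is.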
In this liveProject, you'll pioneer the development of cutting-edge MRI segmentation algorithms using transformer architecture for computer vision company VisionSys. Manual segmentation is labor-intensive and expensive, so you’ll be developing a custom model that can do it for you. You'll train and evaluate SegFormer and MaskFormer models to identify brain tumor regions with over 90% accuracy. With Python tools like Hugging Face Transformers and Google Colab's GPU computing resources, you'll create pipelines, preprocess data, and showcase sample predictions and quantitative results.
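The segmentation pipeline follows the same pattern. Below is a rough sketch of running a SegFormer checkpoint for semantic segmentation; the checkpoint name and the two-class tumor/background setup are assumptions for illustration, and the decode head shown here would still need fine-tuning on the MRI data.

```python
# Sketch: semantic segmentation with SegFormer (assumed checkpoint and labels).
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"  # assumption
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(
    checkpoint,
    num_labels=2,                                 # background vs. tumor region
    id2label={0: "background", 1: "tumor"},
    label2id={"background": 0, "tumor": 1},
    ignore_mismatched_sizes=True,                 # replace the ADE20K head
)

image = Image.open("mri_slice.png").convert("RGB")  # hypothetical input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits               # (batch, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax as the mask.
mask = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
).argmax(dim=1)[0]
print(mask.shape)
```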
In this liveProject, you'll spearhead the development of AI-aided surveillance software for construction site supervision. You’ll build two computer vision applications: one capable of detecting construction vehicles and their types across a large worksite, and a more powerful model that can detect building defects such as cracks and fissures. Start by working with a pre-trained DETR model, then explore the Roboflow platform to assist you as you create a multi-class object detection dataset from multiple datasets with non-identical classes. With these datasets, you will train different transformer models for object detection to identify construction vehicles and cracks in buildings (a short inference sketch follows below).
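As a starting point, here is a hedged sketch of inference with a pre-trained DETR checkpoint from the Hugging Face Hub. The COCO labels it predicts stand in for the construction-vehicle and crack classes you fine-tune on later, and the input file name is a placeholder.

```python
# Sketch: object detection with a pre-trained DETR model (COCO classes).
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

checkpoint = "facebook/detr-resnet-50"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DetrForObjectDetection.from_pretrained(checkpoint)

image = Image.open("site_photo.jpg").convert("RGB")   # hypothetical worksite photo
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold and map boxes back to pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```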
In this liveProject, you’ll take on the role of an engineer at AISoft, where you'll be part of two dynamic teams: MLOps and Core-ML. On the MLOps team, you'll utilize software engineering techniques to ensure your models are not only accurate but also scalable, reliable, and maintainable. You’ve been assigned two important support tasks for the Core-ML team. First, you'll utilize the Gradio Python library to create an interactive web user interface that runs MRI classification and segmentation transformer models in the background, and then you'll build a pipeline that provides interpretability to the decisions of a construction vehicle classification model.
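To show how little glue code the Gradio part needs, here is a minimal sketch of wrapping a classifier in a web UI. The `predict` helper and its fixed output are placeholders: in the project it would call your fine-tuned MRI classification and segmentation models.

```python
# Sketch: a Gradio web UI around a (placeholder) MRI classification function.
import gradio as gr
from PIL import Image

def predict(image: Image.Image) -> dict:
    # Placeholder: in the project this runs the fine-tuned transformer model
    # and returns per-class probabilities for the uploaded MRI scan.
    return {"no_tumor": 0.5, "tumor": 0.5}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(type="pil", label="MRI scan"),
    outputs=gr.Label(num_top_classes=2, label="Prediction"),
    title="Brain MRI tumor classifier",
)

if __name__ == "__main__":
    demo.launch()  # serves the interface locally in the browser
```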
Very interesting series with innovative materials about transformers.
This liveProject series is aimed at intermediate-level Python programmers who already know the basics of deep learning and computer vision.