All Photos Tagged: machine+learning

This is an installation by Anna Ridler reflecting on datasets. It's made up of 10,000 Polaroid photographs of tulips taken by the artist over the course of the tulip season, each one hand-labelled. Laborious! Each tulip is different. This photo shows 425 of them. Taken together, the images become an AI training dataset - the information given to an algorithm so it can learn and recognise. A winner of a Beazley Designs of the Year Award 2019, it was exhibited at the Design Museum, London. It shows the human work behind machine learning.

My kid had a birthday recently, so we took him and his guests to play laser tag - a less messy kind of paintball. Players get gear strapped onto their backs and then run around a labyrinth shooting at each other with, basically, laser pointers. What better way to celebrate a new year in one's life than to make-believe massacre each other? ;) Anyway, they sure did have fun.

 

Thank you everyone for your visits, faves and comments, they are always appreciated :)

Topaz Denoise AI is another of the Topaz artificial intelligence and machine learning powered tools, designed to intelligently reduce noise in photos. Have a look at how the software performs with a night photo that contains some noise. ------- Discount ------- Topaz has kindly provided a discount code for my viewers: if you use the Topaz links below and enter the code TRAVISHALE20 when purchasing, you can get up to 20% off your purchase. ------- Links ------- Denoise AI: bit.ly/30K5CTB AI Bundle: bit.ly/2H981PQ Radeon RX 580 Graphics Card (eBay): bit.ly/2YgZu3m ------- Social Media ------- Website: bit.ly/1DG47rJ Newsletter: bit.ly/2V6KTFL Facebook: bit.ly/1OARaBV Twitter: www.twitter.com/travishale Instagram: bit.ly/2V5d3km Flickr: bit.ly/2ZYVOVv 500px: bit.ly/1S8wWSj Youtube: www.youtube.com/c/TravisHaleSciencePhotography NB: Some of the links on this site may be affiliate links. These do not change my opinion of products and/or services, but allow me to continue to provide (hopefully) useful content to the community. If you purchase a product through an affiliate link, it does not change the cost of the item, but I may receive a small commission for referring you. For more information, visit: bit.ly/300WcCH

In the Ars Electronica Center's Machine Learning Studio, visitors can use computer vision and machine learning applications to discover how machines learn and perceive their environment. Working with tech trainers, they can build and train self-driving model cars here, program robots with facial recognition, and gain insights into how they can teach these devices a wide variety of activities. Step by step, they can experience not only how these technologies function, but also that everything the machines know is determined by us.

 

Ars Electronica Center Linz

Ars-Electronica-Straße 1

4040 Linz

Austria

ars.electronica.art

 

Credit: Ars Electronica - Robert Bauernhansl


PLEASE CREDIT THIS IMAGE PROPERLY AS PER INSTRUCTIONS BELOW IF YOU CHOOSE TO USE IT

 

Machine learning is playing an increasingly important role in computing and artificial intelligence. This image suits any article on AI, algorithms, machine learning, or quantum computing.

 

Want to use this image?

 

Feel free to use this photo for your website or blog as long as you include credit. The credit should include a clickable link to my website, as shown below. Please do not link to the Flickr profile.

 

Image via www.vpnsrus.com

Another view of the Ars Electronica Center's Machine Learning Studio, described above.

 

Credit: Ars Electronica - Magdalena Sick-Leitner

PLEASE CREDIT THIS IMAGE PROPERLY AS PER INSTRUCTIONS BELOW IF YOU CHOOSE TO USE IT

 

Artificial intelligence and machine learning are increasingly big news. Want to write an article on AI or computers getting smarter? This image could suit it.

 

Want to use this image?

 

Feel free to use this photo for your website or blog as long as you include credit. The credit should include a clickable link to my website, as shown below. Please do not link to the Flickr profile.

 

Image via www.vpnsrus.com

Topaz's Jpeg to RAW AI is an artificial intelligence and machine learning tool designed to take a JPEG image, remove JPEG artifacts, improve dynamic range, and convert it to a RAW file format. Have a look at how it performs with an image captured on my Samsung Galaxy S7, and see a comparison between the converted and the normal JPEG image. Run-through starts at about 4:00 minutes. ------- Discount ------- Topaz has kindly provided a discount code for my viewers: if you use the Topaz links below and enter the code TRAVISHALE20 when purchasing, you can get up to 20% off your purchase. ------- Links ------- Jpeg to RAW AI: bit.ly/2WaBm4Y AI Bundle: bit.ly/2H981PQ Radeon RX 580 Graphics Card (eBay): bit.ly/2YgZu3m ------- Social Media ------- Website: bit.ly/1DG47rJ Newsletter: bit.ly/2V6KTFL Facebook: bit.ly/1OARaBV Twitter: www.twitter.com/travishale Instagram: bit.ly/2V5d3km Flickr: bit.ly/2ZYVOVv 500px: bit.ly/1S8wWSj Youtube: www.youtube.com/c/TravisHaleSciencePhotography NB: Some of the links on this site may be affiliate links. These do not change my opinion of products and/or services, but allow me to continue to provide (hopefully) useful content to the community. If you purchase a product through an affiliate link, it does not change the cost of the item, but I may receive a small commission for referring you. For more information, visit: bit.ly/300WcCH

Congratulations to Intel on their acquisition of Nervana. This photo is from the last board meeting at our offices; the Nervana founders — from right to left: Naveen Rao, Amir Khosrowshahi and Arjun Bansal — pondered where on the wall they may fall during M&A negotiations.

 

We are now free to share some of our perspectives on the company and its mission to accelerate the future with custom chips for deep learning.

 

I’ll share a recap of the Nervana story, from an investor’s perspective, and try to explain why machine learning is of fundamental importance to every business over time. In short, I think the application of iterative algorithms (e.g., machine learning, directed evolution, generative design) to build complex systems is the most powerful advance in engineering since the Scientific Method. Machine learning allows us to build software solutions that exceed human understanding, and shows us how AI can innervate every industry.

 

By crude analogy, Nervana is recapitulating the evolutionary history of the human brain within computing — moving from the logical constructs of the reptilian brain to the cortical constructs of the human brain, with massive arrays of distributed memory and iterative learning algorithms.

 

Not surprisingly, the founders integrated experiences in neuroscience, distributed computing, and networking — a delightful mélange for tackling cognitive computing. Ali Partovi, an advisor to Nervana, introduced us to the company.

 

We were impressed with the founding team and we had a prepared mind to share their enthusiasm for the future of deep learning. Part of that prepared mind dates back to 1989, when I started a PhD in EE focusing on how to accelerate neural networks by mapping them to parallel processing computers. Fast forward 25 years, and the nomenclature has shifted to machine learning and the deep learning subset, and I chose it as the top tech trend of 2013 at the Churchill Club VC debate (video). We were also seeing the powerful application of deep learning and directed evolution across our portfolio, from molecular design to image recognition to cancer research to autonomous driving.

 

All of these companies were deploying these simulated neural networks on traditional compute clusters. Some were realizing huge advantages by porting their code to GPUs; these specialized processors originally designed for rapid rendering of computer graphics have many more computational cores than a traditional CPU, a baby step toward a cortical architecture. I first saw them being used for cortical simulations in 2007. But by the time of Nervana’s founding in 2014, some (e.g., Microsoft’s and Google’s search teams) were exploring FPGA chips for their even finer-grained arrays of customizable logic blocks. Custom silicon that could scale beyond any of these approaches seemed like the natural next step. Here is a page from Nervana’s original business plan (Fig. 1 in comments below).

 

The march to specialized silicon, from CPU to GPU to FPGA to ASIC, had played out similarly for Bitcoin miners, with each step toward specialized silicon obsoleting the predecessors. When we spoke to Amazon, Google, Baidu, and Microsoft in our due diligence, we found a much broader application of deep learning within these companies than we could have imagined prior, from product positioning to supply chain management.

 

Machine learning is central to almost everything that Google does. And through that lens, their acquisitions and new product strategies make sense; they are not traditional product line extensions, but a process expansion of machine learning (more on that later). They are not just playing games of Go for the fun of it. Recently, Google switched their core search algorithms to deep learning, and they used DeepMind to cut data center cooling costs by a whopping 40%.

 

The advances in deep learning are domain independent. Google can hire and acquire talent and delight in their passionate pursuit of game playing or robotics. These efforts help Google build a better brain. The brain can learn many things. It is like a newborn human; it has the capacity to learn any of the languages of the world, but based on training exposure, it will only learn a few. Similarly, a synthetic neural network can learn many things.

 

Google can let the Brain team find cats on the Internet and play a great game of Go. The process advances they make in building a better brain (or in this case, a better learning machine) can then be turned to ad matching, a task that does not inspire the best and the brightest to come work for Google.

 

The domain independence of deep learning has profound implications on labor markets and business strategy. The locus of learning shifts from end products to the process of their creation. Artifact engineering becomes more like parenting than programming. But more on that later; back to the Nervana story.

 

Our investment thesis for the Series A revolved around some universal tenets: a great group of people pursuing a product vision unlike anything we had seen before. The semiconductor sector was not crowded with investor interest, and AI was not yet a sector of interest for many venture firms. We also shared with the team that we could envision secondary benefits from discovering the customers. Learning about the cutting edge of deep learning applications, and the startups exploring the frontiers of the unknown, held a certain appeal for me. And sure enough, there were patterns in customer interest, from an early flurry in medical imaging of all kinds to a recent explosion of interest in the automotive sector after Tesla’s Autopilot feature went live. The auto industry collectively rushed to catch up.

 

Soon after we led the Series A on August 8, 2014, I found myself moderating a deep learning panel at Stanford with Nervana CEO Naveen Rao.

 

I opened with an introduction to deep learning and why it has exploded in the past four years (video primer). I ended with some common patterns in the power and inscrutability of artifacts built with iterative algorithms. We see this in biology, cellular automata, genetic programming, machine learning and neural networks.

 

There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to “reverse evolve” with the ease that we can reverse engineer the artifacts of purposeful design.

 

The beauty of compounding iterative algorithms — evolution, fractals, organic growth, art — derives from their irreducibility. (More from my Google Tech Talk and MIT Tech Review)
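
To feel that irreducibility directly, here is a minimal sketch of one of the examples the essay itself names, a cellular automaton, written in Python with numpy (an illustrative choice; nothing here is from the original post). Rule 110 is a good case: no closed-form shortcut is known for the pattern at step n other than running all n steps.

```python
import numpy as np

def step(cells, rule=110):
    """Advance an elementary cellular automaton by one step.
    Each cell's next state is a lookup of its 3-cell neighborhood
    in the 8-bit rule table (wrapping at the edges)."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighborhood = (left << 2) | (cells << 1) | right   # values 0..7
    table = (rule >> np.arange(8)) & 1                  # unpack rule bits
    return table[neighborhood]

cells = np.zeros(64, dtype=int)
cells[32] = 1                                           # single seed cell
for _ in range(32):
    print("".join(" #"[c] for c in cells))
    cells = step(cells)
```

Rule 110 is even Turing-complete, which is one formal sense in which such compounding systems resist the "reverse evolving" described above.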

 

Year 1. 2015

Nervana adds remarkable engineering talent, a key strategy of the first mover. One of the engineers figures out how to rework the undocumented firmware of NVIDIA GPUs so that they run deep learning algorithms faster than off-the-shelf GPUs or anything else Facebook could find. Matt Ocko preempted the second venture round of the company, and he brought the collective learning of the Data Collective to the board.

 

Year 2. 2016 Happy 2nd Birthday Nervana!

The company is heads down on chip development. They share some technical details (flexpoint arithmetic optimized for matrix multiplies, and 32GB of stacked 3D memory on chip) that give them 55 trillion operations per second on their forthcoming chip, plus multiple high-speed interconnects (as typically seen in the networking industry) for ganging a matrix of chips together into unprecedented compute fabrics. 10x made manifest. See Fig. 2 below.

 

And then Intel came knocking.

With the most advanced production fab in the world and a healthy desire to regain the mantle of leading the future of Moore’s Law, the combination was hard to resist. Intel vice president Jason Waxman told Recode that the shift to artificial intelligence could dwarf the move to cloud computing. “I firmly believe this is not only the next wave but something that will dwarf the last wave.” But we had to put on our wizard hats to negotiate with giants.

 

The deep learning and AI sector has heated up in labor markets to relatively unprecedented levels. Large companies have recently been paying $6–10 million per engineer for talent acquisitions, and $4–5M per head for pre-product startups still in academia. The Masters students in a certain Stanford lab averaged $500K/yr in their first job offers at graduation. We witnessed an academic turn down a million-dollar signing bonus because they got a better offer.

 

Why so hot?

The deep learning techniques, while relatively easy to learn, are quite foreign to traditional engineering modalities. It takes a different mindset and a relaxation of the presumption of control. The practitioners are like magi, sequestered from the rest of a typical engineering process. The artifacts of their creation are isolated blocks of functionality defined by their interfaces. They are like blocks of magic handed to other parts of a traditional organization. (This carries over to the customers too; just about any product that you experience in the next five years that seems like magic will almost certainly be built by these algorithms).

 

And remember that these “brain builders” could join any industry. They can ply their trade in any domain. When we were building the deep learning team at Human Longevity Inc. (HLI), we hired the engineering lead from Google’s Translate team. Franz Och pioneered Google’s better-than-human translation service not by studying linguistics, grammar, or even speaking the languages being translated. He focused on building the brain that could learn the job from countless documents already translated by humans (UN transcripts in particular). When he came to HLI, he cared about the mission, but knew nothing about cancer and the genome. The learning machines can find the complex patterns across the genome. In short, deep learning expertise is fungible, and a burgeoning number of companies are hiring and competing across industry lines.

 

And it is an ever-widening set of industries undergoing transformation, from automotive to agriculture, healthcare to financial services. We saw this explosion in the Nervana customer pipeline. And we see it across the DFJ portfolio, especially in our newer investments. Here are some examples:

 

• Learning chemistry and drug discovery: Here is a visualization of the search space of candidates for a treatment for Ebola; it generated the lead molecule for animal trials. Atomwise summarizes: “When we examine different neurons on the network we see something new: AtomNet has learned to recognize essential chemical groups like hydrogen bonding, aromaticity, and single-bonded carbons. Critically, no human ever taught AtomNet the building blocks of organic chemistry. AtomNet discovered them itself by studying vast quantities of target and ligand data. The patterns it independently observed are so foundational that medicinal chemists often think about them, and they are studied in academic courses. Put simply, AtomNet is teaching itself college chemistry.”

 

• Designing new microbial life for better materials: Zymergen uses machine learning to predict the combination of genetic modifications that will optimize product yield for their customers. They are amassing one of the largest data sets about microbial design and performance, which enables them to train machine learning algorithms that make search predictions with increasing precision. Genomatica had great success in pathway optimization using directed evolution, a physical variant of an iterative optimization algorithm.

 

• Discovery and change detection in satellite imagery: Planet and Mapbox. Planet is now producing so much imagery that humans can’t actually look at each picture it takes. Soon, they will image every meter of the Earth every day. From a few training examples, a convolutional neural net can find similar examples globally — like all new housing starts, all depleted reservoirs, all current deforestation, or car counts for all retail parking lots.

 

• Automated driving & robotics: Tesla, Zoox, SpaceX, Rethink Robotics, etc.

 

• Visual classification: From e-commerce to drones to security cameras and more. Imagen is using deep learning to radically improve medical image analysis, starting with radiology.

 

• Cybersecurity: When protecting endpoint computing & IOT devices from the most advanced cyberthreats, AI-powered Cylance is proving to be a far superior and adaptive approach versus older signature-based antivirus solutions.

 

• Financial risk assessment: Avant and Prosper use machine learning to improve credit verification and merge traditional and non-traditional data sources during the underwriting process.

 

• And now for something completely different: quantum computing. For a wormhole peek into the near future, our quantum computing company, D-Wave Systems, powered a 100,000,000x speedup in a demonstration benchmark for Google, a company that has used D-Wave quantum computers for over a decade now on machine learning applications.

 

So where will this take us?

Neural networks had their early success in speech recognition in the ’90s. In 2012, the deep learning variant dominated the ImageNet competitions, and visual processing can now be done better by machine than by human in many domains (like pathology, radiology and other medical image classification tasks). DARPA has research programs aiming to do better than a dog’s nose in olfaction.

 

We are starting the development of our artificial brains in the sensory cortex, much like an infant coming into the world. Even within these systems, like vision, the deep learning network starts with similar low level constructs (like edge-detection) as foundations for higher level constructs like facial forms, and ultimately, finding cats on the internet with self-taught learning.
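
As a concrete illustration of those low-level constructs, here is a hedged sketch of edge detection as convolution, using the classic hand-designed Sobel filters; the first layers of trained vision networks typically converge on filters that look strikingly similar, without anyone specifying them. The toy image and kernels below are invented for illustration, not taken from the post.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2D correlation, enough to illustrate the
    edge-detecting filters that early vision layers learn."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels respond to horizontal and vertical intensity gradients.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

image = np.zeros((16, 16))
image[4:12, 4:12] = 1.0          # bright square on a dark background
edges = np.hypot(convolve2d(image, sobel_x), convolve2d(image, sobel_y))
print(np.round(edges, 1))        # strong responses along the square's border
```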

 

But the artificial brains need not limit themselves to the human senses. With the internet of things, we are creating a sensory nervous system for the planet, with countless sensors and data collectors proliferating everywhere. All of this “big data” would be a big headache but for machine learning to find patterns in it all and make it actionable. So, not only are we transcending human intelligence with multitudes of dedicated intelligences, we are transcending our sensory perception.

 

And it need not stop there. It is precisely by these iterative algorithms that human intelligence arose from primitive antecedents. While biological evolution was slow, it provides an existence proof of the process, now vastly accelerated in the artificial domain. It shifts the debate from the realm of the possible to the likely timeline ahead.

 

Let me end with the closing chapter in Danny Hillis’ CS book The Pattern on the Stone: “We will not engineer an artificial intelligence; rather we will set up the right conditions under which an intelligence can emerge. The greatest achievement of our technology may well be creation of tools that allow us to go beyond engineering — that allow us to create more than we can understand.”

 

-----

Here is some early press:

Xconomy (most in-depth), MIT Tech Review, Re/Code, Forbes, WSJ, Fortune.

Congratulations to the entire Deep Genomics team on their $40M investment round, announced today! And it’s on the heels of their announcement of the industry’s first-ever AI-discovered drug candidate.

 

By focusing on the information-systems of our biology, from genetic disorders to genetic therapies, Deep Genomics can train their machine learning on the code — finding errant code and fixing it with digital RNA therapies — rather than the analog complexity and hit-and-miss methodology of small molecule drug design. As RNA therapy delivery chemistries unlock new organs and tissues, their approach can address a growing number of serious medical disorders.

 

Today’s news from FierceBiotech and Endpoints News:

 

“Therapeutically re-engineering the human genome is the final frontier,” said Brendan Frey, founder and CEO of Deep Genomics. “We have found that the more we explore the universe of genetic therapies using AI, the more we discover dark regions that can be illuminated only with the development of new technology.”

 

"This approach, the company explained, results in remarkable clarity and speed, as 70% of research projects have led to therapeutic leads, and programs have been taken from target discovery to drug candidate in less than a year."

 

“For over twenty years, our team at Future Ventures has backed visionary companies seeking to change the world for the better,” said Steve Jurvetson, co-founder of Future Ventures and board member of Tesla and SpaceX. “Deep Genomics has pioneered a better way to systematically discover new therapies with a much higher success rate than traditional pharma methods. My partner Maryanna Saenko and I are excited to be joining them on a journey to modernize drug development by using AI to design and de-risk drug development programs up front, instead of relying on trial-and-error experiments that are fraught with time delays and high cost.”

 

Maryanna serves on the board of directors of Deep Genomics, and Future Ventures led today’s financing.

 

And last year from FierceBiotech and BusinessWire:

“Deep Genomics reveals the first-ever AI-discovered drug candidate. ‘We have built a system that within two hours can scan over 200,000 pathogenic patient mutations and automatically identify potential drug targets,’ Frey said.”

 

“Researchers have struggled for two decades, without success, to understand the mechanism of this genetic mutation that causes Wilson disease,” said Frederick K. Askari, M.D., Ph.D., associate professor and director of the Wilson disease program at the University of Michigan. “The clarity that this artificial intelligence platform has brought to the scientific community is astounding and the potential of a therapy that could operate at the genomic level to correct the disease process is exciting. Patients can now have hope that a therapy may be developed that will recapitulate normal gene function and make their problems go away.”

 

Hiring in Toronto: DeepGenomics.com

My mom found this final report from CS411, and I noticed my lab-mate Dan Lenoski. I have not seen him for 29 years, and we reconnected this morning for breakfast. Turns out we are neighbors (I can see his house when I look out my front window), and the “small world” coincidences grew from there.

 

We were both grad student Research Assistants on Prof. John Hennessy’s DASH team at Stanford. I was fascinated by neural networks and wanted to study what we now call model and data parallelism, the two orthogonal ways to exploit parallelism in the algorithm. We only had a 16 processor machine at the time (an Encore Multimax), but we also did simulation work up to 100 processors. Below are some of the pages from our final report.
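
Those two axes are easy to show in miniature. Below is a toy sketch (Python with numpy; the shapes are invented for illustration) of a single linear layer computed both ways: data parallelism gives each worker the full weights and a slice of the batch, while model parallelism gives each worker the full batch and a slice of the weights. Real systems add gradient exchange and communication, omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))   # a batch of 8 examples, 4 features each
W = rng.standard_normal((4, 6))   # weights of one linear layer, 6 outputs

# Data parallelism: each "worker" holds all of W but only a slice of the batch.
y_data = np.vstack([shard @ W for shard in np.split(X, 2, axis=0)])

# Model parallelism: each "worker" holds the whole batch but only a slice of W
# (split along the output columns); the partial outputs are concatenated.
y_model = np.hstack([X @ shard for shard in np.split(W, 2, axis=1)])

# Both decompositions reproduce the single-machine result exactly.
assert np.allclose(y_data, X @ W)
assert np.allclose(y_model, X @ W)
```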

 

Pretty amazing to see the rebirth of these neural networks as deep learning over the past 5 years.

 

I found Dan through LinkedIn and through his DASH team paper on their shared-memory multiprocessor (I left the team in 1990).

The kind Canadians from D-Wave gave me a couple of great books for the holidays. Merci.

 

Machine Learning. Quantum Computers. A grand concordance. I just noticed that the talk I gave at the U. of Toronto on the opportunity for quantum computers to accelerate deep learning is now online.

Fresh Gravity’s robust and mature AI capability is led by highly-accomplished experts in Machine Learning (ML) and Artificial Intelligence (AI).

www.freshgravity.com/capabilities/artificial-intelligence/

D-Wave announced their new Quadrant business unit today, and their early results with Siemens, winning first place in the CATARACTS medical imaging grand challenge.

 

Generative Machine Learning allows for Deep Learning with a lot less data. To address the problem of overfitting, the Quadrant algorithms construct generative models which jointly model inputs and outputs. It is like developing a mental model of the structure of the problem on the fly. It combines the flexibility of deep neural nets with probabilistic graphical models.
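
As a generic illustration of that generative-versus-discriminative distinction (explicitly not Quadrant's actual algorithm, whose details are not given in this post), here is a minimal sketch using scikit-learn: Gaussian Naive Bayes jointly models inputs and outputs, p(x, y), while logistic regression models only p(y | x). With a deliberately tiny training set, the generative model's structural assumptions often pay off; the dataset here is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic task with a deliberately tiny training set (scarce-data regime).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=30, random_state=0)

# Generative: fits p(x | y) and p(y), i.e. a joint model of inputs and
# outputs, so it can also score or sample inputs, not just label them.
generative = GaussianNB().fit(X_train, y_train)

# Discriminative: models p(y | x) directly.
discriminative = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("generative    :", generative.score(X_test, y_test))
print("discriminative:", discriminative.score(X_test, y_test))
```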

 

While they have an eye to making quantum leaps, so to speak, with these algorithms on their quantum computers, the results so far use classical GPUs. In the future, work done here should see further speedups running on their quantum computing cloud (they are testing that now).

 

P.S. They open with a quote that reminds me that everybody be jonesing on rocket ships:

 

"I think AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. If you have a large engine and a tiny amount of fuel, you won’t make it to orbit. If you have a tiny engine and a ton of fuel, you can’t even lift off. To build a rocket you need a huge engine and a lot of fuel. The analogy to deep learning is that the rocket engine is the deep learning models and the fuel is the huge amounts of data we can feed to these algorithms."

—Andrew Ng, founder of Coursera and Google Brain, and former Head of AI at Baidu

 

Here's more info and a link to their White Paper, and today’s announcement about the new business unit.

 

The Machine Learning Studio of the Ars Electronica Center lets visitors use computer vision and machine learning applications to discover how machines learn and perceive their environment. Self-driving model cars can be trained, robots with facial recognition programmed, and the basic concept of an assembly line observed.

 

Credit: Ars Electronica - Robert Bauernhansl

Machine learning will generally announce this image to be a fish.

Philip Beesley Workshop October 2015 with CITAstudio

 

The installation DISSIPATIVE ARCHITECTURES explores the idea of a dynamic responsive architecture. The installation has been constructed during our recent CITAstudio workshop with Philip Beesley.

 

The opening of the linked exhibition is on Friday the 4th at 15.00h in the KADK library: Danneskiold-Samsøes Allé 50, DK-1434 København K.

 

More www.facebook.com/citacph/

t-SNE converging over 600 iterations on a 2D mapping of a 3D point cloud, where each point's RGB color encodes its 3D position. Made using openFrameworks + ofxTSNE: github.com/genekogan/ofxTSNE
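
The same experiment is easy to reproduce outside openFrameworks. Here is a minimal Python sketch (scikit-learn and matplotlib assumed installed; the point cloud is random, standing in for the one in the animation): embed a 3D cloud into 2D with t-SNE and keep each point's RGB = xyz coloring, so you can see which neighborhoods survive the mapping.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

# Random 3D point cloud in the unit cube; each point's RGB color is simply
# its own (x, y, z) position, as in the animation above.
rng = np.random.default_rng(1)
points = rng.random((1000, 3))

# Embed into 2D. n_iter (renamed max_iter in newer scikit-learn) caps the
# optimization, analogous to the 600 iterations shown converging above.
embedding = TSNE(n_components=2, n_iter=600,
                 random_state=1).fit_transform(points)

plt.scatter(embedding[:, 0], embedding[:, 1], c=points, s=8)
plt.axis("off")
plt.show()
```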

Commissioned to work with SALT Research collections, artist Refik Anadol employed machine learning algorithms to search and sort relations among 1,700,000 documents. Interactions of the multidimensional data found in the archives are, in turn, translated into an immersive media installation. Archive Dreaming, which is presented as part of The Uses of Art: Final Exhibition with the support of the Culture Programme of the European Union, is user-driven; however, when idle, the installation "dreams" of unexpected correlations among documents. The resulting high-dimensional data and interactions are translated into an architectural immersive space.

Shortly after receiving the commission, Anadol was a resident artist for Google's Artists and Machine Intelligence Program where he closely collaborated with Mike Tyka and explored cutting-edge developments in the field of machine intelligence in an environment that brings together artists and engineers. Developed during this residency, his intervention Archive Dreaming transforms the gallery space on floor -1 at SALT Galata into an all-encompassing environment that intertwines history with the contemporary, and challenges immutable concepts of the archive, while destabilizing archive-related questions with machine learning algorithms.

In this project, a temporary immersive architectural space is created as a canvas, with light and data applied as materials. This radical effort to deconstruct the framework of an illusory space transgresses the normal boundaries of the viewing experience of a library, and of the conventional flat cinema projection screen, into a three-dimensional kinetic and architectonic space of an archive visualized with machine learning algorithms. By training a neural network on images of 1,700,000 documents from SALT Research, the main idea is to create an immersive installation with architectural intelligence, reframing memory, history and culture in museum perception for the 21st century through the lens of machine intelligence.

SALT is grateful to Google's Artists and Machine Intelligence program, and Doğuş Technology, ŠKODA, Volkswagen Doğuş Finansman for supporting Archive Dreaming.

Location: SALT Galata, Istanbul, Turkey

Exhibition Dates: April 20 - June 11

6 Meters Wide Circular Architectural Installation

4 Channel Video, 8 Channel Audio

Custom Software, Media Server, Table for UI Interaction

For more information:

refikanadol.com/works/archive-dreaming/

My comments on machine learning start here in the video from the Creative Destruction Lab's third annual conference, "Machine Learning and the Market for Intelligence", hosted at the University of Toronto's Rotman School of Management on October 26, 2017.
