GDTF – Good idea, poor execution

Communities can create incredible things as we’ve seen with open-source initiatives like Linux, but every great community project needs a maintainer to guarantee excellence.

This post is going to look at why I’m opting to forgo community-created data in my lighting control system and instead rely on a commercial offering.

The problem 

There exist hundreds of thousands of DMX-controllable devices from thousands of manufacturers. Each manufacturer has its own standards for defining the DMX control channel specification, with differences ranging from technical details like MSB/LSB encoding to inconsistent naming.

It’s for this reason that lighting control systems contain a database of controllable devices’ DMX specifications: to abstract away the differences, make sense of the madness and provide a common denominator.

Historically, controller manufacturers would create their own databases, employing one or two people to work full-time on dealing with customer requests. Nowadays, most manufacturers have switched to using third-party data providers so they can focus on their core competency of developing control systems.

The Landscape 

If you’re looking for DMX data for your own DMX control system, then you have a few options from which to choose. My former employer, Carallon, produces a database used by the industry leaders and is my personal favourite. It has been crafted for more than a decade and includes hundreds of thousands of entries, providing a vast array of information. 

Other options include a commercial database called AtlaBase, from the creators of Capture Sweden, which includes ~16,000 fixture definitions. If you’re looking for free data, then community efforts include an open-source initiative aptly named Open Fixture Library (OFL), which sadly only contains 398 definitions. I should give credit to OFL in that they make use of GitHub for hosting the personalities and have a continuous integration pipeline to ensure data provided to them conforms to their schema. While the database is small, it is a great start, and I can’t wait to see how it develops.

Enter GDTF 

GDTF stands for General Device Type Format, and it has the ambitious aim of providing a dependable, unified and consistent standard for describing DMX fixtures. It includes not just data relevant to lighting control, but also 3D information, including 3D models of each component of the light and how these are connected. The 3D data allows developers to create 3D representations of the lights, and I’m sure it won’t be long until we see an augmented reality app that uses this data.

The GDTF project was conceived and jointly developed by MA Lighting, Robe and Vectorworks, three businesses that have amassed a tremendous amount of respect within the entertainment lighting industry. It has since welcomed participants from other companies such as AtlaBase, Avolites, Carallon, ChamSys, ETC, Martin and Zero88. However, it’s unclear how engaged these companies are with the project, or what their participation has included to date.

Buried within the website is a hidden page for downloading fixture personalities in bulk, which offers the ability to filter the contents. Applying the ‘Uploaded by manufacturer’ filter yields just 37 files, all from Robe, which struck me as rather low. A quick look at the personalities on the share portal shows user accounts with company names like ACME-Lighting and ARRI-GDTF uploading personalities, which leads me to think that these companies are struggling to obtain recognition as manufacturers.

Of the participating companies listed on the GDTF site, many have very limited personalities available, and in many cases it was the community that created those personalities. Clay Paky and Martin are two manufacturers where this is very obvious. My impression is that the participants are not actively engaging with the project in any meaningful way; instead, the project is relying almost entirely on community efforts.

Data needs standards 

This brings me nicely to my biggest frustration with GDTF and why I’m leaning towards dropping support for it. The GDTF community hasn’t agreed on standard naming conventions, and this makes it difficult to make use of the data in any meaningful way.

Using the download page from before, I applied two filters to ensure the files were at least GDTF version 1.0, and only the latest revision. This resulted in 527 personalities, most of which are almost unusable without significant post-processing. 

Inconsistent naming is the problem I’ve been grappling with recently as I try to support the format. To best explain the naming issues, let’s start by looking at group names.

I think it’s safe to assume that the first nine group names would be considered valid, but the remaining five names would make no sense in the context of a lighting control system.

Now let’s look at the Position feature group. It contains 19 features for Pan & Tilt, including names such as Pan Motor, Tilt Inifi, Pan Rotate and mspeed.

It’s pretty evident that mspeed is a control channel name and doesn’t belong under the Position group. Pan Motor, Pan Rotate and Pan Inifi all represent a Pan feature, but each personality creator deviated from the usual naming conventions.

This deviation can be seen throughout the dataset. Looking at colour, one user has specified the slots on a colour wheel as individual attributes. This innocent mistake by an inexperienced contributor results in the fixture appearing to have nine distinct colour-changing mechanisms!

I ran all the personalities through my parser and had it generate a CSV file containing all the group, feature and attribute names, then augmented the data using pivot tables in Excel to gain a better understanding of the problem. The images above show the results of different views on the data, and they demonstrate just how poor the quality of the provided data is.
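My parser isn’t something I can share in full, but the gist of the export step looks like the sketch below. It assumes (as the GDTF spec defines) that each .gdtf file is a ZIP archive with a description.xml inside, and pulls out just the feature group and feature names:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;
using System.Xml.Linq;

class ExportGdtfNames
{
    static void Main(string[] args)
    {
        var csv = new StringBuilder("File,FeatureGroup,Feature" + Environment.NewLine);

        // A .gdtf file is a ZIP archive; the fixture definition lives
        // in a description.xml at its root.
        foreach (var path in Directory.EnumerateFiles(args[0], "*.gdtf"))
        {
            using var zip = ZipFile.OpenRead(path);
            var entry = zip.GetEntry("description.xml");
            if (entry == null) continue;

            using var xml = entry.Open();
            var doc = XDocument.Load(xml);

            // FeatureGroups sit under FixtureType/AttributeDefinitions.
            foreach (var group in doc.Descendants("FeatureGroup"))
            foreach (var feature in group.Elements("Feature"))
            {
                csv.AppendLine(string.Join(",",
                    Path.GetFileName(path),
                    (string)group.Attribute("Name"),
                    (string)feature.Attribute("Name")));
            }
        }

        File.WriteAllText("gdtf-names.csv", csv.ToString());
    }
}
```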

With visibility of the issues, it became evident that the dataset contained so many inconsistencies and illogical definitions that I shouldn’t invest any further time in trying to sanitise the data. Instead, I’ve opted to halt developing any further support for the General Device Type Format in Light Console. It’s already possible to import GDTF files and control these fixtures if you know the exact attribute name specified within the personality.

Having now spent the time to investigate the ‘open’ alternatives to commercial data, I don’t see much value in the offering without MA Lighting, Robe and Vectorworks seriously investing in ensuring the data provided is accurate. One of the simplest ways to ensure the quality of data is to put rules in place around naming, perhaps even going so far as to introduce a lookup table of names. Going a step further, they should aim to validate that features genuinely belong to their group, to ensure things like mspeed (a control channel) don’t incorrectly end up in a Position group.

For now, I’ll continue to use the Carallon fixture database service for my fixture data needs as it provides consistent data which adheres to a well-documented set of rules.

Distributed DMX with Apache Ignite

A little-known fact is that the original intelligent lights and programming hardware formed a distributed system. Each light had its own on-board memory, which was used to store the different states (looks) used throughout the show, and the lighting console would send a command for each light to load a particular look. I’ve heard of a show that was too large to fit in the lights’ on-board memory, so the operator had to split the programming into two, using the interval to upload the second half. I can’t imagine how nervous they were during this process!

This approach was a good start in the world of intelligent lighting, but it had some significant drawbacks, the biggest being the impossibility of programming the show ahead of arriving at the venue. This ruled out the option of using 3D tools like Capture to program shows using a virtual representation of the venue and lighting rig.

An old Lighting Design I created many years ago, using Capture

Fast forward a little, and the advancements in consumer CPUs allowed a single device to calculate all the required control data fast enough that distributed systems were no longer needed. This saw manufacturers adopt the architecture of using a single lighting console to calculate everything, with some providing the ability to track its state on a backup/redundant console, but none offering a distributed system.

The backup console does no computation

Coming Full Circle

Only relatively recently have distributed systems come back into vogue, as shows have become more and more complex. For shows as complex as Eurovision, a single lighting console cannot calculate the control data fast enough, so multiple consoles are used together to control these massive shows. These consoles don’t just take ownership of a subsection of the lights; instead they create a compute grid, a high-performance computing technique for creating a virtual supercomputer.

Grids are a form of distributed computing whereby a “super virtual computer” is composed of many networked loosely coupled computers acting together to perform very large tasks

https://en.wikipedia.org/wiki/Grid_computing

In this article, I’m going to discuss how I’m using Apache Ignite to develop a distributed data and compute grid in order to provide high availability and scalability.

Apache Ignite

What is it?

Apache Ignite is an in-memory computing platform that is durable, strongly consistent and highly available, and features powerful SQL, key-value, messaging and event APIs.

Traditionally it’s used in industries such as e-commerce, banking, IoT and telecommunications, and it boasts companies such as Microsoft, Apple, IBM, Barclays, American Express, Huawei and Siemens as users.

Most users of Apache Ignite deploy it to servers, either in a public cloud like Microsoft’s Azure or in their on-premises data centres. Though servers are the usual domain for Apache Ignite, its flexible deployment model means I can embed it as part of the Light Console .NET Core library.

I’m able to develop a distributed system that provides almost unlimited horizontal scale while building on the experience of Ignite’s distributed systems experts. The Apache Ignite codebase consists of more than a million lines of code and has 223 contributors, meaning it’d be a huge effort to recreate this functionality in-house!

Contributions to Apache Ignite

Clustering

Apache Ignite is a fundamental pillar of my application architecture, providing data storage, service and event messaging capabilities. Being an integral part of the application, any consumer of the LightConsole.Core DLL will either automatically connect to an existing session or create one on launch.

What this means is that anyone running the Light Console app will automatically discover existing nodes and join the cluster, thus increasing the compute and data storage resources of the overall grid.

Using this approach means that no single device is responsible for the entire system. Instead, each node (console or onPC software) takes responsibility for a subsection of data and compute.

Launching a new show creates an Apache Ignite session
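In Ignite.NET terms, that looks roughly like the sketch below; the multicast discovery configuration is an illustrative choice rather than Light Console’s exact setup:

```csharp
using Apache.Ignite.Core;
using Apache.Ignite.Core.Discovery.Tcp;
using Apache.Ignite.Core.Discovery.Tcp.Multicast;

// Start an Ignite node inside the app. If other Light Console
// instances are already running on the network, this node discovers
// and joins them; otherwise it becomes the first node of a new cluster.
var cfg = new IgniteConfiguration
{
    DiscoverySpi = new TcpDiscoverySpi
    {
        // Multicast lets nodes find each other on the LAN without a
        // hard-coded list of addresses.
        IpFinder = new TcpDiscoveryMulticastIpFinder()
    }
};

IIgnite ignite = Ignition.Start(cfg);
```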

Distributed Data Storage

Data storage within Light Console uses Apache Ignite’s distributed key-value store, which you can think of as a distributed, partitioned hash map, with each console owning a portion of the overall data.

Each instance of Light Console owns a portion of the overall data

The above example demonstrates how Apache Ignite might distribute Fixture objects stored within the FixtureCache. In actuality, I define a backup property of 1, which ensures that a fixture never exists in only one instance of Light Console. This is how I mitigate data loss when a console (node) crashes or goes offline.

Defining a distributed Key-Value store with Apache Ignite
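In code, that amounts to something like this (the Fixture type is a stand-in for my real model, and ignite is the node started earlier):

```csharp
using System;
using Apache.Ignite.Core.Cache.Configuration;

// Partitioned mode spreads entries across every node in the cluster;
// Backups = 1 keeps a second copy of each fixture on another node so
// a single console crashing doesn't lose data.
var fixtureCacheCfg = new CacheConfiguration("FixtureCache")
{
    CacheMode = CacheMode.Partitioned,
    Backups = 1
};

var fixtures = ignite.GetOrCreateCache<Guid, Fixture>(fixtureCacheCfg);
fixtures.Put(fixture.Id, fixture);
```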

Affinity Colocation

As the show data is distributed across instances of Light Console, it’s important to ensure that any computations that make use of the data occur on an instance of Light Console that already has a copy of that data. This is called colocation, and it significantly improves the performance of the application by reducing the need to move data around the network for computation. The simplest example of a colocated computation currently within the project is the fixture patching mechanism: the process of assigning DMX addresses to a fixture’s control channels (such as pan, tilt, colour wheel, etc.).

The compute action is invoked using the PatchFixtureLocalCommand, which is sketched below. The command hides the implementation details of the operation and implements the ILocalCommand interface to support undo/redo functionality.
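The original snippet isn’t reproduced here, so below is a hedged reconstruction: the ILocalCommand and Fixture shapes are assumptions, while AffinityRun and IComputeAction are the real Ignite.NET API.

```csharp
using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Compute;
using Apache.Ignite.Core.Resource;

// Assumed shapes for illustration; the real interfaces differ.
interface ILocalCommand { void Execute(); void Undo(); }

[Serializable]
class Fixture
{
    public Guid Id;
    public int Universe;
    public int Address;
}

class PatchFixtureLocalCommand : ILocalCommand
{
    private readonly IIgnite _ignite;
    private readonly Guid _fixtureId;
    private readonly int _universe, _address;

    public PatchFixtureLocalCommand(IIgnite ignite, Guid fixtureId,
                                    int universe, int address)
    {
        _ignite = ignite;
        _fixtureId = fixtureId;
        _universe = universe;
        _address = address;
    }

    // AffinityRun executes the action on whichever node owns the
    // fixture's cache entry, so the data never crosses the network.
    public void Execute() =>
        _ignite.GetCompute().AffinityRun("FixtureCache", _fixtureId,
            new PatchFixtureAction(_fixtureId, _universe, _address));

    public void Undo() =>
        _ignite.GetCompute().AffinityRun("FixtureCache", _fixtureId,
            new PatchFixtureAction(_fixtureId, 0, 0)); // unpatch (illustrative)
}

[Serializable]
class PatchFixtureAction : IComputeAction
{
    // Ignite injects the local node's API on the executing node.
    [NonSerialized] [InstanceResource] private IIgnite _ignite;

    private readonly Guid _fixtureId;
    private readonly int _universe, _address;

    public PatchFixtureAction(Guid fixtureId, int universe, int address)
    {
        _fixtureId = fixtureId;
        _universe = universe;
        _address = address;
    }

    public void Invoke()
    {
        var cache = _ignite.GetCache<Guid, Fixture>("FixtureCache");
        var fixture = cache.Get(_fixtureId);
        fixture.Universe = _universe;
        fixture.Address = _address;
        cache.Put(_fixtureId, fixture);
    }
}
```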

Distributed Services

Another feature of Apache Ignite that I’m using extensively is the Service Grid, which allows me to deploy services to the cluster that can be used by any of the consoles. The advantage of deploying services to the grid is that it provides continuous availability, load balancing and fault tolerance out of the box. I can also specify whether a service should be a cluster singleton, node singleton or key-affinity singleton. Below you can see an example of a node singleton deployment, which deploys the service to each Light Console within the cluster.

Node singleton Service Grid deployment. Each console gets an instance of the service.
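In code, a node singleton deployment is a one-liner once you have a service implementation (the service here is illustrative):

```csharp
using System;
using Apache.Ignite.Core.Services;

// Node singleton: every console in the cluster gets one instance of
// the service. (DeployClusterSingleton is the exactly-one-per-cluster
// variant.)
ignite.GetServices().DeployNodeSingleton("dmxOutput", new DmxOutputService());

// A minimal Ignite service; Ignite calls Init/Execute/Cancel on the
// node(s) it is deployed to.
[Serializable]
class DmxOutputService : IService
{
    public void Init(IServiceContext context) { }
    public void Execute(IServiceContext context) { /* long-running work */ }
    public void Cancel(IServiceContext context) { }
}
```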

Two of the most critical services currently found within Light Console are the PlaybackEngine and the SyncTick service. Both are deployed as cluster singletons, which means that only one instance will be running on the cluster at any given time. If the instance of Light Console running the service goes offline, Apache Ignite will automatically redeploy the service to another console.

SyncTick Service

The SyncTick service is responsible for keeping all the currently running transitions (fades) and effects in sync with each other. This is achieved by broadcasting a tick event to all the nodes with a DateTime representing when the tick occurred. If a transition or effect is running, then upon receiving the Tick message it’ll calculate the next value for output and notify the PlaybackEngine. With this architecture, I’m able to speed up and slow down output data and calculations across the entire grid from a single location, which makes it possible for future versions of Light Console to support timecode.
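A simplified sketch of the service follows; the topic name, payload type and exact tick rate are illustrative rather than the real implementation:

```csharp
using System;
using System.Threading;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Resource;
using Apache.Ignite.Core.Services;

[Serializable]
class Tick
{
    public DateTime Occurred { get; set; }
}

[Serializable]
class SyncTickService : IService
{
    [NonSerialized] [InstanceResource] private IIgnite _ignite;

    private volatile bool _cancelled;

    public void Init(IServiceContext context) { }

    public void Execute(IServiceContext context)
    {
        while (!_cancelled)
        {
            // Broadcast an ordered Tick to the whole grid; subscribers
            // use the timestamp to advance running fades and effects.
            _ignite.GetMessaging().SendOrdered(
                new Tick { Occurred = DateTime.UtcNow }, "sync.tick");

            Thread.Sleep(TimeSpan.FromSeconds(1.0 / 44)); // ~44 Hz, like DMX
        }
    }

    public void Cancel(IServiceContext context) => _cancelled = true;
}
```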

Messaging

In the above snippet, you’ll notice that the Tick event is using the SendOrdered method of IMessaging. This method allows me to ensure that the subscribers receive the Tick messages in the order that they’re sent.

To subscribe to the messages, I then use the IMessaging LocalListen method to register a message listener object which decides what to do when a message is received. To make my life easier, I ensure that I only ever call LocalListen with the EventListener<T> class, sketched below. The EventListener<T> class allows me to use generics for the payload and easily attach to the EventReceived event within the subscribing class.
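It simply bridges Ignite’s IMessageListener<T> into a .NET event (a minimal sketch, so details differ from the real class):

```csharp
using System;
using Apache.Ignite.Core.Messaging;

// Adapts Ignite's IMessageListener<T> into a .NET event so that
// subscribers simply attach a handler for the strongly typed payload.
class EventListener<T> : IMessageListener<T>
{
    public event EventHandler<T> EventReceived;

    public bool Invoke(Guid nodeId, T message)
    {
        EventReceived?.Invoke(this, message);
        return true; // keep the subscription alive
    }
}
```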

Below you can see an example of a transition which, when started, creates an EventListener and subscribes to the EventReceived event.
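As with the other snippets, this is a condensed sketch; the real transition also calculates fade curves and feeds the PlaybackEngine:

```csharp
using System;
using Apache.Ignite.Core;

// A simplified transition that advances on each Tick it receives.
class Transition
{
    private readonly IIgnite _ignite;
    private readonly EventListener<Tick> _listener = new EventListener<Tick>();

    public Transition(IIgnite ignite) => _ignite = ignite;

    public void Start()
    {
        _listener.EventReceived += OnTicked;
        _ignite.GetMessaging().LocalListen(_listener, "sync.tick");
    }

    public void Stop()
    {
        _ignite.GetMessaging().StopLocalListen(_listener, "sync.tick");
        _listener.EventReceived -= OnTicked;
    }

    private void OnTicked(object sender, Tick tick)
    {
        // Use tick.Occurred to calculate the next output value and
        // notify the PlaybackEngine (omitted here).
    }
}
```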

Wrapping Up

The above is just a small glimpse into how I’m using some of the features available within Apache Ignite to power a distributed lighting control system. Whilst not exhaustive, I hope it gives you an idea of what’s possible and how you might also use Apache Ignite in your own projects.

It’s incredibly easy to get started with, given it’s available as a NuGet package and has a rich set of documentation to help you understand the features and how to add them to your apps.

Intro to DMX Control Systems

It’s no secret that one of my passions is thinking about how to design lighting control systems for the entertainment industry. In this post, I’m going to try and give you a condensed tour of everything you might need to know about large lighting control systems.

Understanding Control Data

Let’s start by rewinding the clock and heading back to a time before the invention of ‘intelligent’ lights.

Queen – Performing live with lots of Parcans providing the majority of the light

Queen offers an excellent example of the ‘old school’ lighting designs of the era. You can see hundreds of individual lights hanging in grids, usually a type of light called a Parcan. There’s not much to a Parcan; in fact, it’s often described as a car headlamp in a baked bean can.

A simple DMX topology

Each light is connected to a dimmer, which allows the operator to set the intensity (brightness) using their console. In the old days, each lamp would have a physical fader (linear potentiometer) on the lighting console for setting its intensity, and the larger shows would have consoles with hundreds of faders. It’s essential to understand that the control desk is not directly regulating the power to individual lights but instead provides a control signal to a dimmer; it’s the dimmer which controls the amount of power to the lights. The modern control signal that consoles send is called Digital Multiplex (DMX512) and, importantly, allows for controlling 512 channels with just one cable. A DMX cable is often referred to as a DMX universe.

An old school concert / arena sized lighting console. Look at how many faders!!

Moving Lights

When people started attaching servos and other goodies to lights, getting them to move and change colours, the control manufacturers had to evolve their systems to handle this new complexity. These types of lights became known as ‘intelligent lights’ or ‘moving lights’; their new capabilities were controlled directly by the console, and they had internal dimmers or shutters to control the brightness of the light.

VL1 – The first Moving Light

These new capabilities called for the creation of new user interface/interaction metaphors which could help the operator control many more control channels at once. One of the more popular metaphors was inspired by early sequencers and synthesisers. Consoles like the DLD-6502 even went so far as to give a subtle nod to synthesisers by copying the shape of their wooden sides.

DLD-6502 Lighting Console. The side panels are similar to those found on synths at the time.

Manufacturers around the globe competed to create unique lights, which led to some interesting (and often difficult to abstract and control) developments, such as conditional channels. Another development from the advent of moving lights was the ability to increase the resolution of DMX data by grouping together multiple 8-bit channels to create higher-resolution values. This is normally used to provide 16-bit control, but 24-bit and 32-bit values can also be used if required. In general, most control systems only support 8-bit and 16-bit control channels.

An early moving light console. The Wholehog II.

The difficulty with higher-resolution control channels is that there is no standard for how the values should be decoded. Some manufacturers have opted for Most Significant Byte (MSB) first, while others have decided on Least Significant Byte (LSB) first, which only adds complexity when calculating output values. And herein lies one of the biggest problems with DMX control: there is no standard among manufacturers!
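To make the ambiguity concrete, here’s a small sketch of combining a coarse and a fine channel into one 16-bit value under both conventions:

```csharp
static class DmxResolution
{
    // Two 8-bit DMX channels become one 16-bit value. Which channel
    // holds the high (coarse) byte varies by manufacturer, so the
    // fixture definition has to record the convention.
    public static ushort Combine(byte first, byte second, bool msbFirst) =>
        msbFirst
            ? (ushort)((first << 8) | second)  // first channel is coarse
            : (ushort)((second << 8) | first); // first channel is fine
}

// e.g. Combine(127, 255, msbFirst: true)  == 32767
//      Combine(127, 255, msbFirst: false) == 65407
```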

This light changes shape, how should controllers handle this?!

Most light manufacturers have had little regard for the challenges of controlling the fixtures they create, as they can pass that complexity on to the controller developers.

One way controller manufacturers deal with the complexity is to maintain an extensive database of DMX-controlled fixtures that details a light’s control channel values, labels, conditions and other important data for accurately controlling it.

Most databases describe fixtures by common denominators, which can make it impossible to represent the state of more exotic fixtures in any meaningful way using existing control metaphors.

Modern Systems

Fast forward to 2019, and the industry has many consoles that still offer the user interface metaphor found on the Wholehog II, with a few experimental control metaphors failing to gain mainstream traction with professional lighting programmers. My personal favourite of these is the original Jands Vista, which adopted the desktop metaphor, featuring a click/pen-oriented user interface and a timeline editor not dissimilar to video/audio editing tools. The Vista is still available today but now incorporates many metaphors of its competitors in order to cater to operators’ preexisting knowledge and expectations of a lighting control system.

High Risk

From an operator’s perspective, lighting control is high-risk! If you’re the lighting operator for the Olympics opening ceremony, you have just one shot to get it right, with billions of people watching. If the lighting console crashes, you’d better have a backup that can gracefully take over!

Traditional High-Availability

A second lighting console will usually be deployed in order to provide high availability. The second console runs in a “tracking” mode, meaning it replicates the state of the master console, ready for the operator to switch to if the master console crashes or goes offline. This is the most basic form of backup that lighting programmers/operators may use.

This resolves the potential issue of losing control of a lighting rig in the event of a crash, but the spare device doesn’t add any additional capabilities or take on any responsibility for processing output data.

All data is still computed only on the master controller, so this approach doesn’t provide the distributed compute capability required to scale the system. This means that the operator is limited to a fixed number of control channels based on the console’s local compute capability, which is a trade-off that most manufacturers and users are happy to make. Most consoles can comfortably process 16 universes of DMX (16 × 512 = 8,192 control channels).

Distributed Control
14 Lighting Consoles for 1 show!

For the larger shows, a single console simply doesn’t have enough compute power to calculate the control data in real time. For this reason, some manufacturers now use high-performance computing techniques to provide scalable control systems that can run the most demanding of shows.

The aim of my control system is to be able to control up to 1,024 universes of DMX. That’s 524,288 control channels, recalculated 44 times a second, in as close to real time as possible. In order to make this happen, I want to use spare compute capacity available on the local area network to distribute the storage and computation of control data. This type of architecture is often referred to as grid computing.

To do this, I’m utilising Apache Ignite to create a distributed data grid along with colocated computations, forming a horizontally scalable data and compute grid.

Wrapping up

In an upcoming post, I’ll go into detail about Apache Ignite and how I’m using it within the Light Console.

Restarting an old project

I’ve been working on a problem on and off for more than a decade now, and last week I decided I was going to take it seriously. Seriously enough to set up an Office 365 subscription attached to the project’s domain, along with an Azure DevOps subscription for keeping everything in one place. I’ll be blogging about the project regularly, so I thought it’d be best to write this post to provide context for future ones.

History of the project

It’s the first week at university (did I mention I went to drama school?) and I’m being forced to use a lighting control desk called the ETC Express. It’s like an idiot’s version of the Strand 500 series, which is the system that I know and love. If you’re not experienced with lighting control, then know that lighting programmers often define themselves by their tools, just like we programmers frequently do with our languages and frameworks of choice.

Programming a show on my parents’ dining table on a Strand 520i.

I was whining profusely about how limited the ETC Express was, and my tutor said: “Well, why don’t you build your own then?”. In hindsight, she probably said this out of frustration, but I agreed that I could do a better job, and thus commenced my journey back into .NET development and ultimately a career at Microsoft. Lighting control systems haven’t advanced past the innovations of the Hog 2, created by my good friend Nick Archdale. I wanted to create something unique, but most importantly it had to be as intuitive as the light switches we use at home every day. Initially, I was picturing a huge multi-touch screen, but the technology wasn’t available back then (the original iPhone hadn’t even been announced). I wanted to create a console that could be played like an instrument, as lighting can be just as expressive as any musical instrument, but frankly, I lacked the skills required to deliver my vision.

Hog 2 Lighting Control Console

Nonetheless, I started building lots of proof-of-concepts using WPF to see how it might work. Eventually, I had a pretty solid idea of what I wanted to build, but I couldn’t even match the features of existing systems with the knowledge I had at the time.

An old screenshot of a proof of concept.

Rebooting the project

Earlier this year I visited Nick’s business in West London to discuss licensing some of his technology for a mobile app. One of the chaps there asked if the app would control any lights. It wasn’t in my spec, as controlling lights is much more complex than you’d reasonably imagine, but this simple question derailed the app and reminded me of an itch I’d been ignoring for years. I went home and started creating some POCs using the experience gained from a decade of .NET development. I think I’ve cracked the secret sauce for creating a workable, scalable control system: the system HAS to be modular in every aspect, from C# projects to physical hardware.

The future

Right now I’ve got the beginnings of the important components of the control system working, and I’m tying them together to build a minimum viable product before I start on the multi-touch, instrument-like parts that I’ve dreamed of for the last decade.

I’ve not decided how it’ll be released yet. I’m hoping to release bits of this as OSS but can’t promise anything just yet, so if you’re interested in getting involved then ping me a message and we can chat!

Special Thanks

I feel a need to thank a few influential people who’ve helped me over the years to reach the point of being able to tackle this technical problem with some degree of competence.

Rachel Nicholson for the idea and belief that I could create a control system.

Nick Hunt for mentoring me through my dissertation as I investigated what an intuitive lighting control might look like. Nick Archdale and Richard Mead for hiring me out of university, encouraging me to be a better developer and licensing their fixture data to the project while I develop the control system.


Replacing lighting programmers with AI

The title is clickbait, but I can picture situations where we could replace lighting programmers with artificial intelligence using the language understanding techniques that power Alexa, Google Assistant and Cortana. I’m going to discuss how I’m working on this using Microsoft’s language understanding and intelligence technology.

Console Syntax

Most DMX lighting control systems have a command-line interface which supports a domain-specific language for programming lights for theatre shows, concerts, TV and everything in between. The syntax is usually pretty similar across different manufacturers, but there are always some subtle differences that can trip up experienced users when switching systems.

As I’ve been investigating how to create my own language for users to program their shows on my control system, I’ve continually come back to the idea that the lighting industry has standardised what it does; it’s the tools that introduce the variations.

An extreme example of this is the theatre world, where the lighting designer may call for specific fixtures at exact intensities. They may say something like “one and five at fifty percent”. The lighting programmer will then type something like 1 + 5 @ 50 Enter on one brand of lighting console, and perhaps Fixture 1 + 5 @ 5 on another. The intent is the same, but the specific syntax changes.

It’s currently the role of the lighting programmer to understand the intent of the designer and execute it on the hardware in front of them. The best lighting programmers can translate more abstract and complex requests into machine-understandable commands using the domain-specific language of the control system they’re using. They understand the lighting console’s every feature and are master translators: a bridge between the creative and the technical. But they are still, fundamentally, just translating intents.

Removing the need for human translation

Voice-to-text would go some way towards removing the lighting programmer: for simple commands like the one demonstrated earlier, it’s easy to convert the utterance to an action. But most designers don’t build scenes like this. For more complex commands, the console will likely get it wrong, and with no feedback loop, it won’t have the opportunity to learn from its mistakes like a human would.

This is where utilising AI will significantly help. I’m currently working on my console featuring the ability to use machine learning, powered by the cloud, so that eventually even the most complex of requests can be fulfilled by the console alone. While cloud connectivity in a lighting console probably seems strange, it is just a stepping stone to a fully offline-capable system.

Language Understanding with AI

Let me walk you through the training process as I go about teaching it to understand a range of syntax and utterances. I’ve started from scratch for this blog post, but in reality I have a fleshed-out version with many more commands that supports syntax from a variety of consoles.

The first step is to create a new ‘App’ within the Language Understanding Intelligent Service (LUIS).

Creating a new LUIS app

By default, our app has no intents, as we’re building a solution to suit our own needs rather than using a pre-canned or prebuilt one. To get started, we’ll define an intent that affects the Intensity parameter of a fixture.

The new app’s empty list of intents

We need to provide some examples of what the user might say to trigger this intent, and to make this powerful, we want to give a variety of examples. The most natural is something like “1 @ 50”. This contains numeric symbols because that’s what our consoles’ interfaces provide us with, but if we’re using a voice-to-text solution, we’ll get back “One at fifty”. To solve this, we need to create some entities so that our AI understands that “one” is also 1. Thankfully, to make developers’ lives easier, Microsoft provides a whole host of prebuilt entities, so we can use their number entity rather than build our own.

Adding the prebuilt number entity

Matching numbers is helpful, but we also need to provide information about other lighting-specific entities. Below I define a few SourceTypes, as the offline syntax for my console follows the grammar rule of ‘Source, Command, Destination’.

Creating custom entities

I also provide synonyms, which means that if a lighting designer for some crazy reason calls all lighting fixtures “device”, then we can still accurately calculate the intent. Synonyms are incredibly powerful when building out language understanding, as you’ll see below in the PaletteType entity. I’ve created synonyms for Intensity, which allows designers to say things like “Fixture 1 brightness at 50 percent” rather than knowing that the console thinks of brightness as intensity. I’ve also made sure to misspell Colour for the Americans…

The PaletteType entity and its synonyms

Even with just three entities, our intent is more useful than just setting intensities; we can now handle basic fixture manipulation. For example, “Group 3 @ Position 5” would work correctly with this configuration. For this reason, I renamed the intent to something more sensible (Programmer.SourceAtDestination).

Renaming the intent to Programmer.SourceAtDestination

Training and testing the AI

Before we can use our AI service, we must train it. The more data, the better, but we can get some great results with what we already have.

The trained LUIS app

Below you can see I passed in the utterance of “fixture 10 @ colour 5”.

The test results for “fixture 10 @ colour 5”

The top-scoring intent (we only have one, so it’s a little bit of a cheat) is Programmer.SourceAtDestination. The source type is Fixture, and the number is 10.
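For completeness, here’s a sketch of how an application might query the published model over the LUIS v2 REST endpoint; the app ID, key and region are placeholders. The JSON response carries the query, the topScoringIntent with its confidence score, and the recognised entities:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Query the published LUIS app; the response JSON carries the
    // top-scoring intent plus the entities it recognised.
    public static Task<string> QueryAsync(string utterance)
    {
        var url = "https://westeurope.api.cognitive.microsoft.com" +
                  "/luis/v2.0/apps/<app-id>" +
                  "?subscription-key=<key>" +
                  "&q=" + Uri.EscapeDataString(utterance);

        return Http.GetStringAsync(url);
    }
}

// var json = await LuisClient.QueryAsync("fixture 10 @ colour 5");
```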

What’s next?

Language and conversation will be used in many technologies where it may not be obvious right now. I believe it won’t be long until a lighting control manufacturer releases some form of AI bot or language understanding within their consoles, and these will get better with every day they’re used. Maybe I’ll be first, but I can’t believe no one else has seen this technology and thought about how to build it into a control system, so perhaps we’ll start to see this become the norm in a few years.
Right now it’s early days, but I’d put money on virtual lighting programmers shipping with consoles. What type of personality the virtual programmers have will be down to each manufacturer; I hope they realise that their virtual programmer needs some personality, or it’ll be no more engaging than voice-to-text. I’ve given some thought to my bot’s personality, and it stems from real people: I’m hoping to provide both female and male assistants, and they’ll be named Nick and Rachel.

Takeaways

It’s never too early to start investigating how AI can disrupt your industry. This post focuses on the niche that is the lighting control industry, but this level of disruption will be felt across all industries. Get ahead of the curve and learn how you can shape the future of your industry with Microsoft AI services.