For those who don’t know, I’ve been blogging about my electric motorcycle conversion over at https://ebandit.bike. I’ve gotten back into the swing of things now that a lot of places are opening back up after the COVID-mandated shutdowns. I’ve put together a really detailed post explaining the BMS, the motor I selected and the ELV system. You can read all about it on my other blog right here.
I’ve been an eager smart home (or home automation) enthusiast for a number of years now. My end goal is always changing, but it’s generally been to automate as many things as possible and make it as convenient as possible to control all of my lighting and appliances. My smart home system has grown to be quite complex so I’ve started documenting it.
To start with, I’ve put together a system diagram showcasing all of the different components and how everything is connected.
Here is a quick summary of all the different protocols and the different components that rely on them.
ZigBee – CC2531 Dongle (zigbee2mqtt)
Zigbee2mqtt is a fantastic open source project. It aims to bring together products from various companies so they can all use a single hub. Currently most vendors ship a proprietary hub and there’s little compatibility between them. This is surprising given ZigBee is an open standard just like WiFi. Zigbee2mqtt has a set of converters that allow you to add support for almost any device and expose a control/status API over MQTT.
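As a rough illustration of what that MQTT API looks like, here’s a minimal Python sketch. It assumes zigbee2mqtt’s default base topic of “zigbee2mqtt” and a made-up device friendly name (“kitchen_downlight”) – substitute your own.

```python
import json

# Sketch of a zigbee2mqtt "set" command. zigbee2mqtt listens on
# <base_topic>/<friendly_name>/set and expects a JSON payload.
def set_command(friendly_name, state, base_topic="zigbee2mqtt"):
    """Build the topic and JSON payload for a zigbee2mqtt set command."""
    topic = f"{base_topic}/{friendly_name}/set"
    payload = json.dumps({"state": state})
    return topic, payload

topic, payload = set_command("kitchen_downlight", "ON")
print(topic)    # zigbee2mqtt/kitchen_downlight/set
print(payload)  # {"state": "ON"}

# Publishing it with a real client (e.g. paho-mqtt) would look roughly like:
# client = mqtt.Client()
# client.connect("broker-address")
# client.publish(topic, payload)
```

The actual publish is left commented out since it needs a running broker; the topic/payload shape is the interesting part.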
ZigBee is “created on IEEE’s 802.15.4 using the 2.4GHz band and a self-healing true mesh network”. It’s especially ideal for sensor and IoT networks as it is a true mesh network that re-organises itself and relays messages between nodes. It’s also extremely low powered which makes it great for tiny battery powered sensors.
I’m slowly moving all of my ZigBee devices onto this network. This allows me to benefit from having fewer hubs and a bigger, more reliable ZigBee network. Most of my fixed LED downlights have ZigBee light switches that act as repeaters as they’re always powered.
ZigBee – Philips Hue
Although they’re great, I’m migrating away from the Philips Hue lineup to the IKEA range for consistency reasons. Otherwise, the Hue range has the best quality, best functioning smart bulbs I’ve used.
ZigBee – IKEA TRÅDFRI
IKEA’s range of smart lighting products is absolutely fantastic. They are incredibly good value, great quality and work well. You can get a dimmable smart bulb for about $15 AUD!
IPv4 – WiFi/Ethernet
All of the ZigBee hubs connect back to hass.io (or Home Assistant) over a standard IP network using WiFi or Ethernet. There are also various other devices like my air purifier, robot vacuum cleaner, smart thermostat and a couple of WiFi based relays (Sonoff). I try to avoid adding WiFi based IoT devices as a lot of them have serious security vulnerabilities. ZigBee is generally much more secure as a breach of any ZigBee device generally can’t give access to the entire IP network.
Raspberry Pi 4
Home Assistant (or hass.io) runs on my Raspberry Pi 4 and exposes all of the devices in my smart home system to HomeBridge. This makes everything available to the Apple ecosystem via HomeKit. This allows me to use Siri or the Home app on my watch, iPhone, MacBook, iPad or HomePod to control everything that’s part of my smart home system. This is really convenient and Siri is now the primary way I interact with my smart home system and control lights/other devices.
The Raspberry Pi 4 also hosts several services such as a Plex media server and download server. It’s mapped to our NAS, which has 8TB of network-accessible storage for media, backups and other files.
My smart home system is a lot of fun to build and maintain but it’s not for everyone. Hopefully this post has given you some ideas on how to get started or improve your own smart home.
USB Type C is meant to be the answer to all of our problems and be this magic, universal port, right? Well in terms of charging things it’s pretty good. We’ve got the USB Type C PD (power delivery) spec that means my Apple charger will work on my MacBook Pro, my Samsung S9+, Samsung Gear IconX, Nintendo Switch, and most things with a Type C port. In general I’ve had a good experience with USB C being a truly universal solution for charging devices. However, getting a video signal out of a USB Type C port is another story.
I recently purchased a 2018 MacBook Pro (MBP) 15″ and I’ve been trying to work out how to set up my desk. I started investigating different docking stations, USB Type C adapters and cables, etc. I quickly learned that the world of USB Type C/Thunderbolt 3 docks and video adapters is complex and full of confusion. What’s the difference between Thunderbolt 3, USB 3/3.1 and “Thunderbolt 3 compatible” devices? Why do some only support mirroring on macOS but extended displays on Windows? What is USB Type C alternate mode (“alt mode”)? And so on.
I found myself asking so many questions. As a result I quickly fell into a rabbit hole of trying to understand all the different options that are available on the market. I’m going to attempt to summarise everything I’ve learnt, so that you don’t have to go through the same pain.
Thunderbolt 3 vs USB 3/3.1 vs Thunderbolt 3 “Compatible”
I quickly discovered that there are three main types of docks: proper Thunderbolt 3 ones, USB 3/3.1 ones, and Thunderbolt 3 “compatible” adapters/docking stations. The Thunderbolt 3 options seemed far more expensive than their USB 3.1 and “compatible” alternatives. So what gives? The main difference is the way they communicate with your device, whether that’s a laptop like my MBP or a phone like my S9+.
Thunderbolt 3 is a standard that’s been developed by Intel to allow you to connect high bandwidth peripherals such as displays and storage devices. However, because of the high amounts of available bandwidth, it’s also used in many docks or “port replicators”. In fact, with the 40Gbps of bandwidth it has, you can drive two 4k displays at 60hz and still have room leftover for other peripherals.
USB 3/3.1, on the other hand, is just the latest revision of the USB (Universal Serial Bus) protocol that has been around for a long time. Thunderbolt 3 “compatible” devices seem to be just a marketing ploy to make people think they support Thunderbolt 3. Really, they just use the normal USB protocol that Thunderbolt 3 automatically falls back to. USB 3.1 only has 10Gbps of bandwidth compared to Thunderbolt 3’s 40Gbps, which means it doesn’t even have enough for a single 4k 60hz display signal. However, USB 3.1 over Type C has a nice trick up its sleeve which I’ll explain later.
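A quick back-of-the-envelope calculation shows why 10Gbps falls short of a single 4k 60hz signal. This assumes uncompressed 24-bit RGB and ignores blanking intervals and encoding overhead, which only make things worse:

```python
# Uncompressed 4K UHD signal at 60 Hz, 24-bit colour.
width, height = 3840, 2160
refresh_hz = 60
bits_per_pixel = 24  # 8 bits per RGB channel

gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"{gbps:.1f} Gbps")  # ~11.9 Gbps, already over USB 3.1's 10 Gbps
```

So even the raw pixel data alone overshoots the USB 3.1 budget before you account for any other peripherals on the same link.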
Thunderbolt 3 ports are often accompanied by a small lightning icon to signify this. However, my MBP and some other devices don’t always do this. Thunderbolt 3 ports will normally fall back to USB 3/3.1 if that’s the only protocol the device (such as a dock or adapter) supports.
USB Type C Display Output Methods
There are many different ways that USB Type C devices (laptops and docks etc.) output and interpret display signals. I’ll explain the common ones below.
USB 3/3.1 Over Type C With DisplayLink Chip
USB 3/3.1 over Type C docks normally rely on a chip manufactured by a company called DisplayLink (or something similar). These chips use software to encode, compress and send a display signal over the lower bandwidth USB 3/3.1 protocol. Because these chips are software driven, they don’t perform well in demanding applications such as gaming or video editing. They might even struggle with playing some videos. Anything besides general office use is asking for trouble.
DisplayPort/HDMI Over Type C With Alternate Mode
Most cheap USB Type C dongles/adapters rely on a neat trick called USB C alternate mode. Basically, a dongle/adapter/dock can ask a compatible device like a laptop or smartphone to output a non-USB signal at the same time over some unused wires. Some examples of these non-USB signals include HDMI and DisplayPort. Yep, the standard protocol that a HDMI or DisplayPort cable carries can also be carried by the humble USB Type C port.
The way this works is the dongle/dock will ask the output device if it’s able to support HDMI/DisplayPort etc. via alternate mode. If it can, the device starts to output a native HDMI/DisplayPort signal straight from the GPU – no software to get in the way like a DisplayLink chip. These cheap adapters are completely passive, basically just joining the correct wires from the Type C connector to the right places on the HDMI/DisplayPort connector. They don’t manipulate or process the signal.
Part of the DisplayPort standard includes MST – Multi Stream Transport. This handy feature allows you to daisy chain displays, use multiple outputs to drive a high res/refresh rate display, or carry multiple signals to different monitors as a “splitter” from a hub. A lot of docking stations and adapters that support more than one display out rely on MST, which is fine for the most part. However, Apple does not properly support MST in macOS. The only part of MST that’s supported is driving one larger screen from two DisplayPort outputs.
Unfortunately this means a lot of docking stations that work flawlessly in Windows or Linux show a “mirrored” image on both outputs instead of separate images for each. There’s nothing that can be done as a workaround as the problem is macOS fundamentally not supporting it. What this practically means is that some docking stations with multiple display outputs will only show up as a single one in macOS and output the same image on each one.
A Mixture of the Above
You’d think that adapters and dongles would probably pick one of the above methods and stick with it. However, from what I’ve seen most docks that advertise 2 or more outputs rely on some crazy combination of the methods above. Some will have one DisplayPort driven via USB Type C alt mode, and another two with a DisplayLink chip, or two with DisplayPort and MST via USB C Alt mode. This crazy mishmash of implementations and lack of information on product data sheets means it’s difficult for even a tech savvy consumer to work out if something is compatible with their device.
For example, I found this great looking Dell dock for the reasonable price of $200. I was about to buy it when I saw a review saying it only supports one display output on macOS. After looking into this I figured out it was due to the lack of MST support in macOS. I then found a more expensive one for $300 from Lenovo, and thought, sweet, this is it. Apparently it uses DisplayPort via alt mode for one connector and a DisplayLink chip over USB for the other two. This means you get one output with “good” performance while the other two are severely restricted in comparison, thanks to software-driven rendering.
Passive Dongles/Adapters and Cables
I didn’t spend too much time researching this, but there are still a few problems here. Whilst not ideal, someone should be able to plug a USB C to HDMI adapter into a HDMI to DisplayPort adapter then use it with their screen, right? Well not quite. Because USB C video outputs are so varied and inconsistent, it’s unlikely you’ll be able to find the right combination of adapters that will work. It ends up just being easier to buy a new USB Type C adapter for every single type of output you need rather than chaining old ones onto a single Type C to HDMI adapter.
You’d also think that all USB Type C cables are the same right? Well, only certain cables support Thunderbolt 3, and only some cables are rated for higher amounts of power. How do you know? It’s impossible to tell. USB Type C enabled devices are developing into an ecosystem where you have to plug something in and cross your fingers that it all works. This isn’t the way it was meant to be.
Most manufacturers don’t tell you what ungodly mess they’ve got going on inside their products. Because of this complete mishmash, some display outputs will be severely limited in their performance, while the one next to them might be fine. Some docks and adapters may work fine with Windows machines but not with macOS. On top of that, sometimes you can’t tell if a USB Type C port, cable or device is USB 3/3.1, Thunderbolt 3, DisplayPort/HDMI over alt mode compatible, etc. It used to be that if a cable fit, the device and cable were compatible, but that’s no longer the case.
Consumers shouldn’t need to spend hours researching how an adapter or dock is implemented to work out if it’s going to be compatible with their use case and performance needs. This inconsistency and lack of information from manufacturers is a massive problem and is dragging down an otherwise great standard that should be universal and consistent.
P.S. if I’ve left anything out or made any mistakes please let me know in the comments. My head is still spinning from the huge amount of information I’ve processed over the last day while trying to write this.
If you’ve ever played with MQTT, then you’ve probably had issues connecting to your broker. Whether it’s one you’ve set up or you’re using a 3rd party provider like AWS, they should all follow the MQTT protocol. This is mainly for my reference because I can never find it, but below is a list of the standard connack codes that could be returned when you try to connect.
Note these have been directly copied from the official specification. You can see the original by clicking here.
Table 3.1 – Connect Return code values
| Value | Return Code Response | Description |
|-------|----------------------|-------------|
| 0 | 0x00 Connection Accepted | Connection accepted |
| 1 | 0x01 Connection Refused, unacceptable protocol version | The Server does not support the level of the MQTT protocol requested by the Client |
| 2 | 0x02 Connection Refused, identifier rejected | The Client identifier is correct UTF-8 but not allowed by the Server |
| 3 | 0x03 Connection Refused, Server unavailable | The Network Connection has been made but the MQTT service is unavailable |
| 4 | 0x04 Connection Refused, bad user name or password | The data in the user name or password is malformed |
| 5 | 0x05 Connection Refused, not authorized | The Client is not authorized to connect |
| 6-255 | Reserved | Reserved for future use |
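Since I mainly keep this table around for debugging, here’s the same data folded into a small Python lookup you could drop into a connect callback:

```python
# MQTT 3.1.1 CONNACK return codes, straight from the table above.
CONNACK_CODES = {
    0: "Connection accepted",
    1: "Refused: unacceptable protocol version",
    2: "Refused: identifier rejected",
    3: "Refused: server unavailable",
    4: "Refused: bad user name or password",
    5: "Refused: not authorized",
}

def describe_connack(code):
    """Translate a CONNACK return code into a human-readable message."""
    return CONNACK_CODES.get(code, "Reserved for future use")

print(describe_connack(5))   # Refused: not authorized
print(describe_connack(42))  # Reserved for future use
```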
For days I’ve struggled with this new Linux install on a virtual machine on my local network. SSH has been super unreliable, and every time I typed tab for an auto completion the whole thing seemed to lock up for ~30 seconds. It turns out the autocompletion problem was the simplest fix ever! After scouring the internet for ages I found this command.
It’s simple: all it does is update the auto completion database (according to the forum I found it on). What was probably happening is the database got really big and was taking ages to scan through. It beats me why a fresh Ubuntu install had this problem, but at least it’s solved, for now.
If you’ve seen my 3DR Solo xtra large leg extenders post, you might be wondering what I used to attach my Sony a5100 to my Solo. Well, I used the “pretty” face plate thing that comes with the Solo (for use without a gimbal). It has a hard mounted GoPro adaptor on it and for now this will suffice. It’s basically a little right angle GoPro to 1/4″ tripod mount adaptor with an offset 1/4″ mount to roughly centre the a5100 and ensure it’s as small as possible.
You can download my STL file for the print via the link at the end of this post. You can see this mount in action in the photo below. I highly recommend that you take it slow and easy. The “rubber stoppers” that help combat jello/vibrations are designed to take a ~85g GoPro, not a ~400g mirrorless camera. I highly recommend tethering the Sony to the Solo just in case the mount fails.
It’s important to print the mount so that when you look down on the print bed from above you see an L shape. This ensures the layers aren’t parallel to the camera body. It’s extremely weak when printed the other way.
The Sonoff WiFi relays have arrived. I ended up buying ten of them and three motion sensors. My first impression is that they’re tiny and solid. They’re much smaller than I thought, which is a good thing! The case they come in is perfect for mounting inline with something and neatly hides the exposed wires. For comparison, you can see my old LG G4 phone next to it.
On the inside, they look pretty good. The soldering is done well and the gaps between the mains traces are reassuring. As you can also see from the picture below, there are a few header pins. These are the programming pins. Itead has been nice enough to break out the programming pins into headers to make it easier to reprogram the device with your own code.
I’ve had one set up on my desk lamp for the last couple of days. It has been rock solid and hasn’t experienced any drop outs or glitches. This was running their stock firmware, which allowed me to connect it to their app. Although I have no intention of continuing to use their app, it is miles ahead of the Belkin system. For example, switching it on or off happens via the internet almost instantaneously, whereas the Belkin system sometimes takes 10 seconds!
I’ve just finished a recent side project with my friend Kendrick (his GitHub). We built an autonomous car that you can teach how to drive, then it drives around by itself. I did all of the hardware/Arduino software and Kendrick did all of the machine learning software. He called his project Suiron and it’s available on GitHub here. The code running on the Arduino is called car-controller and is available on my GitHub here.
Now that you’ve got the links, feel free to go and have a look. Work through the code and try to figure out how it works. I’ll try to briefly cover the software here but my main focus will be on the hardware. One thing to note is the open source licenses: all my stuff is GPL and Kendrick’s Suiron project is MIT.
This post is more intended as an overview of how the whole thing works. If I get time I might turn it into a tutorial on how to get it working yourself.
Before we begin here is a short video of it in action.
Now onto the fun stuff! How does it work?
These are the main components used.
1) Remote Control Car – we used this car (link) but anything of a similar size will work, as long as it has a standard ESC and steering servo. It comes with a remote control, battery and charger to start with. I recommend buying a new remote control system (link 5 below).
2) Intel NUC – The Raspberry Pi doesn’t really have enough power and is ARM-based. An x86-based processor like the i5 in our NUC is much easier to use for machine learning purposes. The exact one you use doesn’t matter.
3) Webcam – this is how the car sees the lane it’s driving in. Any standard USB webcam that the lens filters below can be attached to should do.
4) Lens filters – if you are operating in any sunlight, you will want a polarising and an ND (Neutral Density) filter. The camera just can’t cope with the harsh sunlight and shadows, so these filters help bring the conditions down to something much more manageable. A variable ND is great as it lets you adjust the “darkness” level.
5) Radio control system – if you intend on doing lots of this stuff then get an FrSky TARANIS. You won’t be disappointed. Otherwise, a Turnigy 9XR will work just as well. Make sure you get a receiver too if it isn’t included.
6) You’ll also need an Arduino. I like the Arduino Nano because they’re super cheap and have on-board USB.
I won’t go into details on how to wire everything as this isn’t a tutorial. However, if you need some help drop a comment below. I suggest you learn how an ESC (electronic speed controller) works together with a motor, receiver, servo and battery. This is a standard setup on normal remote control cars. Once you understand that, you should look at Arduinos and how to use them to blink lights and read inputs. Read through the Arduino code and the wiring should be pretty self explanatory.
How it all fits together
It’s up to you how you put everything together. I recommend trying to keep everything as low as possible for better stability when driving. The webcam needs to be mounted up high so it has a better chance of seeing the lane that it’s in. I just used a square bit of balsa wood as it’s really light and strong, then glued the webcam to it. Instead of explaining exactly how I mounted everything I’ll dump a few pictures here. All the white things are 3D printed, but you could easily do it without a 3D printer.
The importance of a polarising filter cannot be overstated. It reduces reflections and the harsh glare sometimes encountered. In the image below (credit) you can see how much of a difference a polarising filter can make. Now water is a bit of an extreme example, but I chose that picture so it’s easier to demonstrate the difference. In reality, where we’re operating the difference won’t be so obvious.
The neutral density filter is equally or even more important than the polarising filter. The ND filter is basically like sunglasses for the webcam. The webcam doesn’t like really harsh light, so the filter reduces the intensity of it without interfering with the image too much. The picture below (credit Wikipedia) shows how much better the right ND filter can make an image in harsh light.
I suggest making the lens filters removable as they will make the image too dark in lower lighting situations. For example, it was perfect at midday but much too dark a few hours later just before dusk. I made a simple mount that just uses an alligator clip to hold the filters in place. The filters are both glued together then onto a small 3D printed right angle mount.
The diagram below shows how everything is hooked up. Basically the Arduino is the “brains of the hardware”. It reads in the values from the R/C receiver (bottom left) and then decides what to do based on the mode channel. Dig through the Arduino code (link) and see exactly how. Basically there are 3 modes: manual, autonomous and emergency stop.
In manual mode the Arduino reads in the steering and motor values and passes them straight to the motor and steering servo. In this mode, with the right flag enabled, it also sends those values back over UART every time it receives a character. (sending only on request prevents the serial buffer getting full and “lagging”) In autonomous mode the Arduino reads inputs over UART from the NUC. In this mode it receives two messages, steer,x and motor,x, where x is the value you want to set. It then writes those outputs to the steering servo or motor. Finally, emergency stop kills the motor output and straightens the steering servo. This emergency stop overrides any sort of manual or autonomous control.
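To make the autonomous-mode protocol concrete, here’s a tiny Python sketch of the NUC side. The steer,x/motor,x message format is from above; the newline framing and serial settings are my assumptions, so check car-controller’s source for the real details:

```python
# Build a command message in the steer,x / motor,x format the Arduino
# parses in autonomous mode (newline termination assumed).
def make_command(channel, value):
    """Format a control message for the Arduino."""
    if channel not in ("steer", "motor"):
        raise ValueError(f"unknown channel: {channel}")
    return f"{channel},{int(value)}\n"

print(make_command("steer", 1500), end="")  # steer,1500
print(make_command("motor", 1560), end="")  # motor,1560

# With pyserial, the NUC would write these out something like:
# port = serial.Serial("/dev/ttyUSB0", 115200)
# port.write(make_command("steer", 1500).encode())
```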
The Machine Learning Part
This isn’t my expertise so I’ll briefly summarise what it’s doing. (not really how it’s doing it, no one really knows) We used a library called TensorFlow, an open source machine learning library published by Google and released under an Apache license. It has a nice Python API and a “no nonsense” C++ API.
This is a really short summary of the whole process. Each time a video frame is recorded, Suiron (the software on the NUC) asks car-controller (the software on the Arduino) what the human operator is doing. Remember, in manual mode the human operator is driving the car around. Car-controller responds by sending the current steering and motor values back to Suiron. Suiron takes these values and saves them along with a processed version of the frame.
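The recording loop can be sketched roughly like this. The helper names are hypothetical stand-ins, not Suiron’s actual functions:

```python
# Sketch of the data-collection loop: grab a frame, ask the Arduino
# what the human is doing, and save the pair for later training.
def record_session(get_frame, get_controls, n_samples):
    """Collect (frame, steering, motor) samples for later training."""
    samples = []
    for _ in range(n_samples):
        frame = get_frame()               # processed webcam frame
        steering, motor = get_controls()  # current human inputs, via UART
        samples.append((frame, steering, motor))
    return samples

# Stub out the hardware so the loop can be demonstrated anywhere.
data = record_session(lambda: "frame", lambda: (1500, 1560), 3)
print(len(data))  # 3
```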
This process happens at about 30Hz (or 30 times per second) for as long as you record data. In the final model, we used about 20 minutes worth of training data. That is 20 minutes of continuously driving around the track. It may not seem like a lot but it gets repetitive very quickly. 😉 In reality, 20 minutes is nowhere near enough data. It works great on this particular track with similar lighting conditions but would likely fail if the conditions changed too much.
Again, I’m not an expert at this but I’ll try to briefly explain how the training works. Convolutional Neural Networks (CNNs) are weird in the way they work. It’s impossible to know exactly how or why a CNN works. Basically, we’re giving TensorFlow the frame and two numbers (steering and motor), then asking it to work out how the frame relates to those two numbers. After giving it hundreds of thousands of examples (frames) it can try to generalise a model.
Because of the amount of computing power required, it takes a very long time to train a good model. Due to the type of calculations involved, TensorFlow runs much faster on a dedicated GPU. Even with only 20 minutes of data our model took half a day to train properly. The training took place on a desktop with a borrowed GTX 980, a GPU that’s towards the higher end of consumer graphics cards.
Using the model
You can see it in action in the gif below. The blue line is what the model thinks it should do, the green line is what I actually did when I was steering it. Note that this data was not included in the training set; this is to ensure the model generalises to data it hasn’t seen before.
Once it has been trained we can use the model. Basically, what happens is we grab a frame from the webcam, pass it to TensorFlow, and ask it to run it through the model. The model then spits out what it thinks our two values should be, one for steering and one for throttle. At the moment the throttle is unused and the car runs at a constant speed. However, we thought we’d include it just in case we wanted to use it in the future.
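That inference loop can be sketched like this, again with hypothetical stand-ins. The real model is the trained TensorFlow CNN and the real output goes over serial to the Arduino:

```python
# One autonomous-mode step: predict controls from a frame and send a
# command to the Arduino (steer,x format described earlier).
def drive_step(frame, model, send_command):
    """Predict (steering, throttle) for a frame and command the car."""
    steering, throttle = model(frame)
    send_command(f"steer,{int(steering)}\n")
    # Throttle is currently unused; the car runs at a constant speed.
    return steering, throttle

# Stand-in model, and a list standing in for the serial port.
sent = []
steering, throttle = drive_step("frame", lambda f: (1480, 1550), sent.append)
print(sent[0], end="")  # steer,1480
```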
Update: Clive from hobbyhelp.com reached out to me after seeing this. He’s got a pretty cool “Ultimate beginners guide to RC cars” article on his website here. I recommend checking it out if you want to get started doing something similar to this project.
I’ve discovered a really cool product that is cheaper to buy than what I was making them for. Plus it looks a lot neater and is probably safer seeing as I’m not a qualified electrician. 😉
These Sonoff “smart switches” (link) are exactly what my home automation system is looking for. Basically, each Sonoff switch has a mains to 3.3v regulator, relay, ESP8266 and a button/LED all on board. For about $5. The manufacturer has even broken out the serial pins so it’s easy to upload your own code. I’ve bought about 10 of these little devices after hearing great reviews about them from the internet.
I intend on automating as much of my home as I can. I’m going to make all the automation switches MQTT compliant, which makes it easier to expand and/or change things around later. I’m going to be making a personal companion (much like Siri or Alexa) that can answer useful questions and do some cool things around the house. Eventually, small remote control modules, likely running Raspberry Pi Zeros, will be placed around the house so you can pick one up and ask the house to do things.
I’ll post an update when the Sonoff modules arrive and post heaps of pictures!
I strongly believe in the philosophy of open source and free software. Most of the projects and code I publish on this site and my GitHub are released under the GNU GPL v3 or later license. What is this GNU GPL you say? Well, it’s a type of software license you must abide by. If you’ve ever downloaded a program that asks you to accept something, it’s likely the license agreement. The GNU GPL is great, it lets anyone do anything they want with the software, as long as they pass along the same freedoms.
Free software generally has no price attached to it. This means you can download and use the software at no direct monetary cost. However, the greatest benefit is having the ability to modify the code and make changes. This allows you to improve the software and release an even better version for other people to use. This may include adding new features, or fixing problems like bugs and security flaws.
That was a quick overview of what “free” software is and why I love it so much. For more information and some great reading I suggest checking out the GNU project’s website by clicking the link: www.gnu.org