Add in some speaker notes

Jake Howard 2022-10-28 16:03:22 +01:00
parent 629b77b6b2
commit 4b5a7d5d77
Signed by: jake
GPG key ID: 57AFB45680EDD477

@@ -14,47 +14,133 @@
<p class="has-text-primary is-size-2">🧍 Jake Howard</p>
<p class="has-text-primary is-size-6">🐦 <a class="has-text-primary" href="https://twitter.com/realorangeone">@RealOrangeOne</a></p>
<p class="is-size-4">👨‍💻 Senior Systems Engineer @ <span class="has-text-torchbox">Torchbox</span></p>
<aside class="notes" data-markdown>
- Who am I?
- Systems @ Torchbox
- We're hiring btw
- In my spare time...
</aside>
</section>
<section data-background-image="https://studentrobotics.org/images/content/blog/sr2022/arena.jpg">
<h1 class="r-fit-text text-shadow">I Build Robots*</h1>
<p class="text-shadow">* I help <em>others</em> build robots</p>
<aside class="notes" data-markdown>
- I build robots
- Help _others_ build robots
</aside>
</section>
<section data-background-iframe="https://www.youtube-nocookie.com/embed/xiDS58Htuh4?autoplay=1&start=24">
<aside class="notes" data-markdown>
- Student Robotics
- Charity to help students get into STEM
- Autonomous robotics competition
- 16 - 19 year olds
- Always looking for sponsors
</aside>
</section>
</section>
<section>
<section data-background-image="https://live.staticflickr.com/8718/17123916289_1cbc4c5210_k.jpg">
<h1 class="r-fit-text text-shadow">Robots need to see things</h1>
<h1 class="r-fit-text text-shadow">Environment</h1>
<aside class="notes" data-markdown>
- Robots need to sense their environments
- As humans, we rely quite a lot on sight
- Competitors, being human, rely on it too
</aside>
</section>
<section data-background-image="https://live.staticflickr.com/2837/33771948196_3cf1b5e3e5_k.jpg">
<aside class="notes" data-markdown>
- Sight is a powerful sense
- Ultrasound sensors can't distinguish between objects
- Switches are dull
- Eyes can detect objects, judge distances, colour, etc.
- Only if they know what they're doing
</aside>
</section>
</section>
<section data-background-image="https://images.unsplash.com/photo-1504639725590-34d0984388bd?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1000&q=80">
<h2 class="text-shadow">Computer Vision</h2>
<aside class="notes" data-markdown>
- Enter the world of computer vision
- Been around since the late 1960s
- Universities pioneering early AI
- Not machine learning, the old kind
- Lots of `if` statements
- Some techniques unchanged to this day
</aside>
</section>
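A minimal sketch of the "old kind" of computer vision the notes describe: hand-tuned filters and plain conditionals rather than a learned model, using OpenCV in Python. The input file name and every numeric threshold here are illustrative assumptions, not values from the talk.

```python
# Classical computer vision: no trained model, just filters and
# hand-picked thresholds (all values here are illustrative).
import cv2

frame = cv2.imread("arena.jpg")                 # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # work on intensity only
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)             # Canny edge detector, 1986

# The decision logic is literally a pile of `if` statements on
# pixel statistics, as the notes say.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    if cv2.contourArea(contour) > 500:          # hand-picked size cut-off
        x, y, w, h = cv2.boundingRect(contour)
        print(f"candidate object at ({x}, {y}), {w}x{h}px")
```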
<section>
<section data-background-image="./img/object-recognition.png">
<h2 class="text-shadow">Object Recognition</h2>
<h4 class="text-shadow">The <span class="has-text-success">hot new thing</span></h4>
<aside class="notes" data-markdown>
- More modern machine learning
- 2 stages
- First you train a model
- Use that model to detect things
- Not quite ideal for our use case...
</aside>
</section>
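The two stages from the notes, sketched with torchvision: a model is trained once elsewhere (here we simply load pretrained COCO weights), then reused per frame to detect things. The model choice, image path, and confidence cut-off are assumptions for illustration, not what competitors' robots actually run.

```python
# Stage 1: "training" happens once, offline; we just load the result.
# Stage 2: the trained model is applied to each new image.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("frame.jpg").convert("RGB"))  # placeholder image
with torch.no_grad():
    predictions = model([image])[0]

for label, score in zip(predictions["labels"], predictions["scores"]):
    if score > 0.8:  # confidence threshold: another hand-tuned knob
        print(f"class {label.item()} with confidence {score:.2f}")
```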
<section data-background-image="./img/not-hotdog.png">
<h2 class="text-shadow">Prone to errors</h2>
<aside class="notes" data-markdown>
- Prone to errors
- Different lighting / shadows can affect detection
- Black box
</aside>
</section>
<section data-background-image="https://images.unsplash.com/photo-1558494949-ef010cbdcc31?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1000&q=80">
<h2 class="text-shadow">Computationally intensive</h2>
<aside class="notes" data-markdown>
- Tonnes of computation
- However, training is done once, upfront
- Fairly fast to detect
- Our robots are just Raspberry Pis
</aside>
</section>
<section>
<h2 class="r-fit-text">What else?</h2>
<aside class="notes" data-markdown>
- What other options do we have?
</aside>
</section>
</section>
<section>
<section data-background-image="https://april.eecs.umich.edu/media/apriltag/apriltagrobots_overlay.jpg">
<h2 class="text-shadow">Fiducial Markers</h2>
<aside class="notes" data-markdown>
- Enter fiducial markers!
- Look sorta like QR codes
- Just a single number
- Simpler, so they're easier to detect
</aside>
</section>
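A hedged sketch of marker detection with OpenCV's aruco module, using the ArucoDetector API from OpenCV 4.7+. The AprilTag 36h11 family and the image path are assumptions; the point is that the output is just corner coordinates plus a single ID number per marker, which is why detection is so much simpler than general object recognition.

```python
# Detecting fiducial markers: each marker encodes only a single ID,
# so detection is deterministic corner-finding plus a small bit-grid decode.
import cv2

# AprilTag 36h11 is one common family; the choice here is illustrative.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)  # placeholder
corners, ids, rejected = detector.detectMarkers(gray)

if ids is not None:
    for marker_id, marker_corners in zip(ids.flatten(), corners):
        print(f"marker {marker_id} at corners {marker_corners.reshape(4, 2)}")
```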
<section data-background-image="https://docs.opencv.org/4.x/singlemarkersaxes.jpg">
<h2 class="text-shadow">Pose Estimation</h2>
<h2 class="text-shadow">Location</h2>
<aside class="notes" data-markdown>
- Accurately detect edges, see where in our FoV it is
- If we know how big it's meant to be, derive distance
- We know it's a square, derive angles
</aside>
</section>
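The derivation in the notes, sketched with cv2.solvePnP: because the marker is a square of known physical size, its four detected pixel corners plus the camera intrinsics pin down its full 3D pose. The marker size, corner pixels, and intrinsics below are all placeholder values, not real calibration data.

```python
# Pose estimation: known square + 4 pixel corners + camera intrinsics
# => distance (from the known size) and angles (from the squareness).
import cv2
import numpy as np

MARKER_SIZE = 0.15  # metres; assumed physical size, known in advance
half = MARKER_SIZE / 2

# The marker's corners in its own coordinate frame (a flat square).
object_points = np.array([
    [-half,  half, 0], [ half,  half, 0],
    [ half, -half, 0], [-half, -half, 0],
], dtype=np.float32)

# Pixel corners from the detector, plus intrinsics from calibration
# (all values below are placeholders for illustration).
image_points = np.array([[310, 220], [420, 225], [415, 330], [305, 325]],
                        dtype=np.float32)
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                         dtype=np.float32)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    print(f"distance to marker: {np.linalg.norm(tvec):.2f} m")  # size -> distance
    print(f"rotation (Rodrigues vector): {rvec.ravel()}")       # square -> angles
```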
<section data-background-image="https://live.staticflickr.com/2827/32969459924_164c509e20_k.jpg">
<h2 class="text-shadow r-fit-text">We put them <strong>everywhere</strong>!</h2>
<aside class="notes" data-markdown>
- Abundance of sources
- Arena walls
- Game props (tokens etc)
- Known locations for all of them
- If you know where the marker is, you know where you are
</aside>
</section>
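The last bullet, "if you know where the marker is, you know where you are", as a small homogeneous-transform sketch in numpy: invert the marker pose that solvePnP reports in the camera frame, then compose it with the marker's known position in the arena. All the pose numbers here are made up for illustration.

```python
# Localisation from one marker: solvePnP gives the marker's pose in the
# camera frame; the arena layout gives its pose in the arena frame.
# Composing the two recovers where the camera (i.e. the robot) is.
import cv2
import numpy as np

def to_homogeneous(rvec, tvec):
    """4x4 transform from a Rodrigues rotation vector and a translation."""
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = np.ravel(tvec)
    return T

# marker->camera as solvePnP would return it, and marker->arena from the
# known wall layout (both sets of numbers below are placeholders).
marker_to_camera = to_homogeneous(np.array([0.1, 0.0, 0.0]),
                                  np.array([0.0, 0.0, 2.0]))
marker_to_arena = to_homogeneous(np.zeros(3), np.array([3.5, 0.0, 0.0]))

# camera->arena = marker->arena composed with the inverse of marker->camera
camera_to_arena = marker_to_arena @ np.linalg.inv(marker_to_camera)
print(f"robot position in the arena: {camera_to_arena[:3, 3]}")
```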
<section>
<h2 class="r-fit-text">How do they work?</h2>
<h2 class="r-fit-text">How do fiducial markers work?</h2>
<aside class="notes" data-markdown>
- All well and good knowing they exist
- Tools out there to help make it easier
- Not good enough
- How does it actually _work_?
</aside>
</section>
</section>
</div>