Thin out the presentation and notes
It was a little too close to the time limit
commit 44be5ab82a (parent 2d0c301dad)
4 changed files with 41 additions and 135 deletions
Binary image deleted (683 KiB)
Binary image deleted (489 KiB)
src/index.html (163 lines changed)
@@ -31,7 +31,7 @@
 </section>
 <section data-background-iframe="https://www.youtube-nocookie.com/embed/xiDS58Htuh4?autoplay=1&start=24">
 <aside class="notes" data-markdown>
-  - Student Robotics
+  - I volunteer for Student Robotics
   - Charity to help students get into STEM
   - Autonomous robotics competition
   - 16 - 19 year olds
@@ -40,105 +40,24 @@
 </section>
 </section>
 <section>
-<section data-background-image="https://live.staticflickr.com/8718/17123916289_1cbc4c5210_k.jpg">
+<section data-background-image="https://live.staticflickr.com/2827/32969459924_164c509e20_k.jpg">
-<h1 class="r-fit-text text-shadow">Environment</h1>
 <aside class="notes" data-markdown>
-  - Robots need to sense their environments
+  - Lots of these _things_ dotted around the arena
-  - As humans, we rely quite a lot on sight
+  - What are they?
-  - Competitors, as humans, also do the same
 </aside>
 </section>
-<section data-background-image="https://live.staticflickr.com/2837/33771948196_3cf1b5e3e5_k.jpg">
-<aside class="notes" data-markdown>
-  - Sight is a powerful sense
-  - Ultrasound sensors can't distinguish between objects
-  - Switches are dull
-  - Eyes can detect objects, get distances, colour etc
-  - Only if they know what they're doing
-</aside>
-</section>
-</section>
-<section data-background-image="https://images.unsplash.com/photo-1504639725590-34d0984388bd?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1000&q=80">
-<h2 class="text-shadow">Computer Vision</h2>
-<aside class="notes" data-markdown>
-  - Requires the world of computer vision
-  - Been around since the late 1960s
-  - Universities pioneering early AI
-  - Not machine learning, the old kind
-  - Lots of `if` statements
-  - Some techniques unchanged to this day
-</aside>
-</section>
-<section>
-<section data-background-image="./img/object-recognition.png">
-<h2 class="text-shadow">Object Recognition</h2>
-<h4 class="text-shadow">The <span class="has-text-success">hot new thing</span></h4>
-<aside class="notes" data-markdown>
-  - More modern machine learning
-  - 2 stages
-  - First you train a model
-  - Use that model to detect things
-  - Not quite ideal for our use case...
-</aside>
-</section>
-<section data-background-image="./img/not-hotdog.png">
-<h2 class="text-shadow">Prone to errors</h2>
-<aside class="notes" data-markdown>
-  - Prone to errors
-  - Different lighting / shadows can affect detection
-  - Black box
-</aside>
-</section>
-<section data-background-image="https://images.unsplash.com/photo-1558494949-ef010cbdcc31?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1000&q=80">
-<h2 class="text-shadow">Computationally intensive</h2>
-<aside class="notes" data-markdown>
-  - Tonnes of computation
-  - However it's done once upfront
-  - Fairly fast to detect
-  - Our robots are just Raspberry Pis
-</aside>
-</section>
-<section>
-<h2 class="r-fit-text">What else?</h2>
-<aside class="notes" data-markdown>
-  - What other options do we have?
-</aside>
-</section>
-</section>
-<section>
 <section data-background-image="https://april.eecs.umich.edu/media/apriltag/apriltagrobots_overlay.jpg">
 <h2 class="text-shadow">Fiducial Markers</h2>
 <aside class="notes" data-markdown>
-  - Enter fiducial markers!
+  - Fiducial markers!
   - Look sorta like QR codes
   - Just a single number
   - Simpler, so they're easier to detect
 </aside>
 </section>
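The "easier to detect" claim is concrete: libraries do the whole pipeline in one call. A minimal sketch using OpenCV's ArUco module (the file name and marker family are assumptions, and OpenCV 4.7 moved this API behind `cv2.aruco.ArucoDetector`):

```python
import cv2

# Hypothetical input frame from the robot's camera.
frame = cv2.imread("arena.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Marker family is an assumption; pick whichever dictionary your markers use.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36H11)

# Pre-4.7 API; newer OpenCV: cv2.aruco.ArucoDetector(dictionary).detectMarkers(gray)
corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        print(f"Marker {marker_id}: corners {quad.reshape(4, 2).tolist()}")
```

The rest of the deck unpacks what a call like this does internally.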
-<section data-background-image="https://docs.opencv.org/4.x/singlemarkersaxes.jpg">
-<h2 class="text-shadow">Location</h2>
-<aside class="notes" data-markdown>
-  - Accurately detect edges, see where in our FoV it is
-  - If we know how big it's meant to be, derive distance
-  - We know it's a square, derive angles
-</aside>
-</section>
-<section data-background-image="https://live.staticflickr.com/2827/32969459924_164c509e20_k.jpg">
-<h2 class="text-shadow r-fit-text">We put them <strong>everywhere</strong>!</h2>
-<aside class="notes" data-markdown>
-  - Abundance of sources
-  - Arena walls
-  - Game props (tokens etc)
-  - Location of any of them
-  - If you know where the marker is, you know where you are
-</aside>
-</section>
 <section>
 <h2 class="r-fit-text">How do fiducial markers work?</h2>
 <aside class="notes" data-markdown>
-  - All well and good knowing they exist
-  - Tools out there to help make it easier
-  - Not good enough
   - How does it actually _work_?
 </aside>
 </section>
@@ -158,24 +77,25 @@
   - Images aren't black and white
   - This slide is
   - Markers _are_
-  - Discard the colour so the data we're working with is easier
-  - Grey is still more than enough data
   - Black and white!
-  - Thresholding
+  - Not greyscale
-  - Naive thresholding based on a value
+  - Much less data to be working with
+  - Thresholding achieves this
+  - Naive thresholding based on the entire image
   - Adaptive thresholding looks for hotspots and edges
-  - Useful in future
+  - Useful in future stages
 </aside>
 </section>
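The naive-versus-adaptive distinction in those notes maps onto two OpenCV calls. A minimal sketch (the cut-off value, block size, and file name are assumptions):

```python
import cv2

# Hypothetical greyscale input from the previous step.
gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# Naive: one global cut-off for the whole image. Cheap, but a shadow across
# the marker can push half of it to the wrong side of the threshold.
_, naive = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Adaptive: the cut-off is computed per pixel from its local neighbourhood,
# which copes far better with uneven arena lighting.
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 11, 2)
```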
 <section>
 <section>
 <h2 class="r-fit-text">3. Edge Detection</h2>
 <aside class="notes" data-markdown>
+  - Basically "find the squares"
   - Marker edges are straight lines
-  - Markers are squares
-  - Well, quadrilaterals
   - Filter to find hard edges
   - Are neighbouring pixels sufficiently different from each other?
+  - Markers are squares
+  - Well, quadrilaterals
   - Find collections with 4 sides
 </aside>
 </section>
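A sketch of the "find the squares" step: strong gradients become edges, edges become contours, and contours are filtered down to convex quadrilaterals (the Canny thresholds and the 5% simplification tolerance are assumptions):

```python
import cv2

# Hypothetical binary image from the thresholding step.
binary = cv2.imread("thresholded.png", cv2.IMREAD_GRAYSCALE)

# "Are neighbouring pixels sufficiently different?" is exactly what Canny
# measures: it keeps pixels where the local gradient is strong.
edges = cv2.Canny(binary, 50, 150)

# Group edge pixels into closed outlines (OpenCV 4.x return signature).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

quads = []
for contour in contours:
    # Simplify each outline; a marker should collapse to exactly 4 corners.
    approx = cv2.approxPolyDP(contour, 0.05 * cv2.arcLength(contour, True), True)
    if len(approx) == 4 and cv2.isContourConvex(approx):
        quads.append(approx.reshape(4, 2))
```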
@@ -193,28 +113,28 @@
 </section>
 </section>
 <section data-background-image="https://docs.opencv.org/4.x/singlemarkersoriginal.jpg">
 <h2 class="r-fit-text text-shadow">4. <em>Distortion</em></h2>
 <aside class="notes" data-markdown>
   - Highly unlikely a marker is directly in front of you
   - Want the simplest possible case when decoding
   - Remove the need for special casing later
   - Make our lives easier!
   - Skew the image a bit
   - Same process used for those paper scanning apps
   - Markers are now always straight on
 </aside>
 </section>
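The "skew the image a bit" in those notes is a perspective warp: map the four detected corners onto a fixed square, the same transform paper-scanning apps apply. A sketch (the 64-pixel output size is an assumption):

```python
import cv2
import numpy as np

SIZE = 64  # side length of the straightened marker image, in pixels (assumed)

def straighten(gray, quad):
    """Warp one detected quadrilateral so the marker faces the camera head-on."""
    # quad: 4 corner points, ordered to match dst (clockwise from top-left).
    src = quad.astype(np.float32)
    dst = np.float32([[0, 0], [SIZE - 1, 0], [SIZE - 1, SIZE - 1], [0, SIZE - 1]])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(gray, matrix, (SIZE, SIZE))
```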
 <section data-background-image="https://docs.opencv.org/4.x/bitsextraction2.png" data-background-size="contain">
 <h2 class="r-fit-text text-shadow">5. Bit extraction</h2>
 <aside class="notes" data-markdown>
   - Convert the image into some 1s and 0s
   - Images we have are much higher resolution than the markers
   - We know how many pixels are in a marker
   - Divide it down into cells
   - Add some margins in case we're slightly off
   - Average the remaining pixels in each cell
   - Convert into 2D list
 </aside>
 </section>
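A sketch of the cell-averaging those notes describe, assuming an 8x8 grid (6x6 data bits plus a one-cell black border; the real layout depends on the marker family):

```python
import numpy as np

GRID = 8  # cells per side: 6x6 data bits plus the black border (assumption)

def extract_bits(warped):
    """Reduce the straightened marker image to a GRID x GRID array of 0s and 1s."""
    cell = warped.shape[0] // GRID
    margin = cell // 4  # trim each cell's border in case the warp is slightly off
    bits = np.zeros((GRID, GRID), dtype=int)
    for row in range(GRID):
        for col in range(GRID):
            # Average only the centre of each cell, then call it black or white.
            patch = warped[row * cell + margin:(row + 1) * cell - margin,
                           col * cell + margin:(col + 1) * cell - margin]
            bits[row, col] = 1 if patch.mean() > 127 else 0
    return bits
```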
 <section>
 <h2 class="r-fit-text">6. Decoding</h2>
@@ -235,18 +155,20 @@
 <aside class="notes" data-markdown>
   - And that's it
   - We can now find markers in a given image
+  - ... Almost
   - But what if we wanted to do a little more?
 </aside>
 </section>
 <section data-background-image="https://docs.opencv.org/4.x/singlemarkersaxes.jpg">
 <h2 class="r-fit-text text-shadow">7. Pose Estimation</h2>
 <aside class="notes" data-markdown>
-  - Now know where there's a marker
+  - Now we know where there's a marker
   - Which way is it facing?
   - We know where the corners are
-  - With some calibration, if it knows what a known-size marker looks like, it can be used to determine how far away a marker is
+  - We can get the angle from that
-  - Camera lenses have some distortion (lenses aren't completely flat), so as the image moves around the camera, it skews.
+  - If we know the size of the marker, we can calculate its distance
-  - If we account for that, we can get accurate angles
+  - Some calibration per camera model is required
+  - Done once upfront
   - Out the end, we get rotation and translations from the camera
   - _complicated maths_
   - `solvePnP` in OpenCV (entertaining name)
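The _complicated maths_ bottoms out in a single call. A sketch assuming 100 mm markers and an already-calibrated camera (`camera_matrix` and `dist_coeffs` are the one-off per-model calibration the notes mention):

```python
import cv2
import numpy as np

MARKER_SIZE = 0.1  # physical side length in metres (assumed)

# The marker's corners in its own coordinate frame, centred on its middle.
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

def estimate_pose(image_corners, camera_matrix, dist_coeffs):
    """Rotation and translation of one marker relative to the camera."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  image_corners.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    distance = float(np.linalg.norm(tvec))  # straight-line distance in metres
    return rvec, tvec, distance
```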
@@ -296,9 +218,6 @@
   - Working out what something is
 </aside>
 </section>
-<section data-background-image="https://github.com/ju1ce/April-Tag-VR-FullBody-Tracker/raw/master/images/demo.gif"></section>
-<section data-background-iframe="https://www.youtube-nocookie.com/embed/5iV_hB08Uns?autoplay=1"></section>
-<section data-background-iframe="https://www.youtube-nocookie.com/embed/4sRnVUHdM4A?autoplay=1&start=17"></section>
 </section>
 <section>
 <section>
src/index.js (13 lines changed)
@@ -9,16 +9,3 @@ let deck = new Reveal({
 hash: true
 })
 deck.initialize();
-
-// HACK: Manually transform parcel's hashed file paths for background images
-const FILE_MAPPING = {
-    "./img/not-hotdog.png": new URL("./img/not-hotdog.png", import.meta.url).toString(),
-    "./img/object-recognition.png": new URL("./img/object-recognition.png", import.meta.url).toString(),
-};
-
-for (const src in FILE_MAPPING) {
-    const dest = FILE_MAPPING[src];
-    document.querySelectorAll(`[data-background-image='${src}']`).forEach(e => {
-        e.dataset.backgroundImage = dest;
-    })
-}