Write the rest of the slides
parent 7478cddec9
commit 2d0c301dad
3 changed files with 210 additions and 5 deletions
src/index.html
@@ -151,9 +151,9 @@
- Image is just pixels (RGB)
</aside>
</section>
<section data-background-image="https://docs.opencv.org/4.x/singlemarkersthresh.png">
<h2 class="r-fit-text text-shadow">2. Thresholding</h2>
<p class="fragment text-shadow">Demo 🤞</p>
<aside class="notes" data-markdown>
- Images aren't black and white
- This slide is
@@ -167,6 +167,200 @@
- Useful in future
</aside>
</section>
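<!--
Speaker-note sketch (not from the talk): one way the thresholding step could look in
Python with OpenCV. The input path and the choice of Otsu's method are assumptions;
ArUco-style detectors often use adaptive thresholding instead.

    import cv2

    frame = cv2.imread("frame.jpg")  # hypothetical input image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Otsu picks a global black/white cut-off automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("binary.png", binary)
-->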
<section>
<section>
<h2 class="r-fit-text">3. Edge Detection</h2>
<aside class="notes" data-markdown>
- Marker edges are straight lines
- Markers are squares
- Well, quadrilaterals
- Filter to find hard edges
- Are neighbouring pixels sufficiently different from each other?
- Find collections with 4 sides
</aside>
</section>
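<!--
Speaker-note sketch (not from the talk): roughly how the edge detection step could be
done with OpenCV. The Canny thresholds and the 5% approximation tolerance are assumed
values, not anything the talk specifies.

    import cv2

    frame = cv2.imread("frame.jpg")  # hypothetical input image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Hard edges: neighbouring pixels that differ enough from each other.
    edges = cv2.Canny(gray, 50, 150)

    # Group edge pixels into outlines, then keep only the 4-sided ones.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.05 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:
            quads.append(approx)
    print(f"Found {len(quads)} quadrilateral candidates")
-->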
<section>
<h2 class="r-fit-text">3a. Contour Refinement</h2>
<aside class="notes" data-markdown>
- Only care about the square (ish) ones
- Get rid of anything else
- Ignore rectangles, too skewed etc
- Remove contours within contours
- Refinement is fast and simple
- Latter stages are more complex
- Exclude now whilst it's easy and cheap
</aside>
</section>
</section>
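<!--
Speaker-note sketch (not from the talk): the kind of cheap checks the refinement step
describes. The area and aspect-ratio limits are made-up values to illustrate the idea.

    import cv2

    def keep_candidate(quad, min_area=500.0):
        """Throw away contours that can't plausibly be markers."""
        if cv2.contourArea(quad) < min_area:   # too small to decode reliably
            return False
        if not cv2.isContourConvex(quad):      # a marker projects to a convex quad
            return False
        x, y, w, h = cv2.boundingRect(quad)
        return 0.5 < w / h < 2.0               # reject anything far too skewed

    # `quad` would come from the previous step; contours nested inside other contours
    # can be dropped by checking the hierarchy returned by cv2.findContours.
-->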
<section data-background-image="https://docs.opencv.org/4.x/singlemarkersoriginal.jpg">
<h2 class="r-fit-text text-shadow">4. <em>Distortion</em></h2>
<aside class="notes" data-markdown>
- Highly unlikely a marker is directly in front of you
- Want the simplest possible case when decoding
- Remove the need for special casing later
- Make our lives easier!
- Skew the image a bit
- Same process used for those paper scanning apps
- Markers are now always straight on
</aside>
</section>
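<!--
Speaker-note sketch (not from the talk): undoing the perspective distortion with a
homography. The corner coordinates and the 64px output size are made up; in practice
they come from the contour step.

    import cv2
    import numpy as np

    frame = cv2.imread("frame.jpg")  # hypothetical input image

    # One detected quad: top-left, top-right, bottom-right, bottom-left.
    corners = np.array([[212, 80], [391, 95], [378, 270], [198, 255]], dtype=np.float32)

    # Map the skewed quad onto a small, perfectly square image ("straight on").
    size = 64
    square = np.array([[0, 0], [size - 1, 0], [size - 1, size - 1], [0, size - 1]],
                      dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(corners, square)
    straightened = cv2.warpPerspective(frame, matrix, (size, size))
-->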
<section data-background-image="https://docs.opencv.org/4.x/bitsextraction2.png" data-background-size="contain">
<h2 class="r-fit-text text-shadow">5. Bit extraction</h2>
<aside class="notes" data-markdown>
- Convert the image into some 1s and 0s
- Images we have are much higher resolution than the markers
- We know how many pixels are in a marker
- Divide it down into cells
- Add some margins in case we're slightly off
- Average the remaining pixels in each cell
- Convert into 2D list
</aside>
</section>
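<!--
Speaker-note sketch (not from the talk): turning the straightened marker image into a
2D list of bits. The 7-cell grid and 20% margin are assumptions to illustrate the idea.

    import cv2

    marker = cv2.imread("straightened.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    _, binary = cv2.threshold(marker, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    cells = 7                    # cells per side, border included (assumed)
    cell_px = binary.shape[0] // cells
    margin = int(cell_px * 0.2)  # ignore cell edges in case we're slightly off

    bits = []
    for row in range(cells):
        bit_row = []
        for col in range(cells):
            cell = binary[row * cell_px:(row + 1) * cell_px,
                          col * cell_px:(col + 1) * cell_px]
            inner = cell[margin:cell_px - margin, margin:cell_px - margin]
            bit_row.append(1 if inner.mean() > 127 else 0)  # average the remaining pixels
        bits.append(bit_row)
-->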
<section>
<h2 class="r-fit-text">6. Decoding</h2>
<aside class="notes" data-markdown>
- Just 1s and 0s now
- 7x7 markers have 2041 combinations
- There are only ~255 possible combinations
- Error checks
- Single pixel flips can be corrected
- Sometimes even more
- Try all 4 rotations
- Search "Hamming code" for more
</aside>
</section>
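<!--
Speaker-note sketch (not from the talk): a brute-force version of the decoding step.
A real dictionary uses proper Hamming codes so flipped bits can be corrected rather
than just tolerated; here `dictionary` is an assumed mapping of id to a flat bit list.

    def rotations(grid):
        """Yield the 2D bit grid in all 4 rotations."""
        for _ in range(4):
            yield grid
            grid = [list(row) for row in zip(*grid[::-1])]  # rotate 90 degrees

    def decode(bits, dictionary, max_errors=1):
        for rotation, grid in enumerate(rotations(bits)):
            flat = [bit for row in grid for bit in row]
            for marker_id, known in dictionary.items():
                distance = sum(a != b for a, b in zip(flat, known))  # Hamming distance
                if distance <= max_errors:
                    return marker_id, rotation
        return None
-->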
<section data-background-image="https://docs.opencv.org/4.x/singlemarkersdetection.jpg">
<h2 class="r-fit-text text-shadow">And done!</h2>
<p class="fragment text-shadow">...Almost</p>
<aside class="notes" data-markdown>
- And that's it
- We can now find markers in a given image
- But what if we wanted to do a little more?
</aside>
</section>
<section data-background-image="https://docs.opencv.org/4.x/singlemarkersaxes.jpg">
<h2 class="r-fit-text text-shadow">7. Pose Estimation</h2>
<aside class="notes" data-markdown>
- Now know where there's a marker
- Which way is it facing?
- We know where the corners are
- With some calibration, knowing what a known-size marker looks like lets us determine how far away it is
- Camera lenses have some distortion (lenses aren't completely flat), so things skew as they move around the frame
- If we account for that, we can get accurate angles
- Out the end, we get a rotation and translation relative to the camera
- _complicated maths_
- `solvePnP` in OpenCV (entertaining name)
- "Perspective 'n' Point"
- Requires calibration for each camera model
- With simpler maths, can be turned into angles, distances etc
</aside>
</section>
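<!--
Speaker-note sketch (not from the talk): the solvePnP call behind pose estimation.
The marker size, image points, camera matrix and zero distortion are all placeholder
values; the real camera matrix and distortion coefficients come from calibration.

    import cv2
    import numpy as np

    marker_size = 0.1  # metres, assumed known

    # The marker's corners in its own coordinate system: a flat square of known size.
    object_points = np.array([
        [-marker_size / 2,  marker_size / 2, 0],
        [ marker_size / 2,  marker_size / 2, 0],
        [ marker_size / 2, -marker_size / 2, 0],
        [-marker_size / 2, -marker_size / 2, 0],
    ], dtype=np.float32)

    # Where those corners appear in the image (hypothetical detection result).
    image_points = np.array([[212, 80], [391, 95], [378, 270], [198, 255]],
                            dtype=np.float32)

    camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
    dist_coeffs = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
-->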
<section>
<h2 class="r-fit-text">Another demo 🙏</h2>
</section>
<section>
<section>
<h2 class="r-fit-text">Endless possibilities 🤯</h2>
<aside class="notes" data-markdown>
- Markers are useful for lots of different purposes
</aside>
</section>
<section>
<h2 class="r-fit-text">A single marker provides:</h2>
<div class="columns">
<div class="column">
<ul>
<li class="fragment">"id"</li>
<li class="fragment">Rotation</li>
<li class="fragment">Bearing</li>
<li class="fragment">Distance</li>
</ul>
</div>
<div class="column">
<ul>
<li class="fragment">Object tracking</li>
<li class="fragment">Localization</li>
<li class="fragment">Identification</li>
</ul>
</div>
</div>
<aside class="notes" data-markdown>
- From just a single marker
- id
- Rotation
- Bearing
- How far away it is
- Could be used for
- Tracking objects
- Working out where you are
- Working out what something is
</aside>
</section>
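<!--
Speaker-note sketch (not from the talk): the "simpler maths" that turns a pose into a
distance and bearing. The translation vector is a made-up example of a solvePnP result.

    import math
    import numpy as np

    tvec = np.array([0.12, -0.03, 1.45])  # marker position relative to the camera (metres)

    distance = float(np.linalg.norm(tvec))                 # straight-line distance
    bearing = math.degrees(math.atan2(tvec[0], tvec[2]))   # angle off the camera's axis
    print(f"{distance:.2f} m away, {bearing:.1f} degrees off-centre")
-->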
<section data-background-image="https://github.com/ju1ce/April-Tag-VR-FullBody-Tracker/raw/master/images/demo.gif"></section>
<section data-background-iframe="https://www.youtube-nocookie.com/embed/5iV_hB08Uns?autoplay=1"></section>
<section data-background-iframe="https://www.youtube-nocookie.com/embed/4sRnVUHdM4A?autoplay=1&start=17"></section>
</section>
<section>
<section>
<h2 class="r-fit-text">Don't do it yourself</h2>
<aside class="notes" data-markdown>
- I've run through this quickly
- Lots of intricacies
- OpenCV has great primitives for these operations
- If it's good enough for JPL on Mars, it's good enough for me
- (and you)
- OpenCV has a built-in marker detection library called ArUco
- Where I got lots of this from
</aside>
</section>
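<!--
Speaker-note sketch (not from the talk): detecting markers with OpenCV's bundled ArUco
module rather than hand-rolling the pipeline. API details vary between OpenCV versions
(4.7+ prefers cv2.aruco.ArucoDetector); this assumes the older module-level functions.

    import cv2

    frame = cv2.imread("frame.jpg")  # hypothetical input image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
    corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary)
    print("Markers I can see:", 0 if ids is None else len(ids))
-->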
<section>
<h2 class="r-fit-text"><code>pip install zoloto</code></h2>
<pre><code data-trim data-noescape>
from pathlib import Path

from zoloto import MarkerType
from zoloto.cameras import Camera

with Camera(marker_type=MarkerType.ARUCO_6X6) as camera:
    for frame in camera:
        print(
            "Markers I can see:",
            len(camera.get_visible_markers(frame=frame))
        )
</code></pre>
<aside class="notes" data-markdown>
- If you're like me
- Know Python
- Lazy
- I wrote a library to help
- Wraps OpenCV's ArUco, but with a much nicer API
</aside>
</section>
</section>
<section data-background-image="https://i.ytimg.com/vi/JICMv4TAFMA/maxresdefault.jpg">
<aside class="notes" data-markdown>
- Now you know how these markers work
- Makes for great geeky pub conversation
</aside>
</section>
<section>
<h2>More‽</h2>
<ul class="r-fit-text">
<li>Slides: <a href="https://slides.jakehoward.tech">slides.jakehoward.tech</a></li>
<li>Student Robotics: <a href="https://studentrobotics.org">studentrobotics.org</a></li>
<li>OpenCV: <a href="https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html">opencv.org</a></li>
<li>AprilTag: <a href="https://april.eecs.umich.edu/software/apriltag">april.eecs.umich.edu</a></li>
<li>🐦 <a class="has-text-primary" href="https://twitter.com/realorangeone">@RealOrangeOne</a></li>
</ul>
<aside class="notes" data-markdown>
- Notes are available online
- Find out more about Student Robotics
- If your company likes sponsoring charities, let's chat
</aside>
</section>
<section>
<h1 class="is-family-code">&lt;/me&gt;</h1>
</section>
</div>
</div>
<script type="module" src="./index.js"></script>
src/index.js
@@ -1,9 +1,10 @@
import Reveal from 'reveal.js';
import Markdown from 'reveal.js/plugin/markdown/markdown.js';
import Notes from 'reveal.js/plugin/notes/notes.js';
import Highlight from 'reveal.js/plugin/highlight/highlight.js';

let deck = new Reveal({
    plugins: [ Markdown, Notes, Highlight ],
    controlsTutorial: false,
    hash: true
})
@@ -1,10 +1,20 @@
$text: white;
$text-strong: white;
$primary: #e85537;
$pre-background: none;
$code-background: none;
$code: $text;
$family-code: var(--r-code-font);

@import "../node_modules/reveal.js/css/reveal";
@import "../node_modules/reveal.js/css/theme/source/night";
@import "../node_modules/reveal.js/plugin/highlight/monokai.css";

// Bulma (the useful bits)
@import "../node_modules/bulma/sass/utilities/_all";
@import "../node_modules/bulma/sass/helpers/_all";
@import "../node_modules/bulma/sass/base/_all";
@import "../node_modules/bulma/sass/grid/_all";


.text-shadow {