Remove some demos and stray slides
parent cbb6d0921e
commit 2deecb2e36
4 changed files with 12 additions and 69 deletions
@@ -1,18 +0,0 @@
-import cv2
-import numpy
-
-cap = cv2.VideoCapture(2)
-
-cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
-cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
-
-while True:
-    _, frame = cap.read()
-    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
-    t1 = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 7)
-    t2 = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 13, 7)
-    t3 = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 23, 7)
-
-    threshed = numpy.concatenate((t1, t2, t3), axis=1)
-    cv2.imshow('threshed', threshed)
-    cv2.waitKey(1)
@@ -1,14 +0,0 @@
-import cv2
-
-cap = cv2.VideoCapture(2)
-
-cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
-cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
-
-while True:
-    _, frame = cap.read()
-    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
-    _, thresh = cv2.threshold(grey, 127, 255, cv2.THRESH_BINARY)
-    cv2.imshow('original', frame)
-    cv2.imshow('thresh', thresh)
-    cv2.waitKey(1)
@@ -72,13 +72,12 @@
 </section>
 <section data-background-image="https://docs.opencv.org/4.x/singlemarkersthresh.png">
 <h2 class="r-fit-text text-shadow">2. Thresholding</h2>
-<p class="fragment text-shadow">Demo 🤞</p>
 <aside class="notes" data-markdown>
 - Images aren't black and white
 - This slide is
 - Markers _are_
 - Black and white!
-- Not greyscale
+- Not even greyscale
 - Much less data to be working with
 - Thresholding achieves this
 - Naive thresholding based on the entire image
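The thresholding notes contrast a naive global threshold with the adaptive mean thresholding used by the deleted demo. As a rough illustration of what `cv2.ADAPTIVE_THRESH_MEAN_C` computes per pixel, here is a numpy-only sketch (not OpenCV's implementation; border handling is simplified to edge replication):

```python
import numpy as np

def adaptive_threshold_mean(grey, block_size=13, c=7, max_value=255):
    """Simplified sketch of adaptive mean thresholding with
    THRESH_BINARY_INV behaviour: a pixel fires when it is at least
    `c` darker than the mean of its block_size x block_size
    neighbourhood."""
    pad = block_size // 2
    padded = np.pad(grey.astype(np.float64), pad, mode='edge')
    # Integral image: each local window sum becomes four lookups
    integral = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    integral = np.pad(integral, ((1, 0), (1, 0)))
    h, w = grey.shape
    s = block_size
    local_sum = (integral[s:s + h, s:s + w] - integral[:h, s:s + w]
                 - integral[s:s + h, :w] + integral[:h, :w])
    local_mean = local_sum / (s * s)
    out = np.zeros((h, w), dtype=np.uint8)
    out[grey <= local_mean - c] = max_value
    return out
```

Because the threshold is local, a dark marker region still stands out under uneven lighting where a single global cutoff would fail.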
@@ -86,7 +85,6 @@
 - Useful in future stages
 </aside>
 </section>
 <section>
-<section>
 <h2 class="r-fit-text">3. Edge Detection</h2>
 <aside class="notes" data-markdown>
@@ -99,19 +97,6 @@
 - Find collections with 4 sides
 </aside>
 </section>
-<section>
-<h2 class="r-fit-text">3a. Contour Refinement</h2>
-<aside class="notes" data-markdown>
-- Only care about the square (ish) ones
-- Get rid of anything else
-- Ignore rectangles, too skewed etc
-- Remove contours within contours
-- Refinement is fast and simple
-- Latter stages are more complex
-- Exclude now whilst it's easy and cheap
-</aside>
-</section>
-</section>
 <section data-background-image="https://docs.opencv.org/4.x/singlemarkersoriginal.jpg">
 <h2 class="r-fit-text text-shadow">4. <em>Distortion</em></h2>
 <aside class="notes" data-markdown>
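The removed contour-refinement notes describe keeping only the square-ish quadrilaterals and discarding skewed or oblong ones. A minimal sketch of such a filter, using a hypothetical `is_square_ish` helper (not the project's code) that checks side-length ratio and corner angles:

```python
import math

def is_square_ish(quad, side_tolerance=0.25, angle_tolerance_deg=20.0):
    """Hypothetical refinement check: keep a 4-point contour only if
    its sides are similar in length and its corners are close to right
    angles, rejecting skewed quads and long rectangles."""
    pts = list(quad)
    if len(pts) != 4:
        return False
    sides = [math.dist(pts[i], pts[(i + 1) % 4]) for i in range(4)]
    if min(sides) == 0 or min(sides) / max(sides) < 1 - side_tolerance:
        return False
    for i in range(4):
        # Angle at corner i between its two adjacent sides
        ax, ay = pts[i - 1]
        bx, by = pts[i]
        cx, cy = pts[(i + 1) % 4]
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (
            math.hypot(*v1) * math.hypot(*v2))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if abs(angle - 90) > angle_tolerance_deg:
            return False
    return True
```

A check like this is cheap per contour, which is why the notes argue for excluding bad candidates here rather than in the more expensive later stages.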
@@ -149,16 +134,6 @@
 - Search "Hamming code" for more
 </aside>
 </section>
-<section data-background-image="https://docs.opencv.org/4.x/singlemarkersdetection.jpg">
-<h2 class="r-fit-text text-shadow">And done!</h2>
-<p class="fragment text-shadow">...Almost</p>
-<aside class="notes" data-markdown>
-- And that's it
-- We can now find markers in a given image
-- ... Almost
-- But what if we wanted to do a little more?
-</aside>
-</section>
 <section data-background-image="https://docs.opencv.org/4.x/singlemarkersaxes.jpg">
 <h2 class="r-fit-text text-shadow">7. Pose Estimation</h2>
 <aside class="notes" data-markdown>
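The 'Search "Hamming code" for more' note refers to the error correction applied when reading marker bits. For illustration only (this is the textbook Hamming(7,4) code with its standard generator and parity-check matrices, not the encoding of any particular marker dictionary), a numpy sketch that encodes 4 data bits into 7 and corrects any single flipped bit:

```python
import numpy as np

# Standard Hamming(7,4): codeword layout [p1, p2, d1, p3, d2, d3, d4]
G = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(data4):
    """Encode 4 data bits into a 7-bit codeword."""
    return (G @ np.asarray(data4)) % 2

def correct(code7):
    """Fix at most one flipped bit; the syndrome spells out the
    1-based error position in binary (first row is the LSB)."""
    code = np.asarray(code7).copy()
    syndrome = (H @ code) % 2
    pos = syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2]
    if pos:
        code[pos - 1] ^= 1
    return code

def decode(code7):
    """Correct, then pull the data bits from positions 3, 5, 6, 7."""
    return correct(code7)[[2, 4, 5, 6]]
```

This redundancy is what lets a marker read survive one misclassified cell per codeword.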
@@ -178,7 +153,7 @@
 </aside>
 </section>
 <section>
-<h2 class="r-fit-text">Another demo 🙏</h2>
+<h2 class="r-fit-text">Demo 🙏</h2>
 </section>
 <section>
 <section>