FirstPython.py
######################################################################################################################
#
# JeVois Smart Embedded Machine Vision Toolkit - Copyright (C) 2017 by Laurent Itti, the University of Southern
# California (USC), and iLab at USC. See http://iLab.usc.edu and http://jevois.org for information about this project.
#
# This file is part of the JeVois Smart Embedded Machine Vision Toolkit. This program is free software; you can
# redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software
# Foundation, version 2. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
# License for more details. You should have received a copy of the GNU General Public License along with this program;
# if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# Contact information: Laurent Itti - 3641 Watt Way, HNB-07A - Los Angeles, CA 90089-2520 - USA.
# Tel: +1 213 740 3527 - itti@pollux.usc.edu - http://iLab.usc.edu - http://jevois.org
######################################################################################################################

import pyjevois
if pyjevois.pro: import libjevoispro as jevois
else: import libjevois as jevois
import cv2
import numpy as np
import math # for cos, sin, etc

## Simple example of FIRST Robotics image processing pipeline using OpenCV in Python on JeVois
#
# This module is a simplified version of the C++ module \jvmod{FirstVision}. It is available with \jvversion{1.6.2} or
# later.
#
# This module implements a simple color-based object detector using OpenCV in Python. Its main goal is also to
# demonstrate full 6D pose recovery of the detected object, in Python.
#
# This module isolates pixels within a given HSV range (hue, saturation, and value of color pixels), does some
# cleanup, and extracts object contours. It looks for a rectangular U shape of a specific size (set by parameters
# \p owm and \p ohm for object width and height in meters). See the screenshots for an example of the shape. It sends
# information about detected objects over serial.
#
# This module usually works best with the camera sensor set to manual exposure, manual gain, manual color balance,
# etc., so that HSV color values are reliable. See the \b script.cfg file in this module's directory for an example of
# how to set the camera settings each time this module is loaded.
#
# This module is provided for inspiration. It has no pretension of actually solving the FIRST Robotics vision problem
# in a complete and reliable way. It is released in the hope that FRC teams will try it out and get inspired to
# develop something much better for their own robot.
44#
45# Using this module
46# -----------------
47#
48# Check out [this tutorial](http://jevois.org/tutorials/UserFirstVision.html) first, for the \jvmod{FirstVision} module
49# written in C++ and also check out the doc for \jvmod{FirstVision}. Then you can just dive in and start editing the
50# python code of \jvmod{FirstPython}.
51#
52# See http://jevois.org/tutorials for tutorials on getting started with programming JeVois in Python without having
53# to install any development software on your host computer.
54#
55# Trying it out
56# -------------
57#
58# Edit the module's file at JEVOIS:/modules/JeVois/FirstPython/FirstPython.py and set the parameters \p self.owm and \p
59# self.ohm to the physical width and height of your U-shaped object in meters. You should also review and edit the other
60# parameters in the module's constructor, such as the range of HSV colors.
61#
# @author Laurent Itti
#
# @displayname FIRST Python
# @videomapping YUYV 640 252 60.0 YUYV 320 240 60.0 JeVois FirstPython
# @videomapping YUYV 320 252 60.0 YUYV 320 240 60.0 JeVois FirstPython
# @email itti\@usc.edu
# @address University of Southern California, HNB-07A, 3641 Watt Way, Los Angeles, CA 90089-2520, USA
# @copyright Copyright (C) 2018 by Laurent Itti, iLab and the University of Southern California
# @mainurl http://jevois.org
# @supporturl http://jevois.org/doc
# @otherurl http://iLab.usc.edu
# @license GPL v3
# @distribution Unrestricted
# @restrictions None
# @ingroup modules
class FirstPython:
    # ###################################################################################################
    ## Constructor
    def __init__(self):
        # HSV color range to use:
        #
        # H: 0=red/do not use because of wraparound, 30=yellow, 45=light green, 60=green, 75=green cyan, 90=cyan,
        # 105=light blue, 120=blue, 135=purple, 150=pink
        # S: 0 for unsaturated (whitish discolored object) to 255 for fully saturated (solid color)
        # V: 0 for dark to 255 for maximally bright
        self.HSVmin = np.array([ 20,  50, 180], dtype=np.uint8)
        self.HSVmax = np.array([ 80, 255, 255], dtype=np.uint8)
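
        # Note: to pick HSV bounds for your own target, one simple host-side approach (illustrative sketch; assumes a
        # saved camera grab named sample.png and a pixel of interest at row 120, column 160) is:
        #
        #   bgr = cv2.imread('sample.png')
        #   hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        #   print(hsv[120, 160])   # prints the H, S, V values of that pixel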

        # Measure your U-shaped object (in meters) and set its size here:
        self.owm = 0.280 # width in meters
        self.ohm = 0.175 # height in meters

        # Other processing parameters:
        self.epsilon = 0.015               # Shape smoothing factor (higher for smoother)
        self.hullarea = ( 20*20, 300*300 ) # Range of object area (in pixels) to track
        self.hullfill = 50                 # Max fill ratio of the convex hull (percent)
        self.ethresh = 900                 # Shape error threshold (lower is stricter for exact shape)
        self.margin = 5                    # Margin from frame borders (pixels)

        # Instantiate a JeVois Timer to measure our processing framerate:
        self.timer = jevois.Timer("FirstPython", 100, jevois.LOG_INFO)

        # CAUTION: The constructor is a time-critical code section. Taking too long here could upset USB timings
        # and/or video capture software running on the host computer. Only init the strict minimum here, and do not
        # use OpenCV, read files, etc.

    # ###################################################################################################
    ## Load camera calibration from JeVois share directory
    def loadCameraCalibration(self, w, h):
        try:
            self.camMatrix, self.distCoeffs = jevois.loadCameraCalibration("calibration", True)
            jevois.LINFO("Loaded camera calibration")
        except Exception:
            jevois.LERROR("Failed to load camera calibration for {}x{} -- IGNORED".format(w, h))
            self.camMatrix = np.eye(3, 3, dtype=np.double)
            self.distCoeffs = np.zeros((5, 1), dtype=np.double)
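            # Note: with this identity fallback, solvePnP() in estimatePose() will still run, but its 6D poses will
            # be unreliable; for meaningful poses, calibrate your camera so that a real calibration loads here.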

    # ###################################################################################################
    ## Detect objects within our HSV range
    def detect(self, imgbgr, outimg = None):
        maxn = 5 # max number of objects we will consider
        h, w, chans = imgbgr.shape

        # Convert input image to HSV:
        imghsv = cv2.cvtColor(imgbgr, cv2.COLOR_BGR2HSV)

        # Isolate pixels inside our desired HSV range:
        imgth = cv2.inRange(imghsv, self.HSVmin, self.HSVmax)
        str1 = "H={}-{} S={}-{} V={}-{} ".format(self.HSVmin[0], self.HSVmax[0], self.HSVmin[1],
                                                 self.HSVmax[1], self.HSVmin[2], self.HSVmax[2])

        # Create structuring elements for morpho maths, once:
        if not hasattr(self, 'erodeElement'):
            self.erodeElement = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
            self.dilateElement = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))

        # Apply morphological operations to cleanup the image noise:
        imgth = cv2.erode(imgth, self.erodeElement)
        imgth = cv2.dilate(imgth, self.dilateElement)
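        # (Erosion followed by dilation is a morphological "opening": it deletes isolated specks smaller than the
        # 2x2 structuring element while roughly preserving the size of larger blobs.)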

        # Detect objects by finding contours:
        contours, hierarchy = cv2.findContours(imgth, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
        str1 += "N={} ".format(len(contours))

        # Only consider the maxn biggest objects by area:
        contours = sorted(contours, key = cv2.contourArea, reverse = True)[:maxn]
        hlist = [ ] # list of hulls of good objects, which we will return
        str2 = ""
        beststr2 = ""

        # Identify the "good" objects:
        for c in contours:
            # Keep track of our best detection so far:
            if len(str2) > len(beststr2): beststr2 = str2
            str2 = ""

            # Compute contour area:
            area = cv2.contourArea(c, oriented = False)

            # Compute convex hull:
            rawhull = cv2.convexHull(c, clockwise = True)
            rawhullperi = cv2.arcLength(rawhull, closed = True)
            hull = cv2.approxPolyDP(rawhull, epsilon = self.epsilon * rawhullperi * 3.0, closed = True)

            # Is it the right shape?
            if (hull.shape != (4,1,2)): continue # 4 vertices for the rectangular convex outline (shows as a trapezoid)
            str2 += "H" # Hull is quadrilateral

            huarea = cv2.contourArea(hull, oriented = False)
            if huarea < self.hullarea[0] or huarea > self.hullarea[1]: continue
            str2 += "A" # Hull area ok

            hufill = area / huarea * 100.0
            if hufill > self.hullfill: continue
            str2 += "F" # Fill is ok

            # Check object shape:
            peri = cv2.arcLength(c, closed = True)
            approx = cv2.approxPolyDP(c, epsilon = self.epsilon * peri, closed = True)
            if len(approx) < 7 or len(approx) > 9: continue # 8 vertices (+/- 1) for a U shape
            str2 += "S" # Shape is ok

            # Compute shape error between the raw contour and its smoothed approximation:
            serr = 100.0 * cv2.matchShapes(c, approx, cv2.CONTOURS_MATCH_I1, 0.0)
            if serr > self.ethresh: continue
            str2 += "E" # Shape error is ok
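            # (matchShapes() compares Hu-moment shape invariants of the two contours; a small value means the
            # smoothed polygon is still a faithful description of the raw contour.)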

            # Reject the shape if any of its vertices gets within the margin of the image bounds. This is to avoid
            # getting grossly incorrect 6D pose estimates as the shape starts getting truncated when it partially
            # exits the camera's field of view:
            reject = False
            for v in c:
                if v[0,0] < self.margin or v[0,0] >= w-self.margin or v[0,1] < self.margin or v[0,1] >= h-self.margin:
                    reject = True
                    break

            if reject: continue
            str2 += "M" # Margin ok

            # Re-order the 4 points in the hull if needed: in the pose estimation code, we will assume vertices
            # ordered as follows:
            #
            #    0|        |3
            #     |        |
            #     |        |
            #    1----------2

            # v10+v23 should be pointing outward of the U more than v03+v12 is:
            v10p23 = complex(hull[0][0,0] - hull[1][0,0] + hull[3][0,0] - hull[2][0,0],
                             hull[0][0,1] - hull[1][0,1] + hull[3][0,1] - hull[2][0,1])
            len10p23 = abs(v10p23)
            v03p12 = complex(hull[3][0,0] - hull[0][0,0] + hull[2][0,0] - hull[1][0,0],
                             hull[3][0,1] - hull[0][0,1] + hull[2][0,1] - hull[1][0,1])
            len03p12 = abs(v03p12)

            # Vector from centroid of U shape to centroid of its hull should also point outward of the U:
            momC = cv2.moments(c)
            momH = cv2.moments(hull)
            vCH = complex(momH['m10'] / momH['m00'] - momC['m10'] / momC['m00'],
                          momH['m01'] / momH['m00'] - momC['m01'] / momC['m00'])
            lenCH = abs(vCH)

            if len10p23 < 0.1 or len03p12 < 0.1 or lenCH < 0.1: continue
            str2 += "V" # Shape vectors ok

            good = (v10p23.real * vCH.real + v10p23.imag * vCH.imag) / (len10p23 * lenCH)
            bad = (v03p12.real * vCH.real + v03p12.imag * vCH.imag) / (len03p12 * lenCH)
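            # (good and bad are the cosines of the angles between vCH and v10+v23 or v03+v12, respectively; the
            # vertex ordering is correct when vCH aligns better with v10+v23 than with v03+v12.)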

            # We reject upside-down detections as those are likely to be spurious:
            if vCH.imag >= -2.0: continue
            str2 += "U" # U shape is upright
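            # (Image y coordinates grow downward, so requiring vCH.imag < -2 means the hull centroid must lie above
            # the shape centroid, i.e., the U opens upward.)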

            # Fix up the ordering of the vertices if needed:
            if bad > good: hull = np.roll(hull, shift = 1, axis = 0)

            # This detection is a keeper:
            str2 += " OK"
            hlist.append(hull)

        if len(str2) > len(beststr2): beststr2 = str2

        # Display any results requested by the user:
        if outimg is not None and outimg.valid():
            if (outimg.width == w * 2): jevois.pasteGreyToYUYV(imgth, outimg, w, 0)
            jevois.writeText(outimg, str1 + beststr2, 3, h+1, jevois.YUYV.White, jevois.Font.Font6x10)

        return hlist

    # ###################################################################################################
    ## Estimate the 6D pose of each of the quadrilateral objects in hlist
    def estimatePose(self, hlist):
        rvecs = []
        tvecs = []

        # Set the coordinate system in the middle of the object, with Z pointing out:
        objPoints = np.array([ ( -self.owm * 0.5, -self.ohm * 0.5, 0 ),
                               ( -self.owm * 0.5,  self.ohm * 0.5, 0 ),
                               (  self.owm * 0.5,  self.ohm * 0.5, 0 ),
                               (  self.owm * 0.5, -self.ohm * 0.5, 0 ) ])
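        # (The order of these four model points matches the hull vertex ordering enforced in detect(): top-left,
        # bottom-left, bottom-right, top-right, with image y growing downward.)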

        for detection in hlist:
            det = np.array(detection, dtype=np.float64).reshape(4,2,1)
            (ok, rv, tv) = cv2.solvePnP(objPoints, det, self.camMatrix, self.distCoeffs)
            if ok:
                rvecs.append(rv)
                tvecs.append(tv)
            else:
                rvecs.append(np.array([ 0.0, 0.0, 0.0 ]))
                tvecs.append(np.array([ 0.0, 0.0, 0.0 ]))

        return (rvecs, tvecs)
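
    # Note: downstream users who want a full rotation matrix or [R|t] transform instead of the Rodrigues rotation
    # vector returned above could convert it with OpenCV (illustrative sketch, not used by this module):
    #
    #   R, _ = cv2.Rodrigues(rvecs[0])   # 3x3 rotation matrix for the first detection
    #   T = np.hstack((R, tvecs[0]))     # 3x4 [R|t] object-to-camera transform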

    # ###################################################################################################
    ## Send serial messages, one per object
    def sendAllSerial(self, w, h, hlist, rvecs, tvecs):
        idx = 0
        for c in hlist:
            # Compute quaternion: FIXME need to check!
            tv = tvecs[idx]
            axis = rvecs[idx]
            angle = (axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2]) ** 0.5

            # This code lifted from pyquaternion from_axis_angle:
            mag_sq = axis[0] * axis[0] + axis[1] * axis[1] + axis[2] * axis[2]
            if (abs(1.0 - mag_sq) > 1e-12): axis = axis / (mag_sq ** 0.5)
            theta = angle / 2.0
            r = math.cos(theta)
            i = axis * math.sin(theta)
            q = (r, i[0], i[1], i[2])

            jevois.sendSerial("D3 {} {} {} {} {} {} {} {} {} {} FIRST".
                              format(tv[0].item(), tv[1].item(), tv[2].item(),   # position
                                     self.owm, self.ohm, 1.0,                    # size
                                     r, i[0].item(), i[1].item(), i[2].item()))  # pose
            idx += 1

    # ###################################################################################################
    ## Draw all detected objects in 3D
    def drawDetections(self, outimg, hlist, rvecs = None, tvecs = None):
        # Show trihedron and parallelepiped centered on object:
        hw = self.owm * 0.5
        hh = self.ohm * 0.5
        dd = -max(hw, hh)
        i = 0
        empty = np.array([ 0.0, 0.0, 0.0 ])

        for obj in hlist:
            # Skip those for which solvePnP failed:
            if np.array_equal(rvecs[i], empty):
                i += 1
                continue

            # Project axis points:
            axisPoints = np.array([ (0.0, 0.0, 0.0), (hw, 0.0, 0.0), (0.0, hh, 0.0), (0.0, 0.0, dd) ])
            imagePoints, jac = cv2.projectPoints(axisPoints, rvecs[i], tvecs[i], self.camMatrix, self.distCoeffs)

            # Draw axis lines:
            jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                            int(imagePoints[1][0,0] + 0.5), int(imagePoints[1][0,1] + 0.5),
                            2, jevois.YUYV.MedPurple)
            jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                            int(imagePoints[2][0,0] + 0.5), int(imagePoints[2][0,1] + 0.5),
                            2, jevois.YUYV.MedGreen)
            jevois.drawLine(outimg, int(imagePoints[0][0,0] + 0.5), int(imagePoints[0][0,1] + 0.5),
                            int(imagePoints[3][0,0] + 0.5), int(imagePoints[3][0,1] + 0.5),
                            2, jevois.YUYV.MedGrey)

            # Also draw a parallelepiped:
            cubePoints = np.array([ (-hw, -hh, 0.0), (hw, -hh, 0.0), (hw, hh, 0.0), (-hw, hh, 0.0),
                                    (-hw, -hh, dd), (hw, -hh, dd), (hw, hh, dd), (-hw, hh, dd) ])
            cu, jac2 = cv2.projectPoints(cubePoints, rvecs[i], tvecs[i], self.camMatrix, self.distCoeffs)

            # Round all the coordinates (they are cast to int at draw time):
            cu = np.rint(cu)

            # Draw the parallelepiped edges: bottom face, top face, then the four verticals:
            edges = [ (0,1), (1,2), (2,3), (3,0), (4,5), (5,6), (6,7), (7,4), (0,4), (1,5), (2,6), (3,7) ]
            for (a, b) in edges:
                jevois.drawLine(outimg, int(cu[a][0,0]), int(cu[a][0,1]), int(cu[b][0,0]), int(cu[b][0,1]),
                                1, jevois.YUYV.LightGreen)

            i += 1

    # ###################################################################################################
    ## Process function with no USB output
    def processNoUSB(self, inframe):
        # Get the next camera image (may block until it is captured) as OpenCV BGR:
        imgbgr = inframe.getCvBGR()
        h, w, chans = imgbgr.shape

        # Start measuring image processing time:
        self.timer.start()

        # Get a list of quadrilateral convex hulls for all good objects:
        hlist = self.detect(imgbgr)

        # Load camera calibration if needed:
        if not hasattr(self, 'camMatrix'): self.loadCameraCalibration(w, h)

        # Map to 6D (inverse perspective):
        (rvecs, tvecs) = self.estimatePose(hlist)

        # Send all serial messages:
        self.sendAllSerial(w, h, hlist, rvecs, tvecs)

        # Log frames/s info (will go to serlog serial port, default is None):
        self.timer.stop()

    # ###################################################################################################
    ## Process function with USB output
    def process(self, inframe, outframe):
        # Get the next camera image (may block until it is captured). To avoid wasting much time assembling a
        # composite output image with multiple panels by concatenating numpy arrays, in this module we use raw YUYV
        # images and fast paste and draw operations provided by JeVois on those images:
        inimg = inframe.get()

        # Start measuring image processing time:
        self.timer.start()

        # Convert input image to BGR24:
        imgbgr = jevois.convertToCvBGR(inimg)
        h, w, chans = imgbgr.shape

        # Get pre-allocated but blank output image which we will send over USB:
        outimg = outframe.get()
        outimg.require("output", w * 2, h + 12, jevois.V4L2_PIX_FMT_YUYV)
        jevois.paste(inimg, outimg, 0, 0)
        jevois.drawFilledRect(outimg, 0, h, outimg.width, outimg.height-h, jevois.YUYV.Black)
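        # (The output is laid out as the live video on the left, the thresholded image pasted to its right by
        # detect() when the output is twice the camera width, plus 12 extra rows below the video for status text.)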

        # Let camera know we are done using the input image:
        inframe.done()

        # Get a list of quadrilateral convex hulls for all good objects:
        hlist = self.detect(imgbgr, outimg)

        # Load camera calibration if needed:
        if not hasattr(self, 'camMatrix'): self.loadCameraCalibration(w, h)

        # Map to 6D (inverse perspective):
        (rvecs, tvecs) = self.estimatePose(hlist)

        # Send all serial messages:
        self.sendAllSerial(w, h, hlist, rvecs, tvecs)

        # Draw all detections in 3D:
        self.drawDetections(outimg, hlist, rvecs, tvecs)

        # Write frames/s info from our timer into the output image (NOTE: does not account for output conversion
        # time):
        fps = self.timer.stop()
        jevois.writeText(outimg, fps, 3, h-10, jevois.YUYV.White, jevois.Font.Font6x10)

        # We are done with the output, ready to send it to host over USB:
        outframe.send()