Jetson Nano 2GB Development Log
Basics
I. Getting to know the hardware (40-pin header, micro-USB port, Ethernet port, two USB 2.0 ports, one USB 3.0 port, USB power port, CSI camera connector, fan header)
sudo sh -c 'echo 200 > /sys/devices/pwm-fan/target_pwm' — control the fan (200 is the target fan speed)
Fan control at startup, with the speed adjusted automatically by temperature: run a Python script in the background on Ubuntu and launch it at boot

#!/usr/bin/python
# Fan-control script: read the SoC temperature and map it to a PWM target speed.
import time

downThres = 48    # below this temperature (deg C) the fan is off
upThres = 58      # at or above this temperature the raw value is written as-is
baseThres = 40    # base PWM value once the fan turns on
ratio = 5         # PWM increase per degree above downThres
sleepTime = 30    # seconds between updates

while True:
    fo = open("/sys/class/thermal/thermal_zone0/temp", "r")
    thermal = int(fo.read(10))
    fo.close()
    thermal = thermal // 1000          # millidegrees -> whole degrees
    if thermal < downThres:
        thermal = 0
    elif thermal >= downThres and thermal < upThres:
        thermal = baseThres + (thermal - downThres) * ratio
    else:
        thermal = thermal
    fw = open("/sys/devices/pwm-fan/target_pwm", "w")
    fw.write(str(thermal))
    fw.close()
    time.sleep(sleepTime)

Save the code above to a file, e.g. /home/dlinano/ZLTech-demo/conf/thermal.py, and add it to the startup applications: run gnome-session-properties to open the startup-application manager and add a new startup command: sudo python3 /home/dlinano/ZLTech-demo/conf/thermal.py &  Done.
II. Flashing the image (format the SD card with SDFormatter, then write the image with Etcher)
III. Connecting a PC to the Jetson Nano
1. Headless connection to the Jetson Nano (micro-USB cable, 192.168.55.1, username dlinano, password dlinano)
2. Connecting to the Jetson Nano with Xshell (board IP, username dlinano, password dlinano); connecting to Wi-Fi from the Linux command line
3. VNC connection to the Jetson Nano (use the create_ap project from GitHub to have the Jetson Nano broadcast a hotspot; command: sudo create_ap wlan0 eth0 <Wi-Fi name> <password>; add it as a startup command via gnome-session-properties)
First start the VNC service (vino-server) over SSH: /usr/lib/vino/vino-server --display=:0
Then connect to 192.168.12.1, username dlinano, password dlinano

git clone https://github.com/oblique/create_ap
cd create_ap/
sudo make install
sudo apt-get install util-linux procps hostapd iproute2 iw haveged dnsmasq
sudo create_ap wlan0 eth0 <Wi-Fi name> <Wi-Fi password>   (add this command to the startup applications)
Fixing the problem of Wi-Fi being unable to connect after starting a hotspot with create_ap on Linux
xrandr --fb <width>x<height> — adjust the screen resolution
jtop — performance-monitoring command
sudo apt install thonny — install the Thonny IDE
IV. File transfer (FileZilla or WinSCP)
V. Problems and solutions:
1. Audio output and input https://blog.csdn.net/weixin_39249334/article/details/110197043
https://blog.csdn.net/xiaolong1126626497/article/details/105828447

Find the audio ports available on the system (use elimination: unplug the USB sound card, then plug it back in, to determine which port belongs to the USB card and which to the built-in audio):

pacmd list | grep "active port"

From the output, the USB sound card's output port is active port: <analog-output-speaker> and its input port is active port: <analog-input-mic>.

Edit the configuration file /etc/pulse/default.pa:

sudo vim /etc/pulse/default.pa

Append the following at the end of the file (analog-output-speaker is the output port name and analog-input-mic the input port name found above):

set-default-sink analog-output-speaker
set-default-source analog-input-mic

Reboot for the change to take effect.

sudo apt-add-repository ppa:yktooo/ppa
sudo apt-get update
sudo apt-get install indicator-sound-switcher

After installation it has to be started manually; an icon then appears at the top right (third from the left).
2. Setting the font size of the Ubuntu main panel
Start -> Preferences -> Desktop Preferences -> click the number and change the font size
3. Fixing dependency errors with apt --fix-broken install

First, as the command-line hint suggests, run sudo apt --fix-broken install. It will then offer to install some dependency packages and ask for confirmation; answer Y.
4. Installing mjpg (implementing an IP camera with MJPEG-streamer)

Install the libraries needed to build MJPEG-streamer:

sudo apt-get update
sudo apt-get install subversion
sudo apt-get install libjpeg8-dev
sudo apt-get install imagemagick
sudo apt-get install libv4l-dev
sudo apt-get install cmake
sudo apt-get install git

Build MJPEG-streamer:

sudo git clone https://github.com/jacksonliam/mjpg-streamer.git
cd mjpg-streamer/mjpg-streamer-experimental
make all
sudo make install

Then open a browser and enter the URL http://<Jetson IP>:8080/?action=stream — you should see the image captured by the camera. Once mjpg is installed, start it to view the camera feed, then add the start command to the startup applications (mind the directory paths):

sudo /home/dlinano/mjpg-streamer/mjpg-streamer-experimental/mjpg_streamer -i "/home/dlinano/mjpg-streamer/mjpg-streamer-experimental/input_uvc.so" -o "/home/dlinano/mjpg-streamer/mjpg-streamer-experimental/output_http.so -w /home/dlinano/mjpg-streamer/mjpg-streamer-experimental/www"
Sensors
Digital, analog, protocol-based, and other sensor types
sudo i2cdetect -y -r 1 — scan for I2C devices
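Once a device shows up in the scan, it can be read from Python with the smbus module; a minimal sketch (the 0x48 device address and register 0x00 are example values only, not from the original notes):

import smbus

bus = smbus.SMBus(1)                       # I2C bus 1, the bus probed by i2cdetect above
value = bus.read_byte_data(0x48, 0x00)     # read one byte from register 0x00 of device 0x48
print(value)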
sudo /opt/nvidia/jetson-io/jetson-io.py — enable the PWM pins
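After PWM has been enabled on a header pin with jetson-io.py, it can be driven from Python with the Jetson.GPIO library; a minimal sketch (pin 33, 50 Hz, and the duty-cycle values are assumptions of mine, not from the original notes):

import time
import Jetson.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)           # use the physical 40-pin header numbering
GPIO.setup(33, GPIO.OUT)           # pin 33 is one of the PWM-capable header pins
pwm = GPIO.PWM(33, 50)             # 50 Hz PWM signal
pwm.start(25)                      # start at 25 % duty cycle
time.sleep(2)
pwm.ChangeDutyCycle(75)            # raise to 75 %
time.sleep(2)
pwm.stop()
GPIO.cleanup()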
OLED display script: /home/jetbot/Notebooks/APP Centrol/jetbot/apps/stats.py

# Copyright (c) 2017 Adafruit Industries
# Author: Tony DiCola & James DeVito
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

import time
import subprocess

import Adafruit_SSD1306
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
#from jetbot.utils.utils import get_ip_address


def get_ip_address(interface):
    if get_network_interface_state(interface) == 'down':
        return None
    cmd = "ifconfig %s | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'" % interface
    return subprocess.check_output(cmd, shell=True).decode('ascii')[:-1]


def get_network_interface_state(interface):
    return subprocess.check_output('cat /sys/class/net/%s/operstate' % interface, shell=True).decode('ascii')[:-1]


# 128x32 display with hardware I2C:
disp = Adafruit_SSD1306.SSD1306_128_32(rst=None, i2c_bus=1, gpio=1)  # setting gpio to 1 is hack to avoid platform detection

# Initialize library.
disp.begin()

# Clear display.
disp.clear()
disp.display()

# Create blank image for drawing.
# Make sure to create image with mode '1' for 1-bit color.
width = disp.width
height = disp.height
image = Image.new('1', (width, height))

# Get drawing object to draw on image.
draw = ImageDraw.Draw(image)

# Draw a black filled box to clear the image.
draw.rectangle((0, 0, width, height), outline=0, fill=0)

# Draw some shapes.
# First define some constants to allow easy resizing of shapes.
padding = -2
top = padding
bottom = height - padding
# Move left to right keeping track of the current x position for drawing shapes.
x = 0

# Load default font.
font = ImageFont.load_default()

while True:
    # Draw a black filled box to clear the image.
    draw.rectangle((0, 0, width, height), outline=0, fill=0)

    # Shell scripts for system monitoring from here:
    # https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
    cmd = "top -bn1 | grep load | awk '{printf \"CPU Load: %.2f\", $(NF-2)}'"
    CPU = subprocess.check_output(cmd, shell=True)
    cmd = "free -m | awk 'NR==2{printf \"Mem: %s/%sMB %.2f%%\", $3,$2,$3*100/$2 }'"
    MemUsage = subprocess.check_output(cmd, shell=True)
    cmd = "df -h | awk '$NF==\"/\"{printf \"Disk: %d/%dGB %s\", $3,$2,$5}'"
    Disk = subprocess.check_output(cmd, shell=True)

    # Write four lines of text.
    draw.text((x, top), "eth0: " + str(get_ip_address('eth0')), font=font, fill=255)
    draw.text((x, top + 8), "wlan0: " + str(get_ip_address('wlan0')), font=font, fill=255)
    draw.text((x, top + 16), str(MemUsage.decode('utf-8')), font=font, fill=255)
    draw.text((x, top + 25), str(Disk.decode('utf-8')), font=font, fill=255)

    # Display image.
    disp.image(image)
    disp.display()
    time.sleep(1)
AI (Artificial Intelligence)
I. JupyterLab (open a browser, enter <IP>:8888 in the address bar, password dlinano)
Using OpenCV with Jupyter: import the packages, create a display widget, open the camera for video capture, then loop forever, processing each frame and showing it in the widget (a minimal sketch follows).
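A sketch of that workflow, assuming the ipywidgets package is available in the notebook and a camera on /dev/video0 (the widget and variable names are my own, not from the original notes):

import cv2
import ipywidgets
from IPython.display import display

image_widget = ipywidgets.Image(format='jpeg')   # display widget shown in the notebook
display(image_widget)

cap = cv2.VideoCapture(0)                        # open the camera
try:
    while True:
        ret, frame = cap.read()                  # grab one frame
        if not ret:
            break
        # process the frame here, then push it into the widget as JPEG bytes
        ok, jpeg = cv2.imencode('.jpg', frame)
        if ok:
            image_widget.value = jpeg.tobytes()
finally:
    cap.release()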
II. Opening the camera locally with OpenCV, without Jupyter (ls -l /dev/video* lists the camera devices present)

1. Opening the CSI camera with OpenCV 4.1.1 on the Jetson Nano (Python)

import cv2

def gstreamer_pipeline(
    capture_width=1280,
    capture_height=720,
    display_width=1280,
    display_height=720,
    framerate=60,
    flip_method=0,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )

cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)

Note: cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER) only creates the capture object; it is not the complete code for opening and reading the camera.

2. Opening a USB camera with OpenCV 4.1.1 on the Jetson Nano (Python)

cap = cv2.VideoCapture(1)   # 1 is the camera index
Note: if mjpg has been installed and the mjpg video stream is set to start at boot, the following problem occurs: AI code that uses /dev/video0 cannot open the camera and raises an error. Fix: write a script such as killmjpg.sh that kills the mjpg process, and call os.system('./killmjpg.sh') before the AI code opens the camera.

#!/bin/bash
sudo kill -9 `ps -elf|grep mjpg |awk '{print $4}'|awk 'NR==1'`
sudo kill -9 `ps -elf|grep mjpg |awk '{print $4}'|awk 'NR==1'`

import cv2
import time
import os

# Run the script that kills the interfering mjpg process
os.system('./killmjpg.sh')

cap = cv2.VideoCapture(0)   # open the camera; maximum resolution 640x480
cap.set(3, 320)             # set the frame width
cap.set(4, 240)             # set the frame height

while 1:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(5) & 0xFF == 27:   # exit on ESC (the key can be changed)
        break

cap.release()
cv2.destroyAllWindows()   # the last two lines are standard cleanup whenever the camera is used
Note: when opening the camera locally, the code below must be added.

After cv2.imshow('frame', frame), add: if cv2.waitKey(5) & 0xFF == 27: break
Text-to-speech with the pyttsx3 module (offline speech engine, pip3 install pyttsx3==2.71) — does not work well

import pyttsx3   # offline speech engine: pip install pyttsx3==2.71

item = '欢迎使用众灵AI智能语音识别系统!!'
engine = pyttsx3.init()
engine.setProperty('rate', 150)
engine.setProperty('voice', 'english+f2')
engine.say(item)
engine.runAndWait()
print('end')
Audio playback with the pygame module

import pygame
#import time

pygame.mixer.init()

def play(filename):
    pygame.mixer.music.load(filename)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():   # check whether music is still playing
        pass
    #time.sleep(5)

if __name__ == '__main__':
    play('0018.wav')
Audio playback with the simpleaudio module

from pydub import AudioSegment
import simpleaudio as sa

def trans_mp3_to_wav(filepath):
    song = AudioSegment.from_mp3(filepath)
    song.export("wav file name and path here", format="wav")

trans_mp3_to_wav("original mp3 file name and path")
wave_obj = sa.WaveObject.from_wave_file("wav file name and path here")
play_obj = wave_obj.play()
play_obj.wait_done()

from pydub import AudioSegment

# filepath is the name of the .mp3 file (a path can also be included)
def trans_mp3_to_wav(filepath):
    song = AudioSegment.from_mp3(filepath)
    song.export("now.wav", format="wav")
Baidu speech API (claim the free quota) sample code (pyaudio recording + speech recognition + speech synthesis)
Installing scipy: https://blog.csdn.net/yaoqi_isee/article/details/75206222

1. First install the dependencies:
sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0
2. Then pyaudio can be installed directly:
sudo pip3 install pyaudio

import os
import time
import pygame
from aip import AipSpeech
import pyaudio_test

""" Your APPID, API_KEY and SECRET_KEY """
APP_ID = '24705618'
API_KEY = 'D9nixLbfFdTp4B7rT378Y67K'
SECRET_KEY = '2Rm19eciqEa9sCUHNyls50QUNtU34qeu'

client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)

# Read a recording, converting it to 16 kHz mono PCM first
def get_file_content(filePath):
    os.system(f"ffmpeg -y -i {filePath} -acodec pcm_s16le -f s16le -ac 1 -ar 16000 {filePath}.pcm")
    with open(f"{filePath}.pcm", 'rb') as fp:
        return fp.read()

# Speech recognition
def audio2text(filepath):
    # Recognize a local file
    res = client.asr(get_file_content(filepath), 'pcm', 16000, {'dev_pid': 1537,})
    print(res.get("result")[0])
    return res.get("result")[0]

# Speech synthesis
def text2audio(text):
    filename = "{}.mp3".format(int(time.time()))
    path = '/tmp/'
    filepath = os.path.join(path, filename)
    result = client.synthesis(text, 'zh', 1, {'vol': 7, "spd": 4, "pit": 7, "per": 4})
    # On success the audio is returned as binary data; on error a dict is returned (see the error codes)
    if not isinstance(result, dict):
        print('start write')
        with open(filepath, 'wb') as f:
            f.write(result)
    return filepath

# Audio playback
def voice(filepath):
    print('voice start')
    pygame.mixer.init()
    pygame.mixer.music.load(filepath)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():   # check whether music is still playing
        pass
    print('voice end')

if __name__ == '__main__':
    filepath = pyaudio_test.rec('1.wav')
    text = audio2text(filepath)
    filename = text2audio(text)
    voice(filename)

import os
import time
import pyaudio
import wave
import baiduaip_test

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 16000
RECORD_SECONDS = 5

def rec(file_name='%s.wav' % time.time()):
    path = '/tmp/'
    filepath = os.path.join(path, file_name)
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=CHUNK)
    f1 = baiduaip_test.text2audio("开始录音")   # announce "recording started"
    baiduaip_test.voice(f1)
    frames = []
    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data = stream.read(CHUNK)
        frames.append(data)
    f2 = baiduaip_test.text2audio("录音结束")   # announce "recording finished"
    baiduaip_test.voice(f2)
    stream.stop_stream()
    stream.close()
    p.terminate()
    wf = wave.open(filepath, 'wb')
    wf.setnchannels(CHANNELS)                    # set the number of channels
    wf.setsampwidth(p.get_sample_size(FORMAT))   # set the sample width
    wf.setframerate(RATE)                        # set the sampling rate
    wf.writeframes(b''.join(frames))             # write the audio frames and the file header
    wf.close()
    return filepath

if __name__ == '__main__':
    filepath = rec()
    print('text: ', filepath)
Vision line following: UART / serial
import serial
ser = serial.Serial('/dev/ttyTHS1', 115200, timeout=1)   # Jetson Nano
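The notes do not spell out what gets sent over the serial port for line following; a minimal sketch of one possible approach (the 'O:<offset>' text frame is purely my own assumption, the real protocol depends on the receiving controller):

import serial

ser = serial.Serial('/dev/ttyTHS1', 115200, timeout=1)   # Jetson Nano UART

def send_offset(offset):
    # Pack the detected line offset (e.g. pixel error from the image centre)
    # into a simple newline-terminated text frame and send it to the receiver.
    frame = 'O:%d\n' % int(offset)
    ser.write(frame.encode('ascii'))

send_offset(-12)   # example: line is 12 pixels left of centre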
Getting jetson-inference running on the Jetson Nano

Follow the official GitHub repository https://github.com/NVIDIA-AI-IOT/jetcam

git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install
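After installation, grabbing a frame with jetcam looks roughly like this (a sketch based on the examples in the jetcam repository; the 224x224 size and device index are example values, not from the original notes):

from jetcam.usb_camera import USBCamera

# Create a camera object for /dev/video0 that returns 224x224 BGR frames
camera = USBCamera(capture_device=0, width=224, height=224)

image = camera.read()    # read a single frame as a numpy array
print(image.shape)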