Jetson Nano 2GB Development Notes



 

Basics

    1. Getting to know the hardware (40-pin header, micro-USB port, Ethernet port, 2× USB 2.0 ports, USB 3.0 port, USB power port, CSI camera connector, fan header)

        sudo sh -c 'echo 200 >/sys/devices/pwm-fan/target_pwm'     # set the fan speed (200 is the PWM value, range 0-255)

      Fan auto-start with temperature-based speed control   (see: running a Python script in the background on Ubuntu and starting it at boot)

#!/usr/bin/python3
# Fan-control loop: below downThres the fan is off; between downThres and
# upThres the PWM ramps linearly from baseThres; above upThres, full speed.
import time

downThres = 48    # temperature (deg C) below which the fan is off
upThres = 58      # temperature (deg C) above which the fan runs at full speed
baseThres = 40    # base PWM value at downThres
ratio = 5         # PWM increase per degree C
sleepTime = 30    # seconds between updates

while True:
    with open("/sys/class/thermal/thermal_zone0/temp", "r") as fo:
        thermal = int(fo.read(10)) // 1000   # sysfs reports millidegrees

    if thermal < downThres:
        pwm = 0
    elif thermal < upThres:
        pwm = baseThres + (thermal - downThres) * ratio
    else:
        pwm = 255   # full speed

    with open("/sys/devices/pwm-fan/target_pwm", "w") as fw:
        fw.write(str(pwm))

    time.sleep(sleepTime)

Save the code above to a file such as /home/dlinano/ZLTech-demo/conf/thermal.py, then add it to the autostart list:
Run gnome-session-properties to open the startup-applications manager and add a new startup command: sudo python3 /home/dlinano/ZLTech-demo/conf/thermal.py &
Fan auto-start and automatic speed control
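The temperature-to-PWM mapping used by the script can be factored into a pure function, which makes the fan curve easy to check offline. This is a sketch mirroring the thresholds above; using 255 as the full-speed value is an assumption based on target_pwm's usual 0-255 range:

```python
def temp_to_pwm(temp_c, down=48, up=58, base=40, ratio=5, full=255):
    """Map a temperature in degrees C to a target_pwm value (0-255)."""
    if temp_c < down:
        return 0                                  # cool enough: fan off
    if temp_c < up:
        return base + (temp_c - down) * ratio     # linear ramp
    return full                                   # hot: full speed

# Sample points on the curve:
print(temp_to_pwm(40))   # 0
print(temp_to_pwm(48))   # 40
print(temp_to_pwm(53))   # 65
print(temp_to_pwm(60))   # 255
```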

 

    2. Flashing the image (format the SD card with SDFormatter, then write the image with Etcher)

        Weiyun resources / Waveshare tutorials

    3. Connecting a PC to the Jetson Nano

      1. Headless connection (micro-USB cable, address 192.168.55.1, username dlinano, password dlinano)

      2. Connecting with Xshell (by IP; username dlinano, password dlinano)   (see: connecting to WiFi from the Linux command line)

      3. VNC connection (use the create_ap project from GitHub to make the Jetson Nano broadcast a hotspot: sudo create_ap wlan0 eth0 <SSID> <password>; set it to run at boot via gnome-session-properties)

        First start the VNC service (vino-server) over SSH: /usr/lib/vino/vino-server --display=:0

        192.168.12.1   username dlinano   password dlinano

git clone https://github.com/oblique/create_ap

        cd create_ap/

        sudo apt-get install util-linux procps hostapd iproute2 iw haveged dnsmasq

        sudo make install

        sudo create_ap wlan0 eth0 <SSID> <password> (set this command to run at boot)
create_ap

        (see: fixing the inability to connect to WiFi after opening a hotspot with create_ap on Linux)

        (see: three ways to configure autostart on Ubuntu 18.04)

        xrandr --fb <width>x<height>       adjust the screen resolution

        jtop          performance-monitoring command

        sudo apt install thonny        install the Thonny IDE

    4. File transfer (FileZilla or WinSCP)

    5. Problems and solutions:

      1. Audio input and output   https://blog.csdn.net/weixin_39249334/article/details/110197043

               https://blog.csdn.net/xiaolong1126626497/article/details/105828447

List the audio ports currently available on the system (use elimination: unplug the USB sound card, check, then plug it back in, to determine which port belongs to the USB device and which is built in):
pacmd list | grep "active port"
From the output, the USB sound card's output port is active port: <analog-output-speaker> and its input port is active port: <analog-input-mic>


Edit the PulseAudio configuration file /etc/pulse/default.pa:
sudo vim /etc/pulse/default.pa
Append the following at the end of the file (analog-output-speaker is the output port name and analog-input-mic the input port, both found above):
set-default-sink analog-output-speaker
set-default-source analog-input-mic

Reboot for the changes to take effect
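The active-port lines printed by pacmd can also be pulled out programmatically; a minimal sketch (the sample text imitates pacmd output and is illustrative only):

```python
import re

def active_ports(pacmd_output):
    """Extract port names from lines like 'active port: <analog-output-speaker>'."""
    return re.findall(r"active port: <([^>]+)>", pacmd_output)

sample = """\
        active port: <analog-output-speaker>
        active port: <analog-input-mic>
"""
print(active_ports(sample))   # ['analog-output-speaker', 'analog-input-mic']
```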
Setting the default audio input/output devices
sudo apt-add-repository ppa:yktooo/ppa
sudo apt-get update
sudo apt-get install indicator-sound-switcher


After installation, start it manually; an icon appears in the top-right corner (third from the left)
Audio

      2. Setting the Ubuntu desktop font size

        Start menu - Preferences - Desktop Preferences - click the number to change the font size

      3. Fixing the error: apt --fix-broken install

First, as the command line suggests, run

sudo apt --fix-broken install

This will ask to install some dependency packages; answer Y when prompted
Fixing the error: apt --fix-broken install

       4. Installing mjpg (IPCamera via MJPEG-streamer)

Install the libraries needed to build MJPEG-streamer
sudo apt-get update
sudo apt-get install subversion
sudo apt-get install libjpeg8-dev
sudo apt-get install imagemagick
sudo apt-get install libv4l-dev
sudo apt-get install cmake
sudo apt-get install git

Build MJPEG-streamer
git clone https://github.com/jacksonliam/mjpg-streamer.git
cd mjpg-streamer/mjpg-streamer-experimental
make all
sudo make install

Then open a browser at the following URL; you should see the camera feed:
http://<jetson-ip>:8080/?action=stream


After installing mjpg, start it to see the camera feed; then add the streaming command to the boot autostart (mind the absolute paths):
sudo /home/dlinano/mjpg-streamer/mjpg-streamer-experimental/mjpg_streamer -i "/home/dlinano/mjpg-streamer/mjpg-streamer-experimental/input_uvc.so" -o "/home/dlinano/mjpg-streamer/mjpg-streamer-experimental/output_http.so -w /home/dlinano/mjpg-streamer/mjpg-streamer-experimental/www"
IPCamera on the Jetson Nano via MJPEG-streamer
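The long launch command above is easy to mistype; a small helper can assemble it from the install directory. This is a sketch; -w (web root) and -p (port) are standard output_http.so options, 8080 being the default port:

```python
import os

def mjpg_command(base_dir, port=8080):
    """Assemble the mjpg_streamer launch command shown above from its install dir."""
    streamer = os.path.join(base_dir, "mjpg_streamer")
    input_so = os.path.join(base_dir, "input_uvc.so")
    output_so = os.path.join(base_dir, "output_http.so")
    www = os.path.join(base_dir, "www")
    # -i selects the input plugin, -o the output plugin; the output plugin's
    # own options (-w web root, -p port) go inside the quoted -o argument
    return (f'{streamer} -i "{input_so}" '
            f'-o "{output_so} -w {www} -p {port}"')

print(mjpg_command("/home/dlinano/mjpg-streamer/mjpg-streamer-experimental"))
```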

 

 

 

Sensors

    Digital, analog, protocol-based, and other sensor types

      sudo i2cdetect -y -r 1      list the devices on I2C bus 1

      sudo /opt/nvidia/jetson-io/jetson-io.py       enable PWM on the header pins

      OLED     /home/jetbot/Notebooks/APP Centrol/jetbot/apps/stats.py

# Copyright (c) 2017 Adafruit Industries
# Author: Tony DiCola & James DeVito
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import time

import Adafruit_SSD1306

from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
#from jetbot.utils.utils import get_ip_address

import subprocess

def get_ip_address(interface):
    if get_network_interface_state(interface) == 'down':
        return None
    cmd = r"ifconfig %s | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'" % interface
    return subprocess.check_output(cmd, shell=True).decode('ascii')[:-1]


def get_network_interface_state(interface):
    return subprocess.check_output('cat /sys/class/net/%s/operstate' % interface, shell=True).decode('ascii')[:-1]


# 128x32 display with hardware I2C:
disp = Adafruit_SSD1306.SSD1306_128_32(rst=None, i2c_bus=1, gpio=1) # setting gpio to 1 is hack to avoid platform detection

# Initialize library.
disp.begin()

# Clear display.
disp.clear()
disp.display()

# Create blank image for drawing.
# Make sure to create image with mode '1' for 1-bit color.
width = disp.width
height = disp.height
image = Image.new('1', (width, height))

# Get drawing object to draw on image.
draw = ImageDraw.Draw(image)

# Draw a black filled box to clear the image.
draw.rectangle((0,0,width,height), outline=0, fill=0)

# Draw some shapes.
# First define some constants to allow easy resizing of shapes.
padding = -2
top = padding
bottom = height-padding
# Move left to right keeping track of the current x position for drawing shapes.
x = 0

# Load default font.
font = ImageFont.load_default()


while True:

    # Draw a black filled box to clear the image.
    draw.rectangle((0,0,width,height), outline=0, fill=0)

    # Shell scripts for system monitoring from here : https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load
    cmd = "top -bn1 | grep load | awk '{printf \"CPU Load: %.2f\", $(NF-2)}'"
    CPU = subprocess.check_output(cmd, shell = True )
    cmd = "free -m | awk 'NR==2{printf \"Mem: %s/%sMB %.2f%%\", $3,$2,$3*100/$2 }'"
    MemUsage = subprocess.check_output(cmd, shell = True )
    cmd = "df -h | awk '$NF==\"/\"{printf \"Disk: %d/%dGB %s\", $3,$2,$5}'"
    Disk = subprocess.check_output(cmd, shell = True )

    # Write four lines of text: IP addresses and usage stats.

    draw.text((x, top),       "eth0: " + str(get_ip_address('eth0')),  font=font, fill=255)
    draw.text((x, top+8),     "wlan0: " + str(get_ip_address('wlan0')), font=font, fill=255)
    draw.text((x, top+16),    str(MemUsage.decode('utf-8')),  font=font, fill=255)
    draw.text((x, top+25),    str(Disk.decode('utf-8')),  font=font, fill=255)

    # Display image.
    disp.image(image)
    disp.display()
    time.sleep(1)
stats.py

 

AI

    1. JupyterLab (open a browser at IP:8888, password dlinano)

      JupyterLab + OpenCV recipe: import the packages, create a display widget, open the camera capture, then loop forever, processing each frame and showing it in the widget

    2. Opening the camera locally with OpenCV, without Jupyter (ls -l /dev/video* lists the available camera devices)

      Opening CSI and USB cameras on the Jetson Nano

1. Opening a CSI camera with OpenCV 4.1.1 on the Jetson Nano (Python)

import cv2
 
 
def gstreamer_pipeline(
    capture_width=1280,
    capture_height=720,
    display_width=1280,
    display_height=720,
    framerate=60,
    flip_method=0,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )
 
cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
Note: cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER) only creates the capture object; by itself it is not complete camera code




2. Opening a USB camera with OpenCV 4.1.1 on the Jetson Nano (Python)

cap = cv2.VideoCapture(1)

Here 1 is the camera device index
Opening both kinds of cameras

      Note: if mjpg is installed and the video stream auto-starts at boot, AI code that opens /dev/video0 will fail with an error. Solution: write a script such as killmjpg.sh that kills the mjpg process, and call os.system('./killmjpg.sh') before the AI code opens the camera

#!/bin/bash
# Kill the first process whose command line matches "mjpg"; running the same
# line twice also catches a second matching process (e.g. the sudo wrapper)

sudo kill -9 `ps -elf|grep mjpg |awk '{print $4}'|awk 'NR==1'`

sudo kill -9 `ps -elf|grep mjpg |awk '{print $4}'|awk 'NR==1'`
killmjpg.sh
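The PID extraction in the script can be mirrored in Python to make the field handling explicit: ps -elf puts the PID in the fourth column, and grep matches its own process too (the sample text below is illustrative, not real ps output):

```python
def mjpg_pids(ps_output):
    """Return the PIDs (4th field of `ps -elf` lines) of lines mentioning 'mjpg'."""
    pids = []
    for line in ps_output.splitlines():
        fields = line.split()
        if "mjpg" in line and len(fields) > 3:
            pids.append(int(fields[3]))
    return pids

sample = (
    "4 S root 6512 6200 ... sudo mjpg_streamer -i input_uvc.so\n"
    "4 S root 6513 6512 ... mjpg_streamer -i input_uvc.so\n"
    "0 S dlinano 6600 6200 ... grep mjpg\n"
)
print(mjpg_pids(sample))   # [6512, 6513, 6600]
```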
import cv2
import time
import os

# Run the helper script to kill the mjpg process that is holding the camera
os.system('./killmjpg.sh')

cap = cv2.VideoCapture(0)  # open the camera (maximum resolution 640x480)
cap.set(3, 320)  # frame width
cap.set(4, 240)  # frame height

while 1:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(5) & 0xFF == 27:  # exit on ESC (any other key can be configured)
        break

cap.release()
cv2.destroyAllWindows()  # standard cleanup whenever the camera is used
Usage example

      Note: when opening the camera locally, the following code must be added

after cv2.imshow(frame), add:
if cv2.waitKey(5) & 0xFF == 27:
    break
Code required when opening the camera locally
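The & 0xFF mask matters here: on some platforms cv2.waitKey returns a value with extra high bits set, so only the low byte is compared against the key's ASCII code. The masking itself is plain integer arithmetic (1048603 below is an illustrative return value, 27 with a high bit set):

```python
ESC = 27  # ASCII code of the ESC key

def is_esc(key_code):
    """True if the low byte of an OpenCV waitKey return value is ESC."""
    return key_code & 0xFF == ESC

print(is_esc(27))        # True: plain return value
print(is_esc(1048603))   # True: same key with a high bit set (0x10001B)
print(is_esc(ord('q')))  # False
```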

      Text-to-speech with the pyttsx3 module (offline speech engine: pip3 install pyttsx3==2.71); it did not work well

import pyttsx3   # offline speech engine: pip install pyttsx3==2.71

item = '歡迎使用眾靈AI智能語音識別系統!!'  # text to speak

engine = pyttsx3.init()
engine.setProperty('rate',150)
engine.setProperty('voice','english+f2')

engine.say(item)
engine.runAndWait()

print('end')
pyttsx3_test.py

       Audio playback with the pygame module

import pygame
#import time

pygame.mixer.init()
def play(filename):
    pygame.mixer.music.load(filename)
    pygame.mixer.music.play()

    while pygame.mixer.music.get_busy():  # wait while the music is still playing
        pass

    #time.sleep(5)
if __name__ == '__main__':
    play('0018.wav')
pygame_test.py

       Audio playback with the simpleaudio module

from pydub import AudioSegment
import simpleaudio as sa

def trans_mp3_to_wav(filepath):
    song = AudioSegment.from_mp3(filepath)
    song.export("output wav path here", format="wav")

trans_mp3_to_wav("source mp3 path here")
wave_obj = sa.WaveObject.from_wave_file("output wav path here")
play_obj = wave_obj.play()
play_obj.wait_done()
Audio conversion + playback

      MP3-to-WAV conversion with the pydub module

from pydub import AudioSegment

# filepath is the name of the .mp3 file (a path may be included)
def trans_mp3_to_wav(filepath):
    song = AudioSegment.from_mp3(filepath)
    song.export("now.wav", format="wav")
mp3_to_wav.py
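The parameters of the converted file (e.g. now.wav above) can be verified with the standard-library wave module. The sketch below builds one second of silent 16 kHz mono audio in memory and reads its header back; the same function works on a real file's bytes:

```python
import io
import wave

def wav_params(wav_bytes):
    """Return (channels, sample_width, frame_rate, duration_s) of a WAV payload."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        frames = wf.getnframes()
        rate = wf.getframerate()
        return (wf.getnchannels(), wf.getsampwidth(), rate, frames / rate)

# Build one second of silent 16-bit mono audio at 16 kHz in memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 16000)

print(wav_params(buf.getvalue()))   # (1, 2, 16000, 1.0)
```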

      Baidu speech API (claim the free quota): example code (pyaudio recording + speech recognition + speech synthesis)

Install the scipy library: https://blog.csdn.net/yaoqi_isee/article/details/75206222
1. First install the dependency libraries

sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0

2. Then pyaudio can be installed directly

sudo pip3 install pyaudio
Ubuntu18.04 install Pyaudio
import os
import time
import pygame
from aip import AipSpeech
import pyaudio_test

""" Your APP_ID, API key, and secret key """
APP_ID = '24705618'
API_KEY = 'D9nixLbfFdTp4B7rT378Y67K'
SECRET_KEY = '2Rm19eciqEa9sCUHNyls50QUNtU34qeu'

client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)


# Read an audio file, converting it to 16 kHz mono PCM with ffmpeg
def get_file_content(filePath):
    os.system(f"ffmpeg -y -i {filePath} -acodec pcm_s16le -f s16le -ac 1 -ar 16000 {filePath}.pcm")
    with open(f"{filePath}.pcm", 'rb') as fp:
        return fp.read()

# Speech recognition
def audio2text(filepath):
    # recognize a local audio file
    res = client.asr(get_file_content(filepath), 'pcm', 16000, {'dev_pid': 1537,})
    print(res.get("result")[0])
    return res.get("result")[0]

# Speech synthesis
def text2audio(text):
    filename = "{}.mp3".format(int(time.time()))
    path = '/tmp/'
    filepath = os.path.join(path, filename)
    result = client.synthesis(text, 'zh', 1, {'vol': 7,"spd": 4,"pit": 7,"per": 4})
    # On success the synthesized audio is returned as bytes; on error a dict is returned (see the error-code table)
    if not isinstance(result, dict):
        print('start write')
        with open(filepath, 'wb') as f:
            f.write(result)
    return filepath

# Audio playback
def voice(filepath):
    print('voice start')
    pygame.mixer.init()
    pygame.mixer.music.load(filepath)
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():  # wait while the music is still playing
        pass
    print('voice end')

if __name__ == '__main__':
    filepath = pyaudio_test.rec('1.wav')
    text = audio2text(filepath)
    filename = text2audio(text)

    voice(filename)
baiduaip_test.py
import os
import time
import pyaudio
import wave
import baiduaip_test

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 16000
RECORD_SECONDS = 5

def rec(file_name='%s.wav'%time.time()):
    path = '/tmp/'
    filepath = os.path.join(path,file_name)
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE, input=True, frames_per_buffer=CHUNK)

    f1 = baiduaip_test.text2audio("開始錄音")
    baiduaip_test.voice(f1)
        
    frames = []

    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data = stream.read(CHUNK)
        frames.append(data)

    f2 = baiduaip_test.text2audio("錄音結束")
    baiduaip_test.voice(f2)

    stream.stop_stream()
    stream.close()
    p.terminate()

    wf = wave.open(filepath, 'wb')
    wf.setnchannels(CHANNELS)  # number of channels
    wf.setsampwidth(p.get_sample_size(FORMAT))  # sample width in bytes
    wf.setframerate(RATE)  # sample rate
    wf.writeframes(b''.join(frames))  # write the audio frames and file header
    wf.close()

    return filepath

if __name__ == '__main__':
    filepath = rec()
    print('text: ', filepath)
pyaudio_test.py

      Vision-based line following: UART / serial

import serial
ser=serial.Serial('/dev/ttyTHS1',115200,timeout=1)   # jetson nano
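When sending line-following results to a motor controller over this serial port, the data is usually framed. The packet layout below (header byte, offset, speed, XOR checksum) is hypothetical and for illustration only; it is not a protocol defined by this project:

```python
def build_frame(offset, speed, header=0xAA):
    """Pack a hypothetical line-following frame: header, offset, speed, XOR checksum.

    offset: line position error in the range -100..100 (shifted to 0..200)
    speed:  target speed 0..255
    """
    payload = bytes([header, (offset + 100) & 0xFF, speed & 0xFF])
    checksum = 0
    for b in payload:
        checksum ^= b          # XOR of all payload bytes
    return payload + bytes([checksum])

frame = build_frame(offset=-20, speed=120)
print(frame.hex())   # aa507882
```

A frame would then be sent with ser.write(build_frame(offset, speed)).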

       PyTorch import error: OSError: libmpi_cxx.so.20: cannot open shared object file: No such file or directory

      Running jetson-inference on the Jetson Nano

Follow the official GitHub repository: https://github.com/NVIDIA-AI-IOT/jetcam

git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install
Installing jetcam

 

