Winse Blog

Every stop along the way is scenery; all the hustle and bustle heads toward the best; all the daily toil is for tomorrow. What is there to fear?

AIGC Setup on Win11 WSL2

The official Yi documentation left me at a loss at first; I didn't know where to start. After searching around online I found Su Yang's blog, so the first step is getting the environment set up.

I wanted convenient GPU access on a Windows 11 machine, and many open-source projects offer Docker-based quick starts. But WSL2 can be slow, I already had a WSL1 distro locally and worried about conflicts, WSL2 cannot be installed inside a virtual machine, and VMware desktop VMs have no good way to use the GPU directly. After a day of agonizing, I finally settled on installing WSL2 + Docker Desktop.

Following this article, you will learn how to run GPU models with Windows + WSL2 + Docker, and how to download model files from within China.

Using WSL2

In "Turn Windows features on or off", enable "Virtual Machine Platform".
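The same feature can also be enabled from an elevated prompt instead of the GUI. A sketch using the standard `dism.exe` feature names (run as Administrator; a reboot may be required afterwards):

```powershell
# Enable the Virtual Machine Platform feature required by WSL2
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

# Optionally also enable the WSL feature itself
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```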

wsl --update
wsl --set-default-version 2

Then install the Ubuntu-20.04 distribution (choose version 20 or 22) from the Microsoft Store (going through the store avoids accidentally installing the same Linux twice: if it is already installed, the store button reads [Open] instead of [Get]).

winse@DESKTOP-BR4MG38:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

winse@DESKTOP-BR4MG38:~$ uname -a
Linux DESKTOP-BR4MG38 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Compared with WSL1, WSL2's `ip a` output is much cleaner; WSL2 merges some host information into Linux (e.g. hosts).

Docker Desktop + WSL2

Install via the exe and choose the WSL2 backend during setup. After installation, `wsl` shows two additional Linux distributions.

C:\Users\P15>wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu                 Stopped         1
  Ubuntu-20.04           Running         2
  docker-desktop-data    Running         2
  docker-desktop         Running         2
Inside Ubuntu, grant the user passwordless sudo and switch apt to the Aliyun mirrors:
winse@DESKTOP-BR4MG38:~$ su -
root@DESKTOP-BR4MG38:~# echo "winse ALL=(ALL:ALL) NOPASSWD: ALL" >>/etc/sudoers

root@DESKTOP-BR4MG38:~# sed -i.bak -e 's|archive.ubuntu.com/ubuntu/|mirrors.aliyun.com/ubuntu/|' -e 's|security.ubuntu.com/ubuntu/|mirrors.aliyun.com/ubuntu/|' /etc/apt/sources.list

Inside WSL-Ubuntu you can run Win11 programs directly; for example, check Docker's version right away:

winse@DESKTOP-BR4MG38:~$ docker version
Client: Docker Engine - Community
 Cloud integration: v1.0.35+desktop.5
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.20.10
 Git commit:        afdd53b
 Built:             Thu Oct 26 09:08:17 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Desktop
 Engine:
  Version:          24.0.7
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.10
  Git commit:       311b9ff
  Built:            Thu Oct 26 09:08:02 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.25
  GitCommit:        d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f
 runc:
  Version:          1.1.10
  GitCommit:        v1.1.10-0-g18a0cb0
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  
winse@DESKTOP-BR4MG38:~$ which docker
/usr/bin/docker
winse@DESKTOP-BR4MG38:~$ ll  /usr/bin/docker
lrwxrwxrwx 1 root root 48 Jan 13 11:03 /usr/bin/docker -> /mnt/wsl/docker-desktop/cli-tools/usr/bin/docker*

In effect, this is the Windows Docker being used.

Registry mirrors
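Mirrors are configured in Docker Desktop under Settings → Docker Engine (the daemon.json). A sketch matching the mirror entries that show up in `docker info` (substitute your own accelerator address):

```json
{
  "registry-mirrors": [
    "https://us69kjun.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com"
  ]
}
```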

Saving restarts Docker; then check `docker info` again and confirm the Registry Mirrors entries:

winse@DESKTOP-BR4MG38:~$ docker info
Client: Docker Engine - Community
 Version:    24.0.7
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.12.0-desktop.2
    Path:     /usr/local/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.23.3-desktop.2
    Path:     /usr/local/lib/docker/cli-plugins/docker-compose
  dev: Docker Dev Environments (Docker Inc.)
    Version:  v0.1.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-dev
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.21
    Path:     /usr/local/lib/docker/cli-plugins/docker-extension
  feedback: Provide feedback, right in your terminal! (Docker Inc.)
    Version:  0.1
    Path:     /usr/local/lib/docker/cli-plugins/docker-feedback
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v0.1.0-beta.10
    Path:     /usr/local/lib/docker/cli-plugins/docker-init
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-sbom
  scan: Docker Scan (Docker Inc.)
    Version:  v0.26.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-scan
  scout: Docker Scout (Docker Inc.)
    Version:  v1.2.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-scout

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 5
 Server Version: 24.0.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f
 runc version: v1.1.10-0-g18a0cb0
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
 Kernel Version: 5.15.133.1-microsoft-standard-WSL2
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 31.26GiB
 Name: docker-desktop
 ID: 340fee1c-e22a-485c-a973-f0e26d7535c9
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Registry Mirrors:
  https://us69kjun.mirror.aliyuncs.com/
  https://docker.mirrors.ustc.edu.cn/
  https://hub-mirror.c.163.com/
  https://mirror.baidubce.com/
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
WARNING: daemon is not using the default seccomp profile

GPU

Driver

Install the latest driver for the graphics card on the Win11 machine (do not install any Linux Nvidia driver inside WSL!)

https://www.nvidia.com/Download/index.aspx

Run nvidia-smi to verify the driver installed correctly. Nothing needs to be done inside WSL2; nvidia-smi can be run directly from the WSL2 command line.

The same check works from a Docker container:

winse@DESKTOP-BR4MG38:stable-diffusion-taiyi$ docker run -it --rm --gpus all ubuntu nvidia-smi

The container started here is effectively another WSL2 distribution. Note again: do not install any Linux Nvidia driver inside WSL!

Verify that a container run by Docker inside WSL2 can actually use the GPU:

winse@DESKTOP-BR4MG38:~$ docker pull nvcr.io/nvidia/pytorch:23.07-py3
23.07-py3: Pulling from nvidia/pytorch
3153aa388d02: Pulling fs layer
...
ee3f0ae6e80f: Pull complete
d4528227b5b8: Pull complete
Digest: sha256:c53e8702a4ccb3f55235226dab29ef5d931a2a6d4d003ab47ca2e7e670f7922b
Status: Downloaded newer image for nvcr.io/nvidia/pytorch:23.07-py3
nvcr.io/nvidia/pytorch:23.07-py3

What's Next?
  1. Sign in to your Docker account → docker login
  2. View a summary of image vulnerabilities and recommendations → docker scout quickview nvcr.io/nvidia/pytorch:23.07-py3


winse@DESKTOP-BR4MG38:~$ docker run -it --gpus=all --rm nvcr.io/nvidia/pytorch:23.07-py3 nvidia-smi

=============
== PyTorch ==
=============

NVIDIA Release 23.07 (build 63867923)
PyTorch Version 2.1.0a0+b5021ba

Container image Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2014-2023 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies    (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU                      (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006      Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015      Google Inc.
Copyright (c) 2015      Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be
   insufficient for PyTorch.  NVIDIA recommends the use of the following flags:
   docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

Sat Jan 13 14:01:37 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.146.01             Driver Version: 537.99       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro T2000                   On  | 00000000:01:00.0  On |                  N/A |
| N/A   43C    P8               6W /  60W |    856MiB /  4096MiB |      9%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        27      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        41      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        42      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+


winse@DESKTOP-BR4MG38:~$ docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Unable to find image 'nvcr.io/nvidia/k8s/cuda-sample:nbody' locally
nbody: Pulling from nvidia/k8s/cuda-sample
22c5ef60a68e: Pull complete
1939e4248814: Pull complete
548afb82c856: Pull complete
a424d45fd86f: Pull complete
207b64ab7ce6: Pull complete
f65423f1b49b: Pull complete
2b60900a3ea5: Pull complete
e9bff09d04df: Pull complete
edc14edf1b04: Pull complete
1f37f461c076: Pull complete
9026fb14bf88: Pull complete
Digest: sha256:59261e419d6d48a772aad5bb213f9f1588fcdb042b115ceb7166c89a51f03363
Status: Downloaded newer image for nvcr.io/nvidia/k8s/cuda-sample:nbody
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Turing" with compute capability 7.5

> Compute 7.5 CUDA device: [Quadro T2000]
16384 bodies, total time for 10 iterations: 64.071 ms
= 41.897 billion interactions per second
= 837.937 single-precision GFLOP/s at 20 flops per interaction


#再跑一遍
winse@DESKTOP-BR4MG38:~$ docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Turing" with compute capability 7.5

> Compute 7.5 CUDA device: [Quadro T2000]
16384 bodies, total time for 10 iterations: 23.398 ms
= 114.724 billion interactions per second
= 2294.490 single-precision GFLOP/s at 20 flops per interaction

WSL2 cuda-toolkit

Development / runtime environment references:

  • https://zhuanlan.zhihu.com/p/555151725
  • https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-WSL2
  • https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_network

wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-3

Running the install:


(demo_env) winse@DESKTOP-BR4MG38:ai$ wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
--2024-01-14 23:53:22--  https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 152.199.39.144, 72.21.80.5, 72.21.80.6, ...
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|152.199.39.144|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://developer.download.nvidia.cn/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb [following]
--2024-01-14 23:53:23--  https://developer.download.nvidia.cn/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
Resolving developer.download.nvidia.cn (developer.download.nvidia.cn)... 59.36.216.26, 59.36.216.27, 175.4.58.180, ...
Connecting to developer.download.nvidia.cn (developer.download.nvidia.cn)|59.36.216.26|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4328 (4.2K) [application/x-deb]
Saving to: ‘cuda-keyring_1.1-1_all.deb’

cuda-keyring_1.1-1_all.deb      100%[====================================================>]   4.23K  --.-KB/s    in 0s

2024-01-14 23:53:23 (1.61 GB/s) - ‘cuda-keyring_1.1-1_all.deb’ saved [4328/4328]

(demo_env) winse@DESKTOP-BR4MG38:ai$
(demo_env) winse@DESKTOP-BR4MG38:ai$ sudo dpkg -i cuda-keyring_1.1-1_all.deb
(demo_env) winse@DESKTOP-BR4MG38:ai$ sudo apt-get update
(demo_env) winse@DESKTOP-BR4MG38:ai$ sudo apt-get -y install cuda-toolkit-12-3
Add the CUDA binaries to the PATH:
(base) winse@DESKTOP-BR4MG38:~$ vi .bashrc

export PATH=/usr/local/cuda/bin:$PATH

Open a new shell:

(base) winse@DESKTOP-BR4MG38:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:17:15_PST_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0

cuDNN

https://developer.nvidia.com/cudnn

The NVIDIA CUDA® Deep Neural Network library (cuDNN) supports neural-network inference.

Register and download the build matching your CUDA version: https://developer.nvidia.com/rdp/cudnn-download

Note: if you will not use cuDNN directly inside WSL2-Ubuntu, and instead pull container images that already ship cuDNN, this whole part can be skipped.

https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installlinux-deb

#sudo dpkg -i cudnn-local-repo-ubuntu2004-8.9.6.50_1.0-1_amd64.deb
#sudo dpkg -r cudnn-local-repo-ubuntu2004-8.9.6.50
#sudo rm /etc/apt/sources.list.d/cudnn-local-ubuntu2004-8.9.6.50.list

(base) winse@DESKTOP-BR4MG38:i$ sudo dpkg -i cudnn-local-repo-ubuntu2004-8.9.7.29_1.0-1_amd64.deb

(base) winse@DESKTOP-BR4MG38:i$ sudo cp /var/cudnn-local-repo-ubuntu2004-8.9.7.29/cudnn-local-30472A84-keyring.gpg /usr/share/keyrings/


(base) winse@DESKTOP-BR4MG38:i$ sudo apt install zlib1g

(base) winse@DESKTOP-BR4MG38:i$ sudo apt update

(base) winse@DESKTOP-BR4MG38:i$ apt search libcudnn8
Sorting... Done
Full Text Search... Done
libcudnn8/unknown 8.9.7.29-1+cuda12.2 amd64
  cuDNN runtime libraries

libcudnn8-dev/unknown 8.9.7.29-1+cuda12.2 amd64
  cuDNN development libraries and headers

libcudnn8-samples/unknown 8.9.7.29-1+cuda12.2 amd64
  cuDNN samples

(base) winse@DESKTOP-BR4MG38:i$ sudo apt install libcudnn8 libcudnn8-dev libcudnn8-samples

Verify the installation:

https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#verify

If running it errors out, see https://forums.developer.nvidia.com/t/freeimage-is-not-set-up-correctly-please-ensure-freeimae-is-set-up-correctly/66950

(base) winse@DESKTOP-BR4MG38:i$ cp -r /usr/src/cudnn_samples_v8 ./
(base) winse@DESKTOP-BR4MG38:i$ cd cudnn_samples_v8/mnistCUDNN/
(base) winse@DESKTOP-BR4MG38:mnistCUDNN$


(base) winse@DESKTOP-BR4MG38:mnistCUDNN$ sudo apt-get install libfreeimage3 libfreeimage-dev


(base) winse@DESKTOP-BR4MG38:mnistCUDNN$ make clean && make

(base) winse@DESKTOP-BR4MG38:mnistCUDNN$ ./mnistCUDNN
Executing: mnistCUDNN
cudnnGetVersion() : 8907 , CUDNN_VERSION from cudnn.h : 8907 (8.9.7)
Host compiler version : GCC 9.4.0

There are 1 CUDA capable devices on your machine :
device 0 : sms 16  Capabilities 7.5, SmClock 1785.0 Mhz, MemSize (Mb) 4095, MemClock 6001.0 Mhz, Ecc=0, boardGroupID=0
Using device 0

Testing single precision
...

Result of classification: 1 3 5

Test passed!

Testing half precision (math in single precision)
...

Result of classification: 1 3 5

Test passed!

nvidia-container-toolkit???

With Docker Desktop + WSL2, it seems there is no need to install nvidia-container-toolkit?

WSL2 Python - conda

Download and install conda

Download miniconda

https://docs.conda.io/projects/miniconda/en/latest/

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh

~/miniconda3/bin/conda init bash

Run the script to install:

winse@DESKTOP-BR4MG38:ai$ mkdir miniconda3
winse@DESKTOP-BR4MG38:ai$ cd miniconda3/
winse@DESKTOP-BR4MG38:miniconda3$

winse@DESKTOP-BR4MG38:miniconda3$ bash miniconda.sh -b -u -p ~/miniconda3
PREFIX=/home/winse/miniconda3
Unpacking payload ...

Installing base environment...


Downloading and Extracting Packages:


Downloading and Extracting Packages:

Preparing transaction: done
Executing transaction: done
installation finished.

winse@DESKTOP-BR4MG38:miniconda3$ ~/miniconda3/bin/conda init bash
no change     /home/winse/miniconda3/condabin/conda
no change     /home/winse/miniconda3/bin/conda
no change     /home/winse/miniconda3/bin/conda-env
no change     /home/winse/miniconda3/bin/activate
no change     /home/winse/miniconda3/bin/deactivate
no change     /home/winse/miniconda3/etc/profile.d/conda.sh
no change     /home/winse/miniconda3/etc/fish/conf.d/conda.fish
no change     /home/winse/miniconda3/shell/condabin/Conda.psm1
no change     /home/winse/miniconda3/shell/condabin/conda-hook.ps1
no change     /home/winse/miniconda3/lib/python3.11/site-packages/xontrib/conda.xsh
no change     /home/winse/miniconda3/etc/profile.d/conda.csh
modified      /home/winse/.bashrc

==> For changes to take effect, close and re-open your current shell. <==

Add mirror channels:

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/menpo/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
conda config --set show_channel_urls yes 

conda config --show channels
#conda config --remove-key channels

Run a GPU test

conda create -n demo_env python=3.8
conda activate demo_env
conda install pytorch==1.6.0 cudatoolkit=10.1 torchaudio=0.6.0 -c pytorch

#conda list
#conda deactivate

#conda env list
#conda remove -n demo_env --all
#conda env remove --name old_name

Verify the installation: in the demo_env environment, start Python and enter the following:

(demo_env) winse@DESKTOP-BR4MG38:ai$ python
Python 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:21:28)
[GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.rand(5,3)
>>> print(x)
tensor([[0.4343, 0.3966, 0.1862],
        [0.1502, 0.0788, 0.7713],
        [0.3505, 0.7065, 0.9952],
        [0.6420, 0.2574, 0.7550],
        [0.8292, 0.7714, 0.9014]])
>>> print(torch.cuda.is_available())
True
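Beyond `is_available()`, it is worth confirming that a computation actually runs on the device. A minimal sketch that falls back to the CPU when no GPU is visible:

```python
import torch

# select the GPU when present, otherwise the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(5, 3, device=device)
y = torch.rand(3, 4, device=device)
z = x @ y  # the matmul executes on the selected device

print(z.shape)   # torch.Size([5, 4])
print(z.device)
```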

Model downloads

This matters a great deal; otherwise all your time goes into waiting for downloads. Models are routinely several GB, and slightly larger ones run to tens of GB, so it pays to be careful and to iterate.

At first I used a proxy and git to download, which was slow and wasted both time and disk space. I left it running overnight and woke up to a full disk o(╥﹏╥)o.

See https://soulteary.com/2024/01/09/summary-of-reliable-download-solutions-for-ai-models.html

Downloading from modelscope (within China)

modelscope also partners with aliyun to provide some free compute time, so you can experiment locally and then run there. Here it is only used to download models (this method downloads the files without .git, saving more than half the disk space).

(base) winse@DESKTOP-BR4MG38:ai$ conda activate demo

(demo) winse@DESKTOP-BR4MG38:ai$ pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

(demo) winse@DESKTOP-BR4MG38:ai$ pip install modelscope


(demo) winse@DESKTOP-BR4MG38:ai$ python -c "from modelscope.hub.snapshot_download import snapshot_download;snapshot_download('damo/nlp_xlmr_named-entity-recognition_viet-ecommerce-title', cache_dir='./')"
2024-01-14 16:52:36,017 - modelscope - INFO - PyTorch version 1.11.0+cu113 Found.
2024-01-14 16:52:36,018 - modelscope - INFO - Loading ast index from /home/winse/.cache/modelscope/ast_indexer
2024-01-14 16:52:36,050 - modelscope - INFO - Loading done! Current index file version is 1.11.0, with md5 85336421feb1dc1ec9dde85ceee20f42 and a total number of 953 components indexed
2024-01-14 16:52:36,757 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.19k/1.19k [00:00<00:00, 12.3MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 238/238 [00:00<00:00, 1.97MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 238/238 [00:00<00:00, 1.81MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉| 1.04G/1.04G [00:36<00:00, 30.1MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.68k/2.68k [00:00<00:00, 18.2MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.68k/2.68k [00:00<00:00, 27.7MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.83M/4.83M [00:00<00:00, 7.97MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 150/150 [00:00<00:00, 1.58MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.68M/8.68M [00:00<00:00, 11.3MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 470/470 [00:00<00:00, 4.53MB/s]


(demo) winse@DESKTOP-BR4MG38:ai$ python -c "from modelscope.hub.snapshot_download import snapshot_download;snapshot_download('Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1', cache_dir='./')"
2024-01-15 08:19:52,588 - modelscope - INFO - PyTorch version 1.11.0+cu113 Found.
2024-01-15 08:19:52,589 - modelscope - INFO - Loading ast index from /home/winse/.cache/modelscope/ast_indexer
2024-01-15 08:19:52,741 - modelscope - INFO - Loading done! Current index file version is 1.11.0, with md5 85336421feb1dc1ec9dde85ceee20f42 and a total number of 953 components indexed
2024-01-15 08:19:53,874 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0
Downloading: 100%|████████████████████████████████████████████████████████████████| 257k/257k [00:00<00:00, 1.48MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 600/600 [00:00<00:00, 488kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████| 793/793 [00:00<00:00, 1.35MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████| 884/884 [00:00<00:00, 1.47MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████| 4.56k/4.56k [00:00<00:00, 220kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 146/146 [00:00<00:00, 251kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████▉| 3.20G/3.20G [01:29<00:00, 38.5MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 319M/319M [00:39<00:00, 8.39MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 571k/571k [00:00<00:00, 2.35MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 583k/583k [00:00<00:00, 2.16MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 571k/571k [00:00<00:00, 2.10MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 539/539 [00:00<00:00, 924kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████| 196k/196k [00:00<00:00, 998kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 226k/226k [00:00<00:00, 1.24MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 342/342 [00:00<00:00, 617kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 390M/390M [00:30<00:00, 13.5MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████▉| 1.13G/1.13G [00:41<00:00, 29.3MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████| 8.90k/8.90k [00:00<00:00, 3.09MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 298/298 [00:00<00:00, 482kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 186/186 [00:00<00:00, 304kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████▉| 3.89G/3.89G [02:12<00:00, 31.6MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 477k/477k [00:00<00:00, 2.02MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 477k/477k [00:00<00:00, 1.72MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 198k/198k [00:00<00:00, 1.01MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 212k/212k [00:00<00:00, 1.10MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 555/555 [00:00<00:00, 933kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████| 107k/107k [00:00<00:00, 782kB/s]

Compare the disk usage of a git clone against the direct download; the time saved goes without saying.
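To put numbers on that comparison, a small helper can sum a directory tree; the paths in the usage comments are hypothetical, pointing at a plain download versus a git clone (whose `.git` holds a second full copy of the large files):

```python
import os

def dir_size(path: str) -> int:
    """Sum the size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp) and not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# e.g. compare (hypothetical paths):
# print(dir_size("Taiyi-Stable-Diffusion-1B-Chinese-v0.1"))      # direct download
# print(dir_size("Taiyi-Stable-Diffusion-1B-Chinese-v0.1-git"))  # git clone
```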

modelscope

Run through the whole flow with a simple example:

Environment setup: https://modelscope.cn/docs/%E7%8E%AF%E5%A2%83%E5%AE%89%E8%A3%85

Runtime dependencies

(demo) winse@DESKTOP-BR4MG38:ai$ pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
#pip config set global.index-url https://mirrors.cloud.aliyuncs.com/pypi/simple 
#pip config set install.trusted-host mirrors.cloud.aliyuncs.com

(demo) winse@DESKTOP-BR4MG38:ai$ pip3 install torch==1.11.0 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

$ pip install transformers sentencepiece pyvi

Test the model:


https://modelscope.cn/models/damo/nlp_xlmr_named-entity-recognition_viet-ecommerce-title/summary

(demo) winse@DESKTOP-BR4MG38:ai$ python
Python 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:21:28)
[GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from modelscope.pipelines import pipeline
2024-01-15 01:22:20,476 - modelscope - INFO - PyTorch version 1.11.0+cu113 Found.
2024-01-15 01:22:20,478 - modelscope - INFO - Loading ast index from /home/winse/.cache/modelscope/ast_indexer
2024-01-15 01:22:20,498 - modelscope - INFO - Loading done! Current index file version is 1.11.0, with md5 85336421feb1dc1ec9dde85ceee20f42 and a total number of 953 components indexed
>>> from modelscope.utils.constant import Tasks
>>> ner_pipeline = pipeline(Tasks.named_entity_recognition, 'damo/nlp_xlmr_named-entity-recognition_viet-ecommerce-title', model_revision='v1.0.1')
2024-01-15 01:22:28,618 - modelscope - INFO - initiate model from damo/nlp_xlmr_named-entity-recognition_viet-ecommerce-title
2024-01-15 01:22:28,620 - modelscope - INFO - initiate model from location damo/nlp_xlmr_named-entity-recognition_viet-ecommerce-title.
2024-01-15 01:22:28,630 - modelscope - INFO - initialize model from damo/nlp_xlmr_named-entity-recognition_viet-ecommerce-title
2024-01-15 01:22:30,945 - modelscope - INFO - head has no _keys_to_ignore_on_load_missing
2024-01-15 01:22:34,599 - modelscope - INFO - All model checkpoint weights were used when initializing ModelForTokenClassificationWithCRF.

2024-01-15 01:22:34,599 - modelscope - INFO - All the weights of ModelForTokenClassificationWithCRF were initialized from the model checkpoint If your task is similar to the task the model of the checkpoint was trained on, you can already use ModelForTokenClassificationWithCRF for predictions without further training.
>>> result = ner_pipeline('Nón vành dễ thương cho bé gái')
>>> print(result)
{'output': [{'type': 'product', 'start': 0, 'end': 8, 'prob': 0.98140895, 'span': 'Nón vành'}, {'type': 'style', 'start': 9, 'end': 18, 'prob': 0.99752563, 'span': 'dễ thương'}, {'type': 'consumer_group', 'start': 23, 'end': 29, 'prob': 0.99895895, 'span': 'bé gái'}]}
>>>

TODO

  • huggingface

Hosted abroad; I'll fill this in later when needed.

  • downloader (skipped)

Of course, if you frequently need specific versions of various tools to download dependencies, using a docker image as a downloader is also a good approach.

#Lightweight Python environment
docker pull python:3.10-slim

@1
#Mount the local directory into the container, to be used as the model download directory
docker run --rm -it -v `pwd`:/models python:3.10-slim bash

sed -i 's/snapshot.debian.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apt/sources.list.d/debian.sources
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

cd /models


@2
#Create a long-running Python container
docker run -d --name=downloader -v `pwd`:/models python:3.10-slim tail -f /etc/hosts
#Exec into the container to configure it and download models
docker exec -it downloader bash

Taiyi model

[Trial and error]

I started from the article 使用 Docker 来快速上手中文 Stable Diffusion 模型:太乙 (Use Docker to quickly get started with the Chinese Stable Diffusion model: Taiyi). It made things look simple enough that I figured my WSL2+GPU setup would work too.

At first I went with git clone, but the wait wore me down: several hours with no guarantee of success. I later switched to direct download!

#git clone https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1
winse@DESKTOP-BR4MG38:/mnt/i/ai/stable-diffusion-taiyi$ git clone https://www.modelscope.cn/Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.git
Cloning into 'Taiyi-Stable-Diffusion-1B-Chinese-v0.1'...
remote: Enumerating objects: 85, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 85 (delta 2), reused 0 (delta 0), pack-reused 79
Unpacking objects: 100% (85/85), 3.61 GiB | 1.98 MiB/s, done.
Filtering content: 100% (5/5), 8.92 GiB | 1.71 MiB/s, done.


#The image also took several attempts before the pull succeeded
winse@DESKTOP-BR4MG38:/mnt/i/ai$ docker pull soulteary/stable-diffusion:taiyi-0.1
taiyi-0.1: Pulling from soulteary/stable-diffusion
a404e5416296: Pull complete
af6d12d8d61a: Pull complete
bc57d500b85c: Pull complete
fcd60060414d: Pull complete
65b27d733eb0: Pull complete
266c4315d44f: Pull complete
7ed4190451a3: Pull complete
975671c72e25: Pull complete
213ba1e17e15: Pull complete
37bbbc68318a: Pull complete
80438d07027f: Pull complete
74c79bc62d3a: Pull complete
f8054e9907fb: Pull complete
dc8d44bb4941: Pull complete
625444b7a83c: Pull complete
0b90667ff465: Pull complete
67d73c5193e1: Pull complete
Digest: sha256:69cc4b5fc890dd7ccffff9dbfc2eb2262a0a727574b8beeeafe621f9ef135d16
Status: Downloaded newer image for soulteary/stable-diffusion:taiyi-0.1
docker.io/soulteary/stable-diffusion:taiyi-0.1

What's Next?
  View a summary of image vulnerabilities and recommendations → docker scout quickview soulteary/stable-diffusion:taiyi-0.1


#This docker-compose file also requires Docker on a pure Linux machine
#wget https://github.com/soulteary/docker-stable-diffusion-taiyi/blob/main/docker-compose.yml
#so here I just run the container directly
winse@DESKTOP-BR4MG38:stable-diffusion-taiyi$ docker run --gpus all --rm -it -v $(pwd)/Taiyi-Stable-Diffusion-1B-Chinese-v0.1:/stable-diffusion-webui/models/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 -p 7860:7860 soulteary/stable-diffusion:taiyi-0.1

#Windows cmd
#docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -it -v C:/docker-sdxl/stabilityai/:/app/stabilityai -p 7860:7860 soulteary/sdxl:runtime

Once it's running, open http://localhost:7860/ and enter the prompt 小船,河流,星空,星星,山峦,油画. Win11's Task Manager shows the GPU fully loaded, but the generated image is completely black, with nothing in it!
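As an aside, the "all black" symptom is easy to detect programmatically (common reports attribute it to fp16 arithmetic on some Turing GPUs, or to the safety checker blanking the output, though neither is confirmed here). A minimal sketch, assuming the result image is available as a NumPy array:

```python
import numpy as np

def is_blank(img, tol=0):
    """True if every pixel is (near) zero, i.e. the image is all black."""
    return bool((np.asarray(img) <= tol).all())

black = np.zeros((64, 64, 3), dtype=np.uint8)   # what the webui produced
noisy = np.full((64, 64, 3), 128, dtype=np.uint8)
print(is_blank(black), is_blank(noisy))  # → True False
```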

Deploying in WSL-Ubuntu

After many failed attempts I went back to square one. No shortcuts: first get it producing results.

Install dependencies (via proxy)

#https://stackoverflow.com/questions/37776228/pycharm-python-opencv-and-cv2-install-error
(demo) winse@DESKTOP-BR4MG38:~$ pip3 install opencv-python

$ pip install diffusers

##pip install accelerate
(demo) winse@DESKTOP-BR4MG38:~$ unset all_proxy && unset ALL_PROXY
(demo) winse@DESKTOP-BR4MG38:~$ pip install pysocks

(demo) winse@DESKTOP-BR4MG38:~$ export ALL_PROXY=socks5://172.22.240.1:23333 HTTPS_PROXY=socks5://172.22.240.1:23333 HTTP_PROXY=socks5://172.22.240.1:23333
(demo) winse@DESKTOP-BR4MG38:~$ pip install git+https://github.com/huggingface/accelerate  

Test (it took a full hour, -_-||) (cuDNN was not yet installed at this point):

(demo) winse@DESKTOP-BR4MG38:ai$ python
Python 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:21:28)
[GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from modelscope.utils.constant import Tasks
2024-01-15 10:07:41,619 - modelscope - INFO - PyTorch version 1.11.0+cu113 Found.
2024-01-15 10:07:41,630 - modelscope - INFO - Loading ast index from /home/winse/.cache/modelscope/ast_indexer
2024-01-15 10:07:41,751 - modelscope - INFO - Loading done! Current index file version is 1.11.0, with md5 85336421feb1dc1ec9dde85ceee20f42 and a total number of 953 components indexed
>>> from modelscope.pipelines import pipeline
>>> import cv2
>>> pipe = pipeline(task=Tasks.text_to_image_synthesis, model='Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1', model_revision='v1.0.0')
Loading pipeline components...:   0%|                                                          | 0/7 [00:00<?, ?it/s]
/home/winse/miniconda3/envs/demo/lib/python3.8/site-packages/transformers/models/clip/feature_extraction_clip.py:28: 
FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...:  57%|████████████████████████████▌                     | 4/7 [00:32<00:23,  7.83s/it]
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
Loading pipeline components...: 100%|██████████████████████████████████████████████████| 7/7 [03:23<00:00, 29.02s/it]
>>> prompt = '飞流直下三千尺,油画'
>>> output = pipe({'text': prompt})
/home/winse/miniconda3/envs/demo/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:889: FutureWarning: `callback_steps` is deprecated and will be removed in version 1.0.0. Passing `callback_steps` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`
  deprecate(
We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
100%|██████████████████████████████████████████████████████████████████████████████| 50/50 [1:03:11<00:00, 75.83s/it]
>>> cv2.imwrite('result.png', output['output_imgs'][0])
True
>>>

A faster test

With cuDNN it really is fast: done in 10 minutes! Before installing it, the estimate would have been three hours!!!

>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> torch.backends.cudnn.benchmark = True

>>> pipe = StableDiffusionPipeline.from_pretrained("Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1", torch_dtype=torch.float16)
Loading pipeline components...:  57%|████████████████████████████▌                     | 4/7 [00:05<00:03,  1.32s/it]`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
Loading pipeline components...: 100%|██████████████████████████████████████████████████| 7/7 [00:32<00:00,  4.58s/it]
>>> pipe.to('cuda')
StableDiffusionPipeline {
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.25.0",
  "_name_or_path": "Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1",
  "feature_extractor": [
    "transformers",
    "CLIPFeatureExtractor"
  ],
  "image_encoder": [
    null,
    null
  ],
  "requires_safety_checker": true,
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "BertModel"
  ],
  "tokenizer": [
    "transformers",
    "BertTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}

>>>
>>> prompt = '飞流直下三千尺,油画'
>>> image = pipe(prompt, guidance_scale=7.5).images[0]
100%|████████████████████████████████████████████████████████████████████████████████| 50/50 [09:32<00:00, 11.45s/it]
>>> image.save("飞流.png")
>>>

Rename the conda environment

(demo) winse@DESKTOP-BR4MG38:ai$ conda deactivate
(base) winse@DESKTOP-BR4MG38:ai$ conda rename -n demo modelscope
Source:      /home/winse/miniconda3/envs/demo
Destination: /home/winse/miniconda3/envs/modelscope
Packages: 22
Files: 33924

Downloading and Extracting Packages:


Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done

(base) winse@DESKTOP-BR4MG38:ai$ conda env list
# conda environments:
#
base                  *  /home/winse/miniconda3
modelscope               /home/winse/miniconda3/envs/modelscope

Model download script

For example, to download https://modelscope.cn/models/Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-EN-v0.1/summary :

$ tail -16 ~/.bashrc

function modelscope_download() {
model=$1

conda activate modelscope

#pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
#pip install modelscope

python -c "from modelscope.hub.snapshot_download import snapshot_download;snapshot_download('$model', cache_dir='./')"

conda deactivate

}

$ source ~/.bashrc


(base) winse@DESKTOP-BR4MG38:ai$ modelscope_download "Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-EN-v0.1"
2024-01-15 15:17:13,613 - modelscope - INFO - PyTorch version 1.11.0+cu113 Found.
2024-01-15 15:17:13,614 - modelscope - INFO - Loading ast index from /home/winse/.cache/modelscope/ast_indexer
2024-01-15 15:17:13,637 - modelscope - INFO - Loading done! Current index file version is 1.11.0, with md5 85336421feb1dc1ec9dde85ceee20f42 and a total number of 953 components indexed
2024-01-15 15:17:16,879 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2.31M/2.31M [00:00<00:00, 6.01MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 612/612 [00:00<00:00, 436kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 547/547 [00:00<00:00, 804kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 743/743 [00:00<00:00, 1.25MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 4.46k/4.46k [00:00<00:00, 244kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 139/139 [00:00<00:00, 231kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████▉| 3.20G/3.20G [02:36<00:00, 22.0MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 319M/319M [00:39<00:00, 8.40MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2.11M/2.11M [00:00<00:00, 4.05MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 512k/512k [00:00<00:00, 1.28MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 543/543 [00:00<00:00, 845kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 342/342 [00:00<00:00, 521kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████▉| 1.13G/1.13G [01:22<00:00, 14.8MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 469M/469M [00:12<00:00, 39.0MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 5.00/5.00 [00:00<00:00, 6.02kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 7.97k/7.97k [00:00<00:00, 7.06MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 284/284 [00:00<00:00, 441kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 389/389 [00:00<00:00, 348kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████▉| 11.3G/11.3G [04:07<00:00, 49.0MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████| 697/697 [00:00<00:00, 950kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 939k/939k [00:00<00:00, 2.50MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2.87M/2.87M [00:00<00:00, 5.79MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 3.44M/3.44M [00:00<00:00, 6.17MB/s]


#tree -L 2
(base) winse@DESKTOP-BR4MG38:ai$ tree -L 1 Fengshenbang/
Fengshenbang/
├── Taiyi-Stable-Diffusion-1B-Chinese-EN-v0.1
└── Taiyi-Stable-Diffusion-1B-Chinese-v0.1

2 directories, 0 files

Next, try a bilingual Chinese-English model

(base) winse@DESKTOP-BR4MG38:ai$ conda activate modelscope
(modelscope) winse@DESKTOP-BR4MG38:ai$
(modelscope) winse@DESKTOP-BR4MG38:ai$ python
Python 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:21:28)
[GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained("Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-EN-v0.1").to("cuda")
Loading pipeline components...:  43%|█████████████████████████████████▊                                             | 3/7 [00:17<00:21,  5.46s/it]/home/winse/miniconda3/envs/modelscope/lib/python3.8/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...:  86%|███████████████████████████████████████████████████████████████████▋           | 6/7 [00:17<00:01,  1.90s/it]`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████| 7/7 [00:27<00:00,  3.87s/it]
>>>
>>> prompt = '小桥流水人家,Van Gogh style'
>>> image = pipe(prompt, guidance_scale=10).images[0]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [14:19<00:00, 17.19s/it]
>>> image.save("小桥.png")
>>>

Taiyi webui: step by step

This section records the installation in detail, which is rather tedious; to install directly, skip ahead to [Taiyi webui: clean version].

Pick a version and OS matching my current environment: https://hub.docker.com/r/nvidia/cuda/tags

Image description: CUDA and cuDNN images from gitlab.com/nvidia/cuda

Trial and error

(modelscope) winse@DESKTOP-BR4MG38:ai$ docker pull nvidia/cuda:12.3.1-devel-ubuntu20.04
12.3.1-devel-ubuntu20.04: Pulling from nvidia/cuda
25ad149ed3cf: Pull complete
ba7b66a9df40: Pull complete
520797292d92: Pull complete
c5f2ffd06d8b: Pull complete
1698c67699a3: Pull complete
16dd7c0d35aa: Pull complete
568cac1e538c: Pull complete
6252d19a7f1d: Pull complete
f573e2686be4: Pull complete
0074e75104ac: Pull complete
df35fae9e247: Pull complete
Digest: sha256:befbdfddbb52727f9ce8d0c574cac0f631c606b1e6f0e523f3a0777fe2720c99
Status: Downloaded newer image for nvidia/cuda:12.3.1-devel-ubuntu20.04
docker.io/nvidia/cuda:12.3.1-devel-ubuntu20.04

What's Next?
  1. Sign in to your Docker account → docker login
  2. View a summary of image vulnerabilities and recommendations → docker scout quickview nvidia/cuda:12.3.1-devel-ubuntu20.04


(modelscope) winse@DESKTOP-BR4MG38:ai$ docker run --rm --gpus all --ipc host --ulimit memlock=-1 --ulimit stack=67108864 -it -v /mnt/i/ai:/app/stabilityai -p 7860:7860 docker.io/nvidia/cuda:12.3.1-devel-ubuntu20.04
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: requirement error: unsatisfied condition: cuda>=12.3, please update your driver to a newer version, or use an earlier cuda container: unknown.

Re-download the image and configure it

(modelscope) winse@DESKTOP-BR4MG38:ai$ nvidia-smi
Mon Jan 15 17:04:18 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.146.01             Driver Version: 537.99       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro T2000                   On  | 00000000:01:00.0  On |                  N/A |
| N/A   46C    P8               6W /  60W |    589MiB /  4096MiB |      7%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        39      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        42      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        44      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+

The container's CUDA version must not exceed the host's, so pull a matching image:
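This check can be scripted before pulling: `sort -V` compares dotted version numbers, so you can test a candidate image tag against the CUDA version shown in the `nvidia-smi` banner. A sketch using this machine's numbers:

```shell
host_cuda=12.2    # from the nvidia-smi banner above
image_cuda=12.3   # CUDA version of the candidate nvidia/cuda image tag
# sort -V orders versions numerically; the image is usable only if it is
# not newer than what the host driver supports.
if [ "$(printf '%s\n' "$image_cuda" "$host_cuda" | sort -V | head -n1)" = "$host_cuda" ] \
   && [ "$image_cuda" != "$host_cuda" ]; then
  echo "image too new: pick a tag <= $host_cuda"
else
  echo "image OK"
fi
```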

(modelscope) winse@DESKTOP-BR4MG38:ai$ docker pull nvidia/cuda:12.2.2-devel-ubuntu20.04
12.2.2-devel-ubuntu20.04: Pulling from nvidia/cuda
96d54c3075c9: Pull complete
db26cf78ae4f: Pull complete
5adc7ab504d3: Pull complete
e4f230263527: Pull complete
95e3f492d47e: Pull complete
35dd1979297e: Pull complete
39a2c88664b3: Pull complete
d8f6b6cd09da: Pull complete
fe19bbed4a4a: Pull complete
469ef7e9efe0: Pull complete
e30c6425f419: Pull complete
Digest: sha256:b7074ef6f9aa30c27fe747f3a7e10402ec442f001290718c73e0972d1ee61342
Status: Downloaded newer image for nvidia/cuda:12.2.2-devel-ubuntu20.04
docker.io/nvidia/cuda:12.2.2-devel-ubuntu20.04

What's Next?
  1. Sign in to your Docker account → docker login
  2. View a summary of image vulnerabilities and recommendations → docker scout quickview nvidia/cuda:12.2.2-devel-ubuntu20.04

Run a container instance

(modelscope) winse@DESKTOP-BR4MG38:P15$ docker run --rm --gpus all --ipc host --ulimit memlock=-1 --ulimit stack=67108864 -it -v /mnt/i/ai:/app/stabilityai -p 7860:7860 docker.io/nvidia/cuda:12.2.2-devel-ubuntu20.04


==========
== CUDA ==
==========

CUDA Version 12.2.2

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

root@41af85cb0007:/# nvidia-smi 
Mon Jan 15 13:25:43 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.146.01             Driver Version: 537.99       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro T2000                   On  | 00000000:01:00.0  On |                  N/A |
| N/A   43C    P8               3W /  60W |    620MiB /  4096MiB |      2%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
root@41af85cb0007:/# docker ps -a 
bash: docker: command not found
root@41af85cb0007:/# python -V 
bash: python: command not found

Although Docker uses the WSL 2 based engine, the container itself is a plain Linux environment, not managed by Windows (no docker or python inside).

Configure webui

[Fixing the AI container problems the official project overlooked] https://soulteary.com/2022/12/09/use-docker-to-quickly-get-started-with-the-chinese-stable-diffusion-model-taiyi.html https://github.com/soulteary/docker-stable-diffusion-taiyi/blob/main/docker/Dockerfile breaks down the official dependency installation. Worth referencing, though I still don't recommend going this route; it would be a different story if the fixes could be contributed upstream.

Since his image would not run for me either, I installed following the official instructions, drawing on the problems he had already hit and solved.

root@41af85cb0007:/# sed -i.bak -e 's|archive.ubuntu.com/ubuntu/|mirrors.tuna.tsinghua.edu.cn/ubuntu/|' -e 's|security.ubuntu.com/ubuntu/|mirrors.tuna.tsinghua.edu.cn/ubuntu/|' /etc/apt/sources.list

root@41af85cb0007:/# apt update 

root@41af85cb0007:/# apt install -y git wget curl iputils-ping iproute2 traceroute

root@41af85cb0007:/# wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh

root@41af85cb0007:/# bash Miniconda3-latest-Linux-x86_64.sh -b
PREFIX=/root/miniconda3
Unpacking payload ...
                                                                                                                                                                                                           
Installing base environment...


Downloading and Extracting Packages:


Downloading and Extracting Packages:

Preparing transaction: done
Executing transaction: done
installation finished.

root@41af85cb0007:/# /root/miniconda3/bin/conda init bash 
no change     /root/miniconda3/condabin/conda
no change     /root/miniconda3/bin/conda
no change     /root/miniconda3/bin/conda-env
no change     /root/miniconda3/bin/activate
no change     /root/miniconda3/bin/deactivate
no change     /root/miniconda3/etc/profile.d/conda.sh
no change     /root/miniconda3/etc/fish/conf.d/conda.fish
no change     /root/miniconda3/shell/condabin/Conda.psm1
no change     /root/miniconda3/shell/condabin/conda-hook.ps1
no change     /root/miniconda3/lib/python3.11/site-packages/xontrib/conda.xsh
no change     /root/miniconda3/etc/profile.d/conda.csh
modified      /root/.bashrc

==> For changes to take effect, close and re-open your current shell. <==

root@41af85cb0007:/# source ~/.bashrc
(base) root@41af85cb0007:/# 



#https://github.com/IDEA-CCNL/stable-diffusion-webui/zipball/master/
#https://github.com/IDEA-CCNL/stable-diffusion-webui/tarball/master/
#https://docs.github.com/en/repositories/working-with-files/using-files/downloading-source-code-archives#source-code-archive-urls
(base) root@41af85cb0007:/opt# wget -c https://github.com/IDEA-CCNL/stable-diffusion-webui/archive/refs/heads/master.tar.gz -O stable-diffusion-webui.tgz

(base) root@41af85cb0007:/opt# tar zxf stable-diffusion-webui.tgz 
(base) root@41af85cb0007:/opt# mv stable-diffusion-webui-master stable-diffusion-webui
(base) root@41af85cb0007:/opt# cd stable-diffusion-webui


#@@ Going through the proxy anyway, so this is unnecessary
#(webui) root@41af85cb0007:/opt/stable-diffusion-webui# pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
#Writing to /root/.config/pip/pip.conf


#https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/4345
#https://stackoverflow.com/questions/75099182/stable-diffusion-error-couldnt-install-torch-no-matching-distribution-found
#ERROR: Ignored the following versions that require a different python version: 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10
(base) root@41af85cb0007:/opt/stable-diffusion-webui# conda create -n py39 python=3.9
(base) root@41af85cb0007:/opt/stable-diffusion-webui# conda activate py39
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# 


#@@ GitHub access needs the socks proxy; @@unset the proxy first, otherwise DNS resolution fails again@@
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# unset all_proxy && unset ALL_PROXY && unset https_proxy && unset HTTPS_PROXY
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# pip install pysocks
Collecting pysocks
  WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fca218ae370>, 'Connection to files.pythonhosted.org timed out. (connect timeout=15)')': /packages/8d/59/b4572118e098ac8e46e399a1dd0f2d85403ce8bbaad9ec79373ed6badaf9/PySocks-1.7.1-py3-none-any.whl
  Downloading PySocks-1.7.1-py3-none-any.whl (16 kB)
Installing collected packages: pysocks
Successfully installed pysocks-1.7.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv


#?? ImportError: libGL.so.1: cannot open shared object file: No such file or directory

(py39) root@41af85cb0007:/opt/stable-diffusion-webui# apt-get install ffmpeg libsm6 libxext6  -y
...
Setting up tzdata (2023c-0ubuntu0.20.04.2) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
Configuring tzdata
------------------

Please select the geographic area in which you live. Subsequent configuration questions will narrow this down by presenting a list of cities, representing the time zones in which they are located.

  1. Africa  2. America  3. Antarctica  4. Australia  5. Arctic  6. Asia  7. Atlantic  8. Europe  9. Indian  10. Pacific  11. SystemV  12. US  13. Etc
Geographic area: 6

Please select the city or region corresponding to your time zone.

  1. Aden      9. Baghdad   17. Chita       25. Dushanbe     33. Irkutsk    41. Kashgar       49. Macau         57. Omsk        65. Rangoon        73. Taipei    81. Ujung_Pandang  89. Yekaterinburg
  2. Almaty    10. Bahrain  18. Choibalsan  26. Famagusta    34. Istanbul   42. Kathmandu     50. Magadan       58. Oral        66. Riyadh         74. Tashkent  82. Ulaanbaatar    90. Yerevan
  3. Amman     11. Baku     19. Chongqing   27. Gaza         35. Jakarta    43. Khandyga      51. Makassar      59. Phnom_Penh  67. Sakhalin       75. Tbilisi   83. Urumqi
  4. Anadyr    12. Bangkok  20. Colombo     28. Harbin       36. Jayapura   44. Kolkata       52. Manila        60. Pontianak   68. Samarkand      76. Tehran    84. Ust-Nera
  5. Aqtau     13. Barnaul  21. Damascus    29. Hebron       37. Jerusalem  45. Krasnoyarsk   53. Muscat        61. Pyongyang   69. Seoul          77. Tel_Aviv  85. Vientiane
  6. Aqtobe    14. Beirut   22. Dhaka       30. Ho_Chi_Minh  38. Kabul      46. Kuala_Lumpur  54. Nicosia       62. Qatar       70. Shanghai       78. Thimphu   86. Vladivostok
  7. Ashgabat  15. Bishkek  23. Dili        31. Hong_Kong    39. Kamchatka  47. Kuching       55. Novokuznetsk  63. Qostanay    71. Singapore      79. Tokyo     87. Yakutsk
  8. Atyrau    16. Brunei   24. Dubai       32. Hovd         40. Karachi    48. Kuwait        56. Novosibirsk   64. Qyzylorda   72. Srednekolymsk  80. Tomsk     88. Yangon
Time zone: 70


Current default time zone: 'Asia/Shanghai'
Local time is now:      Tue Jan 16 02:00:49 CST 2024.
Universal Time is now:  Mon Jan 15 18:00:49 UTC 2024.
Run 'dpkg-reconfigure tzdata' if you wish to change it.

Setting up libxcb-present0:amd64 (1.14-2) ...
Setting up libglib2.0-data (2.64.6-1~ubuntu20.04.6) ...
Setting up libslang2:amd64 (2.3.2-4) ...
....
Setting up libavdevice58:amd64 (7:4.2.7-0ubuntu0.1) ...
Setting up ffmpeg (7:4.2.7-0ubuntu0.1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.12) ...
/sbin/ldconfig.real: /lib/x86_64-linux-gnu/libcudadebugger.so.1 is not a symbolic link

/sbin/ldconfig.real: /lib/x86_64-linux-gnu/libcuda.so.1 is not a symbolic link

Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.40.0+dfsg-3ubuntu0.4) ...
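The interactive tzdata prompt above can be avoided on the next build by preseeding debconf and the timezone, a common Docker trick. A small sketch (the `ln`/`apt-get` steps need root, so they are shown as comments; Asia/Shanghai matches the choice made above):

```shell
# Preseed the frontend and timezone so tzdata never prompts.
export DEBIAN_FRONTEND=noninteractive
export TZ=Asia/Shanghai
# inside the image, as root, you would then run:
#   ln -fs "/usr/share/zoneinfo/$TZ" /etc/localtime
#   apt-get install -y tzdata
echo "frontend=$DEBIAN_FRONTEND tz=$TZ"
```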



#@@ Patches: (1) allow running as root; (2) use the current directory as-is (the tarball download has no .git); (3) manage the Python environment with conda
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# diff -u webui.sh.back  webui.sh
--- webui.sh.back       2024-01-16 02:10:15.775277218 +0800
+++ webui.sh    2024-01-16 02:41:22.748823781 +0800
@@ -64,23 +64,23 @@
 if [[ $(id -u) -eq 0 ]]
 then
     printf "\n%s\n" "${delimiter}"
-    printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m"
-    printf "\n%s\n" "${delimiter}"
-    exit 1
+#    printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m"
+#    printf "\n%s\n" "${delimiter}"
+#    exit 1
 else
     printf "\n%s\n" "${delimiter}"
     printf "Running on \e[1m\e[32m%s\e[0m user" "$(whoami)"
     printf "\n%s\n" "${delimiter}"
 fi
 
-if [[ -d .git ]]
-then
+#if [[ -d .git ]]
+#then
     printf "\n%s\n" "${delimiter}"
     printf "Repo already cloned, using it as install directory"
     printf "\n%s\n" "${delimiter}"
     install_dir="${PWD}/../"
     clone_dir="${PWD##*/}"
-fi
+#fi

 # Check prerequisites
 for preq in "${GIT}" "${python_cmd}"
@@ -120,19 +120,20 @@
 cd "${install_dir}"/"${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
 if [[ ! -d "${venv_dir}" ]]
 then
-    "${python_cmd}" -m venv "${venv_dir}"
+#    "${python_cmd}" -m venv "${venv_dir}"
+    mkdir -p "${venv_dir}"
     first_launch=1
 fi
 # shellcheck source=/dev/null
-if [[ -f "${venv_dir}"/bin/activate ]]
-then
-    source "${venv_dir}"/bin/activate
-else
-    printf "\n%s\n" "${delimiter}"
-    printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m"
-    printf "\n%s\n" "${delimiter}"
-    exit 1
-fi
+#if [[ -f "${venv_dir}"/bin/activate ]]
+#then
+#    source "${venv_dir}"/bin/activate
+#else
+#    printf "\n%s\n" "${delimiter}"
+#    printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m"
+#    printf "\n%s\n" "${delimiter}"
+#    exit 1
+#fi

 printf "\n%s\n" "${delimiter}"
 printf "Launching launch.py..."



(py39) root@41af85cb0007:/opt/stable-diffusion-webui# export HTTPS_PROXY=socks5://172.22.240.1:23333
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# bash webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)

################################################################

################################################################

################################################################
Repo already cloned, using it as install directory

################################################################

################################################################
Launching launch.py...

################################################################
Python 3.9.18 (main, Sep 11 2023, 13:41:44) 
[GCC 11.2.0]
Commit hash: <none>
Installing torch and torchvision
Installing gfpgan
Installing clip
Cloning Stable Diffusion into repositories/stable-diffusion...
Cloning Taming Transformers into repositories/taming-transformers...
Cloning K-diffusion into repositories/k-diffusion...
Cloning CodeFormer into repositories/CodeFormer...
Cloning BLIP into repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/feature_extractor/preprocessor_config.json | File missing.
repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 does not exist or file is missing. (1)Do you want to redownload the Taiyi model? Or (2)move your downloaded Taiyi model path? 1/2: 2

Detection failed, please reconfirm that the model has been moved to: repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1
Please move the Taiyi model to: repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1. Completed? y: y


???
  File "/root/miniconda3/envs/py39/lib/python3.9/site-packages/httpx/_transports/default.py", line 275, in __init__
    self._pool = httpcore.AsyncConnectionPool(
TypeError: __init__() got an unexpected keyword argument 'socket_options'


#https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13236

#pip install -U httpcore
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# pip3 install httpx==0.24.1

(py39) root@41af85cb0007:/opt/stable-diffusion-webui# unset all_proxy && unset ALL_PROXY && unset https_proxy && unset HTTPS_PROXY && unset http_proxy && unset HTTP_PROXY
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# bash webui.sh
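The proxy gets exported and unset over and over above; a pair of helper functions keeps that tidy. 172.22.240.1:23333 is the WSL host gateway and port used in this post, so substitute your own:

```shell
# Toggle the SOCKS proxy on/off in one word each; covers both the
# upper- and lower-case variants that different tools read.
proxy_on() {
  export all_proxy=socks5://172.22.240.1:23333
  export ALL_PROXY=$all_proxy http_proxy=$all_proxy HTTP_PROXY=$all_proxy
  export https_proxy=$all_proxy HTTPS_PROXY=$all_proxy
}
proxy_off() {
  unset all_proxy ALL_PROXY http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
```

`proxy_on` before `bash webui.sh` when it needs to clone from GitHub, `proxy_off` before talking to domestic mirrors.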


???
ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/root/miniconda3/envs/py39/lib/python3.9/site-packages/torchmetrics/utilities/imports.py)    

#https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11648
#conda list torchmetrics
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# conda install --force-reinstall torchmetrics==0.11.4

pip install torchmetrics==0.11.4 torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchtext==0.14.1 torchaudio==0.13.1 torchdata==0.5.1 --extra-index-url https://download.pytorch.org/whl/cu117


???

export COMMANDLINE_ARGS="--lowvram --precision full --no-half --skip-torch-cuda-test"


???
RuntimeError: Cannot add middleware after an application has started

(py39) root@41af85cb0007:/opt/stable-diffusion-webui# pip install fastapi==0.90.1
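Three of the fixes above boil down to version pins. Collecting them into one requirements file makes the workaround repeatable on a rebuild (these are the versions that worked in this post, not upstream recommendations):

```shell
# Write the pins that resolved the httpx/torchmetrics/fastapi errors
# to a file, then install them in one shot.
cat > sd-webui-pins.txt <<'EOF'
httpx==0.24.1
torchmetrics==0.11.4
fastapi==0.90.1
EOF
wc -l < sd-webui-pins.txt
# then: pip install -r sd-webui-pins.txt
```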


@@ It finally starts
(py39) root@41af85cb0007:/opt/stable-diffusion-webui# bash webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.9.18 (main, Sep 11 2023, 13:41:44)
[GCC 11.2.0]
Commit hash: <none>
Installing requirements for Web UI
Obtaining file:///opt/stable-diffusion-webui
ERROR: file:///opt/stable-diffusion-webui does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
Launching Web UI with arguments: --lowvram --precision full --no-half --ckpt /opt/stable-diffusion-webui/repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt --listen --port 12345
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [e2e75020] from /opt/stable-diffusion-webui/repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL:  http://0.0.0.0:12345

To create a public link, set `share=True` in `launch()`.

I forgot to mount the data USB drive when starting the image, so I simply copied everything into the container:

(base) root@41af85cb0007:/opt/stable-diffusion-webui/repositories# mkdir Taiyi-Stable-Diffusion-1B-Chinese-v0.1 

#(base) winse@DESKTOP-BR4MG38:Taiyi-Stable-Diffusion-1B-Chinese-v0.1$ tar tf 1.tar * 
#(base) winse@DESKTOP-BR4MG38:Taiyi-Stable-Diffusion-1B-Chinese-v0.1$ docker cp 1.tar 41af85cb0007:/opt/stable-diffusion-webui/repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/

(base) root@41af85cb0007:/opt/stable-diffusion-webui/repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1# tar xf 1.tar 
(base) root@41af85cb0007:/opt/stable-diffusion-webui/repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1# rm -rf 1.tar 
(base) root@41af85cb0007:/opt/stable-diffusion-webui/repositories/Taiyi-Stable-Diffusion-1B-Chinese-v0.1# ll
total 4084196
drwxr-xr-x 9 root  root        4096 Jan 15 17:57 ./
drwxrwxr-x 9 root  root        4096 Jan 15 17:42 ../
-rwxrwxrwx 1 webui webui 4182159787 Jan 15 00:26 Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt*
-rwxrwxrwx 1 webui webui        146 Jan 15 00:19 configuration.json*
drwxrwxrwx 2 webui webui       4096 Jan 15 17:50 feature_extractor/
-rwxrwxrwx 1 webui webui        539 Jan 15 00:22 model_index.json*
drwxrwxrwx 2 webui webui       4096 Jan 15 17:50 safety_checker/
drwxrwxrwx 2 webui webui       4096 Jan 15 17:50 scheduler/
drwxrwxrwx 2 webui webui       4096 Jan 15 17:50 text_encoder/
drwxrwxrwx 2 webui webui       4096 Jan 15 17:50 tokenizer/
drwxrwxrwx 2 webui webui       4096 Jan 15 17:50 unet/
drwxrwxrwx 2 webui webui       4096 Jan 15 17:50 vae/

Taiyi webui - clean install

(base) winse@DESKTOP-BR4MG38:P15$ cat /etc/os-release 
(base) winse@DESKTOP-BR4MG38:P15$ cat /etc/issue
Ubuntu 20.04.6 LTS \n \l

(base) winse@DESKTOP-BR4MG38:P15$ nvidia-smi
Tue Jan 16 07:09:11 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.146.01             Driver Version: 537.99       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro T2000                   On  | 00000000:01:00.0  On |                  N/A |
| N/A   50C    P8               4W /  60W |    537MiB /  4096MiB |      1%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        32      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        39      G   /Xwayland                                 N/A      |
|    0   N/A  N/A        41      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+

Pick an image whose version and OS match my current environment: https://hub.docker.com/r/nvidia/cuda/tags (CUDA and cuDNN images from gitlab.com/nvidia/cuda)

(base) winse@DESKTOP-BR4MG38:P15$ docker pull nvidia/cuda:12.2.2-devel-ubuntu20.04

(base) winse@DESKTOP-BR4MG38:P15$ docker run --gpus all --ipc host --ulimit memlock=-1 --ulimit stack=67108864 -it -v /mnt/i/ai:/app/stabilityai -p 7860:7860 docker.io/nvidia/cuda:12.2.2-devel-ubuntu20.04

root@c65a73d918b1:/# sed -i.bak -e 's|archive.ubuntu.com/ubuntu/|mirrors.tuna.tsinghua.edu.cn/ubuntu/|' -e 's|security.ubuntu.com/ubuntu/|mirrors.tuna.tsinghua.edu.cn/ubuntu/|' /etc/apt/sources.list
root@c65a73d918b1:/# apt update 

root@c65a73d918b1:/# apt install -y git wget vim

root@c65a73d918b1:/# cd 
root@c65a73d918b1:~# wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
root@c65a73d918b1:~# bash Miniconda3-latest-Linux-x86_64.sh -b -u                
root@c65a73d918b1:~# ~/miniconda3/bin/conda init bash
root@c65a73d918b1:~# source ~/.bashrc
(base) root@c65a73d918b1:~# 

(base) root@c65a73d918b1:~# wget -c https://github.com/IDEA-CCNL/stable-diffusion-webui/archive/refs/heads/master.tar.gz -O - | tar zxf - 
(base) root@c65a73d918b1:~# mv stable-diffusion-webui-master stable-diffusion-webui        

(base) root@c65a73d918b1:~# cd stable-diffusion-webui
(base) root@c65a73d918b1:~/stable-diffusion-webui# conda create -n py3.10 python=3.10
(base) root@c65a73d918b1:~/stable-diffusion-webui# conda activate py3.10


(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# mkdir .git 
(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# diff -u webui.sh.back  webui.sh 
--- webui.sh.back       2024-01-15 23:39:24.271256229 +0000
+++ webui.sh    2024-01-15 23:42:17.070404354 +0000
@@ -64,9 +64,6 @@
 if [[ $(id -u) -eq 0 ]]
 then
     printf "\n%s\n" "${delimiter}"
-    printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m"
-    printf "\n%s\n" "${delimiter}"
-    exit 1
 else
     printf "\n%s\n" "${delimiter}"
     printf "Running on \e[1m\e[32m%s\e[0m user" "$(whoami)"
@@ -120,19 +117,11 @@
 cd "${install_dir}"/"${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; }
 if [[ ! -d "${venv_dir}" ]]
 then
-    "${python_cmd}" -m venv "${venv_dir}"
+#    "${python_cmd}" -m venv "${venv_dir}"
+    mkdir -p "${venv_dir}"
     first_launch=1
 fi
 # shellcheck source=/dev/null
-if [[ -f "${venv_dir}"/bin/activate ]]
-then
-    source "${venv_dir}"/bin/activate
-else
-    printf "\n%s\n" "${delimiter}"
-    printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m"
-    printf "\n%s\n" "${delimiter}"
-    exit 1
-fi

 printf "\n%s\n" "${delimiter}"
 printf "Launching launch.py..."
 

(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# pip install pysocks

#https://github.com/invoke-ai/InvokeAI/issues/3560#issuecomment-1689474997
#https://blog.csdn.net/shark1357/article/details/131238924
(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple
(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# pip install gfpgan==1.3.8

(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# apt-get install ffmpeg libsm6 libxext6  -y

(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# pip install httpcore httpx==0.24.1 torchmetrics==0.11.4 fastapi==0.90.1

(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# vi webui-user.sh

export COMMANDLINE_ARGS="--lowvram --precision full --no-half --skip-torch-cuda-test"


(py3.10) root@c65a73d918b1:~# cd stable-diffusion-webui/repositories/
(py3.10) root@c65a73d918b1:~/stable-diffusion-webui/repositories# ln -s /app/stabilityai/Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/

(py3.10) root@c65a73d918b1:~/stable-diffusion-webui/repositories#  ll Taiyi-Stable-Diffusion-1B-Chinese-v0.1
lrwxrwxrwx 1 root root 69 Jan 16 07:58 Taiyi-Stable-Diffusion-1B-Chinese-v0.1 -> /app/stabilityai/Fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1//


(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# pip install socksio httpx[socks]


#@@ use the proxy to fetch the GitHub code needed during deployment
(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# export HTTPS_PROXY=socks5://172.22.240.1:23333
(py3.10) root@c65a73d918b1:~/stable-diffusion-webui# bash webui.sh --port 7860

Then open http://localhost:7860/ in a browser on Windows.
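Rather than refreshing the browser until the UI answers, a small polling helper can wait for the endpoint. A convenience sketch (the URL matches the port used above):

```shell
# Poll a URL until curl succeeds or the attempts run out.
wait_for_url() {
  local url=$1 tries=${2:-30}
  local i
  for ((i = 1; i <= tries; i++)); do
    if curl -fsS -o /dev/null "$url"; then
      echo "up after $i attempt(s)"
      return 0
    fi
    sleep 2
  done
  echo "gave up after $tries attempts" >&2
  return 1
}
# wait_for_url http://localhost:7860/
```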

TODO

Chinese localization: https://github.com/VinsonLaro/stable-diffusion-webui-chinese

–END

Reinstall Redmine on Raspberry2

Before the New Year I rescued my Raspberry Pi (重新折腾raspberry2). With a new platform slowly taking shape toward year end, bug tracking may come in handy, and on the principle of wasting nothing and using whatever still works, I put the old relic back into service.

I had deployed Redmine on the Raspberry Pi 2 before (在树莓派上部署redmine), but that was years ago. My initial plan was simply to retrace the old steps and spare my brain cells; reality was a wall at every turn.

This hardware really is not much to show off anymore.

top - 14:01:28 up 1 min,  1 user,  load average: 1.87, 0.63, 0.22
Tasks: 130 total,   1 running, 129 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :    922.0 total,    643.8 free,     79.7 used,    198.5 buff/cache
MiB Swap:    100.0 total,    100.0 free,      0.0 used.    788.3 avail Mem 

Background research

The old setup ran postgres 9.6 + redmine 3.4. Back in 2020 those versions were still fresh, and I wanted to keep using the old data and plugins exactly as before. Installing those versions now is another matter: the official postgres 9.6 has been archived.

1. Since ubuntu-14 fell out of official support, neither the postgresql site nor the various mirror sites carry 9.6 packages for it anymore.

2. Find a somewhat older OS, ubuntu-20 (focal), that still supports postgres 9.6, then try to build an armv7 postgres-9.6 package from source the way I did originally.

Installing the build dependencies and compiling seemed to go through, apart from being slow (a whole night); the intermediate output was long and tedious, so I did not read it closely.

With the deb built, installation then failed on unresolvable dependencies. Strange: the build "passed", yet installation would not, and the missing perl dependencies had no matching versions anywhere. I redid the whole thing twice more!!! My old bones cannot afford that kind of time and energy, so I changed approach.

3. The docker postgres images only start at 11, never mind an armv7 9.6.

4. After several days of this, I gritted my teeth and switched to the latest version, migrating the data at the very end; that looked faster.

Practice and improvements

Having settled on the latest versions, I picked sameersbn/redmine:5.1.0 backed by a postgres-15 database. Since the struggle above had already involved hunting for armv7 postgres images, this time I built my application database image directly on top of the official armv7 postgres-15.

postgres

Building armv7-postgres15 went smoothly apart from a few small LANG errors. The changes and the Dockerfile are as follows:

root@raspberrypi:/opt/docker-postgresql-15-20230628# vim runtime/functions 

300         if [[ -z $(psql -U ${PG_USER} -Atc "SELECT 1 FROM pg_catalog.pg_user WHERE usename = '${DB_USER}'";) ]]; then
301           #psql -U ${PG_USER} -c "CREATE ROLE \"${DB_USER}\" with LOGIN CREATEDB PASSWORD '${DB_PASS}';" >/dev/null
302           psql -U ${PG_USER} -c "CREATE ROLE \"${DB_USER}\" SUPERUSER CREATEDB CREATEROLE LOGIN PASSWORD '${DB_PASS}';" >/dev/null
303         fi

root@raspberrypi:/opt/docker-postgresql-15-20230628# cat Dockerfile 
FROM arm32v7/postgres:15-bullseye

LABEL maintainer="sameer@damagehead.com"

ENV LC_ALL="en_US.UTF-8" \
    LC_CTYPE="en_US.UTF-8"

ENV PG_APP_HOME="/etc/docker-postgresql" \
    PG_VERSION=15 \
    PG_USER=postgres \
    PG_HOME=/var/lib/postgresql \
    PG_RUNDIR=/run/postgresql \
    PG_LOGDIR=/var/log/postgresql \
    PG_CERTDIR=/etc/postgresql/certs

ENV PG_BINDIR=/usr/lib/postgresql/${PG_VERSION}/bin \
    PG_DATADIR=${PG_HOME}/${PG_VERSION}/main

RUN echo "LC_ALL=en_US.UTF-8" >> /etc/environment
RUN echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
RUN echo "LANG=en_US.UTF-8" > /etc/locale.conf

RUN sed -i 's@http://deb.debian.org@http://mirrors.aliyun.com@g' /etc/apt/sources.list \
 && apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y acl sudo locales \
 && update-locale LANG=C.UTF-8 LC_MESSAGES=POSIX \
 && locale-gen en_US.UTF-8 \
 && DEBIAN_FRONTEND=noninteractive dpkg-reconfigure locales \
 && mkdir -p /etc/postgresql/${PG_VERSION}/main \
 && ln -sf ${PG_DATADIR}/postgresql.conf /etc/postgresql/${PG_VERSION}/main/postgresql.conf \
 && ln -sf ${PG_DATADIR}/pg_hba.conf /etc/postgresql/${PG_VERSION}/main/pg_hba.conf \
 && ln -sf ${PG_DATADIR}/pg_ident.conf /etc/postgresql/${PG_VERSION}/main/pg_ident.conf \
 && rm -rf ${PG_HOME} \
 && rm -rf /var/lib/apt/lists/*

COPY runtime/ ${PG_APP_HOME}/

COPY entrypoint.sh /sbin/entrypoint.sh

RUN chmod 755 /sbin/entrypoint.sh

EXPOSE 5432/tcp

WORKDIR ${PG_HOME}

ENTRYPOINT ["/sbin/entrypoint.sh"]

redmine

Next came the redmine image. Dependency downloads during the build were painfully slow; the software being installed probably asks a bit much of a Raspberry Pi 2, and the remote session dropped repeatedly from unresponsiveness, forcing restarts. Even screen did not save the build itself, so:

  1. Open a screen window and start a container.
  2. Install the dependencies by hand inside the container, then commit the container as an image.
  3. Build the redmine application image on top of that image.
root@raspberrypi:~/docker-redmine-5.1.0# docker run -ti arm32v7/ruby:2.7.8-bullseye bash 

root@e76bef6f1a9c:/# 
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - 
echo 'deb http://apt.postgresql.org/pub/repos/apt/ bullseye-pgdg main' > /etc/apt/sources.list.d/pgdg.list 

sed -i 's@http://deb.debian.org@https://mirrors.aliyun.com@g' /etc/apt/sources.list && apt-get update
#sed -i 's@http://deb.debian.org@https://mirrors.ustc.edu.cn@g' /etc/apt/sources.list && apt-get update


# drop the mysql and ruby dependencies
apt-get install --no-install-recommends -y \
      supervisor logrotate nginx postgresql-client ca-certificates sudo tzdata \
      imagemagick subversion git cvs bzr mercurial darcs rsync locales openssh-client \
      gcc g++ make patch pkg-config gettext-base libc6-dev zlib1g-dev libxml2-dev \
      libpq5 libyaml-0-2 libcurl4 libssl1.1 uuid-dev xz-utils \
      libxslt1.1 libffi7 zlib1g gsfonts vim-tiny ghostscript sqlite3 libsqlite3-dev 
  
update-locale LANG=C.UTF-8 LC_MESSAGES=POSIX

gem install --no-document bundler


# patch the redmine install script
# @see https://mirrors.tuna.tsinghua.edu.cn/help/rubygems/
root@raspberrypi:/opt/docker-redmine-5.1.0# vi assets/build/install.sh 
  7 BUILD_DEPENDENCIES="wget libcurl4-openssl-dev libssl-dev libmagickcore-dev libmagickwand-dev \
  8                     libpq-dev libxslt1-dev libffi-dev libyaml-dev \
  9                     libsqlite3-dev"

 41   exec_as_redmine wget --no-check-certificate "http://www.redmine.org/releases/redmine-${REDMINE_VERSION}.tar.gz" -O /tmp/redmine-${RE    DMINE_VERSION}.tar.gz
 
104 exec_as_redmine gem sources --add https://mirrors.tuna.tsinghua.edu.cn/rubygems/ --remove https://rubygems.org/
105 exec_as_redmine bundle config mirror.https://rubygems.org https://mirrors.tuna.tsinghua.edu.cn/rubygems
106 exec_as_redmine bundle install -j$(nproc)

# copy it into the container and install redmine
root@raspberrypi:~/docker-redmine-5.1.0# docker cp assets/build/install.sh  e76bef6f1a9c:/opt/


root@e76bef6f1a9c:/# cd /opt/
export RUBY_VERSION=2.7 \
   REDMINE_VERSION=5.1.0 \
   REDMINE_USER="redmine" \
   REDMINE_HOME="/home/redmine" \
   REDMINE_LOG_DIR="/var/log/redmine" \
   REDMINE_ASSETS_DIR="/etc/docker-redmine" \
   RAILS_ENV=production

export REDMINE_INSTALL_DIR="${REDMINE_HOME}/redmine" \
   REDMINE_DATA_DIR="${REDMINE_HOME}/data" \
   REDMINE_BUILD_ASSETS_DIR="${REDMINE_ASSETS_DIR}/build" \
   REDMINE_RUNTIME_ASSETS_DIR="${REDMINE_ASSETS_DIR}/runtime"

root@e76bef6f1a9c:/# bash -x install.sh


# once installed, stop the container and commit it as an image
root@raspberrypi:~/docker-redmine-5.1.0# docker stop e76bef6f1a9c
root@raspberrypi:~/docker-redmine-5.1.0# docker commit e76bef6f1a9c arm32v7/ruby:2.7.8-bullseye-redmine-base

root@raspberrypi:/opt/docker-redmine-5.1.0# cat Dockerfile 
#FROM arm32v7/ruby:2.7.8-bullseye AS add-apt-repositories
#
#RUN sed -i 's@http://deb.debian.org@https://mirrors.aliyun.com@g' /etc/apt/sources.list \
# && apt-get update \
# && DEBIAN_FRONTEND=noninteractive apt-get install -y wget gnupg2 \
# && apt-key adv --keyserver keyserver.ubuntu.com --recv E1DD270288B4E6030699E45FA1715D88E1DF1F24 \
# && apt-key adv --keyserver keyserver.ubuntu.com --recv 80F70E11F0F0D5F10CB20E62F5DA5F09C3173AA6 \
# && apt-key adv --keyserver keyserver.ubuntu.com --recv 8B3981E7A6852F782CC4951600A6F0A3C300EE8C \
# && wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
# && echo 'deb http://apt.postgresql.org/pub/repos/apt/ bullseye-pgdg main' > /etc/apt/sources.list.d/pgdg.list 

FROM arm32v7/ruby:2.7.8-bullseye-redmine-base

LABEL maintainer="sameer@damagehead.com"

ENV RUBY_VERSION=2.7 \
    REDMINE_VERSION=5.1.0 \
    REDMINE_USER="redmine" \
    REDMINE_HOME="/home/redmine" \
    REDMINE_LOG_DIR="/var/log/redmine" \
    REDMINE_ASSETS_DIR="/etc/docker-redmine" \
    RAILS_ENV=production

ENV REDMINE_INSTALL_DIR="${REDMINE_HOME}/redmine" \
    REDMINE_DATA_DIR="${REDMINE_HOME}/data" \
    REDMINE_BUILD_ASSETS_DIR="${REDMINE_ASSETS_DIR}/build" \
    REDMINE_RUNTIME_ASSETS_DIR="${REDMINE_ASSETS_DIR}/runtime"

#COPY --from=add-apt-repositories /etc/apt/trusted.gpg /etc/apt/trusted.gpg
#
#COPY --from=add-apt-repositories /etc/apt/sources.list /etc/apt/sources.list
#COPY --from=add-apt-repositories /etc/apt/sources.list.d/pgdg.list /etc/apt/sources.list.d/
#
#RUN sed -i 's@http://deb.debian.org@https://mirrors.aliyun.com@g' /etc/apt/sources.list \
# && apt-get update \
# && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
#      supervisor logrotate nginx postgresql-client ca-certificates sudo tzdata \
#      imagemagick subversion git cvs rsync locales openssh-client \
#      gcc g++ make patch pkg-config gettext-base libxml2-dev \
#      python3-pil python3-scour libimage-exiftool-perl ffmpegthumbnailer \
#      libpq5 libyaml-0-2 libcurl4 libssl1.1 uuid-dev xz-utils \
#      libxslt1.1 vim-tiny sqlite3 libsqlite3-dev \
# && update-locale LANG=C.UTF-8 LC_MESSAGES=POSIX \
# && gem install --no-document bundler \
# && rm -rf /var/lib/apt/lists/*

#COPY assets/build/ ${REDMINE_BUILD_ASSETS_DIR}/
#
#RUN bash ${REDMINE_BUILD_ASSETS_DIR}/install.sh

COPY assets/runtime/ ${REDMINE_RUNTIME_ASSETS_DIR}/

COPY assets/tools/ /usr/bin/

COPY entrypoint.sh /sbin/entrypoint.sh

COPY VERSION /VERSION

RUN chmod 755 /sbin/entrypoint.sh \
 && sed -i '/session    required     pam_loginuid.so/c\#session    required   pam_loginuid.so' /etc/pam.d/cron

EXPOSE 80/tcp 443/tcp

WORKDIR ${REDMINE_INSTALL_DIR}

ENTRYPOINT ["/sbin/entrypoint.sh"]

CMD ["app:start"]

root@raspberrypi:/opt/docker-redmine-5.1.0# make build

root@raspberrypi:/opt/docker-redmine-5.1.0# docker images 
REPOSITORY             TAG                           IMAGE ID       CREATED        SIZE
sameersbn/postgresql   15-20230628                   cedece0ace69   1 hours ago   324MB
sameersbn/redmine      latest                        f1ab03480a1e   1 hours ago   953MB
arm32v7/ruby           2.7.8-bullseye-redmine-base   3ee3b95c4c85   1 hours ago   953MB
arm32v7/postgres       15-bullseye                   c8e0db9da7af   8 days ago     314MB
arm32v7/ruby           2.7.8-bullseye                8100b7e215f8   6 months ago   667MB

Migrating data from redmine-3.4 to redmine-5.1

root@1cd14c92899a:/var/lib/postgresql# docker ps 
CONTAINER ID   IMAGE                        COMMAND                  CREATED        STATUS      PORTS                                 NAMES
458df7209e1b   sameersbn/redmine:3.4.6      "/sbin/entrypoint.sh…"   6 months ago   Up 2 days   443/tcp, 172.21.37.204:8081->80/tcp   redmine_redmine_1
884a04c9f985   sameersbn/postgresql:9.6-2   "/sbin/entrypoint.sh"    2 years ago    Up 2 days   5432/tcp                              redmine_postgresql_1

root@1cd14c92899a:/var/lib/postgresql# pg_dump -U postgres -Cc -d redmine_production >redmine.dump 


# Dump the redmine-5.1 database too for comparison. Keep the data already in the new database, and strip the records that already exist in 5.x from the old sql (otherwise the import fails with primary-key conflicts).

# To import the configuration data, add the columns the old version's schema carried:
ALTER TABLE projects
  ADD COLUMN customers_deploys_notifications_emails character varying,
  ADD COLUMN deploys_notifications_emails character varying,
  ADD COLUMN abbreviation character varying
  ;

ALTER TABLE trackers
  ADD COLUMN is_in_chlog boolean DEFAULT false NOT NULL
  ;

ALTER TABLE users
  ADD COLUMN identity_url character varying
  ;

Drop the empty tables and the unneeded parts of the data restore; keep only the tables that actually hold data.
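The pruning step can be done mechanically on the plain-text dump. A hedged sketch: remove a whole COPY block for one table with awk; the table name and the miniature dump below are illustrative only, not the real backup:

```shell
# Build a tiny stand-in dump, then strip the settings COPY block from it.
cat > mini.dump <<'EOF'
COPY public.settings (id, name) FROM stdin;
1	ui_theme
\.
COPY public.issues (id, subject) FROM stdin;
7	keep this row
\.
EOF
# skip everything from the table's COPY header to its terminating "\."
awk '/^COPY public\.settings /,/^\\\.$/ {next} {print}' mini.dump > trimmed.dump
grep -c '^COPY' trimmed.dump
```

Repeat per conflicting table, then feed the trimmed dump to psql.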

If the database restore goes wrong and has to be redone, delete the corresponding file contents and restart:

root@raspberrypi:~/docker-redmine-5.1.0# docker-compose down 
root@raspberrypi:~/docker-redmine-5.1.0# rm -rf /srv/docker/redmine

root@raspberrypi:~/docker-redmine-5.1.0# docker-compose up 

Extracting the image and attachment files

root@raspberrypi:/# tar zxvf srv-docker-redmine.tar.gz /srv/docker/redmine/redmine/files 

Redmine plugins

root@raspberrypi:/srv/docker/redmine/redmine/plugins# unzip clipboard_image_paste-1.13.zip 
root@raspberrypi:/srv/docker/redmine/redmine/plugins# mv clipboard_image_paste-1.13 clipboard_image_paste
root@raspberrypi:/srv/docker/redmine/redmine/plugins# rm clipboard_image_paste-1.13.zip 
root@raspberrypi:~/docker-redmine-5.1.0# docker-compose up -d 


root@raspberrypi:/srv/docker/redmine/redmine/plugins# less clipboard_image_paste/init.rb
 30 # @see https://github.com/paginagmbh/redmine_lightbox2/blob/master/init.rb
 31 if Rails::VERSION::MAJOR >= 5
 32   ActiveSupport::Reloader.to_prepare do
 33     require_dependency 'clipboard_image_paste/hooks'
 34     require_dependency 'clipboard_image_paste/attachment_patch'
 35   end 
 36 elsif Rails::VERSION::MAJOR >= 3

# @see https://github.com/peclik/clipboard_image_paste/pull/91/commits/570acebeb5dded80f24e7b01ffddbec09c9eccb6
root@raspberrypi:/srv/docker/redmine/redmine/plugins# less clipboard_image_paste/lib/clipboard_image_paste/attachment_patch.rb
 25       #alias_method_chain :save_attachments, :pasted_images
 26       alias_method :save_attachmenets_without_pasted_images, :save_attachments
 27       alias_method :save_attachments, :save_attachments_with_pasted_images
 
root@raspberrypi:~/docker-redmine-5.1.0# docker-compose restart redmine 


lightbox2
https://github.com/theforeman/redmine_lightbox2/commit/9c8b41f6893d4a92bb30923684bad7a1b40fdb62

apijs
https://www.luigifab.fr/en/redmine/apijs.php

pi@raspberrypi:/srv/docker/redmine/redmine-logs $ docker logs -n 50 1e966fec7f9b

docker-compose

# @see https://github.com/docker/compose/releases/

root@raspberrypi:/usr/local/bin# ln -s docker-compose-linux-armv7 docker-compose

Reflections

Simple text is fine, but once large images need processing it just chokes!!! Better to use the Pi 4!!!

Installation on the pi4 is much simpler; just build the images directly on the machine:

  1. For postgres, adjust the user's privileges;
  2. For redmine, drop the mysql dependency.

#@ pi2
root@1cd14c92899a:/var/lib/postgresql# pg_dump -U redmine -Cc -d redmine_production >redmine.dump        

#@ pi4
root@d744a148ea59:/var/lib/postgresql# psql -U redmine -d postgres -f redmine.dump 

# For /srv, only files and plugins under redmine need to be copied.

–END

Hooking a DingTalk Group Bot up to ChatGPT

A while back I used https://github.com/zhayujie/bot-on-anything to connect ChatGPT to a personal WeChat group, but before long WeChat warned the account and blocked QR-code login. A few days ago the project updated its DingTalk bot integration; I tried it and it feels workable enough.

The process was nerve-racking, of course: the DingTalk docs are mid-upgrade, and the bot-on-anything docs are too thin, so troubleshooting meant reading the code side by side with the official dingtalk documentation.

第一步 首先是创建企业内部机器人,获取Secret

平时对接的告警机器人是只能发消息的,不能接收消息。

Create an organization in the DingTalk app, log in to https://open-dev.dingtalk.com/v1/fe/old#/corprobot under that organization, then click Create Robot:

The AppSecret shown in the basic information will be needed later; it maps to dingtalk_secret:

Robot basic information

Then configure the message-receiving address: when the DingTalk server receives a message, it forwards it to this (publicly reachable) address on your server. The server outbound IP is the public IP of the machine you deploy on; it is a parameter used to verify that requests are legitimate.

Service configuration

Once the robot is created, publish it under the Version Management & Release tab. It can then be selected in the organization's group chats.

Step 2: Add the robot to the organization group and obtain the Token

Go to Group Settings - Robots - Add Robot and add the newly created robot to the organization/company group. The robot's settings then show a webhook URL; the access_token query parameter embedded in that URL is the dingtalk_token needed later.
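When the webhook has a signing secret enabled, each call must also carry a timestamp and a signature: HMAC-SHA256 over "{timestamp}\n{secret}" keyed by the secret, base64- then URL-encoded. A standalone sketch of that signing step (the secret value here is a placeholder):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote_plus

def dingtalk_sign(secret, timestamp=None):
    """Signature for a secret-protected DingTalk webhook."""
    if timestamp is None:
        timestamp = round(time.time() * 1000)
    to_sign = '{}\n{}'.format(timestamp, secret).encode('utf-8')
    code = hmac.new(secret.encode('utf-8'), to_sign, digestmod=hashlib.sha256).digest()
    return timestamp, quote_plus(base64.b64encode(code))

ts, sign = dingtalk_sign("SEC-example-secret")  # placeholder secret
url = f"https://oapi.dingtalk.com/robot/send?access_token=TOKEN&timestamp={ts}&sign={sign}"
```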

Robot management

Robot settings/configuration page

Step 3: Configure the bot and start the service

Configure bot-on-anything's config.json: set channel - type to dingtalk. For the dingtalk parameters: the AppSecret from the robot's application credentials maps to dingtalk_secret; after publishing the robot and adding it to the group, the access_token in the webhook URL on the robot's settings page maps to dingtalk_token.

"channel": {
    "type": "dingtalk",
    "single_chat_prefix": ["bot", "@bot"],
    "single_chat_reply_prefix": "[bot] ",
    "group_chat_prefix": ["@bot"],
    "group_name_white_list": ["ChatGPT测试群"],
    "image_create_prefix": ["画", "看", "找一张"],

    "wechat": {
    },

    "wechat_mp": {
      "token": "2023...",
      "port": "3000"
    },

    "dingtalk": {
      "image_create_prefix": ["画", "draw", "Draw"],
      "port": "3000",
      "dingtalk_token": "d55566a9e90...",
      "dingtalk_post_token": "",
      "dingtalk_secret": "PUMsK......"
    },
  

Start the service and have fun.
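To smoke-test the service locally, you can POST a payload shaped like DingTalk's outgoing-message callback, containing just the fields the channel handler reads (all values below are placeholders; the URL is the local Flask port from the config):

```python
import json
from urllib.request import Request

# Minimal message payload mirroring the fields the handler reads.
payload = {
    "conversationId": "cid-test",
    "conversationType": "2",
    "senderId": "sender-1",
    "senderNick": "alice",
    "senderStaffId": "staff-1",
    "robotCode": "robot-1",
    "text": {"content": "hello bot"},
}

req = Request(
    "http://127.0.0.1:3000/",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# from urllib.request import urlopen; urlopen(req)  # with the service running
```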

Step 4: Small improvements

While testing image messages I found they were not really supported yet, and I also made a small tweak so replies @-mention the person who asked:

cat dingtalk_channel.py 
# encoding:utf-8
import json
import hmac
import hashlib
import base64
import time
import requests
from urllib.parse import quote_plus
from common import log
from flask import Flask, request, render_template, make_response
from common import const
from common import functions
from config import channel_conf
from config import channel_conf_val
from channel.channel import Channel

class DingTalkChannel(Channel):
    def __init__(self):
        self.dingtalk_token = channel_conf(const.DINGTALK).get('dingtalk_token')
        self.dingtalk_post_token = channel_conf(const.DINGTALK).get('dingtalk_post_token')
        self.dingtalk_secret = channel_conf(const.DINGTALK).get('dingtalk_secret')
        log.info("[DingTalk] dingtalk_secret={}, dingtalk_token={} dingtalk_post_token={}".format(self.dingtalk_secret, self.dingtalk_token, self.dingtalk_post_token))

    def startup(self):
        http_app.run(host='0.0.0.0', port=channel_conf(const.DINGTALK).get('port'))
        
    def notify_dingtalk(self, data):
        timestamp = round(time.time() * 1000)
        secret_enc = bytes(self.dingtalk_secret, encoding='utf-8')
        string_to_sign = '{}\n{}'.format(timestamp, self.dingtalk_secret)
        string_to_sign_enc = bytes(string_to_sign, encoding='utf-8')
        hmac_code = hmac.new(secret_enc, string_to_sign_enc,
                             digestmod=hashlib.sha256).digest()
        sign = quote_plus(base64.b64encode(hmac_code))

        notify_url = f"https://oapi.dingtalk.com/robot/send?access_token={self.dingtalk_token}&timestamp={timestamp}&sign={sign}"
        try:
            r = requests.post(notify_url, json=data)
            reply = r.json()
            # log.info("[DingTalk] reply={}".format(str(reply)))
        except Exception as e:
            log.error(e)

    def handle(self, resp):
        reply = "您好,有什么我可以帮助您解答的问题吗?"
        prompt = resp['text']['content']
        prompt = prompt.strip()
        # Hoist these so they are defined even when the prompt is empty.
        img_match_prefix = None
        nick = resp['senderNick']
        staffid = resp['senderStaffId']
        if len(prompt) != 0:  # was `str(prompt) != 0`, which is always true
            conversation_id = resp['conversationId']
            sender_id = resp['senderId']
            context = dict()
            img_match_prefix = functions.check_prefix(
                prompt, channel_conf_val(const.DINGTALK, 'image_create_prefix'))
            if img_match_prefix:
                prompt = prompt.split(img_match_prefix, 1)[1].strip()
                context['type'] = 'IMAGE_CREATE'
            context['from_user_id'] = str(sender_id)
            reply = super().build_reply_content(prompt, context)
        if img_match_prefix and isinstance(reply, list):
            images = ""
            for url in reply:
                images += f"!['IMAGE_CREATE']({url})\n"
            reply = images
            resp = {
                "msgtype": "markdown",
                "markdown": {
                    "title": "IMAGE @" + nick + " ", 
                    "text": images + " \n " + "@" + nick
                },
                "at": {
                    "atUserIds": [
                        staffid
                    ],
                    "isAtAll": False
                }
            }
        else:
            resp = {
                "msgtype": "text",
                "text": {
                    "content": reply
                },
                "at": {
                    "atUserIds": [
                       staffid 
                    ],
                    "isAtAll": False
                }
            }
        return resp 


dd = DingTalkChannel()
http_app = Flask(__name__,)


@http_app.route("/", methods=['POST'])
def chat():
    log.info("[DingTalk] chat_headers={}".format(str(request.headers)))
    log.info("[DingTalk] chat={}".format(str(request.data)))
    token = request.headers.get('token')
    if dd.dingtalk_post_token and token != dd.dingtalk_post_token:
        return {'ret': 203}
    data = json.loads(request.data)
    if data:
        content = data['text']['content']
        if not content:
            return
        reply = dd.handle(resp=data)
        dd.notify_dingtalk(reply)
        return {'ret': 200}
    return {'ret': 201}

Follow-up

Added single-chat support and a multi-robot configuration:

"dingtalk": {
  "image_create_prefix": ["画", "draw", "Draw"],
  "port": "3000",
  "dingtalk_token": "方式1",
  "dingtalk_post_token": "",
  "dingtalk_secret": "",
  "dingtalk_robots": ["方式2-key123", "方式2-group123"],
  "方式2-key123": {
      "dingtalk_key": "AppKey",
      "dingtalk_secret": "AppSecret",
      "dingtalk_token": "webhook-access-token",
      "dingtalk_post_token": ""
  },
  "方式2-group123": { 
      "dingtalk_group": "群名",
      "dingtalk_secret": "AppSecret",
      "dingtalk_token": "webhook-access-token",
      "dingtalk_post_token": ""
  }
},
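With multiple robots configured, incoming requests must be routed to the right handler. The lookup precedence the route code uses — group title first, then robot code, then a DEFAULT fallback — as a minimal sketch:

```python
def pick_handler(handlers, data):
    """Group title wins, then robotCode, then the DEFAULT entry."""
    group_name = data.get('conversationTitle')
    code = data.get('robotCode')
    return handlers.get(group_name, handlers.get(code, handlers.get('DEFAULT')))

handlers = {'方式2-group123': 'by-group', '方式2-key123': 'by-key', 'DEFAULT': 'fallback'}
print(pick_handler(handlers, {'conversationTitle': '方式2-group123', 'robotCode': 'x'}))  # by-group
print(pick_handler(handlers, {'robotCode': '方式2-key123'}))  # by-key
print(pick_handler(handlers, {'robotCode': 'unknown'}))       # fallback
```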

Source code

# encoding:utf-8
import json
import hmac
import hashlib
import base64
import time
import requests
from urllib.parse import quote_plus
from common import log
from flask import Flask, request, render_template, make_response
from common import const
from common import functions
from config import channel_conf
from config import channel_conf_val
from channel.channel import Channel


class DingTalkHandler():
    def __init__(self, config):
        self.dingtalk_key = config.get('dingtalk_key')
        self.dingtalk_secret = config.get('dingtalk_secret')
        self.dingtalk_token = config.get('dingtalk_token')
        self.dingtalk_post_token = config.get('dingtalk_post_token')
        self.access_token = None
        log.info("[DingTalk] AppKey={}, AppSecret={} Token={} post Token={}".format(self.dingtalk_key, self.dingtalk_secret, self.dingtalk_token, self.dingtalk_post_token))

    def notify_dingtalk_webhook(self, data):
        timestamp = round(time.time() * 1000)
        secret_enc = bytes(self.dingtalk_secret, encoding='utf-8')
        string_to_sign = '{}\n{}'.format(timestamp, self.dingtalk_secret)
        string_to_sign_enc = bytes(string_to_sign, encoding='utf-8')
        hmac_code = hmac.new(secret_enc, string_to_sign_enc,
                             digestmod=hashlib.sha256).digest()
        sign = quote_plus(base64.b64encode(hmac_code))

        notify_url = f"https://oapi.dingtalk.com/robot/send?access_token={self.dingtalk_token}&timestamp={timestamp}&sign={sign}"
        try:
            log.info("[DingTalk] url={}".format(str(notify_url)))
            r = requests.post(notify_url, json=data)
            reply = r.json()
            log.info("[DingTalk] reply={}".format(str(reply)))
        except Exception as e:
            log.error(e)

    def get_token_internal(self):
        access_token_url = 'https://api.dingtalk.com/v1.0/oauth2/accessToken'
        try:
            r = requests.post(access_token_url, json={"appKey": self.dingtalk_key, "appSecret": self.dingtalk_secret})
        except:
            raise Exception("DingTalk token获取失败!!!")

        data = json.loads(r.content)
        access_token = data['accessToken']
        expire_in = data['expireIn']
        
        self.access_token = access_token
        self.expire_at = int(expire_in) + time.time()

        return self.access_token
    
    def get_token(self):
        if self.access_token is None or self.expire_at <= time.time():
            self.get_token_internal()
        
        return self.access_token
    
    def get_post_url(self, data):
        type = data['conversationType']
        if type == "1":
            return "https://api.dingtalk.com/v1.0/robot/oToMessages/batchSend"
        else:
            return "https://api.dingtalk.com/v1.0/robot/groupMessages/send"
    
    def build_response(self, reply, data):
        type = data['conversationType']
        if type == "1":
            return self.build_oto_response(reply, data)
        else:
            return self.build_group_response(reply, data)

    def build_oto_response(self, reply, data):
        conversation_id = data['conversationId']
        prompt = data['text']['content']
        prompt = prompt.strip()
        img_match_prefix = functions.check_prefix(
            prompt, channel_conf_val(const.DINGTALK, 'image_create_prefix'))
        nick = data['senderNick']
        staffid = data['senderStaffId']
        robotCode = data['robotCode']
        if img_match_prefix and isinstance(reply, list):
            images = ""
            for url in reply:
                images += f"!['IMAGE_CREATE']({url})\n"
            reply = images
            resp = {
                "msgKey": "sampleMarkdown",
                "msgParam": json.dumps({
                    "title": "IMAGE @" + nick + " ", 
                    "text": images + " \n " + "@" + nick
                }),
                "robotCode": robotCode,
                "userIds": [staffid]
            }
        else:
            resp = {
                "msgKey": "sampleText",
                "msgParam": json.dumps({
                    "content": reply
                }),
                "robotCode": robotCode,
                "userIds": [staffid]
            }
        return resp
    
    def build_group_response(self, reply, data):
        conversation_id = data['conversationId']
        prompt = data['text']['content']
        prompt = prompt.strip()
        img_match_prefix = functions.check_prefix(
            prompt, channel_conf_val(const.DINGTALK, 'image_create_prefix'))
        nick = data['senderNick']
        staffid = data['senderStaffId']
        robot_code = data['robotCode']
        if img_match_prefix and isinstance(reply, list):
            images = ""
            for url in reply:
                images += f"!['IMAGE_CREATE']({url})\n"
            reply = images
            resp = {
                "msgKey": "sampleMarkdown",
                "msgParam": json.dumps({
                    "title": "IMAGE @" + nick + " ", 
                    "text": images + " \n " + "@" + nick
                }),
                "robotCode": robot_code,
                "openConversationId": conversation_id,
                "at": {
                    "atUserIds": [
                        staffid
                    ],
                    "isAtAll": False
                }
            }
        else:
            resp = {
                "msgKey": "sampleText",
                "msgParam": json.dumps({
                    "content": reply + " \n " + "@" + nick
                }),
                "robotCode": robot_code,
                "openConversationId": conversation_id,
                "at": {
                    "atUserIds": [
                       staffid 
                    ],
                    "isAtAll": False
                }
            }
        return resp
    
    
    def build_webhook_response(self, reply, data):
        conversation_id = data['conversationId']
        prompt = data['text']['content']
        prompt = prompt.strip()
        img_match_prefix = functions.check_prefix(
            prompt, channel_conf_val(const.DINGTALK, 'image_create_prefix'))
        nick = data['senderNick']
        staffid = data['senderStaffId']
        robotCode = data['robotCode']
        if img_match_prefix and isinstance(reply, list):
            images = ""
            for url in reply:
                images += f"!['IMAGE_CREATE']({url})\n"
            reply = images
            resp = {
                "msgtype": "markdown",
                "markdown": {
                    "title": "IMAGE @" + nick + " ", 
                    "text": images + " \n " + "@" + nick
                },
                "at": {
                    "atUserIds": [
                        staffid
                    ],
                    "isAtAll": False
                }
            }
        else:
            resp = {
                "msgtype": "text",
                "text": {
                    "content": reply
                },
                "at": {
                    "atUserIds": [
                       staffid 
                    ],
                    "isAtAll": False
                }
            }
        return resp
    
    def chat(self, channel, data):
        reply = channel.handle(data)
        type = data['conversationType']
        if type == "1":
            reply_json = self.build_response(reply, data)
            self.notify_dingtalk(data, reply_json)
        else:
            # Unclear how to @-mention in group chats via this API, so fall back to the webhook call
            reply_json = self.build_webhook_response(reply, data)
            self.notify_dingtalk_webhook(reply_json)
        

    def notify_dingtalk(self, data, reply_json):
        headers = {
            'content-type': 'application/json', 
            'x-acs-dingtalk-access-token': self.get_token()
        }

        notify_url = self.get_post_url(data)
        try:
            r = requests.post(notify_url, json=reply_json, headers=headers)
            resp = r.json()
            log.info("[DingTalk] response={}".format(str(resp)))
        except Exception as e:
            log.error(e)


class DingTalkChannel(Channel):
    def __init__(self):
        log.info("[DingTalk] started.")

    def startup(self):
        http_app.run(host='0.0.0.0', port=channel_conf(const.DINGTALK).get('port'))

    def handle(self, data):
        reply = "您好,有什么我可以帮助您解答的问题吗?"
        prompt = data['text']['content']
        prompt = prompt.strip()
        if len(prompt) != 0:  # was `str(prompt) != 0`, which is always true
            conversation_id = data['conversationId']
            sender_id = data['senderId']
            context = dict()
            img_match_prefix = functions.check_prefix(
                prompt, channel_conf_val(const.DINGTALK, 'image_create_prefix'))
            if img_match_prefix:
                prompt = prompt.split(img_match_prefix, 1)[1].strip()
                context['type'] = 'IMAGE_CREATE'
            id = sender_id
            context['from_user_id'] = str(id)
            reply = super().build_reply_content(prompt, context)
        return reply
         

dd = DingTalkChannel()
handlers = dict()
robots = channel_conf(const.DINGTALK).get('dingtalk_robots')
if robots and len(robots) > 0:
    for robot in robots:
        robot_config = channel_conf(const.DINGTALK).get(robot)
        robot_key = robot_config.get('dingtalk_key')
        group_name = robot_config.get('dingtalk_group')
        handlers[group_name or robot_key] = DingTalkHandler(robot_config)
else:
    handlers['DEFAULT'] = DingTalkHandler(channel_conf(const.DINGTALK))
http_app = Flask(__name__,)


@http_app.route("/", methods=['POST'])
def chat():
    log.info("[DingTalk] chat_headers={}".format(str(request.headers)))
    log.info("[DingTalk] chat={}".format(str(request.data)))
    token = request.headers.get('token')
    data = json.loads(request.data)
    if data:
        content = data['text']['content']
        if not content:
            return
        code = data['robotCode']
        group_name = None
        if 'conversationTitle' in data:
            group_name = data['conversationTitle']
        handler = handlers.get(group_name, handlers.get(code, handlers.get('DEFAULT')))
        if handler.dingtalk_post_token and token != handler.dingtalk_post_token:
            return {'ret': 203}
        handler.chat(dd, data)
        return {'ret': 200}
    
    return {'ret': 201}


–END

Installing Clash on a Raspberry Pi 4

Install Clash

Reference: https://mraddict.top/posts/clash-on-rpi/

root@ubuntu:~# arch

# Download clash and upload it to the Pi
root@ubuntu:~# mv clash-linux-arm64 /usr/local/bin/

root@ubuntu:~# cd /usr/local/bin/
root@ubuntu:/usr/local/bin# chmod +x clash-linux-arm64 
root@ubuntu:/usr/local/bin# ln -s clash-linux-arm64 clash
root@ubuntu:/usr/local/bin# ls clash*
clash  clash-linux-arm64
root@ubuntu:/usr/local/bin# ls -l clash*
lrwxrwxrwx 1 root root      17 Mar 24 21:43 clash -> clash-linux-arm64
-rwxr-xr-x 1 root root 8978432 Jan 29 18:59 clash-linux-arm64


# Run it once; this generates the config and downloads the Country.mmdb file:
root@ubuntu:/usr/local/bin# clash
INFO[0000] Can't find config, create a initial config file 
INFO[0000] Can't find MMDB, start download              
INFO[0344] Mixed(http+socks) proxy listening at: 127.0.0.1:7890 

# clash-ui
root@ubuntu:/usr/local/bin# cd ~/.config/clash
root@ubuntu:~/.config/clash# git clone --branch gh-pages https://github.com/Dreamacro/clash-dashboard.git    
Cloning into 'clash-dashboard'...
remote: Enumerating objects: 3951, done.
remote: Counting objects: 100% (232/232), done.
remote: Compressing objects: 100% (172/172), done.
remote: Total 3951 (delta 79), reused 189 (delta 56), pack-reused 3719
Receiving objects: 100% (3951/3951), 4.87 MiB | 21.00 KiB/s, done.
Resolving deltas: 100% (2240/2240), done.

Configuration:

## Config file reference: https://github.com/Dreamacro/clash/wiki/Configuration#introduction
## Download the content of your provider's subscription URL and save it to a file.

root@ubuntu:~/.config/clash# ln -s Clash_1679667989.yaml config.yaml
root@ubuntu:~/.config/clash# vi config.yaml 
  11 mode: rule
  12 log-level: debug
  13 external-ui: clash-dashboard
  14 external-controller: '0.0.0.0:9090'
  

root@ubuntu:~/.config/clash# clash
INFO[0000] Start initial compatible provider 🎬哔哩哔哩      
INFO[0000] Start initial compatible provider ⚓️其他流量     
INFO[0000] Start initial compatible provider ✈️Telegram 
INFO[0000] Start initial compatible provider 🎬Netflix   
INFO[0000] Start initial compatible provider 🍎苹果服务      
INFO[0000] Start initial compatible provider 🎬Youtube   
INFO[0000] Start initial compatible provider 🎬国外媒体      
INFO[0000] Start initial compatible provider 🚀直接连接      
INFO[0000] Start initial compatible provider 🔰国外流量      
INFO[0000] Start initial compatible provider Ⓜ️ 微软服务    
INFO[0000] HTTP proxy listening at: 127.0.0.1:7890      
INFO[0000] RESTful API listening at: [::]:9090          
INFO[0000] SOCKS proxy listening at: 127.0.0.1:7891     
INFO[0000] Redirect proxy listening at: 127.0.0.1:7892 


# After opening http://192.168.123.41:9090/ui/#/settings, add the server's IP address and the dashboard becomes accessible.
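What the dashboard does when you click a node can also be done directly against the external-controller's RESTful API: a PUT to /proxies/{group} selects the active node of a selector group. A sketch (the group and node names below are placeholders; the controller address is the one configured above):

```python
import json
from urllib.parse import quote
from urllib.request import Request

CONTROLLER = "http://192.168.123.41:9090"  # the external-controller address above

def build_select_request(group, proxy, controller=CONTROLLER):
    """Build the PUT request Clash's RESTful API uses to switch the
    active node of a selector group (what the dashboard does on click)."""
    url = f"{controller}/proxies/{quote(group)}"
    data = json.dumps({"name": proxy}).encode("utf-8")
    return Request(url, data=data, method="PUT")

req = build_select_request("🔰国外流量", "HK-01")  # placeholder node name
# from urllib.request import urlopen; urlopen(req)  # with clash running
```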

Install conda

# Use the proxy to download the files below
root@ubuntu:~# export http_proxy="http://127.0.0.1:7890"
root@ubuntu:~# export https_proxy="http://127.0.0.1:7890"

root@ubuntu:~# wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh     
--2023-03-24 23:16:35--  https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
Connecting to 127.0.0.1:7890... connected.
Proxy request sent, awaiting response... 302 Found
Location: https://github.com/conda-forge/miniforge/releases/download/22.11.1-4/Miniforge3-Linux-aarch64.sh [following]
--2023-03-24 23:16:35--  https://github.com/conda-forge/miniforge/releases/download/22.11.1-4/Miniforge3-Linux-aarch64.sh
Reusing existing connection to github.com:443.
Proxy request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/221584272/97169057-ca90-4fce-873e-a7d6d2e1db90?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230324%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230324T151635Z&X-Amz-Expires=300&X-Amz-Signature=c46561816462b1e2bcd82d8b99f8ce840f4ff9b6214fda083828079783d43f0f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=221584272&response-content-disposition=attachment%3B%20filename%3DMiniforge3-Linux-aarch64.sh&response-content-type=application%2Foctet-stream [following]
--2023-03-24 23:16:36--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/221584272/97169057-ca90-4fce-873e-a7d6d2e1db90?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230324%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230324T151635Z&X-Amz-Expires=300&X-Amz-Signature=c46561816462b1e2bcd82d8b99f8ce840f4ff9b6214fda083828079783d43f0f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=221584272&response-content-disposition=attachment%3B%20filename%3DMiniforge3-Linux-aarch64.sh&response-content-type=application%2Foctet-stream
Connecting to 127.0.0.1:7890... connected.
Proxy request sent, awaiting response... 200 OK
Length: 53550114 (51M) [application/octet-stream]
Saving to: ‘Miniforge3-Linux-aarch64.sh.1’

Miniforge3-Linux-aarch64.sh.1                    1%[>                                                                                                  ] 752.00K  9.01KB/s    in 74s     

2023-03-24 23:17:51 (10.2 KB/s) - Connection closed at byte 770048. Retrying.

## By default traffic went DIRECT (no tunneling). In the 9090/ui page, close the existing [Connections] first, then switch the [Proxies] group to a Hong Kong node.

--2023-03-24 23:17:52--  (try: 2)  https://objects.githubusercontent.com/github-production-release-asset-2e65be/221584272/97169057-ca90-4fce-873e-a7d6d2e1db90?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230324%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230324T151635Z&X-Amz-Expires=300&X-Amz-Signature=c46561816462b1e2bcd82d8b99f8ce840f4ff9b6214fda083828079783d43f0f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=221584272&response-content-disposition=attachment%3B%20filename%3DMiniforge3-Linux-aarch64.sh&response-content-type=application%2Foctet-stream
Connecting to 127.0.0.1:7890... connected.
Proxy request sent, awaiting response... 206 Partial Content
Length: 53550114 (51M), 52780066 (50M) remaining [application/octet-stream]
Saving to: ‘Miniforge3-Linux-aarch64.sh.1’

Miniforge3-Linux-aarch64.sh.1                  100%[+=================================================================================================>]  51.07M  2.79MB/s    in 21s     

2023-03-24 23:18:14 (2.39 MB/s) - ‘Miniforge3-Linux-aarch64.sh.1’ saved [53550114/53550114]

root@ubuntu:~# 

root@ubuntu:~# rm -rf Miniforge3-Linux-aarch64.sh
root@ubuntu:~# mv Miniforge3-Linux-aarch64.sh.1 Miniforge3-Linux-aarch64.sh
root@ubuntu:~# chmod +x Miniforge3-Linux-aarch64.sh

root@ubuntu:~# rm -rf /usr/local/miniforge3
root@ubuntu:~# ./Miniforge3-Linux-aarch64.sh 

Welcome to Miniforge3 22.11.1-4

In order to continue the installation process, please review the license
agreement.
Please, press ENTER to continue
>>>  
Miniforge installer code uses BSD-3-Clause license as stated below.

Binary packages that come with it have their own licensing terms
and by installing miniforge you agree to the licensing terms of individual
packages as well. They include different OSI-approved licenses including
the GNU General Public License and can be found in pkgs/<pkg-name>/info/licenses
folders.

Miniforge installer comes with a boostrapping executable that is used
when installing miniforge and is deleted after miniforge is installed.
The bootstrapping executable uses micromamba, cli11, cpp-filesystem,
curl, c-ares, krb5, libarchive, libev, lz4, nghttp2, openssl, libsolv,
nlohmann-json, reproc and zstd which are licensed under BSD-3-Clause,
MIT and OpenSSL licenses. Licenses and copyright notices of these
projects can be found at the following URL.
https://github.com/conda-forge/micromamba-feedstock/tree/master/recipe.

=============================================================================

Copyright (c) 2019-2022, conda-forge
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


Do you accept the license terms? [yes|no]
[no] >>> yes

Miniforge3 will now be installed into this location:
/root/miniforge3

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/root/miniforge3] >>> /usr/local/miniforge3   
PREFIX=/usr/local/miniforge3
Unpacking payload ...
Extracting ca-certificates-2022.12.7-h4fd8a4c_0.conda
Extracting ld_impl_linux-aarch64-2.40-h2d8c526_0.conda
Extracting libgomp-12.2.0-h607ecd0_19.tar.bz2
Extracting libstdcxx-ng-12.2.0-hc13a102_19.tar.bz2
Extracting python_abi-3.10-3_cp310.conda
Extracting tzdata-2022g-h191b570_0.conda
Extracting _openmp_mutex-4.5-2_gnu.tar.bz2
Extracting libgcc-ng-12.2.0-h607ecd0_19.tar.bz2
Extracting bzip2-1.0.8-hf897c2e_4.tar.bz2
Extracting libffi-3.4.2-h3557bc0_5.tar.bz2
Extracting libnsl-2.0.0-hf897c2e_0.tar.bz2
Extracting libuuid-2.32.1-hf897c2e_1000.tar.bz2
Extracting libzlib-1.2.13-h4e544f5_4.tar.bz2
Extracting ncurses-6.3-headf329_1.tar.bz2
Extracting openssl-3.0.8-hb4cce97_0.conda
Extracting xz-5.2.6-h9cdd2b7_0.tar.bz2
Extracting libsqlite-3.40.0-hf9034f9_0.tar.bz2
Extracting readline-8.1.2-h38e3740_0.tar.bz2
Extracting tk-8.6.12-hd8af866_0.tar.bz2
Extracting zstd-1.5.2-h44f6412_6.conda
Extracting python-3.10.9-ha43d526_0_cpython.conda
Extracting certifi-2022.12.7-pyhd8ed1ab_0.conda
Extracting charset-normalizer-2.1.1-pyhd8ed1ab_0.tar.bz2
Extracting colorama-0.4.6-pyhd8ed1ab_0.tar.bz2
Extracting idna-3.4-pyhd8ed1ab_0.tar.bz2
Extracting pluggy-1.0.0-pyhd8ed1ab_5.tar.bz2
Extracting pycosat-0.6.4-py310h761cc84_1.tar.bz2
Extracting pycparser-2.21-pyhd8ed1ab_0.tar.bz2
Extracting pysocks-1.7.1-pyha2e5f31_6.tar.bz2
Extracting ruamel.yaml.clib-0.2.7-py310hb89b984_1.conda
Extracting setuptools-65.6.3-pyhd8ed1ab_0.conda
Extracting toolz-0.12.0-pyhd8ed1ab_0.tar.bz2
Extracting wheel-0.38.4-pyhd8ed1ab_0.tar.bz2
Extracting cffi-1.15.1-py310hf0c4615_3.conda
Extracting pip-23.0-pyhd8ed1ab_0.conda
Extracting ruamel.yaml-0.17.21-py310h761cc84_2.tar.bz2
Extracting tqdm-4.64.1-pyhd8ed1ab_0.tar.bz2
Extracting brotlipy-0.7.0-py310h761cc84_1005.tar.bz2
Extracting cryptography-39.0.1-py310he4ba0b1_0.conda
Extracting zstandard-0.19.0-py310hde4b81c_1.conda
Extracting conda-package-streaming-0.7.0-pyhd8ed1ab_1.conda
Extracting pyopenssl-23.0.0-pyhd8ed1ab_0.conda
Extracting conda-package-handling-2.0.2-pyh38be061_0.conda
Extracting urllib3-1.26.14-pyhd8ed1ab_0.conda
Extracting requests-2.28.2-pyhd8ed1ab_0.conda
Extracting conda-22.11.1-py310h4c7bcd0_1.conda

Installing base environment...


                                           __
          __  ______ ___  ____ _____ ___  / /_  ____ _
         / / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/
        / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ /
       / .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/
      /_/

Transaction

  Prefix: /usr/local/miniforge3

  Updating specs:

   - conda-forge/linux-aarch64::ca-certificates==2022.12.7=h4fd8a4c_0[md5=2450fbcaf65634e0d071e47e2b8487b4]
   - conda-forge/linux-aarch64::ld_impl_linux-aarch64==2.40=h2d8c526_0[md5=16246d69e945d0b1969a6099e7c5d457]
   - conda-forge/linux-aarch64::libgomp==12.2.0=h607ecd0_19[md5=65b9cb876525dcb2e74a90cf02c6762a]
   - conda-forge/linux-aarch64::libstdcxx-ng==12.2.0=hc13a102_19[md5=981741cd4321edd5c504b48f74fe91f2]
   - conda-forge/linux-aarch64::python_abi==3.10=3_cp310[md5=7f4f00b03d3a7c4d4b8b987e5da461a9]
   - conda-forge/noarch::tzdata==2022g=h191b570_0[md5=51fc4fcfb19f5d95ffc8c339db5068e8]
   - conda-forge/linux-aarch64::_openmp_mutex==4.5=2_gnu[md5=6168d71addc746e8f2b8d57dfd2edcea]
   - conda-forge/linux-aarch64::libgcc-ng==12.2.0=h607ecd0_19[md5=8456a29b6d9fc3123ccb9a966b6b2c49]
   - conda-forge/linux-aarch64::bzip2==1.0.8=hf897c2e_4[md5=2d787570a729e273a4e75775ddf3348a]
   - conda-forge/linux-aarch64::libffi==3.4.2=h3557bc0_5[md5=dddd85f4d52121fab0a8b099c5e06501]
   - conda-forge/linux-aarch64::libnsl==2.0.0=hf897c2e_0[md5=36fdbc05c9d9145ece86f5a63c3f352e]
   - conda-forge/linux-aarch64::libuuid==2.32.1=hf897c2e_1000[md5=e038da5ef9095b0d79aac14a311394e7]
   - conda-forge/linux-aarch64::libzlib==1.2.13=h4e544f5_4[md5=88596b6277fe6d39f046983aae6044db]
   - conda-forge/linux-aarch64::ncurses==6.3=headf329_1[md5=486b68148e121bc8bbadc3cefae4c04f]
   - conda-forge/linux-aarch64::openssl==3.0.8=hb4cce97_0[md5=268fe30a14a3f40fe54da04fc053fd2d]
   - conda-forge/linux-aarch64::xz==5.2.6=h9cdd2b7_0[md5=83baad393a31d59c20b63ba4da6592df]
   - conda-forge/linux-aarch64::libsqlite==3.40.0=hf9034f9_0[md5=9afb0d5dbaa403858a660cd0b4a31d29]
   - conda-forge/linux-aarch64::readline==8.1.2=h38e3740_0[md5=3cdbfb7d7b63ae2c2d35bb167d257ecd]
   - conda-forge/linux-aarch64::tk==8.6.12=hd8af866_0[md5=7894e82ff743bd96c76585ddebe28e2a]
   - conda-forge/linux-aarch64::zstd==1.5.2=h44f6412_6[md5=6d0d1cd6d184129eabb96bb220afb5b2]
   - conda-forge/linux-aarch64::python==3.10.9=ha43d526_0_cpython[md5=24478dd738f2d557efe2a4fc6a248eb3]
   - conda-forge/noarch::certifi==2022.12.7=pyhd8ed1ab_0[md5=fb9addc3db06e56abe03e0e9f21a63e6]
   - conda-forge/noarch::charset-normalizer==2.1.1=pyhd8ed1ab_0[md5=c1d5b294fbf9a795dec349a6f4d8be8e]
   - conda-forge/noarch::colorama==0.4.6=pyhd8ed1ab_0[md5=3faab06a954c2a04039983f2c4a50d99]
   - conda-forge/noarch::idna==3.4=pyhd8ed1ab_0[md5=34272b248891bddccc64479f9a7fffed]
   - conda-forge/noarch::pluggy==1.0.0=pyhd8ed1ab_5[md5=7d301a0d25f424d96175f810935f0da9]
   - conda-forge/linux-aarch64::pycosat==0.6.4=py310h761cc84_1[md5=c701cff6d6e7907c93ab603e58082a7c]
   - conda-forge/noarch::pycparser==2.21=pyhd8ed1ab_0[md5=076becd9e05608f8dc72757d5f3a91ff]
   - conda-forge/noarch::pysocks==1.7.1=pyha2e5f31_6[md5=2a7de29fb590ca14b5243c4c812c8025]
   - conda-forge/linux-aarch64::ruamel.yaml.clib==0.2.7=py310hb89b984_1[md5=89972c78c36ed3261c22bde7c012be03]
   - conda-forge/noarch::setuptools==65.6.3=pyhd8ed1ab_0[md5=9600fc9524d3f821e6a6d58c52f5bf5a]
   - conda-forge/noarch::toolz==0.12.0=pyhd8ed1ab_0[md5=92facfec94bc02d6ccf42e7173831a36]
   - conda-forge/noarch::wheel==0.38.4=pyhd8ed1ab_0[md5=c829cfb8cb826acb9de0ac1a2df0a940]
   - conda-forge/linux-aarch64::cffi==1.15.1=py310hf0c4615_3[md5=a2bedcb1d205485ea32fe5d2bd6fd970]
   - conda-forge/noarch::pip==23.0=pyhd8ed1ab_0[md5=85b35999162ec95f9f999bac15279c02]
   - conda-forge/linux-aarch64::ruamel.yaml==0.17.21=py310h761cc84_2[md5=98c0b13f20fcb4f5080554d137e39b37]
   - conda-forge/noarch::tqdm==4.64.1=pyhd8ed1ab_0[md5=5526ff3f88f9db87bb0924b9ce575345]
   - conda-forge/linux-aarch64::brotlipy==0.7.0=py310h761cc84_1005[md5=66934993368d01f896652925d3ac7e66]
   - conda-forge/linux-aarch64::cryptography==39.0.1=py310he4ba0b1_0[md5=3129345d217e5fd6488df794e49e327b]
   - conda-forge/linux-aarch64::zstandard==0.19.0=py310hde4b81c_1[md5=d4b3cc980179c38949c83fe23057d97c]
   - conda-forge/noarch::conda-package-streaming==0.7.0=pyhd8ed1ab_1[md5=1a2fa9e53cfbc2e4d9ab21990805a436]
   - conda-forge/noarch::pyopenssl==23.0.0=pyhd8ed1ab_0[md5=d41957700e83bbb925928764cb7f8878]
   - conda-forge/noarch::conda-package-handling==2.0.2=pyh38be061_0[md5=44800e9bd13143292097c65e57323038]
   - conda-forge/noarch::urllib3==1.26.14=pyhd8ed1ab_0[md5=01f33ad2e0aaf6b5ba4add50dad5ad29]
   - conda-forge/noarch::requests==2.28.2=pyhd8ed1ab_0[md5=11d178fc55199482ee48d6812ea83983]
   - conda-forge/linux-aarch64::conda==22.11.1=py310h4c7bcd0_1[md5=a71c4cc6bd77f61c0c1601b28291c460]


  Package                      Version  Build               Channel           Size
────────────────────────────────────────────────────────────────────────────────────
  Install:
────────────────────────────────────────────────────────────────────────────────────

  + _openmp_mutex                  4.5  2_gnu               conda-forge     Cached
  + brotlipy                     0.7.0  py310h761cc84_1005  conda-forge     Cached
  + bzip2                        1.0.8  hf897c2e_4          conda-forge     Cached
  + ca-certificates          2022.12.7  h4fd8a4c_0          conda-forge     Cached
  + certifi                  2022.12.7  pyhd8ed1ab_0        conda-forge     Cached
  + cffi                        1.15.1  py310hf0c4615_3     conda-forge     Cached
  + charset-normalizer           2.1.1  pyhd8ed1ab_0        conda-forge     Cached
  + colorama                     0.4.6  pyhd8ed1ab_0        conda-forge     Cached
  + conda                      22.11.1  py310h4c7bcd0_1     conda-forge     Cached
  + conda-package-handling       2.0.2  pyh38be061_0        conda-forge     Cached
  + conda-package-streaming      0.7.0  pyhd8ed1ab_1        conda-forge     Cached
  + cryptography                39.0.1  py310he4ba0b1_0     conda-forge     Cached
  + idna                           3.4  pyhd8ed1ab_0        conda-forge     Cached
  + ld_impl_linux-aarch64         2.40  h2d8c526_0          conda-forge     Cached
  + libffi                       3.4.2  h3557bc0_5          conda-forge     Cached
  + libgcc-ng                   12.2.0  h607ecd0_19         conda-forge     Cached
  + libgomp                     12.2.0  h607ecd0_19         conda-forge     Cached
  + libnsl                       2.0.0  hf897c2e_0          conda-forge     Cached
  + libsqlite                   3.40.0  hf9034f9_0          conda-forge     Cached
  + libstdcxx-ng                12.2.0  hc13a102_19         conda-forge     Cached
  + libuuid                     2.32.1  hf897c2e_1000       conda-forge     Cached
  + libzlib                     1.2.13  h4e544f5_4          conda-forge     Cached
  + ncurses                        6.3  headf329_1          conda-forge     Cached
  + openssl                      3.0.8  hb4cce97_0          conda-forge     Cached
  + pip                           23.0  pyhd8ed1ab_0        conda-forge     Cached
  + pluggy                       1.0.0  pyhd8ed1ab_5        conda-forge     Cached
  + pycosat                      0.6.4  py310h761cc84_1     conda-forge     Cached
  + pycparser                     2.21  pyhd8ed1ab_0        conda-forge     Cached
  + pyopenssl                   23.0.0  pyhd8ed1ab_0        conda-forge     Cached
  + pysocks                      1.7.1  pyha2e5f31_6        conda-forge     Cached
  + python                      3.10.9  ha43d526_0_cpython  conda-forge     Cached
  + python_abi                    3.10  3_cp310             conda-forge     Cached
  + readline                     8.1.2  h38e3740_0          conda-forge     Cached
  + requests                    2.28.2  pyhd8ed1ab_0        conda-forge     Cached
  + ruamel.yaml                0.17.21  py310h761cc84_2     conda-forge     Cached
  + ruamel.yaml.clib             0.2.7  py310hb89b984_1     conda-forge     Cached
  + setuptools                  65.6.3  pyhd8ed1ab_0        conda-forge     Cached
  + tk                          8.6.12  hd8af866_0          conda-forge     Cached
  + toolz                       0.12.0  pyhd8ed1ab_0        conda-forge     Cached
  + tqdm                        4.64.1  pyhd8ed1ab_0        conda-forge     Cached
  + tzdata                       2022g  h191b570_0          conda-forge     Cached
  + urllib3                    1.26.14  pyhd8ed1ab_0        conda-forge     Cached
  + wheel                       0.38.4  pyhd8ed1ab_0        conda-forge     Cached
  + xz                           5.2.6  h9cdd2b7_0          conda-forge     Cached
  + zstandard                   0.19.0  py310hde4b81c_1     conda-forge     Cached
  + zstd                         1.5.2  h44f6412_6          conda-forge     Cached

  Summary:

  Install: 46 packages

  Total download: 0 B

────────────────────────────────────────────────────────────────────────────────────



Transaction starting
Linking ca-certificates-2022.12.7-h4fd8a4c_0
Linking ld_impl_linux-aarch64-2.40-h2d8c526_0
Linking libgomp-12.2.0-h607ecd0_19
Linking libstdcxx-ng-12.2.0-hc13a102_19
Linking python_abi-3.10-3_cp310
Linking tzdata-2022g-h191b570_0
Linking _openmp_mutex-4.5-2_gnu
Linking libgcc-ng-12.2.0-h607ecd0_19
Linking bzip2-1.0.8-hf897c2e_4
Linking libffi-3.4.2-h3557bc0_5
Linking libnsl-2.0.0-hf897c2e_0
Linking libuuid-2.32.1-hf897c2e_1000
Linking libzlib-1.2.13-h4e544f5_4
Linking ncurses-6.3-headf329_1
Linking openssl-3.0.8-hb4cce97_0
Linking xz-5.2.6-h9cdd2b7_0
Linking libsqlite-3.40.0-hf9034f9_0
Linking readline-8.1.2-h38e3740_0
Linking tk-8.6.12-hd8af866_0
Linking zstd-1.5.2-h44f6412_6
Linking python-3.10.9-ha43d526_0_cpython
Linking certifi-2022.12.7-pyhd8ed1ab_0
Linking charset-normalizer-2.1.1-pyhd8ed1ab_0
Linking colorama-0.4.6-pyhd8ed1ab_0
Linking idna-3.4-pyhd8ed1ab_0
Linking pluggy-1.0.0-pyhd8ed1ab_5
Linking pycosat-0.6.4-py310h761cc84_1
Linking pycparser-2.21-pyhd8ed1ab_0
Linking pysocks-1.7.1-pyha2e5f31_6
Linking ruamel.yaml.clib-0.2.7-py310hb89b984_1
Linking setuptools-65.6.3-pyhd8ed1ab_0
Linking toolz-0.12.0-pyhd8ed1ab_0
Linking wheel-0.38.4-pyhd8ed1ab_0
Linking cffi-1.15.1-py310hf0c4615_3
Linking pip-23.0-pyhd8ed1ab_0
Linking ruamel.yaml-0.17.21-py310h761cc84_2
Linking tqdm-4.64.1-pyhd8ed1ab_0
Linking brotlipy-0.7.0-py310h761cc84_1005
Linking cryptography-39.0.1-py310he4ba0b1_0
Linking zstandard-0.19.0-py310hde4b81c_1
Linking conda-package-streaming-0.7.0-pyhd8ed1ab_1
Linking pyopenssl-23.0.0-pyhd8ed1ab_0
Linking conda-package-handling-2.0.2-pyh38be061_0
Linking urllib3-1.26.14-pyhd8ed1ab_0
Linking requests-2.28.2-pyhd8ed1ab_0
Linking conda-22.11.1-py310h4c7bcd0_1
Transaction finished
installation finished.
Do you wish the installer to initialize Miniforge3
by running conda init? [yes|no]
[no] >>> yes
no change     /usr/local/miniforge3/condabin/conda
no change     /usr/local/miniforge3/bin/conda
no change     /usr/local/miniforge3/bin/conda-env
no change     /usr/local/miniforge3/bin/activate
no change     /usr/local/miniforge3/bin/deactivate
no change     /usr/local/miniforge3/etc/profile.d/conda.sh
no change     /usr/local/miniforge3/etc/fish/conf.d/conda.fish
no change     /usr/local/miniforge3/shell/condabin/Conda.psm1
no change     /usr/local/miniforge3/shell/condabin/conda-hook.ps1
no change     /usr/local/miniforge3/lib/python3.10/site-packages/xontrib/conda.xsh
no change     /usr/local/miniforge3/etc/profile.d/conda.csh
modified      /root/.bashrc

==> For changes to take effect, close and re-open your current shell. <==

If you'd prefer that conda's base environment not be activated on startup, 
   set the auto_activate_base parameter to false: 

conda config --set auto_activate_base false

Thank you for installing Miniforge3!
root@ubuntu:~# 

The installer modified /root/.bashrc, appending the following block:

# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/usr/local/miniforge3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/usr/local/miniforge3/etc/profile.d/conda.sh" ]; then
        . "/usr/local/miniforge3/etc/profile.d/conda.sh"
    else
        export PATH="/usr/local/miniforge3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<

To reload the environment variables, open a new shell and check the versions:

(base) root@ubuntu:~# python -V
Python 3.10.9
(base) root@ubuntu:~# 
(base) root@ubuntu:~# conda -V
conda 22.11.1

(base) root@ubuntu:~# python3
Python 3.10.9 | packaged by conda-forge | (main, Feb  2 2023, 20:11:30) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform
>>> platform.architecture()
('64bit', 'ELF')
>>> 
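Beyond platform.architecture(), a few one-liners are handy for confirming which interpreter and which conda environment you are actually running in. A small sketch (note that CONDA_DEFAULT_ENV is only set inside an activated conda environment):

```python
# Quick sanity checks on the active interpreter/environment (sketch)
import os
import platform
import sys

print(sys.version.split()[0])               # interpreter version, e.g. 3.10.9
print(sys.prefix)                           # env prefix, e.g. /usr/local/miniforge3
print(os.environ.get("CONDA_DEFAULT_ENV"))  # active conda env name, or None
print(platform.machine())                   # CPU architecture, e.g. x86_64 or aarch64
```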

Create a new Python 3.8 environment with conda: https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html

(base) root@ubuntu:~# conda info --envs
# conda environments:
#
base                  *  /usr/local/miniforge3

(base) root@ubuntu:~# export http_proxy="http://127.0.0.1:7890"
(base) root@ubuntu:~# export https_proxy="http://127.0.0.1:7890" 

(base) root@ubuntu:~# conda create -n openai python=3.8 
Collecting package metadata (current_repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
  current version: 22.11.1
  latest version: 23.1.0

Please update conda by running

    $ conda update -n base -c conda-forge conda

Or to minimize the number of packages updated during conda update use

     conda install conda=23.1.0



## Package Plan ##

  environment location: /usr/local/miniforge3/envs/openai

  added / updated specs:
    - python=3.8


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    pypy3.8-7.3.11             |       hf9a8208_0        31.6 MB  conda-forge
    ------------------------------------------------------------
                                           Total:        31.6 MB

The following NEW packages will be INSTALLED:

  _openmp_mutex      conda-forge/linux-aarch64::_openmp_mutex-4.5-2_gnu 
  bzip2              conda-forge/linux-aarch64::bzip2-1.0.8-hf897c2e_4 
  ca-certificates    conda-forge/linux-aarch64::ca-certificates-2022.12.7-h4fd8a4c_0 
  expat              conda-forge/linux-aarch64::expat-2.5.0-ha18d298_0 
  gdbm               conda-forge/linux-aarch64::gdbm-1.18-h0a1914f_2 
  libffi             conda-forge/linux-aarch64::libffi-3.4.2-h3557bc0_5 
  libgcc-ng          conda-forge/linux-aarch64::libgcc-ng-12.2.0-h607ecd0_19 
  libgomp            conda-forge/linux-aarch64::libgomp-12.2.0-h607ecd0_19 
  libsqlite          conda-forge/linux-aarch64::libsqlite-3.40.0-hf9034f9_0 
  libstdcxx-ng       conda-forge/linux-aarch64::libstdcxx-ng-12.2.0-hc13a102_19 
  libzlib            conda-forge/linux-aarch64::libzlib-1.2.13-h4e544f5_4 
  ncurses            conda-forge/linux-aarch64::ncurses-6.3-headf329_1 
  openssl            conda-forge/linux-aarch64::openssl-3.1.0-hb4cce97_0 
  pip                conda-forge/noarch::pip-23.0.1-pyhd8ed1ab_0 
  pypy3.8            conda-forge/linux-aarch64::pypy3.8-7.3.11-hf9a8208_0 
  python             conda-forge/linux-aarch64::python-3.8.16-0_73_pypy 
  python_abi         conda-forge/linux-aarch64::python_abi-3.8-3_pypy38_pp73 
  readline           conda-forge/linux-aarch64::readline-8.2-h8fc344f_1 
  setuptools         conda-forge/noarch::setuptools-67.6.0-pyhd8ed1ab_0 
  sqlite             conda-forge/linux-aarch64::sqlite-3.40.0-h69ca7e5_0 
  tk                 conda-forge/linux-aarch64::tk-8.6.12-hd8af866_0 
  wheel              conda-forge/noarch::wheel-0.40.0-pyhd8ed1ab_0 
  xz                 conda-forge/linux-aarch64::xz-5.2.6-h9cdd2b7_0 
  zlib               conda-forge/linux-aarch64::zlib-1.2.13-h4e544f5_4 


Proceed ([y]/n)? y


Downloading and Extracting Packages
                                                                                                                                                                                          
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate openai
#
# To deactivate an active environment, use
#
#     $ conda deactivate

(base) root@ubuntu:~# 

Activate the environment:

(base) root@ubuntu:~# conda activate openai
(openai) root@ubuntu:~# 
(openai) root@ubuntu:~# python --version
Python 3.8.16 | packaged by conda-forge | (a9dbdca6, Jan 29 2023, 10:19:50)
[PyPy 7.3.11 with GCC 11.3.0]
(openai) root@ubuntu:~# pip --version 
pip 23.0.1 from /usr/local/miniforge3/envs/openai/lib/pypy3.8/site-packages/pip (python 3.8)
(openai) root@ubuntu:~# git clone https://github.com/zhayujie/bot-on-anything
Cloning into 'bot-on-anything'...
remote: Enumerating objects: 644, done.
remote: Counting objects: 100% (271/271), done.
remote: Compressing objects: 100% (116/116), done.
remote: Total 644 (delta 190), reused 194 (delta 154), pack-reused 373
Receiving objects: 100% (644/644), 588.10 KiB | 448.00 KiB/s, done.
Resolving deltas: 100% (373/373), done.
(openai) root@ubuntu:~# 
(openai) root@ubuntu:~# pip3 install itchat-uos==1.5.0.dev0
Collecting itchat-uos==1.5.0.dev0
  Downloading itchat_uos-1.5.0.dev0-py3-none-any.whl (52 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.5/52.5 kB 244.0 kB/s eta 0:00:00
Collecting requests
  Downloading requests-2.28.2-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.8/62.8 kB 409.1 kB/s eta 0:00:00
Collecting pyqrcode
  Downloading PyQRCode-1.2.1.zip (41 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.9/41.9 kB 961.5 kB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting pypng
  Downloading pypng-0.20220715.0-py3-none-any.whl (58 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.1/58.1 kB 540.0 kB/s eta 0:00:00
Collecting charset-normalizer<4,>=2
  Downloading charset_normalizer-3.1.0-py3-none-any.whl (46 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.2/46.2 kB 642.4 kB/s eta 0:00:00
Collecting idna<4,>=2.5
  Downloading idna-3.4-py3-none-any.whl (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 567.0 kB/s eta 0:00:00
Collecting urllib3<1.27,>=1.21.1
  Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 140.9/140.9 kB 1.0 MB/s eta 0:00:00
Collecting certifi>=2017.4.17
  Downloading certifi-2022.12.7-py3-none-any.whl (155 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 155.3/155.3 kB 1.2 MB/s eta 0:00:00
Building wheels for collected packages: pyqrcode
  Building wheel for pyqrcode (setup.py) ... done
  Created wheel for pyqrcode: filename=PyQRCode-1.2.1-py3-none-any.whl size=36228 sha256=ba8cd080e7793f5e55c14fa704e57c1459ae29aa6481c719833bf9e148de5ad0
  Stored in directory: /root/.cache/pip/wheels/5f/46/eb/231c89e0ae989c528db1a30d3aae90c4fee29f14d4e0369312
Successfully built pyqrcode
Installing collected packages: pyqrcode, pypng, urllib3, idna, charset-normalizer, certifi, requests, itchat-uos
Successfully installed certifi-2022.12.7 charset-normalizer-3.1.0 idna-3.4 itchat-uos-1.5.0.dev0 pypng-0.20220715.0 pyqrcode-1.2.1 requests-2.28.2 urllib3-1.26.15
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
(openai) root@ubuntu:~# 
(openai) root@ubuntu:~# pip3 install --upgrade openai
Collecting openai
  Downloading openai-0.27.2-py3-none-any.whl (70 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.1/70.1 kB 277.1 kB/s eta 0:00:00
Requirement already satisfied: requests>=2.20 in /usr/local/miniforge3/envs/openai/lib/python3.8/site-packages (from openai) (2.28.2)
Collecting tqdm
  Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.1/77.1 kB 394.8 kB/s eta 0:00:00
Collecting aiohttp
  Downloading aiohttp-3.8.4.tar.gz (7.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 2.1 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/miniforge3/envs/openai/lib/python3.8/site-packages (from requests>=2.20->openai) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/miniforge3/envs/openai/lib/python3.8/site-packages (from requests>=2.20->openai) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/miniforge3/envs/openai/lib/python3.8/site-packages (from requests>=2.20->openai) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/miniforge3/envs/openai/lib/python3.8/site-packages (from requests>=2.20->openai) (2022.12.7)
Collecting attrs>=17.3.0
  Downloading attrs-22.2.0-py3-none-any.whl (60 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.0/60.0 kB 834.6 kB/s eta 0:00:00
Collecting multidict<7.0,>=4.5
  Downloading multidict-6.0.4.tar.gz (51 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 51.3/51.3 kB 1.4 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting async-timeout<5.0,>=4.0.0a3
  Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting yarl<2.0,>=1.0
  Downloading yarl-1.8.2.tar.gz (172 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 172.3/172.3 kB 1.8 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting frozenlist>=1.1.1
  Downloading frozenlist-1.3.3.tar.gz (66 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.6/66.6 kB 392.4 kB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting aiosignal>=1.1.2
  Downloading aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Building wheels for collected packages: aiohttp, frozenlist, multidict, yarl
  Building wheel for aiohttp (pyproject.toml) ... done
  Created wheel for aiohttp: filename=aiohttp-3.8.4-py3-none-any.whl size=183495 sha256=6bdf6b07bb86d5e5e452691da3570b0465c4177bec6fbe8ab668aef8d27ce39c
  Stored in directory: /root/.cache/pip/wheels/46/48/fb/1fceb5376aa4eb481cec5ca10d6aece4455cf6a95030009502
  Building wheel for frozenlist (pyproject.toml) ... done
  Created wheel for frozenlist: filename=frozenlist-1.3.3-py3-none-any.whl size=9271 sha256=83ae70b8a9145f1fb94dffca4105ef207500e63dd3d8c056dc63a3f6ba664927
  Stored in directory: /root/.cache/pip/wheels/0e/e7/55/8036a4cd9267238ba8aa2d714837827b8fd836324632469067
  Building wheel for multidict (pyproject.toml) ... done
  Created wheel for multidict: filename=multidict-6.0.4-py3-none-any.whl size=9710 sha256=0a72821685197753e4e43473bbfe1232bb33cd79fa4ad9c97daae3a7e4afd097
  Stored in directory: /root/.cache/pip/wheels/23/65/1f/b5b0672ad49d2ff7b2c6ad75f24f45c407aab185c37803ae76
  Building wheel for yarl (pyproject.toml) ... done
  Created wheel for yarl: filename=yarl-1.8.2-py3-none-any.whl size=24118 sha256=4ed71ce5647e860cac0a0b5496e72e837878a33d62177d3bdded7d20d068b299
  Stored in directory: /root/.cache/pip/wheels/4e/40/2e/3261e7db3f6b66ca3d8a1ec694f2d5d87f89110e2204675597
Successfully built aiohttp frozenlist multidict yarl
Installing collected packages: tqdm, multidict, frozenlist, attrs, async-timeout, yarl, aiosignal, aiohttp, openai
Successfully installed aiohttp-3.8.4 aiosignal-1.3.1 async-timeout-4.0.2 attrs-22.2.0 frozenlist-1.3.3 multidict-6.0.4 openai-0.27.2 tqdm-4.65.0 yarl-1.8.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
(openai) root@ubuntu:~# 

Learning conda: collected snippets for activating environments from scripts

1
$ eval "$(conda shell.bash activate)"
$ conda shell.bash activate mamba-poc

2
eval "$(conda shell.bash hook)"
conda activate <env-name>

3
#!/bin/bash
source /Users/yourname/anaconda/bin/activate your_env
python --version # example way to see that your virtual env loaded as expected

4
try using

source ~/anaconda3/etc/profile.d/conda.sh

and then do

conda activate pult

5
https://unix.stackexchange.com/questions/689163/launch-terminal-and-conda-activate-env-from-bash-script
# Just activate my conda
alias my_conda='source /home/$USER/anaconda3/bin/activate && conda activate MyPy38'

# Open Jupyter Notebook in my Env
alias my_jupn='source /home/$USER/anaconda3/bin/activate && conda activate MyPy38 && jupyter-notebook'

# Open Jupyter Lab in my Env
alias my_jupl='source /home/$USER/anaconda3/bin/activate && conda activate MyPy38 && jupyter-lab'

# Open Spyder in my Env
alias my_spyder='source /home/$USER/anaconda3/bin/activate && conda activate MyPy38 && spyder'
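For intuition, all of the activation recipes above are mostly environment-variable plumbing: prepend the environment's bin directory to PATH and set CONDA_DEFAULT_ENV. Real conda activation also runs per-env scripts, so this is only a sketch; the "openai" prefix mirrors the environment created earlier:

```python
# Rough illustration of what "conda activate" does to the environment (sketch)
import os


def activate(prefix, env):
    """Return a copy of env with the conda env at `prefix` 'activated'."""
    new = dict(env)
    # Prepend the env's bin directory so its python/pip shadow the system ones
    new["PATH"] = os.path.join(prefix, "bin") + os.pathsep + new.get("PATH", "")
    # conda exposes the active env's name in this variable
    new["CONDA_DEFAULT_ENV"] = os.path.basename(prefix)
    return new


env = activate("/usr/local/miniforge3/envs/openai", {"PATH": "/usr/bin"})
print(env["PATH"])
print(env["CONDA_DEFAULT_ENV"])
```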

花生壳 (Oray Peanut Shell, for NAT traversal)

–END

Tinkering with the Raspberry Pi 2 Again

After getting the new Raspberry Pi 4 I had barely touched the old Raspberry Pi 2. Seeing it again today, I wanted to get it running; after plugging in power, the green LED stayed solid and nothing else happened.

The Pi 2 is fairly old. Powered straight from a new phone charger it refused to boot, and at first I suspected I had fried the board by miswiring the TTL serial lines; with no monitor attached, I could not check the screen either.

Later I decided to try a reinstall and moved the board over for a wired network connection, grabbing an old charger head in the process. To my surprise, the power LED started blinking normally this time; presumably the new charger's over-current protection had been tripping.

With the problem solved, here is a record of the fresh install.

Installation

Reference: Setting up your Raspberry Pi

Installing the OS

1 Download and install Raspberry Pi Imager, then insert the SD card into your computer.

2 Set the initial username and password, and enable the SSH service (newer images no longer ship a default user, so this must be configured; see An update to Raspberry Pi OS Bullseye).

3 Choose the operating system.

4 Connect the Ethernet cable and power on. Then open the router's admin page to find the Pi's IP address.

5 SSH into the machine.

Configuring the WiFi Network

I had previously bought a USB wireless adapter, and ifconfig shows the corresponding wlan0 interface.

root@raspberrypi:~# apt install vim 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  libgpm2 vim-runtime
Suggested packages:
  gpm ctags vim-doc vim-scripts
The following NEW packages will be installed:
  libgpm2 vim vim-runtime
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
...
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vim (vim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vimdiff (vimdiff) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rvim (rvim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rview (rview) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vi (vi) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/view (view) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/ex (ex) in auto mode
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for libc-bin (2.31-13+rpt2+rpi1+deb11u5) ...

root@raspberrypi:~# echo "set mouse-=a" >>~/.vimrc
root@raspberrypi:~# vim /etc/wpa_supplicant/wpa_supplicant.conf    
root@raspberrypi:~# 


root@raspberrypi:~# iwlist wlan0 scan

# derive a hashed PSK entry for the network
root@raspberrypi:~# wpa_passphrase winse
# reading passphrase from stdin
xxx
network={
        ssid="winse"
        #psk="xxx"
        psk=xxx
}

root@raspberrypi:~# cat /etc/wpa_supplicant/wpa_supplicant.conf   
country=CN
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
        ssid="winse"
        #psk="xxx"
        psk=xxx
}


# set up the services
pi@raspberrypi:~ $ cat /lib/systemd/system/wpa_supplicant.service 
[Unit]
Description=WPA supplicant
Before=network.target
After=dbus.service
Wants=network.target
IgnoreOnIsolate=true

[Service]
#Type=dbus
#BusName=fi.w1.wpa_supplicant1
Type=forking
ExecStart=/sbin/wpa_supplicant -u -s -O /run/wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant.conf -i wlan0 -B -D wext
Restart=always

[Install]
WantedBy=multi-user.target
#Alias=dbus-fi.w1.wpa_supplicant1.service
pi@raspberrypi:~ $ cat /etc/systemd/system/dhclient.service 
[Unit]
Description= DHCP Client
Before=network.target

[Service]
Type=forking
ExecStart=/sbin/dhclient wlan0 -v
ExecStop=/sbin/dhclient wlan0 -r
Restart=always

[Install] 
WantedBy=multi-user.target
pi@raspberrypi:~ $ 

root@raspberrypi:~# systemctl daemon-reload 

root@raspberrypi:~# systemctl stop NetworkManager
root@raspberrypi:~# systemctl enable wpa_supplicant.service 
root@raspberrypi:~# systemctl enable dhclient.service
Created symlink /etc/systemd/system/multi-user.target.wants/dhclient.service → /etc/systemd/system/dhclient.service.
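Incidentally, the psk= value that wpa_passphrase prints is nothing magic: per IEEE 802.11i it is PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, with 4096 iterations and a 256-bit output. A minimal sketch ("example-pass" is a placeholder, not the real credential):

```python
import hashlib


def wpa_psk(ssid, passphrase):
    # PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 32-byte key (IEEE 802.11i)
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32
    ).hex()


# "example-pass" is a placeholder passphrase for the "winse" SSID above
print(wpa_psk("winse", "example-pass"))
# Well-known test vector from the 802.11i annex:
print(wpa_psk("IEEE", "password"))
# f42c6fc52df0ebef9ebb4b90b38a5f902e83fe1b135a70e23aed762e9710a12e
```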

After the configuration is in place, reboot the machine and check the router's admin page again:

Then unplug the Ethernet cable and reboot once more to confirm the WiFi connection works.

Problems encountered along the way:

1 https://zhuanlan.zhihu.com/p/136463580

If the Raspberry Pi runs Raspbian Stretch, the ifup command may not work and you may get an error
message like: "ifdown: unknown interface wlan0". Any of the following commands works around it:
sudo ifconfig wlan0 up

2 https://shumeipai.nxez.com/2017/09/13/raspberry-pi-network-configuration-before-boot.html

Notes, plus example WiFi configurations for different security modes:
#ssid: the network's SSID
#psk: the password
#priority: connection priority; higher numbers take precedence (must not be negative)
#scan_ssid: set to 1 when connecting to a hidden WiFi

If your WiFi has no password

network={
  ssid="your-ssid"
  key_mgmt=NONE
}

If your WiFi uses WEP encryption

network={
  ssid="your-ssid"
  key_mgmt=NONE
  wep_key0="your-wifi-password"
}

If your WiFi uses WPA/WPA2 encryption

network={
  ssid="your-ssid"
  key_mgmt=WPA-PSK
  psk="your-wifi-password"
}

If you are not sure which encryption mode your WiFi uses, you can open /data/misc/wifi/wpa/wpa_supplicant.conf with Root Explorer on a rooted Android phone and inspect the WiFi details.
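The three cases above differ only in the key_mgmt/psk lines, so a network block is easy to generate programmatically. A small sketch (the function name and placeholder credentials are mine, not part of wpa_supplicant):

```python
# Sketch: emit a wpa_supplicant network block for the three security modes above.
def network_block(ssid, security, password=None):
    lines = ["network={", '  ssid="%s"' % ssid]
    if security == "open":
        lines.append("  key_mgmt=NONE")
    elif security == "wep":
        lines += ["  key_mgmt=NONE", '  wep_key0="%s"' % password]
    elif security == "wpa":  # WPA/WPA2-PSK
        lines += ["  key_mgmt=WPA-PSK", '  psk="%s"' % password]
    else:
        raise ValueError("unknown security mode: %s" % security)
    lines.append("}")
    return "\n".join(lines)


# "secret" is a placeholder password
print(network_block("winse", "wpa", "secret"))
```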

3 https://www.labno3.com/2021/03/22/setting-up-raspberry-pi-wifi/

If you have trouble connecting, first make sure the Pi actually supports WiFi. Your SSID may also be wrong;
scan and check with sudo iwlist wlan0 scan and inspect the essid field.
It should match exactly what you entered in the ssid field.

4 Inspecting interface state: https://www.baeldung.com/linux/connect-network-cli

root@raspberrypi:~# ip link show wlan0
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
  link/ether 00:5a:39:e1:4d:bb brd ff:ff:ff:ff:ff:ff
root@raspberrypi:~# ip link set wlan0 up  
root@raspberrypi:~# ip link show wlan0
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
  link/ether 00:5a:39:e1:4d:bb brd ff:ff:ff:ff:ff:ff
root@raspberrypi:~# 
root@raspberrypi:~# iw wlan0 link
Not connected.

root@raspberrypi:~# ifconfig wlan0 down
root@raspberrypi:~# ifconfig wlan0 up 
root@raspberrypi:~# ifconfig wlan0
wlan0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
      ether 00:5a:39:e1:4d:bb  txqueuelen 1000  (Ethernet)
      RX packets 0  bytes 0 (0.0 B)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 0  bytes 0 (0.0 B)
      TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

5 https://shapeshed.com/linux-wifi/

wpa_cli

6 https://www.bilibili.com/read/cv8895717

vim /etc/wpa_supplicant/wpa_supplicant.conf and add the following at the top of the file
country=CN
ctrl_interface=/run/wpa_supplicant
update_config=1

Note: country=CN is needed because WiFi frequency allocations differ by country, especially in the 5 GHz band

vim /etc/rc.local and add the following

#!/bin/bash

ip link set wlan0 up &
wpa_supplicant -B -i wlan0 -D nl80211 -c /etc/wpa_supplicant/wpa_supplicant.conf &
dhclient wlan0

exit 0

7 https://blog.csdn.net/u010049696/article/details/48765999

Configure the service. Go to /usr/lib/systemd/system, where you will see these four files:

wpa_supplicant-nl80211@.service
wpa_supplicant.service
wpa_supplicant@.service
wpa_supplicant-wired@.service

Edit the wpa_supplicant.service file as follows:

[Unit]
Description=WPA supplicant


[Service]
Type=dbus
BusName=fi.epitest.hostap.WPASupplicant
ExecStart=/usr/bin/wpa_supplicant -c/etc/wpa_supplicant/test.conf -i wlp3s0


[Install]
WantedBy=multi-user.target
Alias=dbus-fi.epitest.hostap.WPASupplicant.service

Only the line ExecStart=/usr/bin/wpa_supplicant -c/etc/wpa_supplicant/test.conf -i wlp3s0 needs to be changed.

8 https://www.linuxbabe.com/command-line/ubuntu-server-16-04-wifi-wpa-supplicant

Auto Connect on Startup
To automatically connect to wireless network at boot time, we need to edit the wpa_supplicant.service file. It’s a good idea to copy the file from /lib/systemd/system/ directory to /etc/systemd/system/ directory, then edit it because we don’t want newer version of wpasupplicant to override our modifications.

sudo cp /lib/systemd/system/wpa_supplicant.service /etc/systemd/system/wpa_supplicant.service

sudo nano /etc/systemd/system/wpa_supplicant.service
Find the following line.

ExecStart=/sbin/wpa_supplicant -u -s -O /run/wpa_supplicant
Change it to the following. Obviously you need to change wlp3s0 if that isn’t your interface name.

ExecStart=/sbin/wpa_supplicant -u -s -c /etc/wpa_supplicant.conf -i wlp3s0
It’s recommended to always try to restart wpa_supplicant when failure is detected. Add the following right below the ExecStart line.

Restart=always
If you can find the following line in this file, comment it out (Add the # character at the beginning of the line).

Alias=dbus-fi.w1.wpa_supplicant1.service
Save and close the file. Then enable wpa_supplicant service to start at boot time.

sudo systemctl enable wpa_supplicant.service

~~~

sudo nano /etc/systemd/system/dhclient.service
Put the following text into the file.

[Unit]
Description= DHCP Client
Before=network.target

[Service]
Type=forking
ExecStart=/sbin/dhclient wlp3s0 -v
ExecStop=/sbin/dhclient wlp3s0 -r
Restart=always

[Install] 
WantedBy=multi-user.target
Save and close the file. Then enable this service.

sudo systemctl enable dhclient.service

–END