fix: typo

uetchy 2021-02-25 22:36:19 +09:00
parent fcd16cd8cf
commit 66b6186296
6 changed files with 62 additions and 71 deletions


@ -9,9 +9,9 @@ Nextcloud does not have support for generating thumbnails from Affinity Photo an
Glancing at `.afphoto` and `.afdesign` in Finder, I noticed that they have QuickLook support and can show a thumbnail image. So these files should have a thumbnail image somewhere inside their binaries.
I wrote a small Node.js script that seeks the [PNG signature](https://www.w3.org/TR/PNG/) inside a binary and saves it as an image file.
```js afthumb.js
const fs = require("fs");
// png spec: https://www.w3.org/TR/PNG/
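// What follows is a hedged sketch of the rest of the script (the diff
// truncates it here), not the original code: scan the buffer for the
// 8-byte PNG signature, then cut the stream at the end of the IEND chunk.
const PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

function extractPng(buf) {
  const start = buf.indexOf(PNG_SIGNATURE);
  if (start === -1) return null;
  const iend = buf.indexOf(Buffer.from("IEND"), start);
  if (iend === -1) return null;
  // an IEND chunk is: 4-byte length, "IEND", 4-byte CRC — include the CRC
  return buf.slice(start, iend + 8);
}

// usage: node afthumb.js <input.afphoto> [output.png]
const [, , input, output = "thumbnail.png"] = process.argv;
if (input) {
  const png = extractPng(fs.readFileSync(input));
  if (!png) throw new Error("no PNG signature found");
  fs.writeFileSync(output, png);
}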
@ -118,6 +118,8 @@ Easy-peasy!
# Bonus: PDF thumbnail generator
Install `ghostscript` on your server to make it work.
```php lib/private/Preview/PDF.php
<?php


@ -8,7 +8,7 @@ date: 2021-02-13T01:00:00
As an example, I prepared braille with three dots.
- The number of combinations is 3P3 + 3P2 + 3P1
- $f(N,K) = \sum_{k=1}^{K} {}_{N}P_{k}$
- Looks like 2 to the n-th power
- The cardinality of a power set?
- It became easier to understand once I thought of it as a binary boolean array
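The boolean-array view in the last bullet can be sanity-checked with a few lines of code (an illustration of mine, not part of the original note): each of the n dots is either raised or flat, so every pattern is one n-bit value, giving 2^n patterns, or 2^n − 1 if the empty cell is excluded.

```js
// Enumerate the dot patterns of an n-dot cell as n-bit boolean arrays.
function dotPatterns(n) {
  const patterns = [];
  for (let bits = 0; bits < 2 ** n; bits++) {
    patterns.push(Array.from({ length: n }, (_, i) => Boolean(bits & (1 << i))));
  }
  return patterns;
}

console.log(dotPatterns(3).length); // 8 = 2^3 patterns; 7 without the empty cell
```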


@ -1,6 +1,6 @@
---
title: Installing Arch Linux
date: 2021-02-12T00:00:00
---
This note includes all commands I typed when I set up Arch Linux on my new bare metal server.
@ -104,17 +104,17 @@ echo "LANG=en_US.UTF-8" > /etc/locale.conf
```bash
hostnamectl set-hostname polka
hostnamectl set-chassis server
```
```ini /etc/hosts
127.0.0.1 localhost
::1 localhost
127.0.0.1 polka
```
See https://systemd.network/systemd.network.html.
```ini /etc/systemd/network/wired.network
[Match]
Name=enp5s0
@ -127,8 +127,7 @@ DNS=1.1.1.1 # Cloudflare for the fallback DNS server
MACVLAN=dns-shim # to handle local dns lookup to 10.0.1.100 which is managed by Docker macvlan driver
```
```ini /etc/systemd/network/dns-shim.netdev
# to handle local dns lookup to 10.0.1.100
[NetDev]
Name=dns-shim
@ -138,8 +137,7 @@ Kind=macvlan
Mode=bridge
```
```ini /etc/systemd/network/dns-shim.network
# to handle local dns lookup to 10.0.1.100
[Match]
Name=dns-shim
@ -268,10 +266,9 @@ nvidia-smi # test runtime
```bash
pacman -S docker docker-compose
yay -S nvidia-container-runtime-bin
```
```json /etc/docker/daemon.json
{
"log-driver": "journald",
"log-opts": {
@ -304,8 +301,7 @@ yay -S telegraf
vim /etc/telegraf/telegraf.conf
```
```ini /etc/sudoers.d/telegraf
Cmnd_Alias FAIL2BAN = /usr/bin/fail2ban-client status, /usr/bin/fail2ban-client status *
telegraf ALL=(root) NOEXEC: NOPASSWD: FAIL2BAN
Defaults!FAIL2BAN !logfile, !syslog, !pam_session
@ -318,8 +314,7 @@ pacman -S fail2ban
systemctl enable --now fail2ban
```
```ini /etc/fail2ban/jail.local
[DEFAULT]
bantime = 60m
ignoreip = 127.0.0.1/8 10.0.1.0/24
@ -340,8 +335,7 @@ maxretry = 1
bantime = 1d
```
```ini /etc/fail2ban/filter.d/mailu.conf
[INCLUDES]
before = common.conf
@ -369,13 +363,11 @@ Dynamic DNS for Cloudflare.
yay -S cfddns
```
```yml /etc/cfddns/cfddns.yml
token: <token>
```
```ini /etc/cfddns/domains
uechi.io
datastore.uechi.io
```
@ -393,8 +385,7 @@ systemctl enable --now smartd
## backup
```ini /etc/backups/borg.service
[Unit]
Description=Borg Daily Backup Service
@ -406,8 +397,7 @@ IOSchedulingPriority=7
ExecStart=/etc/backups/run.sh
```
```ini /etc/backups/borg.timer
[Unit]
Description=Borg Daily Backup Timer
@ -420,20 +410,31 @@ RandomizedDelaySec=10min
WantedBy=timers.target
```
```bash /etc/backups/run.sh
#!/bin/bash -ue
# The udev rule is not terribly accurate and may trigger our service before
# the kernel has finished probing partitions. Sleep for a bit to ensure
# the kernel is done.
#
# This can be avoided by using a more precise udev rule, e.g. matching
# a specific hardware path and partition.
sleep 5
#
# Script configuration
#
export BORG_PASSPHRASE="<pass>"
MOUNTPOINT=/mnt/backup
TARGET=$MOUNTPOINT/borg
# Archive name schema
DATE=$(date --iso-8601)
#
# Create backups
#
# Options for borg create
BORG_OPTS="--stats --compression lz4 --checkpoint-interval 86400"
@ -452,26 +453,19 @@ borg create $BORG_OPTS \
--exclude /root/.cache \
--exclude /var/cache \
--exclude /var/lib/docker/devicemapper \
--exclude 'sh:/home/*/.cache' \
--exclude 'sh:/home/*/.cargo' \
--one-file-system \
$TARGET::'{hostname}-system-{now}' \
/ /boot
echo "# data"
borg create $BORG_OPTS \
$TARGET::'{hostname}-data-{now}' \
/mnt/data /mnt/ftl
echo "Start pruning"
BORG_PRUNE_OPTS="--list --stats --keep-daily 7 --keep-weekly 5 --keep-monthly 3"
borg prune $BORG_PRUNE_OPTS --prefix '{hostname}-system-' $TARGET
borg prune $BORG_PRUNE_OPTS --prefix '{hostname}-data-' $TARGET
@ -526,17 +520,18 @@ certbot certonly \
-d "*.uechi.io" -d "*.uechi.io"
openssl x509 -in /etc/letsencrypt/live/uechi.io/fullchain.pem -text
certbot certificates
```
```ini /etc/systemd/system/certbot.service
[Unit]
Description=Let's Encrypt renewal
[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --agree-tos --deploy-hook "docker exec nginx-proxy-le /app/signal_le_service"
```
```ini /etc/systemd/system/certbot.timer
[Unit]
Description=Twice daily renewal of Let's Encrypt's certificates
@ -547,7 +542,6 @@ Persistent=true
[Install]
WantedBy=timers.target
```
- [Certbot - ArchWiki](https://wiki.archlinux.org/index.php/Certbot)
@ -559,8 +553,9 @@ EOD
```bash
pacman -S alsa-utils # maybe requires reboot
arecord -L # list devices
```
```conf /etc/asound.conf
pcm.m96k {
type hw
card M96k
@ -572,12 +567,10 @@ pcm.!default {
type plug
slave.pcm "m96k"
}
```
```
arecord -vv /dev/null # test mic
```
```
alsamixer # gui mixer
```


@ -11,25 +11,23 @@ date: 2021-02-13T00:00:00
- Mail server
- DNS server
- Nextcloud
- GitLab
- VPN and others
- Computational experiments
- Web server
- Host for VS Code Remote SSH
Since I want it to run heavy tasks in parallel, the top priorities were CPU and memory. For memory I chose 2x 32GB, prioritizing dual-channel operation; for the CPU I picked the Ryzen 9 3950X, considering how well recent libraries exploit multiple cores.
> In hindsight, I needed even more memory. It vanishes in no time once you start parallel-processing a huge Pandas dataframe. If your budget allows, it might be better to go with around 128GB.
For the GPU, I reused the NVIDIA GeForce GTX TITAN X (Maxwell) that was sitting in my old server. It has about 12GB of graphics memory, but even under peak workload around 5GB stays free, so it is enough for now. I'll add more when the need arises.
For storage there are two 3TB HDDs, a 500GB NVMe drive, and a 500GB SSD pulled out of the old server. The NVMe drive is for the OS, and the SSD/HDDs are for data and backups.
For the motherboard I chose the ASRock B550 Taichi, whose capacitors and components struck me as more server-oriented than those on comparable X570 boards.
I chose an 800W power supply with future GPU additions in mind. Measuring the actual power draw with the server running, it sits around 180W at idle and never exceeded 350W at full load. If I buy a UPS later, I'll pick a grade around that figure plus some buffer.
The case is a Fractal Design Meshify 2.


@ -19,7 +19,7 @@ date: 2021-02-14T00:00:00
Let's write actual code and check whether it really produces the desired result.
```js split-bill.js
const history = [
{
amount: 121,
@ -122,7 +122,7 @@ for (const [_, { name, consumption }] of data) {
}
```
Write your payment history into `history` and run it, and you get the remittance table, the history, and each person's effective total payment.
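Since the script itself is truncated in this diff, here is a minimal sketch of the underlying idea (my illustration, not the original code; the `by` field, the `settle` helper, and the equal-split assumption are mine): compute each member's balance from the history, then have debtors pay creditors greedily until all balances reach zero.

```js
// Minimal bill-splitting sketch: every payment is split equally among all
// members, then the biggest debtor pays the biggest creditor until settled.
function settle(history, members) {
  const balance = Object.fromEntries(members.map((m) => [m, 0]));
  for (const { amount, by } of history) {
    const share = amount / members.length;
    for (const m of members) balance[m] -= share; // everyone owes their share
    balance[by] += amount; // the payer fronted the full amount
  }
  const transfers = [];
  const sorted = () => Object.entries(balance).sort((a, b) => a[1] - b[1]);
  let entries = sorted();
  while (entries[entries.length - 1][1] > 1e-9) {
    const [debtor, owed] = entries[0];
    const [creditor, due] = entries[entries.length - 1];
    const amount = Math.min(-owed, due);
    transfers.push({ from: debtor, to: creditor, amount });
    balance[debtor] += amount;
    balance[creditor] -= amount;
    entries = sorted();
  }
  return transfers;
}
```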
```md
# Transaction table


@ -10,11 +10,11 @@ img {
# README
> I'm **Yasuaki Uechi**, a graduate student studying recurrent neural networks. I was born in Okinawa, Japan, and have been living in Kanagawa.
# Contact
Reach me at `y@uechi.io` (recommended) or [@uechz](https://twitter.com/uechz) on Twitter. Be aware that I'm not a quick replier; if you urgently need a lightning response, you'd better bomb my inbox with reminders so I can prioritize your email and reply right away.
## GPG Key
@ -23,18 +23,18 @@ Get [GPG Key](https://github.com/uetchy.gpg) on GitHub.
# Facts
- interests: webdev, nlp, ml, ux, coffee, puzzle
- have confidence in: js (incl. nodejs, typescript, and react), ui design
- have no confidence in: normal life stuff
- have some experience in: python, swift, go, ruby, rust, c++, pytorch, linux
- language skills:
  - Japanese: native
  - English: TOEIC 940, TOEFL 78
- play violin and tin whistle to keep sanity, drink a bit.
# Publishing
- [Open Source @ GitHub](https://github.com/uetchy)
- [Blog (Japanese and English)](https://uechi.io)
- [Technical Note (English) @ dev.to](https://dev.to/uetchy)
- [Technical Note (Japanese) @ Qiita](https://qiita.com/uetchy)
- [Design Portfolio @ Behance](https://www.behance.net/uechi)
@ -42,7 +42,5 @@ Get [GPG Key](https://github.com/uetchy.gpg) on GitHub.
# Current Activity
- Seeking a job
- Developing something new
- Fighting against binge-eating