Opened by AlbaHoo 4 years ago
Rails: delete_all removes the matching rows with a single SQL DELETE, skipping callbacks and dependent: options; destroy_all instantiates each record and calls destroy on it, so callbacks run and dependent: :destroy associations are removed as well.
To use multiple accounts in aws-cli
$ aws configure --profile account1
$ aws configure --profile account2
$ aws dynamodb list-tables --profile account1
$ aws s3 ls --profile account2
Note:
If you name the profile default, it becomes the default profile, i.e. the one used when no --profile param is given in the command.
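Beyond the note above, the active profile can also be picked once per shell via the AWS_PROFILE environment variable (read by the AWS CLI), instead of repeating --profile on every command; a small sketch:

```shell
# Select a named profile for every subsequent aws command in this shell.
export AWS_PROFILE=account1
# aws s3 ls                  # would now use account1's credentials
export AWS_PROFILE=account2
# aws dynamodb list-tables   # would now use account2's credentials
echo "$AWS_PROFILE"          # -> account2
```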
export default async function downloadFileFromUrl(url, defaultName, headers, onFinish) {
  const response = await fetch(url, { method: 'GET', headers: headers || {} });
  const contentDisposition = response.headers.get('Content-Disposition');
  const filename = contentDisposition ? contentDisposition.split('filename=')[1] : defaultName;
  if (response.ok) {
    const blob = await response.blob();
    // Create a blob link to download
    const blobUrl = window.URL.createObjectURL(blob);
    const link = document.createElement('a');
    link.href = blobUrl;
    link.download = filename;
    // Force download
    link.click();
    // Release the object URL once the download has been triggered
    window.URL.revokeObjectURL(blobUrl);
  } else {
    const message = `Error Message: ${response.statusText}`;
    handleError(message);
  }
  if (onFinish) {
    onFinish();
  }
}
// IE10+: has Blob, but not a[download] or URL
if (navigator.msSaveBlob) {
  return navigator.msSaveBlob(blob, fileName);
}
aws='http://52.77.191.142/api'
staging='http://adbpublicstg.prod.acquia-sites.com/multimedia/scf/scf.api.php?api_url=http://52.77.191.142/api'
prod='https://www.adb.org/multimedia/scf/scf.api.php?api_url=http://52.77.191.142/api'

rm -rf build
REACT_APP_API_URL=${aws} yarn build
export AWS_DEFAULT_PROFILE=coriolis
aws s3 sync build s3://adb-frontend-preview --acl public-read

rm -rf build
REACT_APP_API_URL=${staging} yarn build && zip "build-staging-${1:-'final'}.zip" -r build

rm -rf build
REACT_APP_API_URL=${prod} yarn build && zip "build-prod-${1:-'final'}.zip" -r build
#!/bin/bash -e
REMOTE_BRANCHES="`mktemp`"
LOCAL_BRANCHES="`mktemp`"
DANGLING_BRANCHES="`mktemp`"
git for-each-ref --format="%(refname)" refs/remotes/origin/ | \
sed 's#^refs/remotes/origin/##' > "$REMOTE_BRANCHES"
git for-each-ref --format="%(refname)" refs/heads/ | \
sed 's#^refs/heads/##' > "$LOCAL_BRANCHES"
grep -vxF -f "$REMOTE_BRANCHES" "$LOCAL_BRANCHES" | \
sort -V > "$DANGLING_BRANCHES"
rm -f "$REMOTE_BRANCHES" "$LOCAL_BRANCHES"
# prune remote branches, wait till finished, and then delete local references
git remote prune origin && cat "$DANGLING_BRANCHES" | while read -r B; do git branch -D "$B"; done
rm -f "$DANGLING_BRANCHES"
# git fetch -p && for branch in `git branch -vv | grep ': gone]' | awk '{print $1}'`; do git branch -D $branch; done
~/.inputrc bindings for arrow-key history search:
"\e[A": history-search-backward
"\e[B": history-search-forward
13.229.188.59 github.com
dig github.com
Provisional headers are shown
CORS
pg_dump -U Username -h DatabaseEndPoint -a -t TableToCopy SourceDatabase > dump
cat dump | psql -h DatabaseEndPoint -p portNumber -U Username -W TargetDatabase
select nextval(pg_get_serial_sequence('counts', 'id'));
天问9问, video question: how can an individual file a cross-region lawsuit against a large company over its illegal conduct?
ruby-install ruby 2.5 --no-install-deps -- --with-openssl-dir=$(brew --prefix openssl) --disable-install-doc
macOS upgrade deprecated openssl; openssl issue with gem install error.
convert out.png -resize 2516x1718 -gravity center -extent 3555x2000 out2.png
convert +append 1.png 2.png out.png
convert -append 1.png 2.png out.png
convert 1.png -append -bordercolor "#FFFFFF" -border 0x100 out2.png
Settings such as -bordercolor need to come before the option that uses them, such as -border.
Viewport meta tag (initial scale, match device width): <meta name="viewport" content="width=device-width, initial-scale=1">
To remove ^M in vim
This worked for me
:e ++ff=dos
The :e ++ff=dos command tells Vim to read the file again, forcing dos file format. Vim will remove CRLF and LF-only line endings, leaving only the text of each line in the buffer.
then
:set ff=unix and finally
:wq
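The same CRLF cleanup can also be done outside Vim; a minimal sketch using tr (file names are placeholders):

```shell
# Make a file with DOS (CRLF) line endings, then strip the carriage returns.
printf 'line1\r\nline2\r\n' > dos.txt
tr -d '\r' < dos.txt > unix.txt
cat unix.txt    # line1 and line2, now with plain LF endings
```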
Chat app idea: A and B chat. A first sets a default key; A can also set a separate key for chatting with B. B obtains A's key by other means and enters it in the chat window with A to decrypt.
Message post office idea: no user accounts. A user can send a piece of encrypted data to the server; within 3 days anyone can fetch it with the password, and the data is deleted automatically after retrieval. Sending costs 0.1 yuan per MB per message; access is free.
du -sh *|sort -h
DATEADD(year, 1, '2017/08/25')
Statistical data-analysis website (idea).
cat urls_tags.csv | python -c 'import csv, json, sys; print(json.dumps([dict(r) for r in csv.DictReader(sys.stdin)]))' | jq
00775-15842 81bc2-6f04a 9f73f-a731e 3031c-401ba 85c88-04c5a 81ea4-b7a70 1aab3-d23bc 7e3ae-20a99 b3e62-032cd 27112-c4060 4cce6-7be8a 37fa4-c5691 01ea7-07e40 302f8-3fc6d 970b2-fafc8 f91e8-2b906
4DeY99nkBP1eIRv1tIRDUE8c 9KxEf4TERc72Zry8zuz9DYVi
docker exec -it SQLEXPRESS01 "bash"
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "[password]"
USE master ;
GO
DROP DATABASE TestDB;
GO
create database TestDB;
GO
USE [master]
GO
ALTER DATABASE [TestDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
USE [master]
GO
DROP DATABASE [TestDB]
GO
How motion capture works (the principle behind it).
Option3: mastercard
Doc: swagger-ish
Desc: The Carbon Calculator API enables issuers to provide their consumers with visibility into the environmental impact of their spending habits.
Pro:
Simple integration; it estimates emissions based on historical transactions.
Con:
May have some privacy issues, and this API fits individuals better than an SME company.
Price is unknown, but it seems free; need to create a project to find out.
Option4: myclimate
Doc: swagger
Desc: It provides dedicated APIs to calculate carbon emissions, including:
flights
car trips
cruises
companies
personal footprint
Pro:
Straightforward to integrate, and the companies API looks like the right fit.
Con:
Cannot try it out; need to contact info@myclimate.org for an account.
Don't know how accurate the calculation would be.
Option5: climatiq
Doc:
Price Doc:
Desc: The API takes parameters to calculate the final carbon emission. There is a quick guide to the parameters.
Pro:
Looks professional, rich documentation.
Free plan: <10 API requests/sec; <100,000 API calls/month.
Flexible, since it has many different dimensions of params.
Con: don’t know yet.
Climatiq looks like the most professional of all the options here; personally I would choose this one as a start.
DefaultEndpointsProtocol=https;AccountName=coriolisotfpapidev;AccountKey=Ww3uyMlzSsFEMFiCqzX+c174jo2TU2cUsLURSCxn+ymSiEpySsKASSWvbRtDGN+4Dv2RzlGXcrJQaUIqkiYNgQ==;EndpointSuffix=core.windows.net
cd ~/Downloads/Apache_.coriolis.systems
cat 43aad6c00663cfc1.crt gd_bundle-g2-g1.crt >> ssl-bundle.crt
cp generated-private-key.txt private.pem
mkdir nginx
mv private.pem nginx
mv ssl-bundle.crt nginx
cd nginx
scp * remoteServer:~/
Permissions: 644 for private.pem, 777 for ssl-bundle.crt.
ssl_certificate /usr/local/nginx/ssl-bundle.crt;
ssl_certificate_key /usr/local/nginx/private.pem;
file -b --mime-encoding source.file
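A follow-up sketch: once the encoding is detected, iconv can convert the file (the latin1 sample below is made up for the demo):

```shell
# "café" encoded as ISO-8859-1 (octal 351 = 0xE9 = é), converted to UTF-8.
printf 'caf\351\n' > latin1.txt
file -b --mime-encoding latin1.txt          # reports the detected encoding
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt
cat utf8.txt
```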
du -sh notes:
101M Shared
128G Alba

~/Library/:
995M Nemu
1.3G Mail
7.5G Caches
8.3G Application Support
8.3G Developer
17G Android
26G Containers

~/Library/Containers:
304M com.tencent.meeting
458M com.xiami.client
4.4G com.tencent.xinWeChat
20G com.docker.docker
Mini-program ideas:
text-to-speech pytorch https://github.com/ffmpegwasm/ffmpeg.wasm
/usr/local/share/dotnet/x64/dotnet
/usr/local/share/dotnet/dotnet
export PATH="/usr/local/share/dotnet/x64:$PATH"
export PATH="/usr/local/share/dotnet:$PATH"
WeChat mini-program idea: a family playing poker/card games together.
dd if=/dev/rdisk4 of=sd_backup.dmg status=progress bs=16M
https://blog.jaimyn.dev/the-fastest-way-to-clone-sd-card-macos/
From that output we can see that our SD card must be /dev/disk4 as our card is 32GB in size and has a fat32 and linux partition (standard for most raspberry pi images). You should add an r in front of disk4 so it looks like this /dev/rdisk4. The r means when we’re copying, it will use the “raw” disk. For an operation like this, it is much more efficient.
Copy the disk image (dmg) to your SD card. You'll first need to unmount your SD card. Do not click the eject button in Finder; run this command instead, replacing 4 with whatever number you identified as your SD card: sudo diskutil unmountDisk /dev/disk4.
Then to copy the image, run the following command:
sudo gdd of=/dev/rdisk4 if=sd_backup.dmg status=progress bs=16M
Tip: you can experiment with different numbers for the block size by replacing bs=16M with larger or smaller numbers to see if it makes a difference to the speed. I’ve found 16M the best for my hardware.
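The block-size experiment can be tried safely on ordinary files first, with no real device involved (file names here are made up):

```shell
# Copy an 8 MB test file with a 16M block size, then verify the copy.
dd if=/dev/zero of=test.img bs=1M count=8 2>/dev/null
dd if=test.img of=copy.img bs=16M 2>/dev/null
cmp test.img copy.img && echo "copy is identical"
```

On a real device the same pattern applies, with of= pointing at the raw disk as in the commands above.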
watermark remove
text image ocr
sudo spctl --master-disable
I also had a blob: URL in video/@src, but by watching Developer tools > Network during playback it turned out that video/source/@src was the URL of an m3u8 playlist.
An m3u8-backed video can be readily downloaded by either:
ffplay -i "https://cdn.example.tv/api/media/tv/xyzxyz/1080/index.m3u8"
ffmpeg -i "https://cdn.example.tv/api/media/tv/xyzxyz/1080/index.m3u8" -codec copy file.mkv
tl;dr: the blob URL sounds like the binary you want to get, but there might be an easier way to get the video. Just check the Network tab in Dev tools while you play the video to see what you are actually fetching.
defaults write com.microsoft.Word AppleLanguages '("zh_CN")'
defaults write com.microsoft.Excel AppleLanguages '("zh_CN")'
defaults write com.microsoft.Powerpoint AppleLanguages '("zh_CN")'

defaults write com.microsoft.Word AppleLanguages '("en")'
defaults write com.microsoft.Excel AppleLanguages '("en")'
defaults write com.microsoft.Powerpoint AppleLanguages '("en")'
qczf-mwrm-jshe-uynp-hzsf stripe
Create a script to delete old files: delete_old_log
#!/bin/sh
# delete all log files whose last-modified time was more than $1 days ago
file=$(find /srv/backend/helong/log/frame.log.* -mtime +$1)
if [ -z "$file" ]
then
echo "No log file found"
else
echo "delete these files: $file"
rm $file
fi
chmod +x delete_old_log
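The find/-mtime logic in the script can be sketched self-contained in a scratch directory (GNU touch -d assumed; file names are placeholders):

```shell
# One "old" log (10 days) and one fresh log; delete only files
# whose last-modified time is more than 6 days ago.
dir="$(mktemp -d)"
touch -d '10 days ago' "$dir/frame.log.1"   # old: will be deleted
touch "$dir/frame.log.2"                    # fresh: kept
find "$dir" -name 'frame.log.*' -mtime +6 -delete
ls "$dir"    # -> frame.log.2
```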
Then create cronjob to run
crontab -e
0 23 * * * /home/rails/script/delete_old_log 7
make a tool page:
WireGuard:
docker run -d \
  --name=wg-easy \
  -e LANG=de \
  -e WG_HOST=🚨YOUR_SERVER_IP \
  -e PASSWORD=🚨YOUR_ADMIN_PASSWORD \
  -v ~/.wg-easy:/etc/wireguard \
  -p 51820:51820/udp \
  -p 51821:51821/tcp \
  --cap-add=NET_ADMIN \
  --cap-add=SYS_MODULE \
  --sysctl="net.ipv4.conf.all.src_valid_mark=1" \
  --sysctl="net.ipv4.ip_forward=1" \
  --restart unless-stopped \
  ghcr.io/wg-easy/wg-easy
Ubuntu 18: apt-get install wireguard-dkms
Dashboard: "YOUR_SERVER_IP:51821" (the 51821/tcp port mapped above); log in with YOUR_ADMIN_PASSWORD.
Add a client, download the file.
Rename the downloaded file to have .conf
default.conf preview
[Interface]
PrivateKey =
Address =
DNS = 1.1.1.1
[Peer]
PublicKey =
PresharedKey =
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 0
Endpoint = your_ip:51820
Download WireGuard profile and import tunnel
PbootCMS + nginx + PHP-FPM + MySQL setup shell history (history numbers and command output stripped):
sudo apt-get update -y
sudo apt-get install php7.3-fpm -y
vim /etc/nginx/sites-available/cms
sudo ln -s /etc/nginx/sites-available/cms /etc/nginx/sites-enabled/cms
ls /etc/nginx/sites-enabled
nginx -t
systemctl reload nginx
sudo systemctl start mysqld.service
sudo systemctl start mysql.service
which mysql
cp -r PbootCMS/ /var/www/html/
chmod -R a+w /var/www/html
ls /var/www/html/static/backup/sql/
mysql -uroot -pHUdqh pbootcms
vim /var/www/html/config/database.php
systemctl reload nginx
cd /etc/php/7.2/fpm/
vim php.ini
systemctl -l | grep php
systemctl restart php7.2-fpm.service
apt-get install php-gd
apt-get install php-mbstring
apt-get install php-curl
vim /etc/mysql/mysql.conf.d/mysqld.cnf
sudo add-apt-repository universe
sudo apt install php-fpm php-mysql
cp /etc/nginx/sites-available/tenty /etc/nginx/sites-available/cms
sudo apt install mysql-server
sudo mysql_secure_installation
sudo mysql -uroot -p
cd ~ && git clone https://gitee.com/hnaoyun/PbootCMS.git
cp -r PbootCMS/ /var/www/html/
mysql -uroot -pHUdqh
sudo service mysql restart
cat /etc/mysql/mysql.conf.d/mysqld.cnf
cd /var/www/html/
history | grep restart
https://cheeger.com/general/2019/11/29/quick-notes.html
Quick Note