If you don't want to pay for a Proxmox subscription you can still get updates through the no-subscription channel.
cd /etc/apt/sources.list.d
cp pve-enterprise.list pve-no-subscription.list
nano pve-no-subscription.list
Edit pve-no-subscription.list so it contains the following:
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
Run updates, but use dist-upgrade rather than plain upgrade, which may break dependencies.
apt-get update
apt-get dist-upgrade
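Leaving the stock enterprise list active will make apt-get update throw 401 errors, since that repo requires a valid subscription. A minimal sketch of disabling it, demonstrated on a sample line so it runs anywhere (on a real node you'd run sed -i against /etc/apt/sources.list.d/pve-enterprise.list):

```shell
# Comment out the enterprise 'deb' line so apt skips that repo
printf 'deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise\n' \
  | sed 's/^deb/# deb/'
```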
I have a project where I routinely build and rebuild containers between two repos, where one of the Docker build steps pulls the latest compiled code from the other repo. When doing this, the Docker cache gets in the way as it caches the published code.
- Project 1 publishes compiled code to blob storage
- Project 2 pulls the compiled code and publishes a built container
Project 2's Dockerfile will look something like:
FROM ubuntu:22.04
RUN wget https://blob.core.windows.net/version-1.zip -P /var/www/
RUN unzip /var/www/version-1.zip -d /var/www/
The issue is that if I update the contents of version-1.zip, Docker will reuse the cached layer and the image will be out of date.
I came across a great solution on stackoverflow: https://stackoverflow.com/questions/35134713/disable-cache-for-specific-run-commands
This solution doesn't work completely for me, as I am using docker-compose up commands, not docker-compose build. However, after a little trial and error, I have the below workflow working:
FROM ubuntu:22.04
ARG CACHEBUST=1
RUN wget https://blob.core.windows.net/version-1.zip -P /var/www/
RUN unzip /var/www/version-1.zip -d /var/www/
Run a build:
docker compose -f "docker-compose.yml" build --build-arg CACHEBUST=someuniquekey
Run an up:
docker compose -f "docker-compose.yml" up -d --build
This way the first docker compose build is cache-busted using whatever unique key you pass, and the second docker compose up reuses the freshly built image. NOTE: you can omit the trailing --build if you don't want up to trigger another build. Now I can selectively bust the cache at a particular step, which in a long Dockerfile can save heaps of time. You could even put multiple ARGs at strategic places along your Dockerfile to trigger a bust wherever it makes most sense.
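Any value that changes between builds works as the key; a Unix timestamp is an easy choice. A sketch of generating one (the build command is echoed rather than executed, so this runs without Docker installed):

```shell
# A throwaway cache-busting key from the current Unix time
KEY=$(date +%s)
echo "docker compose -f docker-compose.yml build --build-arg CACHEBUST=$KEY"
```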
du -shc * | sort -rh
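A quick demo against a throwaway directory (names and sizes are made up for illustration):

```shell
# Build a scratch directory with one big and one small entry, then run
# the one-liner inside it: biggest entries first, 'total' on top
tmp=$(mktemp -d)
mkdir "$tmp/big" "$tmp/small"
head -c 1048576 /dev/zero > "$tmp/big/blob"   # 1 MiB
head -c 1024    /dev/zero > "$tmp/small/blob" # 1 KiB
(cd "$tmp" && du -shc -- * | sort -rh)
rm -rf "$tmp"
```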
WSL is fantastic for allowing devs and engineers to mix and match environments and toolsets. It's saved me many times from having to maintain VMs for different environments and versions of software. Microsoft are doing a pretty good job these days of updating it with new features and bug fixes; however, running WSL and Docker as a permanent part of your workflow is not without its flaws.
This post will be added to as I remember optimizations that I have used in the past, however, all of them are specific to running Linux images and containers, not Windows.
Keep the WSL Kernel up to date
Make sure to keep the WSL Kernel up to date to take advantage of all the fixes Microsoft push.
Preventing Docker memory hog
I routinely work from a system with 16GB of RAM, and running a few Docker containers would chew all available memory through the WSL vmmem process, which would in turn lock my machine. The best workaround I could find was to set an upper limit on WSL memory consumption, by editing the .wslconfig file in your Users directory.
[wsl2]
memory=3GB # Limits VM memory in WSL 2 to 3GB
processors=2 # Makes the WSL 2 VM use two virtual processors
You will need to restart WSL (wsl --shutdown from PowerShell) for this to take effect.
NOTE: fractional memory sizes don't seem to work; I tried 3.5GB and WSL just ignored the limit and used all the RAM on the system.
Slow filesystem access rates
When performing any sort of intensive file operations on files hosted in Windows but accessed through WSL, you'll notice it's incredibly slow. There are a lot of open issues about this on the WSL GitHub repo, but the underlying cause is how the filesystem is "mounted" between Windows and WSL.
This bug is incredibly frustrating when working with containers that host nginx or Apache, as page load times stretch into multiple seconds irrespective of local or server caching. The best way around the issue is to not serve files from Windows at all, but from inside the WSL distro. This used to be incredibly finicky to achieve, but is easy now given how well the tooling integrates with WSL.
For example, say you have a single container that serves web content through Apache, and your development workflow means you have to modify the web content and see changes in realtime (i.e. React, Vue, webpack etc). Instead of building the Docker container with files sourced from a Windows directory, move the files to the WSL Linux filesystem (clone your repo in Linux if you're working on committed files), then issue your build from the Linux command line. Through the WSL2/Docker integration, the Docker socket will let you build inside of Linux using the Linux filesystem but run the container natively on your Windows host.
To edit your files inside the container, you can run VS Code from your Linux commandline which through the Code/WSL integration will let you edit your Linux filesystem.
Mounting into Linux FS
Keep in mind that if you do need to reach the Linux filesystem from Windows for whatever reason, you can do it via the private share that is automatically exposed at \\wsl$\<DistroName>. If you have multiple distros installed, each gets its own directory under the \\wsl$ root.
Tuning the Docker vhdx
With WSL shut down, compact the Docker data disk from an elevated PowerShell (Optimize-VHD ships with the Hyper-V module):

Optimize-VHD -Path $Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx -Mode Full
This command didn't do much for me. It took about 10 minutes to run and only reduced my vhdx from 71.4GB to 70.2GB.
Error not binding port
I've had this recurring error when restarting Windows and running Docker with WSL2: every so often Docker complains it can't bind to a port that I need (like MySQL). Hunting down the cause of this is interesting - https://github.com/docker/for-win/issues/3171 - https://github.com/microsoft/WSL/issues/5306
The quick fix to this is:
net stop winnat
net start winnat
Sometimes, even in the most organised of worlds, we still manage to miss patching older systems. I found an old HP server running iLO 4 1.20 with no way to log into it. Every modern browser and OS has now deprecated the old TLS versions and RC4 and 3DES cyphers, with the most common Firefox error being SSL_ERROR_NO_CYPHER_OVERLAP.
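One way in (an assumption on my part, not necessarily the method originally used; pref names vary by Firefox version) is to temporarily re-enable the deprecated protocols and ciphers in Firefox via about:config, reverting the changes once you're done:

```
security.tls.version.enable-deprecated = true
security.ssl3.deprecated.rsa_des_ede3_sha = true
```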
Once in, it's easy enough to upgrade to a more modern iLO FW which supports modern TLS considering that HP make it readily available https://support.hpe.com/connect/s/softwaredetails?language=en_US&softwareId=MTX_729b6d22f37f4f229dfccbc3a9.
This is the computed list of SSH bruteforce IPs and commonly used usernames for April 2013.
Top 50 SSH bruteforce offender IPs.
| Failed Attempt Count | IP |
Top 50 SSH bruteforce usernames.
| Failed Attempt Count | Username |
I maintain a radius server that proxies requests from publicly accessible SSH servers which, unfortunately, must run on port 22.
There are over 140 SSH servers that proxy all requests through this server, and due to the logging which is configured I am able to capture all failed attempts, including username, password and IP address. I frequently scan these logs to find the top offending IP addresses and common usernames so I can add them to a blacklist for the radius server to drop straight away.
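The scan itself boils down to counting source IPs across failed-login lines. A hedged sketch of the idea — the real data comes from the radius logging described above, so the OpenSSH-style log format (and the sample IPs) here are assumptions:

```shell
# Count failed attempts per source IP, biggest offenders first
log=$(mktemp)
cat > "$log" <<'EOF'
Failed password for root from 203.0.113.5 port 2200 ssh2
Failed password for invalid user admin from 203.0.113.5 port 2201 ssh2
Failed password for root from 198.51.100.7 port 2202 ssh2
EOF
grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' "$log" \
  | awk '{print $2}' | sort | uniq -c | sort -rn | head -50
rm -f "$log"
```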
There are many public projects that compile sources of such information, however these logs are easy for me to divulge for others to incorporate into similar lists.
I will throw some old stats of interest and work on this to become a monthly release.
Failed Attacks: 19,969,074
Failed Attacks: 11,335,220
Failed Attacks: 5,277,817 <- I guess everyone went quiet over the holiday period?
Failed Attacks: 6,786,138
Failed Attacks: 17,375,929
Failed Attacks: 16,437,020
Failed Attacks: 5,542,223
Failed Attacks To Date: 3,347,659
As mentioned, I can confirm that Cisco Call Manager 9 (CCM9) does work in VirtualBox and can be installed in a similar manner to CCM7. Both 9.0.1 and 9.1.1 have been installed with all services running perfectly.
As we did with CCM7, CCM9 must first be installed in VMware and then moved over to VirtualBox. CCM9 is now 100% supported in VMware, so the install process should be flawless. Keep in mind though that VirtualBox is definitely not officially supported, so you will get no help from TAC. This should only be used in a lab environment.
The minimum requirements for CCM9 are the same as they were in CCM7, 1x 80GB SCSI disk with 2048MB RAM. The CUC prerequisites have changed slightly and if you use 80GB/2048MB you won’t be able to install CUC. I haven’t been bothered to find the minimum requirements for CUC but I’ll post them up when I get some time.
I’ve used VMware Workstation 8.0, but you should be able to use any version of VMware to build the initial machine. All we need is for the install to complete and the VM to boot successfully; all other finer details can be changed once we move over to VirtualBox.
- Start by creating a new VM and choose a custom config.
- Depending on your version of VMware this may change, but I used Workstation 8.0 as the hardware platform.
- We don’t want to use the auto deployment scripts and we will need to modify the hardware before boot, so just choose the ISO later.
- Any version of Red Hat should work here, but I used 64-bit version of Enterprise 6.
- Name it appropriately.
- One processor is enough but if you’ve got more resources to throw at it, you may be able to do it here as long as you match the same in VirtualBox later.
- Same goes for the RAM. The minimum requirements call for 2048MB but if you’ve got more, chuck it in.
- I hate using NAT, but it’s probably useful for labs. In any case I’ve got bridged here, but we will redo this step later in the VBox config.
- Make sure you use SCSI here. I haven’t tried SAS but it may work too.
- Create a new HDD.
- Make sure this is set to SCSI, it won’t work with IDE here.
- I’ve got the minimum as 80GB here, but if you’ve got more throw it here.
- This is where the vmdk is stored, make sure you take note of the location as we will need this file later to import into VBox.
- Finish it up.
- Edit your VM before powering it on, we’ve got a few things to do here.
- Select the CD/DVD drive and browse for your ISO.
- Select your ISO.
- I’ve finished up here, but if you want you can remove the floppy, sound cards etc.
- Power on the VMWare image.
- The install process here is exactly the same as a typical CCM9 install, I’ve included it just for the sake of doing so.
- Notice here that CUC isn’t available because our hardware config is too low-specced.
- This will take quite a while.
- Once the installation has finished, log in and shut it down.
- Now it’s time to fire up VirtualBox.
- Add a new Red Hat 64-bit guest.
- Make sure your memory size is the same as what you built in VMware.
- Don’t add a new hard drive here (we will be reusing the one built by VMware).
- Just accept this.
- We need to edit our VM before powering it on.
- Remove the SATA controller, if you remember we built the VM in VMware using SCSI disks.
- Add a SCSI controller.
- Select Choose Existing Disk.
- Browse to the vmdk file that was outputted by VMware.
- Your disk setup should now look like this.
- Choose the IDE CDROM drive to boot from the CentOS live boot disk. Note that you can boot off any live distro; I actually used the Ubuntu 12.04 live CD because I was having issues with remote key forwarding to the VM whilst using CentOS.
- Again, I hate NAT’ed NIC’s so I switched mine to bridged.
- Mount your CCM partition and chroot to it.
- vi/nano/whatever the hardware_check.sh script in /usr/local/bin/base_scripts/, similar to what we did in CCM7.
- Find the function check_deployment() as shown below.
- Like we did for CCM7, edit out the isDeploymentValidForHardware function call.
- Make sure you save the file, I used vi to edit this so :wq! it.
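The edit just needs to stop the hardware-validation call from running. A sketch of the idea on a scratch copy — the script contents below are invented for illustration; only the path and function names come from the steps above:

```shell
# The real file is /usr/local/bin/base_scripts/hardware_check.sh
# on the mounted CCM partition
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
check_deployment()
{
    isDeploymentValidForHardware
}
EOF
# Neutralize the call (':' is the shell no-op) rather than deleting the line
sed -i 's/isDeploymentValidForHardware/: # isDeploymentValidForHardware/' "$tmp"
cat "$tmp"
rm -f "$tmp"
```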
- Throw the following lines in to change the hardware type to match what VMware reports.
vboxmanage setextradata "<VM name>" "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVersion" "6"
vboxmanage setextradata "<VM name>" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemVendor" "VMware"
vboxmanage setextradata "<VM name>" "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVendor" "Phoenix Technologies LTD"
vboxmanage setextradata "<VM name>" "VBoxInternal/Devices/pcbios/0/Config/DmiSystemProduct" "VMware Virtual Platform"
- Now you’re ready to fire up CCM9 in VirtualBox so just run that thang.
- On bootup you should be able to see the OS detecting all your hardware as VMware devices – this is a good thing, don’t worry.
- If you receive some weird output, don’t worry too much, the important thing is that the OS boots and services start successfully.
- Again, ignore any of these types of errors, this is why this shouldn’t be used in production.
- Login, hooray!
- Because the hardware has been modified slightly, the OS is unable to detect the vCPU and the amount of RAM.
- However, everything still works perfectly 😉
Just a few notes about the install. In the CCM7 install I did before, I added a new user whilst chroot’ed over to the CCM partition so we could SSH in later to modify the check_deployment() script. I only attempted a few times, but every time I tried my SSH user couldn’t log in. All permissions were set correctly, the user was added to the OS properly but SSH wouldn’t work. I’m sure if I dug deeper I would probably find some sort of SSH permission script in Cisco’s funky land, but for the purposes of getting CCM9 into VirtualBox it wasn’t needed.
I’ll be posting some more info on the topic as I use this more. Also, due to CCM9’s new licensing model I *may* look at loading licenses on to get this running past the 60 day limitation.
Following on from the previous article I wrote about CCM7 in VirtualBox I can confirm that CCM9 can be installed in a similar manner. Both 9.0.1 and 9.1.1 have been installed with all services running perfectly.
I will post up a detailed guide on how to install and configure CCM9 in VirtualBox shortly.
For the sake of shits the site has been moved to a new host 🙂