Container fails at creation (PVE 9 with ZFS pool) #1744

Open
opened 2025-11-20 05:16:32 -05:00 by saavagebueno · 49 comments
Owner

Originally created by @jsspen on GitHub (Aug 19, 2025).

✅ Have you read and understood the above guidelines?

yes

📜 What is the name of the script you are using?

twingate-connector

📂 What was the exact command used to execute the script?

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/twingate-connector.sh)"

⚙️ What settings are you using?

  • [x] Default Settings
  • [ ] Advanced Settings

🖥️ Which Linux distribution are you using?

Ubuntu 24.04

📝 Provide a clear and concise description of the issue.

Script fails to build container

🔄 Steps to reproduce the issue.

Run the script with default settings

❌ Paste the full error output (if available).

⚙️  Using Default Settings on node hoh (🔍  Verbose)
  🆔  Container ID: 154
  🖥️  Operating System: ubuntu (24.04)
  📦  Container Type: Unprivileged
  💾  Disk Size: 3 GB
  🧠  CPU Cores: 1
  🛠️  RAM Size: 1024 MiB
  🔍  Verbose Mode: Enabled
  🚀  Creating a Twingate-Connector LXC using the above default settings
  
  ✔️   Storage local (Free: 24.4GB  Used: 11.5GB) [Template]
  ✔️   Storage ssd512a (Free: 363.8GB  Used: 101.9GB) [Container]
  ✔️   Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst [local]
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1160: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?

🖼️ Additional context (optional).

No response

saavagebueno added the question, Implemented in VED, waiting push to Main labels 2025-11-20 05:16:32 -05:00

@MickLesk commented on GitHub (Aug 19, 2025):

Your container template is probably corrupted. Remove it and try again.

Or your SSD storage may not have rootdir enabled. Run pveam status
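For context (the path below is hypothetical; the storage name is taken from this thread): a dir storage can only hold container root filesystems if `rootdir` appears in its `content` line in `/etc/pve/storage.cfg`, and `pvesm status -content rootdir` lists only the storages that qualify. A matching entry would look roughly like:

```
# /etc/pve/storage.cfg -- the 'content' line controls what each storage may hold
dir: ssd512a
        path /mnt/ssd512a
        content rootdir,vztmpl
```

If `rootdir` is missing there, `pct create` cannot place a rootfs on that storage.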


@jsspen commented on GitHub (Aug 19, 2025):

  Using Default Settings on node hoh (🔍  Verbose)
  🆔  Container ID: 154
  🖥️  Operating System: ubuntu (24.04)
  📦  Container Type: Unprivileged
  💾  Disk Size: 3 GB
  🧠  CPU Cores: 1
  🛠️  RAM Size: 1024 MiB
  🔍  Verbose Mode: Enabled
  🚀  Creating a Twingate-Connector LXC using the above default settings
  
  ✔️   Storage local (Free: 27.7GB  Used: 8.2GB) [Template]
  ✔️   Storage ssd512a (Free: 363.8GB  Used: 101.9GB) [Container]
  ✔️   Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst [online]
   💡   Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst is missing or corrupted. Re-downloading.
  ✔️   Template download successful.
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1160: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?

I'm not familiar with pveam but status isn't a valid command. I did a little Googling and I'm guessing you meant pvesm status?

Name             Type     Status           Total            Used       Available        %
local             dir     active        40516856         8867716        29558748   21.89%
local-lvm     lvmthin     active        56545280        10262968        46282311   18.15%
nvme256           dir     active       249935956       224977492        24958464   90.01%
ssd512a           dir     active       499862372       109322140       390540232   21.87%

I have used this SSD for storing other templates without issue for a while now, and I just used it with the Mealie script. Though I guess it could be an Ubuntu template issue, since Mealie is Debian? I'll try to find a script that uses Ubuntu and see what happens.
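As a reference point, the free-space check that create_lxc.sh performs (an awk over `pvesm status`, visible in the verbose trace later in this thread) can be sketched against the table above. This is a minimal illustration using the pasted output as sample data; the 2 GB requirement mirrors the default disk size in the trace:

```shell
# Sketch of the script's free-space check, run against the pvesm status
# output pasted above (column 6 is 'Available', in KiB).
pvesm_output='Name             Type     Status           Total            Used       Available        %
local             dir     active        40516856         8867716        29558748   21.89%
ssd512a           dir     active       499862372       109322140       390540232   21.87%'
STORAGE_FREE=$(echo "$pvesm_output" | awk -v s=ssd512a '$1 == s { print $6 }')
REQUIRED_KB=$((2 * 1024 * 1024))   # 2 GB rootfs, as in the trace
if [ "$STORAGE_FREE" -lt "$REQUIRED_KB" ]; then
  echo "insufficient space on ssd512a"
else
  echo "ok: ${STORAGE_FREE} KiB free on ssd512a"
fi
```

Since ssd512a passes this check by a wide margin, raw free space is unlikely to be the cause here.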

Edit: I just ran the Jellyfin script as a test and it failed in the same way:

⚙️  Using Default Settings on node hoh (🔍  Verbose)
  🆔  Container ID: 155
  🖥️  Operating System: ubuntu (24.04)
  📦  Container Type: Unprivileged
  💾  Disk Size: 8 GB
  🧠  CPU Cores: 2
  🛠️  RAM Size: 2048 MiB
  🔍  Verbose Mode: Enabled
  🚀  Creating a Jellyfin LXC using the above default settings
  
  ✔️   Storage local (Free: 27.5GB  Used: 8.4GB) [Template]
  ✔️   Storage ssd512a (Free: 357.8GB  Used: 107.8GB) [Container]
  ✔️   Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst [local]
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1160: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?

Edit 2: I also just tried selecting a different storage drive for the template storage (using ssd512a for both template and container) and it failed in the same way.


@MickLesk commented on GitHub (Aug 20, 2025):

Can you run a simple Ubuntu CT from dev? Share the full output, please in verbose mode.

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVED/main/ct/ubuntu.sh)"


@jsspen commented on GitHub (Aug 20, 2025):

Nope, same result. Out of my ~50 containers, the only one that uses Ubuntu is Jellyfin. It works fine, but it's on 22.04.5 LTS.

  ⚙️  Using Default Settings on node hoh (🔍  Verbose)
  🆔  Container ID: 155
  🖥️  Operating System: ubuntu (24.04)
  📦  Container Type: Privileged
  💾  Disk Size: 2 GB
  🧠  CPU Cores: 1
  🛠️  RAM Size: 512 MiB
  🚀  Creating a Ubuntu LXC using the above default settings
  
+ command -v curl
+ source /dev/fd/63
++ curl -fsSL https://git.community-scripts.org/community-scripts/ProxmoxVED/raw/branch/main/misc/core.func
++ [[ -n '' ]]
++ _CORE_FUNC_LOADED=1
++ trap stop_spinner EXIT INT TERM
+ load_functions
+ [[ -n '' ]]
+ __FUNCTIONS_LOADED=1
+ color
++ echo '\033[33m'
+ YW='\033[33m'
+ YWB=''
++ echo '\033[36m'
+ BL='\033[36m'
++ echo '\033[01;31m'
+ RD='\033[01;31m'
++ echo '\033[4;92m'
+ BGN='\033[4;92m'
++ echo '\033[1;92m'
+ GN='\033[1;92m'
++ echo '\033[32m'
+ DGN='\033[32m'
++ echo '\033[m'
+ CL='\033[m'
+ formatting
+ BFR='\r\033[K'
++ echo '\033[1m'
+ BOLD='\033[1m'
+ HOLD=' '
+ TAB='  '
+ TAB3='      '
+ icons
+ CM='  ✔️  '
+ CROSS='  ✖️  '
+ DNSOK='✔️ '
+ DNSFAIL='  ✖️  '
+ INFO='  💡  \033[m'
+ OS='  🖥️  \033[m'
+ OSVERSION='  🌟  \033[m'
+ CONTAINERTYPE='  📦  \033[m'
+ DISKSIZE='  💾  \033[m'
+ CPUCORE='  🧠  \033[m'
+ RAMSIZE='  🛠️  \033[m'
+ SEARCH='  🔍  \033[m'
+ VERBOSE_CROPPED='🔍  '
+ VERIFYPW='  🔐  \033[m'
+ CONTAINERID='  🆔  \033[m'
+ HOSTNAME='  🏠  \033[m'
+ BRIDGE='  🌉  \033[m'
+ NETWORK='  📡  \033[m'
+ GATEWAY='  🌐  \033[m'
+ DISABLEIPV6='  🚫  \033[m'
+ DEFAULT='  ⚙️  \033[m'
+ MACADDRESS='  🔗  \033[m'
+ VLANTAG='  🏷️  \033[m'
+ ROOTSSH='  🔑  \033[m'
+ CREATING='  🚀  \033[m'
+ ADVANCED='  🧩  \033[m'
+ FUSE='  🗂️  \033[m'
+ HOURGLASS='  ⏳  '
+ default_vars
+ RETRY_NUM=10
+ RETRY_EVERY=3
+ i=10
+ set_std_mode
+ '[' yes = yes ']'
+ STD=
+ set -Eeuo pipefail
+ trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
+ trap on_exit EXIT
+ trap on_interrupt INT
+ trap on_terminate TERM
+ [[ -n 155 ]]
+ [[ -n ubuntu ]]
+ msg_debug CTID=155
+ [[ 0 == \1 ]]
+ msg_debug PCT_OSTYPE=ubuntu
+ [[ 0 == \1 ]]
+ msg_debug PCT_OSVERSION=24.04
+ [[ 0 == \1 ]]
+ '[' 155 -ge 100 ']'
+ qm status 155
+ pct status 155
+ msg_info 'Validating storage'
+ local 'msg=Validating storage'
+ [[ -z Validating storage ]]
+ declare -p MSG_INFO_SHOWN
+ MSG_INFO_SHOWN=()
+ declare -gA MSG_INFO_SHOWN
+ [[ -n '' ]]
+ MSG_INFO_SHOWN["$msg"]=1
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ SPINNER_MSG='Validating storage'
+ is_verbose_mode
+ local verbose=yes
+ local tty_status
+ [[ -t 2 ]]
+ tty_status=interactive
+ [[ yes != \n\o ]]
+ local 'HOURGLASS=  ⏳  '
+ printf '\r\e[2K%s %b' '  ⏳  ' '\033[33mValidating storage\033[m'
  ⏳   Validating storage+ return
+ check_storage_support rootdir
+ local CONTENT=rootdir
+ VALID_STORAGES=()
+ local -a VALID_STORAGES
+ IFS=
+ read -r line
++ pvesm status -content rootdir
++ awk 'NR>1'
+ local STORAGE_NAME
++ awk '{print $1}'
+ STORAGE_NAME=local-lvm
+ [[ -z local-lvm ]]
+ VALID_STORAGES+=("$STORAGE_NAME")
+ IFS=
+ read -r line
+ local STORAGE_NAME
++ awk '{print $1}'
+ STORAGE_NAME=nvme256
+ [[ -z nvme256 ]]
+ VALID_STORAGES+=("$STORAGE_NAME")
+ IFS=
+ read -r line
+ local STORAGE_NAME
++ awk '{print $1}'
+ STORAGE_NAME=ssd512a
+ [[ -z ssd512a ]]
+ VALID_STORAGES+=("$STORAGE_NAME")
+ IFS=
+ read -r line
+ [[ 3 -gt 0 ]]
+ check_storage_support vztmpl
+ local CONTENT=vztmpl
+ VALID_STORAGES=()
+ local -a VALID_STORAGES
+ IFS=
+ read -r line
++ pvesm status -content vztmpl
++ awk 'NR>1'
+ local STORAGE_NAME
++ awk '{print $1}'
+ STORAGE_NAME=local
+ [[ -z local ]]
+ VALID_STORAGES+=("$STORAGE_NAME")
+ IFS=
+ read -r line
+ local STORAGE_NAME
++ awk '{print $1}'
+ STORAGE_NAME=nvme256
+ [[ -z nvme256 ]]
+ VALID_STORAGES+=("$STORAGE_NAME")
+ IFS=
+ read -r line
+ local STORAGE_NAME
++ awk '{print $1}'
+ STORAGE_NAME=ssd512a
+ [[ -z ssd512a ]]
+ VALID_STORAGES+=("$STORAGE_NAME")
+ IFS=
+ read -r line
+ [[ 3 -gt 0 ]]
+ msg_info 'Checking template storage'
+ local 'msg=Checking template storage'
+ [[ -z Checking template storage ]]
+ declare -p MSG_INFO_SHOWN
+ declare -A MSG_INFO_SHOWN
+ [[ -n '' ]]
+ MSG_INFO_SHOWN["$msg"]=1
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ SPINNER_MSG='Checking template storage'
+ is_verbose_mode
+ local verbose=yes
+ local tty_status
+ [[ -t 2 ]]
+ tty_status=interactive
+ [[ yes != \n\o ]]
+ local 'HOURGLASS=  ⏳  '
+ printf '\r\e[2K%s %b' '  ⏳  ' '\033[33mChecking template storage\033[m'
  ⏳   Checking template storage+ return
+ true
+ select_storage template
+ local CLASS=template CONTENT CONTENT_LABEL
+ case $CLASS in
+ CONTENT=vztmpl
+ CONTENT_LABEL='Container template'
+ '[' vztmpl = rootdir ']'
+ local -A STORAGE_MAP
+ local -a MENU
+ local COL_WIDTH=0
++ pvesm status -content vztmpl
+ read -r TAG TYPE _ TOTAL USED FREE _
++ awk 'NR>1'
+ [[ -n local ]]
+ [[ -n dir ]]
+ local STORAGE_NAME=local
+ local 'DISPLAY=local (dir)'
++ numfmt --to=iec --from-unit=K --format %.1f
+ local USED_FMT=8.4G
++ numfmt --to=iec --from-unit=K --format %.1f
+ local FREE_FMT=27.5G
+ local 'INFO=Free: 27.5GB  Used: 8.4GB'
+ STORAGE_MAP["$DISPLAY"]=local
+ MENU+=("$DISPLAY" "$INFO" "OFF")
+ (( 11 > COL_WIDTH ))
+ COL_WIDTH=11
+ read -r TAG TYPE _ TOTAL USED FREE _
+ [[ -n nvme256 ]]
+ [[ -n dir ]]
+ local STORAGE_NAME=nvme256
+ local 'DISPLAY=nvme256 (dir)'
++ numfmt --to=iec --from-unit=K --format %.1f
+ local USED_FMT=209.6G
++ numfmt --to=iec --from-unit=K --format %.1f
+ local FREE_FMT=23.3G
+ local 'INFO=Free: 23.3GB  Used: 209.6GB'
+ STORAGE_MAP["$DISPLAY"]=nvme256
+ MENU+=("$DISPLAY" "$INFO" "OFF")
+ (( 13 > COL_WIDTH ))
+ COL_WIDTH=13
+ read -r TAG TYPE _ TOTAL USED FREE _
+ [[ -n ssd512a ]]
+ [[ -n dir ]]
+ local STORAGE_NAME=ssd512a
+ local 'DISPLAY=ssd512a (dir)'
++ numfmt --to=iec --from-unit=K --format %.1f
+ local USED_FMT=111.8G
++ numfmt --to=iec --from-unit=K --format %.1f
+ local FREE_FMT=353.8G
+ local 'INFO=Free: 353.8GB  Used: 111.8GB'
+ STORAGE_MAP["$DISPLAY"]=ssd512a
+ MENU+=("$DISPLAY" "$INFO" "OFF")
+ (( 13 > COL_WIDTH ))
+ read -r TAG TYPE _ TOTAL USED FREE _
+ '[' 9 -eq 0 ']'
+ '[' 3 -eq 1 ']'
+ local WIDTH=55
+ true
+ local DISPLAY_SELECTED
++ whiptail --backtitle 'Proxmox VE Helper Scripts' --title 'Storage Pools' --radiolist 'Which storage pool for container template?\n(Spacebar to select)' 16 55 6 'local (dir)' 'Free: 27.5GB  Used: 8.4GB' OFF 'nvme256 (dir)' 'Free: 23.3GB  Used: 209.6GB' OFF 'ssd512a (dir)' 'Free: 353.8GB  Used: 111.8GB' OFF
+ DISPLAY_SELECTED='local (dir)'
+ [[ 0 -ne 0 ]]
++ sed 's/[[:space:]]*$//'
+ DISPLAY_SELECTED='local (dir)'
+ [[ -z local (dir) ]]
+ [[ -z _ ]]
+ STORAGE_RESULT=local
+ (( i = 0 ))
+ (( i < 9 ))
+ [[ local (dir) == \l\o\c\a\l\ \(\d\i\r\) ]]
+ STORAGE_INFO='Free: 27.5GB  Used: 8.4GB'
+ break
+ return 0
+ TEMPLATE_STORAGE=local
+ TEMPLATE_STORAGE_INFO='Free: 27.5GB  Used: 8.4GB'
+ msg_ok 'Storage \033[36mlocal\033[m (Free: 27.5GB  Used: 8.4GB) [Template]'
+ local 'msg=Storage \033[36mlocal\033[m (Free: 27.5GB  Used: 8.4GB) [Template]'
+ [[ -z Storage \033[36mlocal\033[m (Free: 27.5GB  Used: 8.4GB) [Template] ]]
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ clear_line
+ tput cr
+ tput el
+ printf '%s %b\n' '  ✔️  ' '\033[1;92mStorage \033[36mlocal\033[m (Free: 27.5GB  Used: 8.4GB) [Template]\033[m'
  ✔️   Storage local (Free: 27.5GB  Used: 8.4GB) [Template]
+ unset 'MSG_INFO_SHOWN[Storage \033[36mlocal\033[m (Free: 27.5GB  Used: 8.4GB) [Template]]'
+ break
+ true
+ select_storage container
+ local CLASS=container CONTENT CONTENT_LABEL
+ case $CLASS in
+ CONTENT=rootdir
+ CONTENT_LABEL=Container
+ '[' rootdir = rootdir ']'
+ '[' -n '' ']'
+ local -A STORAGE_MAP
+ local -a MENU
+ local COL_WIDTH=0
+ read -r TAG TYPE _ TOTAL USED FREE _
++ pvesm status -content rootdir
++ awk 'NR>1'
+ [[ -n local-lvm ]]
+ [[ -n lvmthin ]]
+ local STORAGE_NAME=local-lvm
+ local 'DISPLAY=local-lvm (lvmthin)'
++ numfmt --to=iec --from-unit=K --format %.1f
+ local USED_FMT=9.6G
++ numfmt --to=iec --from-unit=K --format %.1f
+ local FREE_FMT=43.2G
+ local 'INFO=Free: 43.2GB  Used: 9.6GB'
+ STORAGE_MAP["$DISPLAY"]=local-lvm
+ MENU+=("$DISPLAY" "$INFO" "OFF")
+ (( 19 > COL_WIDTH ))
+ COL_WIDTH=19
+ read -r TAG TYPE _ TOTAL USED FREE _
+ [[ -n nvme256 ]]
+ [[ -n dir ]]
+ local STORAGE_NAME=nvme256
+ local 'DISPLAY=nvme256 (dir)'
++ numfmt --to=iec --from-unit=K --format %.1f
+ local USED_FMT=209.6G
++ numfmt --to=iec --from-unit=K --format %.1f
+ local FREE_FMT=23.3G
+ local 'INFO=Free: 23.3GB  Used: 209.6GB'
+ STORAGE_MAP["$DISPLAY"]=nvme256
+ MENU+=("$DISPLAY" "$INFO" "OFF")
+ (( 13 > COL_WIDTH ))
+ read -r TAG TYPE _ TOTAL USED FREE _
+ [[ -n ssd512a ]]
+ [[ -n dir ]]
+ local STORAGE_NAME=ssd512a
+ local 'DISPLAY=ssd512a (dir)'
++ numfmt --to=iec --from-unit=K --format %.1f
+ local USED_FMT=111.8G
++ numfmt --to=iec --from-unit=K --format %.1f
+ local FREE_FMT=353.8G
+ local 'INFO=Free: 353.8GB  Used: 111.8GB'
+ STORAGE_MAP["$DISPLAY"]=ssd512a
+ MENU+=("$DISPLAY" "$INFO" "OFF")
+ (( 13 > COL_WIDTH ))
+ read -r TAG TYPE _ TOTAL USED FREE _
+ '[' 9 -eq 0 ']'
+ '[' 3 -eq 1 ']'
+ local WIDTH=61
+ true
+ local DISPLAY_SELECTED
++ whiptail --backtitle 'Proxmox VE Helper Scripts' --title 'Storage Pools' --radiolist 'Which storage pool for container?\n(Spacebar to select)' 16 61 6 'local-lvm (lvmthin)' 'Free: 43.2GB  Used: 9.6GB' OFF 'nvme256 (dir)' 'Free: 23.3GB  Used: 209.6GB' OFF 'ssd512a (dir)' 'Free: 353.8GB  Used: 111.8GB' OFF
+ DISPLAY_SELECTED='ssd512a (dir)'
+ [[ 0 -ne 0 ]]
++ sed 's/[[:space:]]*$//'
+ DISPLAY_SELECTED='ssd512a (dir)'
+ [[ -z ssd512a (dir) ]]
+ [[ -z _ ]]
+ STORAGE_RESULT=ssd512a
+ (( i = 0 ))
+ (( i < 9 ))
+ [[ local-lvm (lvmthin) == \s\s\d\5\1\2\a\ \(\d\i\r\) ]]
+ (( i += 3 ))
+ (( i < 9 ))
+ [[ nvme256 (dir) == \s\s\d\5\1\2\a\ \(\d\i\r\) ]]
+ (( i += 3 ))
+ (( i < 9 ))
+ [[ ssd512a (dir) == \s\s\d\5\1\2\a\ \(\d\i\r\) ]]
+ STORAGE_INFO='Free: 353.8GB  Used: 111.8GB'
+ break
+ return 0
+ CONTAINER_STORAGE=ssd512a
+ CONTAINER_STORAGE_INFO='Free: 353.8GB  Used: 111.8GB'
+ msg_ok 'Storage \033[36mssd512a\033[m (Free: 353.8GB  Used: 111.8GB) [Container]'
+ local 'msg=Storage \033[36mssd512a\033[m (Free: 353.8GB  Used: 111.8GB) [Container]'
+ [[ -z Storage \033[36mssd512a\033[m (Free: 353.8GB  Used: 111.8GB) [Container] ]]
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ clear_line
+ tput cr
+ tput el
+ printf '%s %b\n' '  ✔️  ' '\033[1;92mStorage \033[36mssd512a\033[m (Free: 353.8GB  Used: 111.8GB) [Container]\033[m'
  ✔️   Storage ssd512a (Free: 353.8GB  Used: 111.8GB) [Container]
+ unset 'MSG_INFO_SHOWN[Storage \033[36mssd512a\033[m (Free: 353.8GB  Used: 111.8GB) [Container]]'
+ break
++ pvesm status
++ awk -v s=ssd512a '$1 == s { print $6 }'
+ STORAGE_FREE=379822884
+ REQUIRED_KB=2097152
+ '[' 379822884 -lt 2097152 ']'
+ '[' -f /etc/pve/corosync.conf ']'
+ TEMPLATE_SEARCH=ubuntu-24.04
+ case "$PCT_OSTYPE" in
+ TEMPLATE_PATTERN=-standard_
+ msg_info 'Searching for template '\''ubuntu-24.04'\'''
+ local 'msg=Searching for template '\''ubuntu-24.04'\'''
+ [[ -z Searching for template 'ubuntu-24.04' ]]
+ declare -p MSG_INFO_SHOWN
+ declare -A MSG_INFO_SHOWN
+ [[ -n '' ]]
+ MSG_INFO_SHOWN["$msg"]=1
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ SPINNER_MSG='Searching for template '\''ubuntu-24.04'\'''
+ is_verbose_mode
+ local verbose=yes
+ local tty_status
+ [[ -t 2 ]]
+ tty_status=interactive
+ [[ yes != \n\o ]]
+ local 'HOURGLASS=  ⏳  '
+ printf '\r\e[2K%s %b' '  ⏳  ' '\033[33mSearching for template '\''ubuntu-24.04'\''\033[m'
  ⏳   Searching for template 'ubuntu-24.04'+ return
+ mapfile -t TEMPLATES
++ pveam list local
++ awk -v s=ubuntu-24.04 -v p=-standard_ '$1 ~ s && $1 ~ p {print $1}'
++ sed 's/.*\///'
++ sort -t - -k 2 -V
+ '[' 1 -gt 0 ']'
+ TEMPLATE_SOURCE=local
+ TEMPLATE=ubuntu-24.04-standard_24.04-2_amd64.tar.zst
++ pvesm path local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst
+ TEMPLATE_PATH=/var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst
+ msg_ok 'Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]'
+ local 'msg=Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]'
+ [[ -z Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local] ]]
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ clear_line
+ tput cr
+ tput el
+ printf '%s %b\n' '  ✔️  ' '\033[1;92mTemplate \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]\033[m'
  ✔️   Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst [local]
+ unset 'MSG_INFO_SHOWN[Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]]'
+ msg_debug TEMPLATE_SEARCH=ubuntu-24.04
+ [[ 0 == \1 ]]
+ msg_debug 'TEMPLATES=(ubuntu-24.04-standard_24.04-2_amd64.tar.zst)'
+ [[ 0 == \1 ]]
+ msg_debug 'Selected TEMPLATE=ubuntu-24.04-standard_24.04-2_amd64.tar.zst'
+ [[ 0 == \1 ]]
+ msg_debug TEMPLATE_PATH=/var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst
+ [[ 0 == \1 ]]
+ TEMPLATE_VALID=1
+ '[' '!' -s /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst ']'
+ tar --use-compress-program=zstdcat -tf /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst
+ '[' 1 -eq 0 ']'
+ msg_info 'Creating LXC Container'
+ local 'msg=Creating LXC Container'
+ [[ -z Creating LXC Container ]]
+ declare -p MSG_INFO_SHOWN
+ declare -A MSG_INFO_SHOWN
+ [[ -n '' ]]
+ MSG_INFO_SHOWN["$msg"]=1
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ SPINNER_MSG='Creating LXC Container'
+ is_verbose_mode
+ local verbose=yes
+ local tty_status
+ [[ -t 2 ]]
+ tty_status=interactive
+ [[ yes != \n\o ]]
+ local 'HOURGLASS=  ⏳  '
+ printf '\r\e[2K%s %b' '  ⏳  ' '\033[33mCreating LXC Container\033[m'
  ⏳   Creating LXC Container+ return
+ grep -q root:100000:65536 /etc/subuid
+ grep -q root:100000:65536 /etc/subgid
+ PCT_OPTIONS=(${PCT_OPTIONS[@]:-${DEFAULT_PCT_OPTIONS[@]}})
+ [[  -features nesting=1,fuse=1 -hostname ubuntu -tags community-script;os -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 1 -memory 512 -unprivileged 0  =~  -rootfs  ]]
+ PCT_OPTIONS+=(-rootfs "$CONTAINER_STORAGE:${PCT_DISK_SIZE:-8}")
+ lockfile=/tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock
+ msg_debug 'Creating lockfile: /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock'
+ [[ 0 == \1 ]]
+ exec
+ flock -w 60 9
+ msg_debug 'pct create command: pct create 155 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst -features nesting=1,fuse=1 -hostname ubuntu -tags community-script;os -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 1 -memory 512 -unprivileged 0 -rootfs ssd512a:2'
+ [[ 0 == \1 ]]
+ pct create 155 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst -features nesting=1,fuse=1 -hostname ubuntu -tags 'community-script;os' -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 1 -memory 512 -unprivileged 0 -rootfs ssd512a:2
+ msg_error 'Container creation failed. Checking if template is corrupted or incomplete.'
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ local 'msg=Container creation failed. Checking if template is corrupted or incomplete.'
+ echo -e '\r\033[K   ✖️   \033[01;31mContainer creation failed. Checking if template is corrupted or incomplete.\033[m'
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
+ [[ ! -s /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst ]]
++ stat -c%s /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst
+ [[ 141589318 -lt 1000000 ]]
+ zstdcat /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst
+ tar -tf -
+ msg_error 'Template is valid, but container creation still failed.'
+ stop_spinner
+ local pid=
+ [[ -z '' ]]
+ [[ -f /tmp/.spinner.pid ]]
+ [[ -n '' ]]
+ unset SPINNER_PID SPINNER_MSG
+ stty sane
+ local 'msg=Template is valid, but container creation still failed.'
+ echo -e '\r\033[K   ✖️   \033[01;31mTemplate is valid, but container creation still failed.\033[m'
   ✖️   Template is valid, but container creation still failed.
+ exit 209
+ on_exit
+ local exit_code=209
+ [[ -n /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock ]]
+ [[ -e /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock ]]
+ rm -f /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock
+ exit 209
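In the trace above, `pct create` returns non-zero but its own stderr never reaches the terminal, so the actual reason is hidden. One way to surface it is to replay the exact logged command by hand on the PVE host (a sketch; the argument array is just for readability and is copied verbatim from the `pct create` line in the trace):

```shell
# Hypothetical manual replay of the failing command from the verbose log,
# so pct's own error message is not swallowed. Adjust CTID/storage as needed.
args=(create 155 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst
  -features nesting=1,fuse=1 -hostname ubuntu -tags 'community-script;os'
  -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 1 -memory 512
  -unprivileged 0 -rootfs ssd512a:2)
echo "would run: pct ${args[*]}"
# On the PVE host, uncomment to execute and keep the full error output:
# pct "${args[@]}" 2>&1 | tee /tmp/pct-create.log
```

Whatever `pct` prints there (storage, subuid/subgid, or extraction errors) is the message the helper script is masking behind "container creation still failed".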
\s\s\d\5\1\2\a\ \(\d\i\r\) ]] + STORAGE_INFO='Free: 353.8GB Used: 111.8GB' + break + return 0 + CONTAINER_STORAGE=ssd512a + CONTAINER_STORAGE_INFO='Free: 353.8GB Used: 111.8GB' + msg_ok 'Storage \033[36mssd512a\033[m (Free: 353.8GB Used: 111.8GB) [Container]' + local 'msg=Storage \033[36mssd512a\033[m (Free: 353.8GB Used: 111.8GB) [Container]' + [[ -z Storage \033[36mssd512a\033[m (Free: 353.8GB Used: 111.8GB) [Container] ]] + stop_spinner + local pid= + [[ -z '' ]] + [[ -f /tmp/.spinner.pid ]] + [[ -n '' ]] + unset SPINNER_PID SPINNER_MSG + stty sane + clear_line + tput cr + tput el + printf '%s %b\n' ' ✔️ ' '\033[1;92mStorage \033[36mssd512a\033[m (Free: 353.8GB Used: 111.8GB) [Container]\033[m' ✔️ Storage ssd512a (Free: 353.8GB Used: 111.8GB) [Container] + unset 'MSG_INFO_SHOWN[Storage \033[36mssd512a\033[m (Free: 353.8GB Used: 111.8GB) [Container]]' + break ++ pvesm status ++ awk -v s=ssd512a '$1 == s { print $6 }' + STORAGE_FREE=379822884 + REQUIRED_KB=2097152 + '[' 379822884 -lt 2097152 ']' + '[' -f /etc/pve/corosync.conf ']' + TEMPLATE_SEARCH=ubuntu-24.04 + case "$PCT_OSTYPE" in + TEMPLATE_PATTERN=-standard_ + msg_info 'Searching for template '\''ubuntu-24.04'\''' + local 'msg=Searching for template '\''ubuntu-24.04'\''' + [[ -z Searching for template 'ubuntu-24.04' ]] + declare -p MSG_INFO_SHOWN + declare -A MSG_INFO_SHOWN + [[ -n '' ]] + MSG_INFO_SHOWN["$msg"]=1 + stop_spinner + local pid= + [[ -z '' ]] + [[ -f /tmp/.spinner.pid ]] + [[ -n '' ]] + unset SPINNER_PID SPINNER_MSG + stty sane + SPINNER_MSG='Searching for template '\''ubuntu-24.04'\''' + is_verbose_mode + local verbose=yes + local tty_status + [[ -t 2 ]] + tty_status=interactive + [[ yes != \n\o ]] + local 'HOURGLASS= ⏳ ' + printf '\r\e[2K%s %b' ' ⏳ ' '\033[33mSearching for template '\''ubuntu-24.04'\''\033[m' ⏳ Searching for template 'ubuntu-24.04'+ return + mapfile -t TEMPLATES ++ pveam list local ++ awk -v s=ubuntu-24.04 -v p=-standard_ '$1 ~ s && $1 ~ p {print $1}' ++ sed 's/.*\///' ++ sort 
-t - -k 2 -V + '[' 1 -gt 0 ']' + TEMPLATE_SOURCE=local + TEMPLATE=ubuntu-24.04-standard_24.04-2_amd64.tar.zst ++ pvesm path local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst + TEMPLATE_PATH=/var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst + msg_ok 'Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]' + local 'msg=Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]' + [[ -z Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local] ]] + stop_spinner + local pid= + [[ -z '' ]] + [[ -f /tmp/.spinner.pid ]] + [[ -n '' ]] + unset SPINNER_PID SPINNER_MSG + stty sane + clear_line + tput cr + tput el + printf '%s %b\n' ' ✔️ ' '\033[1;92mTemplate \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]\033[m' ✔️ Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst [local] + unset 'MSG_INFO_SHOWN[Template \033[36mubuntu-24.04-standard_24.04-2_amd64.tar.zst\033[m [local]]' + msg_debug TEMPLATE_SEARCH=ubuntu-24.04 + [[ 0 == \1 ]] + msg_debug 'TEMPLATES=(ubuntu-24.04-standard_24.04-2_amd64.tar.zst)' + [[ 0 == \1 ]] + msg_debug 'Selected TEMPLATE=ubuntu-24.04-standard_24.04-2_amd64.tar.zst' + [[ 0 == \1 ]] + msg_debug TEMPLATE_PATH=/var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst + [[ 0 == \1 ]] + TEMPLATE_VALID=1 + '[' '!' 
-s /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst ']' + tar --use-compress-program=zstdcat -tf /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst + '[' 1 -eq 0 ']' + msg_info 'Creating LXC Container' + local 'msg=Creating LXC Container' + [[ -z Creating LXC Container ]] + declare -p MSG_INFO_SHOWN + declare -A MSG_INFO_SHOWN + [[ -n '' ]] + MSG_INFO_SHOWN["$msg"]=1 + stop_spinner + local pid= + [[ -z '' ]] + [[ -f /tmp/.spinner.pid ]] + [[ -n '' ]] + unset SPINNER_PID SPINNER_MSG + stty sane + SPINNER_MSG='Creating LXC Container' + is_verbose_mode + local verbose=yes + local tty_status + [[ -t 2 ]] + tty_status=interactive + [[ yes != \n\o ]] + local 'HOURGLASS= ⏳ ' + printf '\r\e[2K%s %b' ' ⏳ ' '\033[33mCreating LXC Container\033[m' ⏳ Creating LXC Container+ return + grep -q root:100000:65536 /etc/subuid + grep -q root:100000:65536 /etc/subgid + PCT_OPTIONS=(${PCT_OPTIONS[@]:-${DEFAULT_PCT_OPTIONS[@]}}) + [[ -features nesting=1,fuse=1 -hostname ubuntu -tags community-script;os -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 1 -memory 512 -unprivileged 0 =~ -rootfs ]] + PCT_OPTIONS+=(-rootfs "$CONTAINER_STORAGE:${PCT_DISK_SIZE:-8}") + lockfile=/tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock + msg_debug 'Creating lockfile: /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock' + [[ 0 == \1 ]] + exec + flock -w 60 9 + msg_debug 'pct create command: pct create 155 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst -features nesting=1,fuse=1 -hostname ubuntu -tags community-script;os -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 1 -memory 512 -unprivileged 0 -rootfs ssd512a:2' + [[ 0 == \1 ]] + pct create 155 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst -features nesting=1,fuse=1 -hostname ubuntu -tags 'community-script;os' -net0 name=eth0,bridge=vmbr0,ip=dhcp -onboot 1 -cores 1 -memory 512 -unprivileged 0 -rootfs ssd512a:2 + msg_error 'Container creation failed. 
Checking if template is corrupted or incomplete.' + stop_spinner + local pid= + [[ -z '' ]] + [[ -f /tmp/.spinner.pid ]] + [[ -n '' ]] + unset SPINNER_PID SPINNER_MSG + stty sane + local 'msg=Container creation failed. Checking if template is corrupted or incomplete.' + echo -e '\r\033[K ✖️ \033[01;31mContainer creation failed. Checking if template is corrupted or incomplete.\033[m' ✖️ Container creation failed. Checking if template is corrupted or incomplete. + [[ ! -s /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst ]] ++ stat -c%s /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst + [[ 141589318 -lt 1000000 ]] + zstdcat /var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst + tar -tf - + msg_error 'Template is valid, but container creation still failed.' + stop_spinner + local pid= + [[ -z '' ]] + [[ -f /tmp/.spinner.pid ]] + [[ -n '' ]] + unset SPINNER_PID SPINNER_MSG + stty sane + local 'msg=Template is valid, but container creation still failed.' + echo -e '\r\033[K ✖️ \033[01;31mTemplate is valid, but container creation still failed.\033[m' ✖️ Template is valid, but container creation still failed. + exit 209 + on_exit + local exit_code=209 + [[ -n /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock ]] + [[ -e /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock ]] + rm -f /tmp/template.ubuntu-24.04-standard_24.04-2_amd64.tar.zst.lock + exit 209 ```

@MickLesk commented on GitHub (Aug 20, 2025):

It's your SSD that failed.

pct create 999 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst -rootfs ssd512a:2


@jsspen commented on GitHub (Aug 20, 2025):

The SSD is having no trouble with Debian or Apache-based containers, though.

Output of pct create 999 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst -rootfs ssd512a:2 has revealed something new: "Unsupported Ubuntu version".

Formatting '/mnt/pve/ssd512a/images/999/vm-999-disk-0.raw', fmt=raw size=2147483648 preallocation=off
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 173abc3f-8d3b-4504-8022-a1d397d3c5c8
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912
extracting archive '/var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst'
Total bytes read: 564490240 (539MiB, 215MiB/s)
unable to create CT 999 - unsupported Ubuntu version '24.04'

Edit: Tried using a different disk and same problem

  ⚙️  Using Default Settings on node hoh (🔍  Verbose)
  🆔  Container ID: 122
  🖥️  Operating System: ubuntu (24.04)
  📦  Container Type: Unprivileged
  💾  Disk Size: 3 GB
  🧠  CPU Cores: 1
  🛠️  RAM Size: 1024 MiB
  🔍  Verbose Mode: Enabled
  🚀  Creating a Twingate-Connector LXC using the above default settings
  
  ✔️   Storage nvme256 (Free: 23.3GB  Used: 209.6GB) [Template]
  ✔️   Storage nvme256 (Free: 23.3GB  Used: 209.6GB) [Container]
  ✔️   Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst [online]
   💡   Template ubuntu-24.04-standard_24.04-2_amd64.tar.zst is missing or corrupted. Re-downloading.
  ✔️   Template download successful.
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1160: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?
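The "unsupported Ubuntu version" message is raised by pve-container's Ubuntu setup plugin (PVE::LXC::Setup::Ubuntu, shipped as /usr/share/perl5/PVE/LXC/Setup/Ubuntu.pm), which keeps a hard-coded map of known releases and refuses anything else. A hedged sketch of that gate - the is_known_release helper and its release subset are illustrative only; the authoritative list depends on the installed pve-container version:

```shell
# Illustrative sketch of pve-container's release gate, not the real code.
# is_known_release is a hypothetical helper; the release list here is a
# made-up subset for demonstration.
is_known_release() {
  case "$1" in
    20.04 | 22.04 | 24.04) echo "supported" ;;
    *) echo "unsupported Ubuntu version '$1'" ;;
  esac
}
```

If the installed pve-container predates a release, upgrading the package usually refreshes that list.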

@bossanova808 commented on GitHub (Aug 21, 2025):

It's not a failing disk (or even just Ubuntu), or a corrupt download - the Debian LXC script is failing at the same point for me. It worked fine a few days ago.


@bossanova808 commented on GitHub (Aug 21, 2025):

(& manually creating a CT with the downloaded template works just fine...)


@MickLesk commented on GitHub (Aug 21, 2025):

Need more input:

pveam status
pveam update
pveam list local

And the output of pct create templateName (like above).


@bossanova808 commented on GitHub (Aug 21, 2025):

pvesm status

Name                Type     Status           Total            Used       Available        %
guests           zfspool     active       943064920        25278344       917786576    2.68%
local                dir     active        55156976         7853532        44469220   14.24%
local-lvm        lvmthin     active        99872768               0        99872768    0.00%
temp-backups         dir     active       479596204        38263248       416897296    7.98%

pveam update

update successful

pveam list local

NAME                                                         SIZE
local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst         120.65MB
local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst     123.81MB
local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst     135.03MB

pct create 999 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --rootfs guests:2

extracting archive '/var/lib/vz/template/cache/debian-12-standard_12.7-1_amd64.tar.zst'
tar: /var/lib/lxc/999/rootfs: Cannot open: Permission denied
tar: Error is not recoverable: exiting now
unable to create CT 999 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --zstd --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/999/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2

As I say, using the GUI, I was manually able to create a CT with debian-12-standard_12.7-1_amd64.tar.zst without issue.

...which seems odd...


@MickLesk commented on GitHub (Aug 21, 2025):

Can you remove the Debian template and execute the script again?

pct remove local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst


@bossanova808 commented on GitHub (Aug 21, 2025):

There's no pct remove but I removed using the GUI, then:

  💡    Using Config File on node pve-homeserver25
  💡  Using config File: /opt/community-scripts/.settings
  🆔  Container ID: 209
  📦  Container Type: Unprivileged
  🔐  Root Password: ********
  🏠  Hostname: test
  💾  Disk Size: 10 GB
  🧠  CPU Cores: 20
  🛠️  RAM Size: 32768 MiB
  🌉  Bridge: vmbr0
  📡  IP Address: dhcp
  🌐  Gateway IP Address: Default
  📡  APT-Cacher IP Address: Default
  ⚙️  Interface MTU Size: Default
  📡  IPv6 Address: none
  🔍  DNS Search Domain: Host
  📡  DNS Server IP Address: Host
  🏷️  Vlan: Default
  📡  Tags: community-scripts;
  🔑  Root SSH Access: yes
  🔑  SSH Authorized Key: ********************
  🗂️  Enable FUSE: No
  🗂️  Enable TUN: No
  🔍  Verbose Mode: yes
  🚀  Creating a Debian LXC using the above settings
  ✔️   Storage local (Free: 41.6GB  Used: 7.2GB) [Template]
  ✔️   Storage guests (Free: 854.7GB  Used: 23.7GB) [Container]
  ✔️   Template debian-12-standard_12.7-1_amd64.tar.zst [online]
   💡   Template debian-12-standard_12.7-1_amd64.tar.zst is missing or corrupted. Re-downloading.
  ✔️   Template download successful.
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1160: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?

@MickLesk commented on GitHub (Aug 21, 2025):

I mean: run pct remove on your host and then, for example, our Debian LXC script with default settings (... /debian.sh).


@bossanova808 commented on GitHub (Aug 21, 2025):

pct remove is not a command on the host, hence I used the GUI to remove the downloaded Debian template - same result, right?

pct remove local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst
ERROR: unknown command 'pct remove'
USAGE: pct <COMMAND> [ARGS] [OPTIONS]

       pct clone <vmid> <newid> [OPTIONS]
       pct create <vmid> <ostemplate> [OPTIONS]
       pct destroy <vmid> [OPTIONS]
       pct list
       pct migrate <vmid> <target> [OPTIONS]
       pct move-volume <vmid> <volume> [<storage>] [<target-vmid>] [<target-volume>] [OPTIONS]
       pct pending <vmid>
       pct resize <vmid> <disk> <size> [OPTIONS]
       pct restore <vmid> <ostemplate> [OPTIONS]
       pct template <vmid>

       pct config <vmid> [OPTIONS]
       pct set <vmid> [OPTIONS]

       pct delsnapshot <vmid> <snapname> [OPTIONS]
       pct listsnapshot <vmid>
       pct rollback <vmid> <snapname> [OPTIONS]
       pct snapshot <vmid> <snapname> [OPTIONS]

       pct reboot <vmid> [OPTIONS]
       pct resume <vmid>
       pct shutdown <vmid> [OPTIONS]
       pct start <vmid> [OPTIONS]
       pct stop <vmid> [OPTIONS]
       pct suspend <vmid>

       pct console <vmid> [OPTIONS]
       pct cpusets
       pct df <vmid>
       pct enter <vmid> [OPTIONS]
       pct exec <vmid> [<extra-args>] [OPTIONS]
       pct fsck <vmid> [OPTIONS]
       pct fstrim <vmid> [OPTIONS]
       pct mount <vmid>
       pct pull <vmid> <path> <destination> [OPTIONS]
       pct push <vmid> <file> <destination> [OPTIONS]
       pct remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]
       pct rescan  [OPTIONS]
       pct status <vmid> [OPTIONS]
       pct unlock <vmid>
       pct unmount <vmid>

       pct help [<extra-args>] [OPTIONS]

I then ran the script; here it is with default settings, same issue.

  ⚙️  Using Default Settings on node pve-homeserver25
  🆔  Container ID: 100
  🖥️  Operating System: debian (12)
  📦  Container Type: Unprivileged
  💾  Disk Size: 2 GB
  🧠  CPU Cores: 1
  🛠️  RAM Size: 512 MiB
  🚀  Creating a Debian LXC using the above default settings
  
  ✔️   Storage local (Free: 41.6GB  Used: 7.2GB) [Template]
  ✔️   Storage guests (Free: 854.6GB  Used: 23.8GB) [Container]
  ✔️   Template debian-12-standard_12.7-1_amd64.tar.zst [online]
   💡   Template debian-12-standard_12.7-1_amd64.tar.zst is missing or corrupted. Re-downloading.
  ✔️   Template download successful.
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1160: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?

(Sorry if I am misunderstanding...!)


@MickLesk commented on GitHub (Aug 21, 2025):

Ah okay, that command isn't available for you. Then please remove the template from the GUI ("Remove" button).

But first:
ls -lh /var/lib/vz/template/cache/

You can also remove it with:
rm /var/lib/vz/template/cache/debian-12-standard_*


@bossanova808 commented on GitHub (Aug 22, 2025):

Yep, that's what you have above: removed (I've done it both ways, GUI and with rm - it's definitely gone).

The issue remains, as per the log above (in verbose mode you can see it downloads a fresh copy).

But I hit the same error each time...


@MickLesk commented on GitHub (Aug 22, 2025):

What's your pveversion?

Can you check the subuids?
grep root /etc/subuid
grep root /etc/subgid


@bossanova808 commented on GitHub (Aug 22, 2025):

pve-manager/9.0.5/9c5600b249dbfd2f (running kernel: 6.14.8-2-pve)

root:100000:65536
root:100000:65536
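Those entries look correct. As a hedged sketch of what the script's own grep for root:100000:65536 is asserting, a helper like this (check_subid_line is hypothetical, not part of Proxmox) validates that a subuid/subgid line covers the 100000..165535 range used by the lxc-usernsexec mapping in the error above:

```shell
# Hypothetical helper: verify a subuid/subgid entry grants root the
# 100000..165535 range that 'lxc-usernsexec -m u:0:100000:65536' needs.
# Entry format per subuid(5) is user:start:count.
check_subid_line() {
  local user start count
  IFS=: read -r user start count <<<"$1"
  if [ "$user" = "root" ] && [ "$start" -le 100000 ] &&
     [ $((start + count)) -ge $((100000 + 65536)) ]; then
    echo "ok"
  else
    echo "bad"
  fi
}
```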


@MickLesk commented on GitHub (Aug 22, 2025):

Hm, strange - all fine.

Can you test two different variants? One privileged, one unprivileged:

pct create 950 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  -rootfs guests:2 -unprivileged 0 -hostname test950 -memory 512 -cores 1 -net0 name=eth0,bridge=vmbr0,ip=dhcp

unpriv:

pct create 951 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  -rootfs guests:2 -unprivileged 1 -hostname test951 -memory 512 -cores 1 -net0 name=eth0,bridge=vmbr0,ip=dhcp

@bossanova808 commented on GitHub (Aug 22, 2025):

Privileged works OK, unprivileged does not:

```
17:01 root@pve-homeserver25:~ $ pct create 950 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  -rootfs guests:2 -unprivileged 0 -hostname test950 -memory 512 -cores 1 -net0 name=eth0,bridge=vmbr0,ip=dhcp
extracting archive '/var/lib/vz/template/cache/debian-12-standard_12.7-1_amd64.tar.zst'
Total bytes read: 521902080 (498MiB, 626MiB/s)
Detected container architecture: amd64
Creating SSH host key 'ssh_host_ed25519_key' - this may take some time ...
done: SHA256:cjjy5BE7yOOVxx root@test950
Creating SSH host key 'ssh_host_rsa_key' - this may take some time ...
done: SHA256:5RtS6L//nSvX/Nbxx root@test950
Creating SSH host key 'ssh_host_ecdsa_key' - this may take some time ...
done: SHA256:UoWuVId93eMdxx root@test950
17:15 root@pve-homeserver25:~ $ pct create 951 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  -rootfs guests:2 -unprivileged 1 -hostname test951 -memory 512 -cores 1 -net0 name=eth0,bridge=vmbr0,ip=dhcp
extracting archive '/var/lib/vz/template/cache/debian-12-standard_12.7-1_amd64.tar.zst'
tar: /var/lib/lxc/951/rootfs: Cannot open: Permission denied
tar: Error is not recoverable: exiting now
unable to create CT 951 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --zstd --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/951/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2
```

@MickLesk commented on GitHub (Aug 22, 2025):

Ah, is this a ZFS pool?


@bossanova808 commented on GitHub (Aug 22, 2025):

Yep


@MickLesk commented on GitHub (Aug 22, 2025):

Interesting, either it's a bug or a feature. I'll ask Proxmox myself. I think you were still using PVE 8.4 last week, right?

Can you provide me with a few more “valuable” outputs?

```bash
pveversion -v
uname -a
pvesm status
pvesm list guests
zpool status
```

I may have a workaround; can you test it? I don't have any ZFS pools here.

```bash
pct restore 960 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  -rootfs guests:2 \
  -unprivileged 1 \
  -hostname test960 \
  -memory 512 \
  -cores 1 \
  -net0 name=eth0,bridge=vmbr0,ip=dhcp
```

@bossanova808 commented on GitHub (Aug 22, 2025):

No, this was a fresh PVE9 install.

The restore command gives:
`unable to use volume local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst - content type needs to be 'backup'`

17:53 root@pve-homeserver25:/var/lib/lxc $ pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.8-2-pve)
pve-manager: 9.0.5 (running version: 9.0.5/9c5600b249dbfd2f)
proxmox-kernel-helper: 9.0.3
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.14: 6.14.8-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx9
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.1
proxmox-kernel-helper: 9.0.3
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.0
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.9
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-3
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.18
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

17:53 root@pve-homeserver25:/var/lib/lxc $ uname -a
Linux pve-homeserver25 6.14.8-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.14.8-2 (2025-07-22T10:04Z) x86_64 GNU/Linux

17:53 root@pve-homeserver25:/var/lib/lxc $ pvesm status
Name                Type     Status           Total            Used       Available        %
guests           zfspool     active       943064932        26020076       917044856    2.76%
local                dir     active        55156976         7856372        44466380   14.24%
local-lvm        lvmthin     active        99872768               0        99872768    0.00%
temp-backups         dir     active       479596204        38263248       416897296    7.98%

17:54 root@pve-homeserver25:/var/lib/lxc $ zpool status
  pool: guests
 state: ONLINE
config:

        NAME                                                      STATE     READ WRITE CKSUM
        guests                                                    ONLINE       0     0     0
          mirror-0                                                ONLINE       0     0     0
            nvme-Seagate_FireCuda_520_SSD_ZP1000GM30002_7QG00QC6  ONLINE       0     0     0
            nvme-CT1000T500SSD8_25225078891B                      ONLINE       0     0     0

errors: No known data errors

17:55 root@pve-homeserver25:/var/lib/lxc $ pvesm list guests
Volid                    Format  Type              Size VMID
guests:subvol-201-disk-0 subvol  rootdir    10737418240 201
guests:subvol-202-disk-0 subvol  rootdir    10737418240 202
guests:subvol-203-disk-0 subvol  rootdir    10737418240 203
guests:subvol-204-disk-0 subvol  rootdir    10737418240 204
guests:subvol-205-disk-0 subvol  rootdir   107374182400 205
guests:subvol-206-disk-0 subvol  rootdir    10737418240 206
guests:subvol-207-disk-0 subvol  rootdir    10737418240 207
guests:subvol-208-disk-0 subvol  rootdir    10737418240 208



@MickLesk commented on GitHub (Aug 22, 2025):

Thanks. If you're available, can you run this? I've built a workaround in the dev repo, but I don't know if it works.

```bash
var_verbose=YES bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVED/main/ct/debian.sh)"
```

For Ubuntu 24.04 users:

```bash
var_verbose=YES var_version="24.04" bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVED/main/ct/ubuntu.sh)"
```

@bossanova808 commented on GitHub (Aug 22, 2025):

```
  ⚙️  Using Default Settings on node pve-homeserver25
  🆔  Container ID: 100
  🖥️  Operating System: debian (12)
  📦  Container Type: Unprivileged
  💾  Disk Size: 15 GB
  🧠  CPU Cores: 4
  🛠️  RAM Size: 4096 MiB
  🚀  Creating a Debian LXC using the above default settings

  ✔️   Storage local (Free: 41.5GB  Used: 7.4GB) [Template]
  ✔️   Storage guests (Free: 854.0GB  Used: 24.4GB) [Container]
  ✔️   Template debian-12-standard_12.7-1_amd64.tar.zst [local]
   ✖️   Container creation failed on local. Checking template...
   ✖️   Template is valid, but container creation still failed on local.
```

@MickLesk commented on GitHub (Aug 22, 2025):

Okay, next round: I've created a ZFS pool here and it works fine:

(screenshot)

Can you run the following:

```bash
cat /etc/pve/storage.cfg
```

Example output (my install):

```
zfspool: testpool
        pool testpool
        content images,rootdir
        mountpoint /testpool
        nodes proxmox
```

```bash
pvesm list guests
```

Example output (my 2 deployed tests here):

```
Volid                      Format  Type             Size VMID
testpool:subvol-109-disk-0 subvol  rootdir   16106127360 109
testpool:subvol-250-disk-0 subvol  rootdir   10737418240 250
```

Edit:
The solution should be `content rootdir,images` inside the cfg:

```bash
pvesm set guests --content rootdir,images
pvesm status
```

@bossanova808 commented on GitHub (Aug 22, 2025):

```
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: guests
        pool guests
        content images,rootdir
        mountpoint /guests
        nodes pve-homeserver25

dir: temp-backups
        path /mnt/pve/temp-backups
        content vztmpl,iso,images,backup,rootdir,snippets
        is_mountpoint 1
        nodes pve-homeserver25
```

```
Volid                    Format  Type              Size VMID
guests:subvol-201-disk-0 subvol  rootdir    10737418240 201
guests:subvol-202-disk-0 subvol  rootdir    10737418240 202
guests:subvol-203-disk-0 subvol  rootdir    10737418240 203
guests:subvol-204-disk-0 subvol  rootdir    10737418240 204
guests:subvol-205-disk-0 subvol  rootdir   107374182400 205
guests:subvol-206-disk-0 subvol  rootdir    10737418240 206
guests:subvol-207-disk-0 subvol  rootdir    10737418240 207
guests:subvol-208-disk-0 subvol  rootdir    10737418240 208
```

@MickLesk commented on GitHub (Aug 22, 2025):

Ah, can you add/improve the following via nano?

```
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso,rootdir
```

and then rerun the debian.sh or ubuntu.sh bash call from above.


@AntonWilloughby commented on GitHub (Aug 23, 2025):

I am experiencing the same issue running PVE 8.2.2


@MickLesk commented on GitHub (Aug 23, 2025):

Then try my last post?


@bossanova808 commented on GitHub (Aug 24, 2025):

The issue isn't our systems though, it's the scripts...right? Something has changed, as they were working fine not long ago.

I don't want to use local as a rootdir for guests...


@MickLesk commented on GitHub (Aug 24, 2025):

I need to know whether it's because of that or not. If I can't find out, then unfortunately I'll have to close the issue due to lack of information.


@bossanova808 commented on GitHub (Aug 24, 2025):

OK, added `rootdir` to the appropriate `storage.cfg`:

```
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso,rootdir
```

Then tried this again:

```bash
var_verbose=YES bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVED/main/ct/debian.sh)"
```

Same issue still (tried creating on both `local` and `guests` with the same result):

```
  ⚙️  Using Default Settings on node pve-homeserver25
  🆔  Container ID: 100
  🖥️  Operating System: debian (12)
  📦  Container Type: Unprivileged
  💾  Disk Size: 15 GB
  🧠  CPU Cores: 4
  🛠️  RAM Size: 4096 MiB
  🚀  Creating a Debian LXC using the above default settings

  ✔️   Storage local (Free: 41.4GB  Used: 7.4GB) [Template]
  ✔️   Storage local (Free: 41.4GB  Used: 7.4GB) [Container]
  ✔️   Storage 'local' supports 'rootdir'
  ✔️   Template storage 'local' supports 'vztmpl'
  ✔️   Template debian-12-standard_12.7-1_amd64.tar.zst [local]
   ✖️   Container creation failed on local. Checking template...
   ✖️   Template is valid, but container creation still failed on local.
```

@MickLesk commented on GitHub (Aug 25, 2025):

I'm slowly running out of ideas. I've now attached a fifth different disk with the same configuration as yours, and everything runs smoothly. I even set up a new VM; it works there too. I can only explain the error with ZFS itself. The script isn't doing anything wrong; it even checks for problems. Can you run:

```bash
zfs get acltype,xattr guests
zfs get acltype,xattr guests/subvol-208-disk-0
grep root /etc/subuid
grep root /etc/subgid
```


@jsspen commented on GitHub (Aug 25, 2025):

I don't think it's a ZFS issue as I'm not using ZFS at all...

```
pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        40516856        11509084        26917380   28.41%
local-lvm     lvmthin     active        56545280        20644681        35900598   36.51%
nvme256           dir     active       249935956       216762088        33173868   86.73%
ssd512a           dir     active       499862372       100331080       399531292   20.07%
```

Also, I said this last week but maybe it got lost in the shuffle. It seems like this would be relevant:

Output of `pct create 999 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst -rootfs ssd512a:2` has revealed something new: "Unsupported Ubuntu version".

```
Formatting '/mnt/pve/ssd512a/images/999/vm-999-disk-0.raw', fmt=raw size=2147483648 preallocation=off
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 173abc3f-8d3b-4504-8022-a1d397d3c5c8
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912
extracting archive '/var/lib/vz/template/cache/ubuntu-24.04-standard_24.04-2_amd64.tar.zst'
Total bytes read: 564490240 (539MiB, 215MiB/s)
unable to create CT 999 - unsupported Ubuntu version '24.04'
```

I'm also not sure that these are the same issue. Unlike @bossanova808, I'm able to create a Debian LXC without issue using the script `bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/debian.sh)"`.


@MickLesk commented on GitHub (Aug 25, 2025):

And Debian or Ubuntu from VED?


@keviiin38 commented on GitHub (Aug 26, 2025):

👋

I was having the exact same error as the OP while using the Homer script (Debian LXC). Here are some details:

- Install with default settings => OK ✔️
- Install with custom settings (custom VLAN) and default for others => OK ✔️
- Install with custom settings (custom MAC and VLAN) and default for others => NOT OK ❌

When I used a MAC address from an online generator tool, it failed. But when I ran the script without specifying a MAC, deleted everything, and reused the auto-generated MAC, it worked! 🙃

After that, I found a Proxmox blog post where someone asked whether it's safe to use this kind of online tool, so I suspect there are some gotchas with these generated MACs.

Conclusion: nothing to do with the helper script in my case, simply an "invalid" MAC address specified.

My setup is a Proxmox VE 9 fresh install with LVM for storage.
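One plausible gotcha with generated MACs (my assumption, not confirmed in this thread): a NIC's own address must be unicast, i.e. bit 0 of the first octet clear, while some generators emit multicast addresses (odd first octet), which Proxmox may reject. A minimal check, with a hypothetical `mac_ok` helper:

```shell
# Hypothetical helper (not part of any script here): reject multicast MACs.
# Locally administered unicast addresses additionally set bit 1 of the first
# octet (first octets 02, 06, 0A, 0E, ...).
mac_ok() {
  local first=$(( 16#${1%%:*} ))   # first octet as a number
  (( (first & 1) == 0 ))           # unicast: multicast bit must be clear
}

mac_ok "02:DE:AD:BE:EF:01" && echo "02:... accepted"
mac_ok "01:DE:AD:BE:EF:01" || echo "01:... rejected (multicast)"
```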


@jsspen commented on GitHub (Aug 26, 2025):

Well, strangely, I just tried the Twingate script again today and it worked... I haven't changed **anything** since last week, when I tried the script (and other Ubuntu-based LXCs) two dozen or so times without success. I haven't even rebooted my system in the meantime! I guess my initial problem is "solved", so I'm going to bow out of this discussion and leave this to @bossanova808 and others who are still having related (?) issues.


@bossanova808 commented on GitHub (Aug 27, 2025):

Still not working here but am away on holiday for this week so can't really try much...


@gabriellxxix commented on GitHub (Aug 27, 2025):

It looks like a problem with the script. Two days ago I installed the scripts without any issue, but today this error shows up. It's not related to SSDs or restarts: I tested two different scripts on three different servers, updated them, rebooted, and one of the servers doesn't even use an SSD. All of them show the same failure. I think it might be due to a recent change in the scripts.

```
  ✔️   Cluster is quorate
  ✔️   Template debian-12-standard_12.7-1_amd64.tar.zst [online]
   💡   Template debian-12-standard_12.7-1_amd64.tar.zst is missing or corrupted. Re-downloading.
  ✔️   Template download successful.
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1160: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?
```

@MickLesk commented on GitHub (Aug 27, 2025):

The Gitea script hasn't changed in many weeks. It fails before the LXC is created.

You can try the default debian.sh from ProxmoxVED.


@gabriellxxix commented on GitHub (Aug 27, 2025):

After trying again, it worked correctly this time, even with the customization.


@bossanova808 commented on GitHub (Aug 30, 2025):

Ok I worked out my issue, actually by running the older pulse install script (listed here: https://github.com/rcourtman/Pulse?tab=readme-ov-file#install), as I didn't realise there was also a community script to use instead (which is actually listed here, but that's not as easy to find - https://github.com/rcourtman/Pulse/blob/main/docs/FAQ.md)

When that (original) script failed, I could see an error in the Proxmox task log (i.e. bottom of the web UI). Clicking on it opened the output window, which showed a permissions error when the script accessed the new folder in `/var/lib/lxc/`. Looking in there, I could see the permissions were `drwxrwx---`, so 'others' could not cd into that folder. This pointed back to an error I had induced with `umask`, it seems. Restoring the umask to `022` for root and the issue is gone.

Thanks for your persistence/patience @MickLesk. Probably not worth it, but it might be worth adding some sort of check/log in the scripts for that - with the community scripts it just bails and the error can't be seen, which makes things harder...
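The failure mode described above can be reproduced without Proxmox; a minimal sketch, using a temp directory as a stand-in for `/var/lib/lxc/<vmid>`:

```shell
# With a restrictive umask, the container directory is created without the
# world execute bit, so the unprivileged (mapped) uid cannot traverse into
# rootfs and tar fails with "Permission denied".
dir=$(mktemp -d)/ct            # stand-in for /var/lib/lxc/<vmid>
( umask 077; mkdir "$dir" )    # what directory creation does under umask 077
stat -c '%a' "$dir"            # prints 700: 'others' lack the x bit to cd in
chmod 755 "$dir"               # effect of restoring umask 022 before creating
stat -c '%a' "$dir"            # prints 755
```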


@Sciruss commented on GitHub (Sep 6, 2025):

I, too, am getting the exact same error when attempting to run the Technitium LXC creation script. I'm attempting to create the LXC on LVM-Thin, not a ZFS drive.

This may be a naive question, but is it possible that the $? at the end of the bash call is causing the error, given that the create_lxc.sh script is very strict about errors (set -Eeuo pipefail)?

build.func appears to be passing an unexpected argument ($?) to create_lxc.sh, which then fails with "exit code 0": the curl command succeeded, so $? was 0, and that 0 is what gets passed along as the argument.

Again, shot in the dark here.
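For what it's worth, that hypothesis is easy to test in isolation. With bash -c, the first argument after the command string becomes the child shell's $0 rather than a positional parameter, so a trailing $? of 0 is absorbed silently (a toy illustration, not the actual build.func call):

```shell
# The argument after `bash -c "…"` becomes $0 inside the child shell.
# A successful prior command makes `$?` expand to 0 there.
true                                    # force $? to 0
bash -c 'echo "child \$0 is: $0"' "$?"  # prints: child $0 is: 0
```

So the stray argument alone would not make the child fail; at most it mislabels $0 in error messages.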


@Sciruss commented on GitHub (Sep 8, 2025):

I resolved my issue by forking create_lxc.sh and changing its error handling code.

My issue was that my MTU was set too large; I had forgotten to change it on the PVE bridge after reinstalling PVE. Correcting the MTU resolved the exit.

It seems that when a PVE process throws an error that is caught by -Ee, the error handling in create_lxc.sh never makes it out of that script. I had to disable that in order to get the output showing that the LXC creation had failed due to the MTU.
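The swallowed-error behaviour described here can be sketched with a stand-in for pct create (fake_pct and its message are invented for the demo). The key detail is that a command whose status is tested in an if does not trip errexit or the ERR trap, so its stderr can be captured and re-printed instead of being lost:

```shell
#!/usr/bin/env bash
# Under `set -Ee` an ERR trap fires and the script dies before the
# failing command's stderr reaches the user. `fake_pct` is a stand-in
# for `pct create`, not a real Proxmox command.
set -Eeuo pipefail
trap 'echo "Container creation failed." >&2' ERR

fake_pct() { echo "MTU 9000 larger than bridge MTU 1500" >&2; return 255; }

# Capturing stderr first and testing the status keeps the real message:
# commands tested in an `if` neither trigger errexit nor the ERR trap.
if ! msg=$(fake_pct 2>&1); then
    echo "pct error was: $msg" >&2
fi
```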


@lucacome commented on GitHub (Sep 24, 2025):

I'm getting the same when trying to create a Debian LXC

  🚀  Creating a Debian LXC using the above settings
  ✔️   Storage local (Free: 267.3GB  Used: 468.3GB) [Template]
  ✔️   Storage local-zfs (Free: 267.3GB  Used: 136.5GB) [Container]
  ✔️   Template debian-13-standard_13.1-1_amd64.tar.zst [local]
   ✖️   Container creation failed. Checking if template is corrupted or incomplete.
   ✖️   Template is valid, but container creation still failed.

[ERROR] in line 1152: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/create_lxc.sh)" $?

I tried deleting the template and redownloading it, but I'm still getting the error.


@lucacome commented on GitHub (Sep 24, 2025):

I don't know if it helps but when I try to create it manually it says

unable to create CT 140 - unsupported debian version '13.1'


@MickLesk commented on GitHub (Sep 25, 2025):

You need to upgrade your pve-container package. It's a different issue, and I have written about it 20 times in issues & discussions over the last two weeks.
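A rough pre-flight check along these lines could compare the installed pve-container version against whichever release first recognises Debian 13 templates. Note that the 5.2.0 threshold below is a placeholder assumption, not a value confirmed by Proxmox documentation; check the pve-container changelog for the real one:

```shell
# Placeholder check: is the installed pve-container new enough for
# Debian 13 templates? "needed" is an assumed threshold.
have=$(dpkg-query -W -f '${Version}' pve-container 2>/dev/null || echo "0")
needed="5.2.0"
oldest=$(printf '%s\n' "$have" "$needed" | sort -V | head -n1)
if [ "$oldest" = "$have" ] && [ "$have" != "$needed" ]; then
    echo "pve-container $have is too old; run: apt update && apt install pve-container"
fi
```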

Reference: SVI/ProxmoxVE#1744