| In the Linux kernel, the following vulnerability has been resolved:
ALSA: hda: tas2781: Fix wrong reference of tasdevice_priv
During the conversion to unify the calibration data management, the
reference to tasdevice_priv was wrongly set to h->hda_priv instead of
h->priv. This eventually resulted in memory corruption and crashes.
Unfortunately it's a void pointer, hence the compiler couldn't catch
the wrong assignment. |
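A minimal sketch of the bug class described above (types and field
names are illustrative, not the driver's exact layout): because the
destination field is a void pointer, assigning the wrong member
compiles without any warning.

struct tasdevice_priv;

struct hda_wrapper {
	void *priv;       /* expected: struct tasdevice_priv * */
	void *hda_priv;   /* unrelated per-HDA state */
};

static struct tasdevice_priv *get_priv(struct hda_wrapper *h)
{
	return h->hda_priv;   /* BUG: should be h->priv; void * hides it */
}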
| In the Linux kernel, the following vulnerability has been resolved:
RDMA/rxe: Flush delayed SKBs while releasing RXE resources
After skb packets are sent out, they still depend on rxe resources
such as the QP and sk until the packets are destroyed. If those rxe
resources have already been released when the skb packets are
destroyed, call traces appear.
To avoid skb packets hanging too long in some network devices, a
timestamp is added when each skb packet is created. If an skb packet
has been held too long by a network device, it can be freed so that
the rxe resources it depends on are released. |
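A hedged sketch of the scheme described above; RXE_SKB_LIFETIME_MS and
the helper names are illustrative, not the driver's actual API:

#define RXE_SKB_LIFETIME_MS	1000

static void rxe_stamp_skb(struct sk_buff *skb)
{
	skb->tstamp = ktime_get();	/* record creation time */
}

static bool rxe_skb_is_stale(struct sk_buff *skb)
{
	/* an skb held too long by a network device can be freed so the
	 * rxe resources (QP, sk) it references are released */
	return ktime_ms_delta(ktime_get(), skb->tstamp) >
	       RXE_SKB_LIFETIME_MS;
}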
| In the Linux kernel, the following vulnerability has been resolved:
iio: accel: sca3300: fix uninitialized iio scan data
Fix potential leak of uninitialized stack data to userspace by ensuring
that the `channels` array is zeroed before use. |
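A minimal sketch of the idea, assuming an illustrative scan-buffer
layout rather than sca3300's exact one: zero-initializing the buffer
ensures unused slots and padding cannot carry stale stack contents to
userspace.

	struct {
		s16 channels[4];
		s64 ts __aligned(8);
	} scan = { 0 };	/* previously uninitialized stack memory */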
| In the Linux kernel, the following vulnerability has been resolved:
i2c: rtl9300: Fix out-of-bounds bug in rtl9300_i2c_smbus_xfer
The data->block[0] value comes from the user. Without a proper check,
it may be very large and cause an out-of-bounds access.
Fix this bug by checking the value of data->block[0] first, as was done
in the following commits:
1. commit 39244cc75482 ("i2c: ismt: Fix an out-of-bounds bug in
ismt_access()")
2. commit 92fbb6d1296f ("i2c: xgene-slimpro: Fix out-of-bounds bug in
xgene_slimpro_i2c_xfer()") |
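A hedged sketch following the pattern of the two commits cited above;
the exact bounds chosen by the rtl9300 fix may differ:

	if (data->block[0] < 1 || data->block[0] > I2C_SMBUS_BLOCK_MAX)
		return -EINVAL;	/* reject out-of-range user length */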
| In the Linux kernel, the following vulnerability has been resolved:
net/sched: Fix backlog accounting in qdisc_dequeue_internal
This issue applies for the following qdiscs: hhf, fq, fq_codel, and
fq_pie, and occurs in their change handlers when adjusting to the new
limit. Given a tbf parent, the problem lies in the values passed to the
subsequent qdisc_tree_reduce_backlog call:
When the tbf parent runs out of tokens, skbs of these qdiscs will
be placed in gso_skb. Their peek handlers are qdisc_peek_dequeued,
which accounts for both qlen and backlog. However, in the case of
qdisc_dequeue_internal, ONLY qlen is accounted for when pulling
from gso_skb. This means that these qdiscs are missing a
qdisc_qstats_backlog_dec when dropping packets to satisfy the
new limit in their change handlers.
One can observe this issue with the following (with tc patched to
support a limit of 0):
export TARGET=fq
tc qdisc del dev lo root
tc qdisc add dev lo root handle 1: tbf rate 8bit burst 100b latency 1ms
tc qdisc replace dev lo handle 3: parent 1:1 $TARGET limit 1000
echo ''; echo 'add child'; tc -s -d qdisc show dev lo
ping -I lo -f -c2 -s32 -W0.001 127.0.0.1 2>&1 >/dev/null
echo ''; echo 'after ping'; tc -s -d qdisc show dev lo
tc qdisc change dev lo handle 3: parent 1:1 $TARGET limit 0
echo ''; echo 'after limit drop'; tc -s -d qdisc show dev lo
tc qdisc replace dev lo handle 2: parent 1:1 sfq
echo ''; echo 'post graft'; tc -s -d qdisc show dev lo
The second to last show command shows 0 packets but a positive
number (74) of backlog bytes. The problem becomes clearer in the
last show command, where qdisc_purge_queue triggers
qdisc_tree_reduce_backlog with the positive backlog and causes an
underflow in the tbf parent's backlog (4096 Mb instead of 0).
To fix this issue, the codepath for all clients of qdisc_dequeue_internal
has been simplified: codel, pie, hhf, fq, fq_pie, and fq_codel.
qdisc_dequeue_internal handles the backlog adjustments for all cases that
do not directly use the dequeue handler.
The old fq_codel_change limit adjustment loop accumulated the arguments to
the subsequent qdisc_tree_reduce_backlog call through the cstats field.
However, this is confusing and error prone as fq_codel_dequeue could also
potentially mutate this field (which qdisc_dequeue_internal calls in the
non gso_skb case), so we have unified the code here with other qdiscs. |
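A hedged sketch of the unified idea: when qdisc_dequeue_internal()
pulls an skb from gso_skb it must decrement the backlog as well as
qlen, matching what qdisc_peek_dequeued() accounted for. This shows
the gso_skb branch only, not the full helper:

	skb = __skb_dequeue(&sch->gso_skb);
	if (skb) {
		qdisc_qstats_backlog_dec(sch, skb);	/* previously missing */
		sch->q.qlen--;
		return skb;
	}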
| In the Linux kernel, the following vulnerability has been resolved:
scsi: ufs: ufs-qcom: Fix ESI null pointer dereference
ESI/MSI is a performance optimization feature that provides dedicated
interrupts per MCQ hardware queue. It is an optional feature, and UFS MCQ
should work both with and without it.
Commit e46a28cea29a ("scsi: ufs: qcom: Remove the MSI descriptor abuse")
brings a regression in ESI (Enhanced System Interrupt) configuration that
causes a null pointer dereference when Platform MSI allocation fails.
The issue occurs when platform_device_msi_init_and_alloc_irqs() in
ufs_qcom_config_esi() fails (returns -EINVAL) but the current code uses
the __free() macro for automatic cleanup, which frees MSI resources that
were never successfully allocated.
Unable to handle kernel NULL pointer dereference at virtual
address 0000000000000008
Call trace:
mutex_lock+0xc/0x54 (P)
platform_device_msi_free_irqs_all+0x1c/0x40
ufs_qcom_config_esi+0x1d0/0x220 [ufs_qcom]
ufshcd_config_mcq+0x28/0x104
ufshcd_init+0xa3c/0xf40
ufshcd_pltfrm_init+0x504/0x7d4
ufs_qcom_probe+0x20/0x58 [ufs_qcom]
Fix this by restructuring the ESI configuration to try MSI allocation
first, before any other resource allocation, and by using explicit
cleanup instead of the __free() macro to avoid cleanup of unallocated
resources.
Tested on SM8750 platform with MCQ enabled, both with and without
Platform ESI support. |
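An illustrative sketch of the pitfall, not the actual ufs_qcom code: a
__free()-style scoped cleanup runs on every scope exit, including error
paths where the resource it tears down was never allocated. The cleanup
class name here is hypothetical.

	struct device *dev __free(msi_free_irqs) = &pdev->dev;

	ret = platform_device_msi_init_and_alloc_irqs(dev, nr_irqs, handler);
	if (ret)
		return ret;	/* cleanup fires anyway and touches
				 * never-allocated MSI state -> NULL deref */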
| In the Linux kernel, the following vulnerability has been resolved:
cifs: Fix oops due to uninitialised variable
Fix smb3_init_transform_rq() to initialise buffer to NULL before calling
netfs_alloc_folioq_buffer() as netfs assumes it can append to the buffer it
is given. Setting it to NULL means it should start a fresh buffer, but the
value is currently undefined. |
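A minimal sketch of the fix's idea, assuming the local-variable shape
in smb3_init_transform_rq(): the pointer must start as NULL so that
netfs_alloc_folioq_buffer() starts a fresh buffer rather than appending
to an undefined stack value.

	struct folio_queue *buffer = NULL;	/* was uninitialized */
	size_t buffer_size = 0;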
| In the Linux kernel, the following vulnerability has been resolved:
net: usb: asix_devices: Fix PHY address mask in MDIO bus initialization
Syzbot reported a shift-out-of-bounds exception during MDIO bus
initialization.
The PHY address should be masked to 5 bits (0-31). Without this
mask, invalid PHY addresses could be used, potentially causing issues
with MDIO bus operations.
Fix this by masking the PHY address with 0x1f (31 decimal) to ensure
it stays within the valid range. |
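A hedged one-line sketch; the exact field name in asix_devices may
differ. MDIO PHY addresses are 5 bits wide, so masking keeps the value
in the range 0-31:

	priv->phy_addr &= 0x1f;	/* clamp to valid MDIO address range */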
| In the Linux kernel, the following vulnerability has been resolved:
s390/mm: Do not map lowcore with identity mapping
Since the identity mapping is pinned to address zero, the lowcore is
always also mapped to address zero; this happens regardless of the
relocate_lowcore command line option. If the option is specified, the
lowcore is mapped twice instead of only once.
This means that NULL pointer accesses will succeed instead of causing an
exception (low address protection still applies, but covers only parts).
To fix this, never map the first two pages of physical memory with the
identity mapping. |
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe: Fix vm_bind_ioctl double free bug
If the argument check during an array bind fails, the bind_ops are freed
twice as seen below. Fix this by setting bind_ops to NULL after freeing.
==================================================================
BUG: KASAN: double-free in xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
Free of addr ffff88813bb9b800 by task xe_vm/14198
CPU: 5 UID: 0 PID: 14198 Comm: xe_vm Not tainted 6.16.0-xe-eudebug-cmanszew+ #520 PREEMPT(full)
Hardware name: Intel Corporation Alder Lake Client Platform/AlderLake-P DDR5 RVP, BIOS ADLPFWI1.R00.2411.A02.2110081023 10/08/2021
Call Trace:
<TASK>
dump_stack_lvl+0x82/0xd0
print_report+0xcb/0x610
? __virt_addr_valid+0x19a/0x300
? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
kasan_report_invalid_free+0xc8/0xf0
? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
check_slab_allocation+0x102/0x130
kfree+0x10d/0x440
? should_fail_ex+0x57/0x2f0
? xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
xe_vm_bind_ioctl+0x1b2/0x21f0 [xe]
? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
? __lock_acquire+0xab9/0x27f0
? lock_acquire+0x165/0x300
? drm_dev_enter+0x53/0xe0 [drm]
? find_held_lock+0x2b/0x80
? drm_dev_exit+0x30/0x50 [drm]
? drm_ioctl_kernel+0x128/0x1c0 [drm]
drm_ioctl_kernel+0x128/0x1c0 [drm]
? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
? find_held_lock+0x2b/0x80
? __pfx_drm_ioctl_kernel+0x10/0x10 [drm]
? should_fail_ex+0x57/0x2f0
? __pfx_xe_vm_bind_ioctl+0x10/0x10 [xe]
drm_ioctl+0x352/0x620 [drm]
? __pfx_drm_ioctl+0x10/0x10 [drm]
? __pfx_rpm_resume+0x10/0x10
? do_raw_spin_lock+0x11a/0x1b0
? find_held_lock+0x2b/0x80
? __pm_runtime_resume+0x61/0xc0
? rcu_is_watching+0x20/0x50
? trace_irq_enable.constprop.0+0xac/0xe0
xe_drm_ioctl+0x91/0xc0 [xe]
__x64_sys_ioctl+0xb2/0x100
? rcu_is_watching+0x20/0x50
do_syscall_64+0x68/0x2e0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7fa9acb24ded
(cherry picked from commit a01b704527c28a2fd43a17a85f8996b75ec8492a) |
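The fix is the standard double-free guard; a minimal sketch:

	kfree(bind_ops);
	bind_ops = NULL;	/* kfree(NULL) is a no-op, so a later
				 * error path cannot free it again */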
| In the Linux kernel, the following vulnerability has been resolved:
netlink: avoid infinite retry looping in netlink_unicast()
netlink_attachskb() checks for the socket's read memory allocation
constraints. Firstly, it has:
rmem < READ_ONCE(sk->sk_rcvbuf)
to check if the just increased rmem value fits into the socket's receive
buffer. If not, it proceeds and tries to wait for the memory under:
rmem + skb->truesize > READ_ONCE(sk->sk_rcvbuf)
The checks don't cover the case when skb->truesize + sk->sk_rmem_alloc is
exactly equal to sk->sk_rcvbuf. In that case the function neither accepts
the skb nor manages to reschedule the task - it is simply called again in
a retry loop indefinitely, which is caught as:
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 0-....: (25999 ticks this GP) idle=ef2/1/0x4000000000000000 softirq=262269/262269 fqs=6212
(t=26000 jiffies g=230833 q=259957)
NMI backtrace for cpu 0
CPU: 0 PID: 22 Comm: kauditd Not tainted 5.10.240 #68
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-4.fc42 04/01/2014
Call Trace:
<IRQ>
dump_stack lib/dump_stack.c:120
nmi_cpu_backtrace.cold lib/nmi_backtrace.c:105
nmi_trigger_cpumask_backtrace lib/nmi_backtrace.c:62
rcu_dump_cpu_stacks kernel/rcu/tree_stall.h:335
rcu_sched_clock_irq.cold kernel/rcu/tree.c:2590
update_process_times kernel/time/timer.c:1953
tick_sched_handle kernel/time/tick-sched.c:227
tick_sched_timer kernel/time/tick-sched.c:1399
__hrtimer_run_queues kernel/time/hrtimer.c:1652
hrtimer_interrupt kernel/time/hrtimer.c:1717
__sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1113
asm_call_irq_on_stack arch/x86/entry/entry_64.S:808
</IRQ>
netlink_attachskb net/netlink/af_netlink.c:1234
netlink_unicast net/netlink/af_netlink.c:1349
kauditd_send_queue kernel/audit.c:776
kauditd_thread kernel/audit.c:897
kthread kernel/kthread.c:328
ret_from_fork arch/x86/entry/entry_64.S:304
Restore the original behavior of the check, which the commit in the
Fixes tag accidentally missed when restructuring the code.
Found by Linux Verification Center (linuxtesting.org). |
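Illustrative pseudocode of the boundary gap, not the exact
af_netlink.c code; wait_for_memory() stands in for the real
sleep-and-retry logic:

	rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
	if (rmem < READ_ONCE(sk->sk_rcvbuf))
		return 0;			/* fits: skb accepted */

	atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
	rmem = atomic_read(&sk->sk_rmem_alloc);
	if (rmem + skb->truesize > READ_ONCE(sk->sk_rcvbuf))
		return wait_for_memory();	/* sleeps, then retries */

	/* rmem + skb->truesize == sk->sk_rcvbuf: neither branch is
	 * taken, so the caller retries immediately and forever */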
| In the Linux kernel, the following vulnerability has been resolved:
net: ftgmac100: fix potential NULL pointer access in ftgmac100_phy_disconnect
After the call to phy_disconnect(), netdev->phydev is reset to NULL, so
fixed_phy_unregister() would be called with a NULL pointer as its
argument. Therefore cache the phy_device before this call. |
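A minimal sketch of the described fix:

	struct phy_device *phydev = netdev->phydev;

	phy_disconnect(netdev->phydev);	/* resets netdev->phydev to NULL */
	if (phydev)
		fixed_phy_unregister(phydev);	/* cached pointer is valid */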
| In the Linux kernel, the following vulnerability has been resolved:
LoongArch: BPF: Fix jump offset calculation in tailcall
The extra pass of bpf_int_jit_compile() skips JIT context initialization,
which essentially skips the offset calculation and leaves out_offset = -1.
As a result, the jmp_offset in emit_bpf_tail_call, calculated by
"#define jmp_offset (out_offset - (cur_offset))",
is a negative number, which is wrong. The final generated assembly is
as follows.
54: bgeu $a2, $t1, -8 # 0x0000004c
58: addi.d $a6, $s5, -1
5c: bltz $a6, -16 # 0x0000004c
60: alsl.d $t2, $a2, $a1, 0x3
64: ld.d $t2, $t2, 264
68: beq $t2, $zero, -28 # 0x0000004c
Before applying this patch, the following test case reveals a soft
lockup issue.
cd tools/testing/selftests/bpf/
./test_progs --allow=tailcalls/tailcall_bpf2bpf_1
dmesg:
watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [test_progs:25056] |
| In the Linux kernel, the following vulnerability has been resolved:
net: hibmcge: fix rtnl deadlock issue
Currently, the hibmcge netdev acquires the rtnl_lock in
pci_error_handlers.reset_prepare() and releases it in
pci_error_handlers.reset_done().
However, in the PCI framework:
pci_reset_bus - __pci_reset_slot - pci_slot_save_and_disable_locked -
pci_dev_save_and_disable - err_handler->reset_prepare(dev);
In pci_slot_save_and_disable_locked():
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
	if (!dev->slot || dev->slot != slot)
		continue;
	pci_dev_save_and_disable(dev);
	if (dev->subordinate)
		pci_bus_save_and_disable_locked(dev->subordinate);
}
This will iterate through all devices under the current bus and execute
err_handler->reset_prepare(), causing two devices of the hibmcge driver
to sequentially request the rtnl_lock, leading to a deadlock.
Since the driver now executes netif_device_detach()
before the reset process, it will not run concurrently with
other netdev APIs, so there is no need to hold the rtnl_lock now.
Therefore, this patch removes the rtnl_lock during the reset process and
adjusts the position of HBG_NIC_STATE_RESETTING to ensure
that multiple resets are not executed concurrently. |
| In the Linux kernel, the following vulnerability has been resolved:
net: hibmcge: fix the division by zero issue
When the network port is down, the queue is released, and ring->len is 0.
In debugfs, hbg_get_queue_used_num() will be called,
which may lead to a division-by-zero issue.
This patch adds a check: if ring->len is 0,
hbg_get_queue_used_num() directly returns 0. |
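A hedged sketch; the ring field names are illustrative. The modulo by
ring->len is the division that traps once the queue has been released:

	static u32 hbg_get_queue_used_num(const struct hbg_ring *ring)
	{
		if (!ring->len)
			return 0;	/* queue released: nothing in use */

		return (ring->ntu + ring->len - ring->ntc) % ring->len;
	}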
| In the Linux kernel, the following vulnerability has been resolved:
net: kcm: Fix race condition in kcm_unattach()
syzbot found a race condition when kcm_unattach(psock)
and kcm_release(kcm) are executed at the same time.
kcm_unattach() is missing a check of the flag
kcm->tx_stopped before calling queue_work().
If the kcm has a reserved psock, kcm_unattach() might get executed
between cancel_work_sync() and unreserve_psock() in kcm_release(),
requeuing kcm->tx_work right before kcm gets freed in kcm_done().
Remove kcm->tx_stopped and replace it with the less error-prone
disable_work_sync(). |
| In the Linux kernel, the following vulnerability has been resolved:
drm/xe/migrate: prevent infinite recursion
If the buf + offset is not aligned to XE_CACHELINE_BYTES we fall back to
using a bounce buffer. However, the bounce buffer here is allocated on
the stack, and its only alignment requirement is that it's
naturally aligned to u8, not XE_CACHELINE_BYTES. If the bounce
buffer is also misaligned we then recurse back into the function again,
but the new bounce buffer might also not be aligned, and might never
be until we eventually blow through the stack, as we keep recursing.
Instead of using the stack, use kmalloc(), which should respect the
power-of-two alignment request here. Fixes a kernel panic when
triggering this path through eudebug.
v2 (Stuart):
- Add build bug check for power-of-two restriction
- s/EINVAL/ENOMEM/
(cherry picked from commit 38b34e928a08ba594c4bbf7118aa3aadacd62fff) |
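A hedged sketch of the v2 approach: kmalloc() returns naturally
aligned memory for power-of-two sizes, which the stack array could not
guarantee.

	BUILD_BUG_ON(!is_power_of_2(XE_CACHELINE_BYTES));

	bounce = kmalloc(XE_CACHELINE_BYTES, GFP_KERNEL);
	if (!bounce)
		return -ENOMEM;	/* v2: -ENOMEM rather than -EINVAL */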
| In the Linux kernel, the following vulnerability has been resolved:
x86/fpu: Fix NULL dereference in avx512_status()
Problem
-------
With CONFIG_X86_DEBUG_FPU enabled, reading /proc/[kthread]/arch_status
causes a warning and a NULL pointer dereference.
This is because the AVX-512 timestamp code uses x86_task_fpu() but
doesn't check it for NULL. CONFIG_X86_DEBUG_FPU addles that function
for kernel threads (PF_KTHREAD specifically), making it return NULL.
The point of the warning was to ensure that kernel threads only access
task->fpu after going through kernel_fpu_begin()/_end(). Note: all
kernel tasks exposed in /proc have a valid task->fpu.
Solution
--------
One option is to silence the warning and check for NULL from
x86_task_fpu(). However, that warning is fairly fresh and seems like a
defense against misuse of the FPU state in kernel threads.
Instead, stop outputting AVX-512_elapsed_ms for kernel threads
altogether. The data was garbage anyway because avx512_timestamp is
only updated for user threads, not kernel threads.
If anyone ever wants to track kernel thread AVX-512 use, they can come
back later and do it properly, separate from this bug fix.
[ dhansen: mostly rewrite changelog ] |
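A minimal sketch of the chosen fix, assuming the reporting helper's
shape; the real function layout may differ:

	static void avx512_status(struct seq_file *m, struct task_struct *task)
	{
		/* kthreads: x86_task_fpu() may be NULL under
		 * CONFIG_X86_DEBUG_FPU, and avx512_timestamp is never
		 * updated for them anyway -- report nothing */
		if (task->flags & PF_KTHREAD)
			return;

		/* ... existing AVX-512_elapsed_ms reporting ... */
	}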
| In the Linux kernel, the following vulnerability has been resolved:
i2c: core: Fix double-free of fwnode in i2c_unregister_device()
Before commit df6d7277e552 ("i2c: core: Do not dereference fwnode in struct
device"), i2c_unregister_device() only called fwnode_handle_put() on
of_node-s in the form of calling of_node_put(client->dev.of_node).
But after this commit the i2c_client's fwnode now unconditionally gets
fwnode_handle_put() called on it.
When the i2c_client has no primary (ACPI / OF) fwnode but it does have
a software fwnode, the software node will be the primary node and
fwnode_handle_put() will put() it.
But device_remove_software_node() will also put() the software fwnode,
leading to a double free:
[ 82.665598] ------------[ cut here ]------------
[ 82.665609] refcount_t: underflow; use-after-free.
[ 82.665808] WARNING: CPU: 3 PID: 1502 at lib/refcount.c:28 refcount_warn_saturate+0xba/0x11
...
[ 82.666830] RIP: 0010:refcount_warn_saturate+0xba/0x110
...
[ 82.666962] <TASK>
[ 82.666971] i2c_unregister_device+0x60/0x90
Fix this by not calling fwnode_handle_put() when the primary fwnode is
a software-node. |
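A hedged sketch of the described fix in i2c_unregister_device();
is_software_node() is the stock helper for this test:

	fwnode = dev_fwnode(&client->dev);
	if (fwnode && !is_software_node(fwnode))
		fwnode_handle_put(fwnode);	/* software nodes are put()
						 * by device_remove_software_node() */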
| In the Linux kernel, the following vulnerability has been resolved:
nvmet: pci-epf: Do not complete commands twice if nvmet_req_init() fails
Have nvmet_req_init() and req->execute() complete failed commands.
Description of the problem:
nvmet_req_init() calls __nvmet_req_complete() internally upon failure,
e.g., for an unsupported opcode, which invokes the "queue_response"
callback. This results in nvmet_pci_epf_queue_response() being called,
which will call nvmet_pci_epf_complete_iod() if data_len is 0 or if
dma_dir is different from DMA_TO_DEVICE. This causes a double completion,
as nvmet_pci_epf_exec_iod_work() also calls nvmet_pci_epf_complete_iod()
when nvmet_req_init() fails.
Steps to reproduce:
On the host, send a command with an unsupported opcode using nvme-cli,
for example the admin command "security receive":
$ sudo nvme security-recv /dev/nvme0n1 -n1 -x4096
This triggers a double completion: nvmet_req_init() fails and
nvmet_pci_epf_queue_response() is called; at this point iod->dma_dir is
still in its default state of DMA_NONE, as set in
nvmet_pci_epf_alloc_iod(), so nvmet_pci_epf_complete_iod() is called.
Because nvmet_req_init() failed, nvmet_pci_epf_complete_iod() is also
called in nvmet_pci_epf_exec_iod_work(), leading to a double completion.
This not only sends two completions to the host but also corrupts the
state of the PCI NVMe target leading to kernel oops.
This patch lets nvmet_req_init() and req->execute() complete all failed
commands, and removes the double completion case in
nvmet_pci_epf_exec_iod_work() therefore fixing the edge cases where
double completions occurred. |
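A hedged sketch of the fix's effect in nvmet_pci_epf_exec_iod_work();
the arguments to nvmet_req_init() are elided as illustrative:

	if (!nvmet_req_init(req, cq, sq, ops)) {
		/* was: nvmet_pci_epf_complete_iod(iod);
		 * the nvmet core already queued the completion via the
		 * queue_response callback -- completing here again
		 * corrupted the target state */
		return;
	}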