Instead of using a red-black tree, rely on the VM system to store
kmem-specific private data.
Don't encourage anyone to use non-reclaimable pools of resources; it's
a Bad Thing To Do.
* kern/slab.c (kmem_cache_init): Relax alignment restriction.
* kern/thread.c (stack_cache): New variable.
(stack_alloc): Use the slab allocator.
(stack_collect): Adjust accordingly.
(thread_init): Initialize `stack_cache'.
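
For orientation: kernel stacks are fixed-size objects, which is exactly
what an object cache is good at recycling. The hand-rolled free-list
pattern that a dedicated slab cache replaces and generalizes looks
roughly like the following user-space sketch (the size and all names
are invented for illustration; this is not the kernel code):

    #include <stdlib.h>

    #define STACK_SIZE (4 * 4096)       /* illustrative size only */

    /* Minimal analogue of an object cache: keep released stacks on a
       free list instead of handing them back to the system allocator. */
    struct free_stack { struct free_stack *next; };

    static struct free_stack *free_list;

    static void *stack_alloc(void)
    {
        if (free_list != NULL) {
            void *s = free_list;

            free_list = free_list->next;
            return s;
        }
        return malloc(STACK_SIZE);
    }

    static void stack_free(void *s)
    {
        struct free_stack *f = s;

        f->next = free_list;
        free_list = f;
    }

    int main(void)
    {
        void *s = stack_alloc();

        if (s == NULL)
            return 1;
        stack_free(s);                  /* cached, not released */
        return stack_alloc() == s ? 0 : 1;
    }

A real slab cache adds what this sketch lacks: per-CPU pools, reclaim
under memory pressure, and optional debugging checks.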
* kern/slab.c (host_slab_info): Fix locking.
Import the macro definitions from the x15 kernel project and replace
the similar definitions littered all over the tree with them.
Importing this file will make importing code from the x15 kernel
easier: we are already using its red-black tree implementation and
its slab allocator, and we will import even more code from it in the
near future.
* kern/list.h: Do not define `structof', include `macros.h' instead.
* kern/rbtree.h: Likewise.
* kern/slab.c: Do not define `ARRAY_SIZE', include `macros.h' instead.
* i386/grub/misc.h: Likewise.
* i386/i386/xen.h: Do not define `barrier', include `macros.h' instead.
* kern/macro_help.h: Delete file. Replaced by `macros.h'.
* kern/macros.h: New file.
* Makefrag.am (libkernel_a_SOURCES): Add new file, remove old file.
* device/dev_master.h: Adjust accordingly.
* device/io_req.h: Likewise.
* device/net_io.h: Likewise.
* i386/intel/read_fault.c: Likewise.
* ipc/ipc_kmsg.h: Likewise.
* ipc/ipc_mqueue.h: Likewise.
* ipc/ipc_object.h: Likewise.
* ipc/ipc_port.h: Likewise.
* ipc/ipc_space.h: Likewise.
* ipc/ipc_splay.c: Likewise.
* ipc/ipc_splay.h: Likewise.
* kern/assert.h: Likewise.
* kern/ast.h: Likewise.
* kern/pc_sample.h: Likewise.
* kern/refcount.h: Likewise.
* kern/sched.h: Likewise.
* kern/sched_prim.c: Likewise.
* kern/timer.c: Likewise.
* kern/timer.h: Likewise.
* vm/vm_fault.c: Likewise.
* vm/vm_map.h: Likewise.
* vm/vm_object.h: Likewise.
* vm/vm_page.h: Likewise.
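
The definitions being consolidated are of this general shape; treat the
following as an illustrative, compilable sketch, since the exact
spellings in kern/macros.h may differ:

    #include <stddef.h>                 /* offsetof */
    #include <stdio.h>

    /* Address of the structure containing a given member (the classic
       container_of idiom), as used by the list and rbtree code. */
    #define structof(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* Number of elements in a statically sized array. */
    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    /* Compiler barrier: keeps the compiler from moving memory accesses
       across this point; no hardware ordering is implied. */
    #define barrier() __asm__ __volatile__("" : : : "memory")

    struct node { int key; };
    struct item { char name[8]; struct node node; };

    int main(void)
    {
        struct item it = { "demo", { 42 } };
        struct node *n = &it.node;
        int table[16];

        printf("%s %zu\n",
               structof(n, struct item, node)->name, ARRAY_SIZE(table));
        barrier();
        return 0;
    }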
* kern/slab.c (kmem_cache_compute_sizes): Initialize optimal_size and
assert that a size is selected.
* kern/slab.c (KMEM_MAP_SIZE): Decrease from 128MiB to 96MiB.
The slab allocator relies on the fact that kmem_cache_error does not
return. Previously, kmem_error was using printf. Use panic instead.
Found using the Clang Static Analyzer.
* kern/slab.c (kmem_error): Use panic instead of printf.
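
A minimal illustration of why the analyzer cares: callers treat the
error helper as not returning, so it must be declared and implemented
that way. The names below are invented, not the actual kern/slab.c
code:

    #include <stdio.h>
    #include <stdlib.h>

    static void report_fatal(const char *what) __attribute__((noreturn));

    /* Stands in for kmem_error(): merely printing would let callers
       continue as if the fatal condition had been handled. */
    static void report_fatal(const char *what)
    {
        fprintf(stderr, "fatal: %s\n", what);
        abort();                        /* panic() in the kernel */
    }

    static int check(int ok)
    {
        if (!ok)
            report_fatal("corrupted buffer");
        return 1;                       /* reachable only when ok != 0 */
    }

    int main(void)
    {
        return check(1) ? 0 : 1;
    }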
* kern/slab.c (kmem_cache_error): Use kmem_warn instead of kmem_error
to print the cache name and its address.
info_size may be read while still holding an indeterminate value,
which triggers a warning. Quiet the warning by initializing it to zero.
* kern/slab.c (info_size): Initialize to zero.
optimal_embed may be read while still holding an indeterminate value,
which triggers a warning. Quiet the warning by initializing it to zero.
* kern/slab.c (optimal_embed): Initialize to zero.
* kern/slab.c (slab_info): Fix format for vm_size_t.
There is currently no actual use of constructors, which is why this
bug has long been overlooked.
* kern/slab.c (kmem_cpu_pool_fill): Call constructor on buffers unless
verification is enabled for the cache, or the constructor is NULL.
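
The fix amounts to running the constructor as objects are pulled into
the per-CPU array, except when the cache verifies buffers (construction
is handled elsewhere on that path) or no constructor was registered.
A compilable user-space analogue of that fill loop, with invented
names:

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*ctor_fn)(void *);

    struct mini_cache {                 /* invented stand-in */
        ctor_fn ctor;
        int verify;                     /* non-zero: verification enabled */
    };

    /* Fill pool[] from get_obj(), constructing each object unless
       verification is enabled or no constructor is registered. */
    static int pool_fill(struct mini_cache *cache, void *pool[], int want,
                         void *(*get_obj)(void))
    {
        int i;

        for (i = 0; i < want; i++) {
            void *obj = get_obj();

            if (obj == NULL)
                break;
            if (!cache->verify && cache->ctor != NULL)
                cache->ctor(obj);
            pool[i] = obj;
        }
        return i;                       /* objects actually transferred */
    }

    static void *grab(void) { return malloc(64); }
    static void zero_ctor(void *p) { ((char *)p)[0] = 0; }

    int main(void)
    {
        struct mini_cache c = { zero_ctor, 0 };
        void *pool[4];
        int n = pool_fill(&c, pool, 4, grab);

        printf("filled %d objects\n", n);
        while (n > 0)
            free(pool[--n]);
        return 0;
    }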
* kern/slab.c (kmem_cache_free): Relock cache before retrying releasing
an object to the CPU pool layer.
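
The bug class here is dropping a lock for a slow path and then retrying
the fast path without reacquiring it. A small pthread-based analogue of
the intended pattern (all names invented):

    #include <pthread.h>

    struct cache_like {                 /* invented stand-in */
        pthread_mutex_t lock;
        int pool_full;                  /* protected by lock */
    };

    /* Slow path: hand the pool contents to the slab layer (simplified). */
    static void drain_pool(struct cache_like *c)
    {
        pthread_mutex_lock(&c->lock);
        c->pool_full = 0;
        pthread_mutex_unlock(&c->lock);
    }

    /* Free an object: try the fast pool layer first; when the pool is
       full, drop the lock for the slow path, then reacquire it before
       retrying (the reacquisition is what the fix restores). */
    static void cache_free_sketch(struct cache_like *c)
    {
        pthread_mutex_lock(&c->lock);
        for (;;) {
            if (!c->pool_full) {
                /* ... place the object in the pool ... */
                pthread_mutex_unlock(&c->lock);
                return;
            }
            pthread_mutex_unlock(&c->lock);
            drain_pool(c);
            pthread_mutex_lock(&c->lock);   /* relock before retrying */
        }
    }

    int main(void)
    {
        struct cache_like c = { PTHREAD_MUTEX_INITIALIZER, 1 };

        cache_free_sketch(&c);
        return 0;
    }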
This reverts a change, introduced when reworking slab list handling,
that made the allocator store slabs in LIFO order regardless of their
reference count. While that is fine for free slabs, it actually
increased fragmentation for partial slabs.
* kern/slab.c (kmem_cache_alloc_from_slab): Insert slabs that become partial
at the end of the partial slabs list.
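
In list terms, the effect of the change: a slab that has just gone from
free to partial is queued behind the existing partial slabs, so
allocations are served from slabs that are already in use before the
newcomer, which is still mostly free; the idea is to give mostly-free
slabs a better chance to drain back to the free list. A small
linked-list analogue (not the kernel's list.h):

    #include <stdio.h>

    struct slab_like {                  /* invented stand-in */
        const char *name;
        struct slab_like *next;
    };

    /* Append at the tail: allocations that scan from the head keep
       filling older partial slabs before touching the newcomer. */
    static void insert_tail(struct slab_like **head, struct slab_like *s)
    {
        while (*head != NULL)
            head = &(*head)->next;
        s->next = NULL;
        *head = s;
    }

    int main(void)
    {
        struct slab_like *partial = NULL;
        struct slab_like older = { "older partial slab", NULL };
        struct slab_like fresh = { "newly partial slab", NULL };

        insert_tail(&partial, &older);
        insert_tail(&partial, &fresh);
        /* Allocation policy: always take from the head of the list. */
        printf("next allocation comes from: %s\n", partial->name);
        return 0;
    }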
This change increases clarity.
* kern/list.h (list_insert): Rename to ...
(list_insert_head): ... this.
* kern/slab.c: Update calls to list_insert.
* kern/slab.c (kalloc_init): Remove unused variables.
Instead of walking the list of free slabs while holding the cache lock,
detach the list from the cache, directly compute the final count values,
and destroy the slabs after releasing the cache lock.
* kern/slab.c (kmem_cache_reap): Optimize.
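
The optimization is the usual detach-under-the-lock pattern: splice the
whole free-slab list out of the cache while holding the lock, update the
counters once, and only then destroy the detached slabs with the lock
released. A pthread-based analogue with invented names:

    #include <pthread.h>
    #include <stdlib.h>

    struct slab_like { struct slab_like *next; };

    struct cache_like {                 /* invented stand-in */
        pthread_mutex_t lock;
        struct slab_like *free_slabs;   /* protected by lock */
        unsigned long nr_free_slabs;    /* protected by lock */
    };

    static void cache_reap_sketch(struct cache_like *c)
    {
        struct slab_like *list, *s;

        /* Short critical section: detach the list, fix the counters. */
        pthread_mutex_lock(&c->lock);
        list = c->free_slabs;
        c->free_slabs = NULL;
        c->nr_free_slabs = 0;
        pthread_mutex_unlock(&c->lock);

        /* The expensive part (returning memory) runs without the lock. */
        while (list != NULL) {
            s = list;
            list = list->next;
            free(s);                    /* stands in for slab destruction */
        }
    }

    int main(void)
    {
        struct cache_like c = { PTHREAD_MUTEX_INITIALIZER, NULL, 0 };
        struct slab_like *s = malloc(sizeof(*s));

        if (s == NULL)
            return 1;
        s->next = NULL;
        c.free_slabs = s;
        c.nr_free_slabs = 1;
        cache_reap_sketch(&c);
        return 0;
    }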
Don't enforce strong ordering of partial slabs. Separating partial slabs
from free slabs is already effective against fragmentation, and sorting
would sometimes cause pathological scalability issues. In addition, store
new slabs (whether free or partial) in LIFO order for better cache usage.
* kern/slab.c (kmem_cache_grow): Insert new slab at the head of the slabs list.
(kmem_cache_alloc_from_slab): Likewise. In addition, don't sort partial slabs.
(kmem_cache_free_to_slab): Likewise.
* kern/slab.h: Remove comment about partial slabs sorting.
* kern/slab.c (kmem_cache_free): Lock cache before releasing an object to
the slab layer.
This problem went unnoticed because simple locks are no-ops on
uniprocessor configurations.
* kern/slab.c (slab_collect): Fix lock name to kmem_cache_list_lock.
(host_slab_info): Likewise.
The purpose of this function is to allow kernel code to display the state
of the slab caches in situations where the host_slab_info RPC wouldn't be
available, e.g. before a panic.
* kern/slab.c (slab_info): New function.
* kern/slab.h: Add declaration for slab_info.
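
A rough idea of the kind of console dump this enables; every name below
(the snapshot structure, its fields, the example values) is hypothetical
and only meant to convey the shape of the output, not the actual slab.c
implementation:

    #include <stdio.h>

    struct cache_stats {                /* hypothetical snapshot */
        const char *name;
        unsigned long obj_size;
        unsigned long nr_slabs;
        unsigned long nr_objs;
        unsigned long nr_free_objs;
    };

    /* One line per cache, similar in spirit to a dump printed on the
       console when the RPC path is unusable. */
    static void print_slab_info(const struct cache_stats *caches, int n)
    {
        int i;

        printf("%-16s %8s %6s %8s %8s\n",
               "cache", "objsize", "slabs", "objs", "free");
        for (i = 0; i < n; i++)
            printf("%-16s %8lu %6lu %8lu %8lu\n",
                   caches[i].name, caches[i].obj_size,
                   caches[i].nr_slabs, caches[i].nr_objs,
                   caches[i].nr_free_objs);
    }

    int main(void)
    {
        /* Made-up example values, for illustration only. */
        struct cache_stats demo[] = {
            { "vm_map",   232, 3, 48, 5 },
            { "stack",   4096, 8,  8, 1 },
        };

        print_slab_info(demo, (int)(sizeof(demo) / sizeof(demo[0])));
        return 0;
    }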
The slab garbage collector uses sched_tick as its time reference, but
that counter is only incremented once per second, while the interval is
expressed in clock ticks. Use the proper time reference instead.
* kern/slab.c (kmem_gc_last_tick): Declare as unsigned long.
(slab_collect): Use elapsed_ticks instead of sched_tick.
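
To make the unit mismatch concrete: with a 5 second interval expressed
in clock ticks and a counter that only advances once per second, the
comparison fires far too late. A small arithmetic sketch (the tick rate
is assumed for illustration):

    #include <stdio.h>

    int main(void)
    {
        const unsigned long hz = 100;            /* assumed tick rate */
        const unsigned long interval = 5 * hz;   /* 5 s in clock ticks */

        /* Wrong reference: a counter bumped once per second needs
           `interval' increments, i.e. 500 seconds, to trigger the GC. */
        printf("with sched_tick:    %lu seconds between passes\n",
               interval);

        /* Right reference: elapsed_ticks advances hz times per second,
           so the same interval really is 5 seconds. */
        printf("with elapsed_ticks: %lu seconds between passes\n",
               interval / hz);
        return 0;
    }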
* kern/slab.c (KMEM_GC_INTERVAL): Increase to 5 seconds.
* ipc/ipc_table.c: Add #include <kern/slab.h>.
(ipc_table_alloc): Use kmem_map instead of kalloc_map when allocating
a table.
(ipc_table_realloc): Likewise for reallocation.
(ipc_table_free): Likewise for release.
* kern/kalloc.h (kalloc_map): Remove declaration.
* kern/slab.c (KMEM_MAP_SIZE): Increase to 128 MiB.
(KALLOC_MAP_SIZE): Remove macro.
(kalloc_map_store): Remove variable.
(kalloc_map): Likewise.
(kalloc_pagealloc): Use kmem_map instead of kalloc_map for general
purpose allocations.
(kalloc_pagefree): Likewise.
(kalloc_init): Remove the creation of kalloc_map.
Maksym and I have assigned copyright to the Free Software Foundation.
In addition, restore the original upstream copyrights for correctness.
* kern/list.h: Fix copyright assignment.
* kern/rbtree.c: Likewise.
* kern/rbtree.h: Likewise.
* kern/rbtree_i.h: Likewise.
* kern/slab.c: Likewise.
* kern/slab.h: Likewise.
TODO: propagate the format fixes upstream.
* i386/include/mach/i386/vm_types.h (vm_offset_t): Define to unsigned long.
(signed32_t): Define to signed int.
(unsigned32_t): Define to unsigned int.
* i386/include/mach/sa/stdarg.h (__va_size): Use sizeof(unsigned long)-1
instead of 3.
* include/mach/port.h (mach_port_t): Define to vm_offset_t instead of
natural_t.
* include/sys/types.h (size_t): Define to unsigned long instead of
natural_t.
* linux/src/include/asm-i386/posix_types.h (__kernel_size_t): Define to
unsigned long.
(__kernel_ssize_t): Define to long.
* linux/src/include/linux/stddef.h (size_t): Define to unsigned long.
* device/dev_pager.c (dev_pager_hash): Cast port to vm_offset_t instead
of natural_t.
(device_pager_data_request): Fix format.
* device/ds_routines.c (ds_no_senders): Fix format.
* i386/i386/io_map.c (io_map): Likewise.
* i386/i386at/autoconf.c (take_dev_irq): Likewise.
* i386/i386at/com.c (comattach): Likewise.
* i386/i386at/lpr.c (lprattach): Likewise.
* i386/i386at/model_dep.c (mem_size_init, c_boot_entry): Likewise.
* i386/intel/pmap.c (pmap_enter): Likewise.
* ipc/ipc_notify.c (ipc_notify_port_deleted, ipc_notify_msg_accepted,
ipc_notify_dead_name): Likewise.
* ipc/mach_port.c (mach_port_destroy, mach_port_deallocate): Likewise.
* kern/ipc_kobject.c (ipc_kobject_destroy): Likewise.
* kern/slab.c (kalloc_init): Likewise.
* vm/vm_fault.c (vm_fault_page): Likewise.
* vm/vm_map.c (vm_map_pmap_enter): Likewise.
* xen/block.c (device_read): Likewise.
* device/net_io.c (bpf_match): Take unsigned long * instead of
unsigned int *.
(bpf_do_filter): Make mem unsigned long instead of long.
* i386/i386/ktss.c (ktss_init): Cast pointer to unsigned long instead of
unsigned.
* i386/i386/pcb.c (stack_attach, switch_ktss): Cast pointers to long instead of
int.
* i386/i386/trap.c (dump_ss): Likewise.
* ipc/ipc_hash.c (IH_LOCAL_HASH): Cast object to vm_offset_t.
* ipc/mach_msg.c (mach_msg_receive, mach_msg_receive_continue): Cast kmsg to
vm_offset_t instead of natural_t.
* kern/pc_sample.c (take_pc_sample): Cast to vm_offset_t instead of
natural_t.
* kern/boot_script.c (sym, arg): Set type of `val' field to long instead of int.
(create_task, builtin_symbols, boot_script_parse_line,
boot_script_define_function): Cast to long instead of int.
* kern/bootstrap.c (bootstrap_create): Likewise.
* kern/sched_prim.c (decl_simple_lock_data): Likewise.
* kern/printf.c (vsnprintf): Set size type to size_t.
* kern/printf.h (vsnprintf): Likewise.
* vm/vm_map.h (kentry_data_size): Fix type to vm_size_t.
* vm/vm_object.c (vm_object_pmap_protect_by_page): Fix size parameter type
to vm_size_t.
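
Most of these hunks follow the same two patterns: print wide kernel
types with a long-sized conversion, and go through unsigned long rather
than int when a pointer has to be treated as an integer. A standalone
illustration (typedefs local to the example, not the kernel headers):

    #include <stdio.h>

    typedef unsigned long vm_offset_t;  /* matching the new definition */
    typedef unsigned long vm_size_t;

    int main(void)
    {
        int object;
        vm_size_t size = 123456;

        /* %lu matches the unsigned long definition of vm_size_t;
           %d or %u would be too narrow on LP64 targets. */
        printf("size = %lu\n", size);

        /* Pointers are converted through vm_offset_t (unsigned long),
           never through int, which cannot hold a 64-bit address. */
        printf("addr = 0x%lx\n", (vm_offset_t)&object);
        return 0;
    }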
As the slab allocator is intended to completely replace the zone
allocator, remove the latter along the way. So long to the venerable
code!
* Makefrag.am (libkernel_a_SOURCES): Add kern/slab.{c,h}, remove kern/kalloc.c
and kern/zalloc.{c,h}.
* configfrag.ac (SLAB_VERIFY, SLAB_USE_CPU_POOLS): Add defines.
* i386/Makefrag.am (libkernel_a_SOURCES): Remove i386/i386/zalloc.h.
* i386/configfrag.ac (CPU_L1_SHIFT): Remove define.
* include/mach_debug/slab_info.h: New file.
* kern/slab.c: Likewise.
* kern/slab.h: Likewise.
* i386/i386/zalloc.h: Remove file.
* include/mach_debug/zone_info.h: Likewise.
* kern/kalloc.c: Likewise.
* kern/zalloc.c: Likewise.
* kern/zalloc.h: Likewise.