Merge branch 'unstable' of https://github.com/antirez/redis into unstable

commit 89a9e5a9a2
Guy Benoish, 2017-05-09 18:42:32 +03:00
178 changed files with 8860 additions and 17396 deletions

deps/jemalloc/.appveyor.yml vendored

@@ -1,28 +0,0 @@
version: '{build}'

environment:
  matrix:
  - MSYSTEM: MINGW64
    CPU: x86_64
    MSVC: amd64
  - MSYSTEM: MINGW32
    CPU: i686
    MSVC: x86
  - MSYSTEM: MINGW64
    CPU: x86_64
  - MSYSTEM: MINGW32
    CPU: i686

install:
  - set PATH=c:\msys64\%MSYSTEM%\bin;c:\msys64\usr\bin;%PATH%
  - if defined MSVC call "c:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" %MSVC%
  - if defined MSVC pacman --noconfirm -Rsc mingw-w64-%CPU%-gcc gcc
  - pacman --noconfirm -Suy mingw-w64-%CPU%-make

build_script:
  - bash -c "autoconf"
  - bash -c "./configure"
  - mingw32-make -j3
  - file lib/jemalloc.dll
  - mingw32-make -j3 tests
  - mingw32-make -k check

deps/jemalloc/.gitignore vendored

@@ -73,19 +73,3 @@ test/include/test/jemalloc_test_defs.h
/test/unit/*.out
/VERSION
-*.pdb
-*.sdf
-*.opendb
-*.opensdf
-*.cachefile
-*.suo
-*.user
-*.sln.docstates
-*.tmp
-/msvc/Win32/
-/msvc/x64/
-/msvc/projects/*/*/Debug*/
-/msvc/projects/*/*/Release*/
-/msvc/projects/*/*/Win32/
-/msvc/projects/*/*/x64/

deps/jemalloc/.travis.yml vendored

@@ -1,29 +0,0 @@
language: c

matrix:
  include:
    - os: linux
      compiler: gcc
    - os: linux
      compiler: gcc
      env:
        - EXTRA_FLAGS=-m32
      addons:
        apt:
          packages:
            - gcc-multilib
    - os: osx
      compiler: clang
    - os: osx
      compiler: clang
      env:
        - EXTRA_FLAGS=-m32

before_script:
  - autoconf
  - ./configure${EXTRA_FLAGS:+ CC="$CC $EXTRA_FLAGS"}
  - make -j3
  - make -j3 tests

script:
  - make check

deps/jemalloc/COPYING vendored

@@ -1,10 +1,10 @@
Unless otherwise specified, files in the jemalloc source distribution are
subject to the following license:
--------------------------------------------------------------------------------
-Copyright (C) 2002-2016 Jason Evans <jasone@canonware.com>.
+Copyright (C) 2002-2015 Jason Evans <jasone@canonware.com>.
All rights reserved.
Copyright (C) 2007-2012 Mozilla Foundation.  All rights reserved.
-Copyright (C) 2009-2016 Facebook, Inc.  All rights reserved.
+Copyright (C) 2009-2015 Facebook, Inc.  All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

deps/jemalloc/ChangeLog vendored

@@ -4,226 +4,6 @@ brevity.  Much more detail can be found in the git revision history:
https://github.com/jemalloc/jemalloc

(The 4.4.0 through 4.0.4 entries below are removed by this diff; the vendored ChangeLog resumes at 4.0.3.)
* 4.4.0 (December 3, 2016)
New features:
- Add configure support for *-*-linux-android. (@cferris1000, @jasone)
- Add the --disable-syscall configure option, for use on systems that place
security-motivated limitations on syscall(2). (@jasone)
- Add support for Debian GNU/kFreeBSD. (@thesam)
Optimizations:
- Add extent serial numbers and use them where appropriate as a sort key that
is higher priority than address, so that the allocation policy prefers older
extents. This tends to improve locality (decrease fragmentation) when
memory grows downward. (@jasone)
- Refactor madvise(2) configuration so that MADV_FREE is detected and utilized
on Linux 4.5 and newer. (@jasone)
- Mark partially purged arena chunks as non-huge-page. This improves
interaction with Linux's transparent huge page functionality. (@jasone)
Bug fixes:
- Fix size class computations for edge conditions involving extremely large
allocations. This regression was first released in 4.0.0. (@jasone,
@ingvarha)
- Remove overly restrictive assertions related to the cactive statistic. This
regression was first released in 4.1.0. (@jasone)
- Implement a more reliable detection scheme for os_unfair_lock on macOS.
(@jszakmeister)
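
Editorial note: the MADV_FREE work mentioned in the 4.4.0 entries above boils down to a single madvise(2) flag. A minimal sketch of the call in question (not jemalloc's internal code; the MADV_DONTNEED fallback for older kernels is an assumption for illustration):

    #include <stddef.h>
    #include <sys/mman.h>

    /* MADV_FREE (Linux 4.5+) lets the kernel reclaim the pages lazily; their
     * contents stay valid until the system actually needs the memory. */
    static void lazily_purge(void *addr, size_t len) {
    #ifdef MADV_FREE
        madvise(addr, len, MADV_FREE);
    #else
        madvise(addr, len, MADV_DONTNEED);  /* eager purge on older kernels */
    #endif
    }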
* 4.3.1 (November 7, 2016)
Bug fixes:
- Fix a severe virtual memory leak. This regression was first released in
4.3.0. (@interwq, @jasone)
- Refactor atomic and prng APIs to restore support for 32-bit platforms that
use pre-C11 toolchains, e.g. FreeBSD's mips. (@jasone)
* 4.3.0 (November 4, 2016)
This is the first release that passes the test suite for multiple Windows
configurations, thanks in large part to @glandium setting up continuous
integration via AppVeyor (and Travis CI for Linux and OS X).
New features:
- Add "J" (JSON) support to malloc_stats_print(). (@jasone)
- Add Cray compiler support. (@ronawho)
Optimizations:
- Add/use adaptive spinning for bootstrapping and radix tree node
initialization. (@jasone)
Bug fixes:
- Fix large allocation to search starting in the optimal size class heap,
which can substantially reduce virtual memory churn and fragmentation. This
regression was first released in 4.0.0. (@mjp41, @jasone)
- Fix stats.arenas.<i>.nthreads accounting. (@interwq)
- Fix and simplify decay-based purging. (@jasone)
- Make DSS (sbrk(2)-related) operations lockless, which resolves potential
deadlocks during thread exit. (@jasone)
- Fix over-sized allocation of radix tree leaf nodes. (@mjp41, @ogaun,
@jasone)
- Fix over-sized allocation of arena_t (plus associated stats) data
structures. (@jasone, @interwq)
- Fix EXTRA_CFLAGS to not affect configuration. (@jasone)
- Fix a Valgrind integration bug. (@ronawho)
- Disallow 0x5a junk filling when running in Valgrind. (@jasone)
- Fix a file descriptor leak on Linux. This regression was first released in
4.2.0. (@vsarunas, @jasone)
- Fix static linking of jemalloc with glibc. (@djwatson)
- Use syscall(2) rather than {open,read,close}(2) during boot on Linux. This
works around other libraries' system call wrappers performing reentrant
allocation. (@kspinka, @Whissi, @jasone)
- Fix OS X default zone replacement to work with OS X 10.12. (@glandium,
@jasone)
- Fix cached memory management to avoid needless commit/decommit operations
during purging, which resolves permanent virtual memory map fragmentation
issues on Windows. (@mjp41, @jasone)
- Fix TSD fetches to avoid (recursive) allocation. This is relevant to
non-TLS and Windows configurations. (@jasone)
- Fix malloc_conf overriding to work on Windows. (@jasone)
- Forcibly disable lazy-lock on Windows (was forcibly *enabled*). (@jasone)
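Editorial note: the "J" (JSON) option added in 4.3.0 above is passed through the public malloc_stats_print() API. A minimal sketch, assuming an unprefixed jemalloc build linked with -ljemalloc:

    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        void *p = malloc(1024);            /* some allocator activity to report */
        /* A NULL write callback writes to stderr; "J" selects JSON output. */
        malloc_stats_print(NULL, NULL, "J");
        free(p);
        return 0;
    }
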
* 4.2.1 (June 8, 2016)
Bug fixes:
- Fix bootstrapping issues for configurations that require allocation during
tsd initialization (e.g. --disable-tls). (@cferris1000, @jasone)
- Fix gettimeofday() version of nstime_update(). (@ronawho)
- Fix Valgrind regressions in calloc() and chunk_alloc_wrapper(). (@ronawho)
- Fix potential VM map fragmentation regression. (@jasone)
- Fix opt_zero-triggered in-place huge reallocation zeroing. (@jasone)
- Fix heap profiling context leaks in reallocation edge cases. (@jasone)
* 4.2.0 (May 12, 2016)
New features:
- Add the arena.<i>.reset mallctl, which makes it possible to discard all of
an arena's allocations in a single operation. (@jasone)
- Add the stats.retained and stats.arenas.<i>.retained statistics. (@jasone)
- Add the --with-version configure option. (@jasone)
- Support --with-lg-page values larger than actual page size. (@jasone)
Optimizations:
- Use pairing heaps rather than red-black trees for various hot data
structures. (@djwatson, @jasone)
- Streamline fast paths of rtree operations. (@jasone)
- Optimize the fast paths of calloc() and [m,d,sd]allocx(). (@jasone)
- Decommit unused virtual memory if the OS does not overcommit. (@jasone)
- Specify MAP_NORESERVE on Linux if [heuristic] overcommit is active, in order
to avoid unfortunate interactions during fork(2). (@jasone)
Bug fixes:
- Fix chunk accounting related to triggering gdump profiles. (@jasone)
- Link against librt for clock_gettime(2) if glibc < 2.17. (@jasone)
- Scale leak report summary according to sampling probability. (@jasone)
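Editorial note: the arena.<i>.reset mallctl added in 4.2.0 above is driven through the usual mallctl() namespace. A minimal sketch (arena index 0 is illustrative; the control requires that no allocations from the arena are still in use):

    #include <jemalloc/jemalloc.h>

    /* Discard every allocation backed by arena 0 in one operation. */
    int reset_arena0(void) {
        return mallctl("arena.0.reset", NULL, NULL, NULL, 0);
    }
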
* 4.1.1 (May 3, 2016)
This bugfix release resolves a variety of mostly minor issues, though the
bitmap fix is critical for 64-bit Windows.
Bug fixes:
- Fix the linear scan version of bitmap_sfu() to shift by the proper amount
even when sizeof(long) is not the same as sizeof(void *), as on 64-bit
Windows. (@jasone)
- Fix hashing functions to avoid unaligned memory accesses (and resulting
crashes). This is relevant at least to some ARM-based platforms.
(@rkmisra)
- Fix fork()-related lock rank ordering reversals. These reversals were
unlikely to cause deadlocks in practice except when heap profiling was
enabled and active. (@jasone)
- Fix various chunk leaks in OOM code paths. (@jasone)
- Fix malloc_stats_print() to print opt.narenas correctly. (@jasone)
- Fix MSVC-specific build/test issues. (@rustyx, @yuslepukhin)
- Fix a variety of test failures that were due to test fragility rather than
core bugs. (@jasone)
* 4.1.0 (February 28, 2016)
This release is primarily about optimizations, but it also incorporates a lot
of portability-motivated refactoring and enhancements. Many people worked on
this release, to an extent that even with the omission here of minor changes
(see git revision history), and of the people who reported and diagnosed
issues, so much of the work was contributed that starting with this release,
changes are annotated with author credits to help reflect the collaborative
effort involved.
New features:
- Implement decay-based unused dirty page purging, a major optimization with
mallctl API impact. This is an alternative to the existing ratio-based
unused dirty page purging, and is intended to eventually become the sole
purging mechanism. New mallctls:
+ opt.purge
+ opt.decay_time
+ arena.<i>.decay
+ arena.<i>.decay_time
+ arenas.decay_time
+ stats.arenas.<i>.decay_time
(@jasone, @cevans87)
- Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration. This was motivated by the desire to
specify --with-malloc-conf=purge:decay , since the default must remain
purge:ratio until the 5.0.0 release. (@jasone)
- Add MS Visual Studio 2015 support. (@rustyx, @yuslepukhin)
- Make *allocx() size class overflow behavior defined. The maximum
size class is now less than PTRDIFF_MAX to protect applications against
numerical overflow, and all allocation functions are guaranteed to indicate
errors rather than potentially crashing if the request size exceeds the
maximum size class. (@jasone)
- jeprof:
+ Add raw heap profile support. (@jasone)
+ Add --retain and --exclude for backtrace symbol filtering. (@jasone)
Optimizations:
- Optimize the fast path to combine various bootstrapping and configuration
checks and execute more streamlined code in the common case. (@interwq)
- Use linear scan for small bitmaps (used for small object tracking). In
addition to speeding up bitmap operations on 64-bit systems, this reduces
allocator metadata overhead by approximately 0.2%. (@djwatson)
- Separate arena_avail trees, which substantially speeds up run tree
operations. (@djwatson)
- Use memoization (boot-time-computed table) for run quantization. Separate
arena_avail trees reduced the importance of this optimization. (@jasone)
- Attempt mmap-based in-place huge reallocation. This can dramatically speed
up incremental huge reallocation. (@jasone)
Incompatible changes:
- Make opt.narenas unsigned rather than size_t. (@jasone)
Bug fixes:
- Fix stats.cactive accounting regression. (@rustyx, @jasone)
- Handle unaligned keys in hash(). This caused problems for some ARM systems.
(@jasone, @cferris1000)
- Refactor arenas array. In addition to fixing a fork-related deadlock, this
makes arena lookups faster and simpler. (@jasone)
- Move retained memory allocation out of the default chunk allocation
function, to a location that gets executed even if the application installs
a custom chunk allocation function. This resolves a virtual memory leak.
(@buchgr)
- Fix a potential tsd cleanup leak. (@cferris1000, @jasone)
- Fix run quantization. In practice this bug had no impact unless
applications requested memory with alignment exceeding one page.
(@jasone, @djwatson)
- Fix LinuxThreads-specific bootstrapping deadlock. (Cosmin Paraschiv)
- jeprof:
+ Don't discard curl options if timeout is not defined. (@djwatson)
+ Detect failed profile fetches. (@djwatson)
- Fix stats.arenas.<i>.{dss,lg_dirty_mult,decay_time,pactive,pdirty} for
--disable-stats case. (@jasone)
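Editorial note: the decay mallctls listed in the 4.1.0 entry above are readable like any other control. A minimal sketch that inspects the default decay time (a negative value disables decay-based purging):

    #include <stdio.h>
    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        ssize_t decay_time;
        size_t sz = sizeof(decay_time);
        /* opt.decay_time is the boot-time default, in seconds. */
        if (mallctl("opt.decay_time", &decay_time, &sz, NULL, 0) == 0)
            printf("opt.decay_time: %zd\n", decay_time);
        return 0;
    }
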
* 4.0.4 (October 24, 2015)
This bugfix release fixes another xallocx() regression. No other regressions
have come to light in over a month, so this is likely a good starting point
for people who prefer to wait for "dot one" releases with all the major issues
shaken out.
Bug fixes:
- Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large
allocations that have been randomly assigned an offset of 0 when
--enable-cache-oblivious configure option is enabled.
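
Editorial note: for reference, the xallocx()/MALLOCX_ZERO combination the 4.0.4 fix above concerns attempts an in-place resize and zeroes any newly usable bytes. A minimal sketch:

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        void *p = mallocx(4096, 0);
        /* Try to grow in place to 8 KiB; xallocx() never moves the object
         * and returns the resulting usable size. */
        size_t usable = xallocx(p, 8192, 0, MALLOCX_ZERO);
        (void)usable;   /* usable < 8192 means the in-place grow failed */
        dallocx(p, 0);
        return 0;
    }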
* 4.0.3 (September 24, 2015)
This bugfix release continues the trend of xallocx() and heap profiling fixes.

deps/jemalloc/INSTALL vendored

@@ -35,10 +35,6 @@ any of the following arguments (not a definitive list) to 'configure':
    will cause files to be installed into /usr/local/include, /usr/local/lib,
    and /usr/local/man.
---with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid>
-    Use the specified version string rather than trying to generate one (if in
-    a git repository) or use existing the VERSION file (if present).
--with-rpath=<colon-separated-rpath>
    Embed one or more library paths, so that libjemalloc can find the libraries
    it is linked to.  This works only on ELF-based systems.
@@ -88,14 +84,6 @@ any of the following arguments (not a definitive list) to 'configure':
    versions of jemalloc can coexist in the same installation directory.  For
    example, libjemalloc.so.0 becomes libjemalloc<suffix>.so.0.
---with-malloc-conf=<malloc_conf>
-    Embed <malloc_conf> as a run-time options string that is processed prior to
-    the malloc_conf global variable, the /etc/malloc.conf symlink, and the
-    MALLOC_CONF environment variable.  For example, to change the default chunk
-    size to 256 KiB:
-
-      --with-malloc-conf=lg_chunk:18
--disable-cc-silence
    Disable code that silences non-useful compiler warnings.  This is mainly
    useful during development when auditing the set of warnings that are being
@@ -206,11 +194,6 @@ any of the following arguments (not a definitive list) to 'configure':
    most extreme case increases physical memory usage for the 16 KiB size class
    to 20 KiB.
---disable-syscall
-    Disable use of syscall(2) rather than {open,read,write,close}(2).  This is
-    intended as a workaround for systems that place security limitations on
-    syscall(2).
--with-xslroot=<path>
    Specify where to find DocBook XSL stylesheets when building the
    documentation.
@@ -332,15 +315,6 @@ LDFLAGS="?"
PATH="?"
    'configure' uses this to find programs.
-In some cases it may be necessary to work around configuration results that do
-not match reality.  For example, Linux 4.5 added support for the MADV_FREE flag
-to madvise(2), which can cause problems if building on a host with MADV_FREE
-support and deploying to a target without.  To work around this, use a cache
-file to override the relevant configuration variable defined in configure.ac,
-e.g.:
-
-    echo "je_cv_madv_free=no" > config.cache && ./configure -C

=== Advanced compilation =======================================================

To build only parts of jemalloc, use the following targets:
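
Editorial note: the --with-malloc-conf option removed above bakes defaults in at configure time; the same effect is available to any program that links jemalloc by defining the malloc_conf global variable the INSTALL text mentions. A minimal sketch:

    #include <jemalloc/jemalloc.h>

    /* Processed before /etc/malloc.conf and the MALLOC_CONF environment
     * variable; lg_chunk:18 selects 256 KiB chunks, as in the example above. */
    const char *malloc_conf = "lg_chunk:18";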

deps/jemalloc/Makefile.in vendored

@@ -24,11 +24,11 @@ abs_objroot := @abs_objroot@

# Build parameters.
CPPFLAGS := @CPPFLAGS@ -I$(srcroot)include -I$(objroot)include
-EXTRA_CFLAGS := @EXTRA_CFLAGS@
-CFLAGS := @CFLAGS@ $(EXTRA_CFLAGS)
+CFLAGS := @CFLAGS@
LDFLAGS := @LDFLAGS@
EXTRA_LDFLAGS := @EXTRA_LDFLAGS@
LIBS := @LIBS@
+TESTLIBS := @TESTLIBS@
RPATH_EXTRA := @RPATH_EXTRA@
SO := @so@
IMPORTLIB := @importlib@
@@ -53,19 +53,15 @@ enable_prof := @enable_prof@
enable_valgrind := @enable_valgrind@
enable_zone_allocator := @enable_zone_allocator@
MALLOC_CONF := @JEMALLOC_CPREFIX@MALLOC_CONF
-link_whole_archive := @link_whole_archive@
DSO_LDFLAGS = @DSO_LDFLAGS@
SOREV = @SOREV@
PIC_CFLAGS = @PIC_CFLAGS@
CTARGET = @CTARGET@
LDTARGET = @LDTARGET@
-TEST_LD_MODE = @TEST_LD_MODE@
MKLIB = @MKLIB@
AR = @AR@
ARFLAGS = @ARFLAGS@
CC_MM = @CC_MM@
-LM := @LM@
-INSTALL = @INSTALL@

ifeq (macho, $(ABI))
TEST_LIBRARY_PATH := DYLD_FALLBACK_LIBRARY_PATH="$(objroot)lib"
@@ -82,34 +78,15 @@ LIBJEMALLOC := $(LIBPREFIX)jemalloc$(install_suffix)

# Lists of files.
BINS := $(objroot)bin/jemalloc-config $(objroot)bin/jemalloc.sh $(objroot)bin/jeprof
C_HDRS := $(objroot)include/jemalloc/jemalloc$(install_suffix).h
-C_SRCS := $(srcroot)src/jemalloc.c \
-	$(srcroot)src/arena.c \
-	$(srcroot)src/atomic.c \
-	$(srcroot)src/base.c \
-	$(srcroot)src/bitmap.c \
-	$(srcroot)src/chunk.c \
-	$(srcroot)src/chunk_dss.c \
-	$(srcroot)src/chunk_mmap.c \
-	$(srcroot)src/ckh.c \
-	$(srcroot)src/ctl.c \
-	$(srcroot)src/extent.c \
-	$(srcroot)src/hash.c \
-	$(srcroot)src/huge.c \
-	$(srcroot)src/mb.c \
-	$(srcroot)src/mutex.c \
-	$(srcroot)src/nstime.c \
-	$(srcroot)src/pages.c \
-	$(srcroot)src/prng.c \
-	$(srcroot)src/prof.c \
-	$(srcroot)src/quarantine.c \
-	$(srcroot)src/rtree.c \
-	$(srcroot)src/stats.c \
-	$(srcroot)src/spin.c \
-	$(srcroot)src/tcache.c \
-	$(srcroot)src/ticker.c \
-	$(srcroot)src/tsd.c \
-	$(srcroot)src/util.c \
-	$(srcroot)src/witness.c
+C_SRCS := $(srcroot)src/jemalloc.c $(srcroot)src/arena.c \
+	$(srcroot)src/atomic.c $(srcroot)src/base.c $(srcroot)src/bitmap.c \
+	$(srcroot)src/chunk.c $(srcroot)src/chunk_dss.c \
+	$(srcroot)src/chunk_mmap.c $(srcroot)src/ckh.c $(srcroot)src/ctl.c \
+	$(srcroot)src/extent.c $(srcroot)src/hash.c $(srcroot)src/huge.c \
+	$(srcroot)src/mb.c $(srcroot)src/mutex.c $(srcroot)src/pages.c \
+	$(srcroot)src/prof.c $(srcroot)src/quarantine.c $(srcroot)src/rtree.c \
+	$(srcroot)src/stats.c $(srcroot)src/tcache.c $(srcroot)src/util.c \
+	$(srcroot)src/tsd.c
ifeq ($(enable_valgrind), 1)
C_SRCS += $(srcroot)src/valgrind.c
endif
@@ -128,11 +105,6 @@ DSOS := $(objroot)lib/$(LIBJEMALLOC).$(SOREV)
ifneq ($(SOREV),$(SO))
DSOS += $(objroot)lib/$(LIBJEMALLOC).$(SO)
endif
-ifeq (1, $(link_whole_archive))
-LJEMALLOC := -Wl,--whole-archive -L$(objroot)lib -l$(LIBJEMALLOC) -Wl,--no-whole-archive
-else
-LJEMALLOC := $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
-endif
PC := $(objroot)jemalloc.pc
MAN3 := $(objroot)doc/jemalloc$(install_suffix).3
DOCS_XML := $(objroot)doc/jemalloc$(install_suffix).xml
@@ -144,19 +116,10 @@ C_TESTLIB_SRCS := $(srcroot)test/src/btalloc.c $(srcroot)test/src/btalloc_0.c \
	$(srcroot)test/src/mtx.c $(srcroot)test/src/mq.c \
	$(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \
	$(srcroot)test/src/thd.c $(srcroot)test/src/timer.c
-ifeq (1, $(link_whole_archive))
-C_UTIL_INTEGRATION_SRCS :=
-else
-C_UTIL_INTEGRATION_SRCS := $(srcroot)src/nstime.c $(srcroot)src/util.c
-endif
-TESTS_UNIT := \
-	$(srcroot)test/unit/a0.c \
-	$(srcroot)test/unit/arena_reset.c \
-	$(srcroot)test/unit/atomic.c \
+C_UTIL_INTEGRATION_SRCS := $(srcroot)src/util.c
+TESTS_UNIT := $(srcroot)test/unit/atomic.c \
	$(srcroot)test/unit/bitmap.c \
	$(srcroot)test/unit/ckh.c \
-	$(srcroot)test/unit/decay.c \
-	$(srcroot)test/unit/fork.c \
	$(srcroot)test/unit/hash.c \
	$(srcroot)test/unit/junk.c \
	$(srcroot)test/unit/junk_alloc.c \
@@ -166,10 +129,6 @@ TESTS_UNIT := \
	$(srcroot)test/unit/math.c \
	$(srcroot)test/unit/mq.c \
	$(srcroot)test/unit/mtx.c \
-	$(srcroot)test/unit/pack.c \
-	$(srcroot)test/unit/pages.c \
-	$(srcroot)test/unit/ph.c \
-	$(srcroot)test/unit/prng.c \
	$(srcroot)test/unit/prof_accum.c \
	$(srcroot)test/unit/prof_active.c \
	$(srcroot)test/unit/prof_gdump.c \
@@ -181,16 +140,11 @@ TESTS_UNIT := \
	$(srcroot)test/unit/quarantine.c \
	$(srcroot)test/unit/rb.c \
	$(srcroot)test/unit/rtree.c \
-	$(srcroot)test/unit/run_quantize.c \
	$(srcroot)test/unit/SFMT.c \
	$(srcroot)test/unit/size_classes.c \
-	$(srcroot)test/unit/smoothstep.c \
	$(srcroot)test/unit/stats.c \
-	$(srcroot)test/unit/ticker.c \
-	$(srcroot)test/unit/nstime.c \
	$(srcroot)test/unit/tsd.c \
	$(srcroot)test/unit/util.c \
-	$(srcroot)test/unit/witness.c \
	$(srcroot)test/unit/zero.c
TESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \
	$(srcroot)test/integration/allocated.c \
@@ -312,69 +266,69 @@ $(STATIC_LIBS):

$(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(TESTS_UNIT_LINK_OBJS) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)
	@mkdir -p $(@D)
-	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) $(LM) $(EXTRA_LDFLAGS)
+	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

$(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
	@mkdir -p $(@D)
-	$(CC) $(TEST_LD_MODE) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LJEMALLOC) $(LDFLAGS) $(filter-out -lm,$(filter -lrt -lpthread,$(LIBS))) $(LM) $(EXTRA_LDFLAGS)
+	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

$(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
	@mkdir -p $(@D)
-	$(CC) $(TEST_LD_MODE) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) $(LM) $(EXTRA_LDFLAGS)
+	$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)

build_lib_shared: $(DSOS)
build_lib_static: $(STATIC_LIBS)
build_lib: build_lib_shared build_lib_static

install_bin:
-	$(INSTALL) -d $(BINDIR)
+	install -d $(BINDIR)
	@for b in $(BINS); do \
-	echo "$(INSTALL) -m 755 $$b $(BINDIR)"; \
-	$(INSTALL) -m 755 $$b $(BINDIR); \
+	echo "install -m 755 $$b $(BINDIR)"; \
+	install -m 755 $$b $(BINDIR); \
done

install_include:
-	$(INSTALL) -d $(INCLUDEDIR)/jemalloc
+	install -d $(INCLUDEDIR)/jemalloc
	@for h in $(C_HDRS); do \
-	echo "$(INSTALL) -m 644 $$h $(INCLUDEDIR)/jemalloc"; \
-	$(INSTALL) -m 644 $$h $(INCLUDEDIR)/jemalloc; \
+	echo "install -m 644 $$h $(INCLUDEDIR)/jemalloc"; \
+	install -m 644 $$h $(INCLUDEDIR)/jemalloc; \
done

install_lib_shared: $(DSOS)
-	$(INSTALL) -d $(LIBDIR)
-	$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(LIBDIR)
+	install -d $(LIBDIR)
+	install -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(LIBDIR)
ifneq ($(SOREV),$(SO))
	ln -sf $(LIBJEMALLOC).$(SOREV) $(LIBDIR)/$(LIBJEMALLOC).$(SO)
endif

install_lib_static: $(STATIC_LIBS)
-	$(INSTALL) -d $(LIBDIR)
+	install -d $(LIBDIR)
	@for l in $(STATIC_LIBS); do \
-	echo "$(INSTALL) -m 755 $$l $(LIBDIR)"; \
-	$(INSTALL) -m 755 $$l $(LIBDIR); \
+	echo "install -m 755 $$l $(LIBDIR)"; \
+	install -m 755 $$l $(LIBDIR); \
done

install_lib_pc: $(PC)
-	$(INSTALL) -d $(LIBDIR)/pkgconfig
+	install -d $(LIBDIR)/pkgconfig
	@for l in $(PC); do \
-	echo "$(INSTALL) -m 644 $$l $(LIBDIR)/pkgconfig"; \
-	$(INSTALL) -m 644 $$l $(LIBDIR)/pkgconfig; \
+	echo "install -m 644 $$l $(LIBDIR)/pkgconfig"; \
+	install -m 644 $$l $(LIBDIR)/pkgconfig; \
done

install_lib: install_lib_shared install_lib_static install_lib_pc

install_doc_html:
-	$(INSTALL) -d $(DATADIR)/doc/jemalloc$(install_suffix)
+	install -d $(DATADIR)/doc/jemalloc$(install_suffix)
	@for d in $(DOCS_HTML); do \
-	echo "$(INSTALL) -m 644 $$d $(DATADIR)/doc/jemalloc$(install_suffix)"; \
-	$(INSTALL) -m 644 $$d $(DATADIR)/doc/jemalloc$(install_suffix); \
+	echo "install -m 644 $$d $(DATADIR)/doc/jemalloc$(install_suffix)"; \
+	install -m 644 $$d $(DATADIR)/doc/jemalloc$(install_suffix); \
done

install_doc_man:
-	$(INSTALL) -d $(MANDIR)/man3
+	install -d $(MANDIR)/man3
	@for d in $(DOCS_MAN3); do \
-	echo "$(INSTALL) -m 644 $$d $(MANDIR)/man3"; \
-	$(INSTALL) -m 644 $$d $(MANDIR)/man3; \
+	echo "install -m 644 $$d $(MANDIR)/man3"; \
+	install -m 644 $$d $(MANDIR)/man3; \
done

install_doc: install_doc_html install_doc_man
@@ -395,22 +349,18 @@ stress_dir:
check_dir: check_unit_dir check_integration_dir

check_unit: tests_unit check_unit_dir
-	$(MALLOC_CONF)="purge:ratio" $(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
-	$(MALLOC_CONF)="purge:decay" $(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
+	$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)
check_integration_prof: tests_integration check_integration_dir
ifeq ($(enable_prof), 1)
	$(MALLOC_CONF)="prof:true" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
	$(MALLOC_CONF)="prof:true,prof_active:false" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
endif
-check_integration_decay: tests_integration check_integration_dir
-	$(MALLOC_CONF)="purge:decay,decay_time:-1" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
-	$(MALLOC_CONF)="purge:decay,decay_time:0" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
-	$(MALLOC_CONF)="purge:decay" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
check_integration: tests_integration check_integration_dir
	$(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)
stress: tests_stress stress_dir
	$(SHELL) $(objroot)test/test.sh $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%)
-check: check_unit check_integration check_integration_decay check_integration_prof
+check: tests check_dir check_integration_prof
+	$(SHELL) $(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%)

ifeq ($(enable_code_coverage), 1)
coverage_unit: check_unit

deps/jemalloc/README vendored

@@ -17,4 +17,4 @@ jemalloc.

The ChangeLog file contains a brief summary of changes for each release.

-URL: http://jemalloc.net/
+URL: http://www.canonware.com/jemalloc/

deps/jemalloc/VERSION vendored

@@ -1 +1 @@
-4.4.0-0-gf1f76357313e7dcad7262f17a48ff0a2e005fcdc
+4.0.3-0-ge9192eacf8935e29fc62fddc2701f7942b1cc02c

deps/jemalloc/bin/jeprof.in vendored

@@ -95,7 +95,7 @@ my @EVINCE = ("evince");        # could also be xpdf or perhaps acroread
my @KCACHEGRIND = ("kcachegrind");
my @PS2PDF = ("ps2pdf");
# These are used for dynamic profiles
-my @URL_FETCHER = ("curl", "-s", "--fail");
+my @URL_FETCHER = ("curl", "-s");

# These are the web pages that servers need to support for dynamic profiles
my $HEAP_PAGE = "/pprof/heap";
@@ -223,14 +223,12 @@ Call-graph Options:
   --nodefraction=<f>  Hide nodes below <f>*total [default=.005]
   --edgefraction=<f>  Hide edges below <f>*total [default=.001]
   --maxdegree=<n>     Max incoming/outgoing edges per node [default=8]
-  --focus=<regexp>    Focus on backtraces with nodes matching <regexp>
+  --focus=<regexp>    Focus on nodes matching <regexp>
   --thread=<n>        Show profile for thread <n>
-  --ignore=<regexp>   Ignore backtraces with nodes matching <regexp>
+  --ignore=<regexp>   Ignore nodes matching <regexp>
   --scale=<n>         Set GV scaling [default=0]
   --heapcheck         Make nodes with non-0 object counts
                       (i.e. direct leak generators) more visible
-  --retain=<regexp>   Retain only nodes that match <regexp>
-  --exclude=<regexp>  Exclude all nodes that match <regexp>

Miscellaneous:
  --tools=<prefix or binary:fullpath>[,...] \$PATH for object tool pathnames
@@ -341,8 +339,6 @@ sub Init() {
  $main::opt_ignore = '';
  $main::opt_scale = 0;
  $main::opt_heapcheck = 0;
-  $main::opt_retain = '';
-  $main::opt_exclude = '';
  $main::opt_seconds = 30;
  $main::opt_lib = "";
@@ -414,8 +410,6 @@ sub Init() {
      "ignore=s"       => \$main::opt_ignore,
      "scale=i"        => \$main::opt_scale,
      "heapcheck"      => \$main::opt_heapcheck,
-      "retain=s"       => \$main::opt_retain,
-      "exclude=s"      => \$main::opt_exclude,
      "inuse_space!"   => \$main::opt_inuse_space,
      "inuse_objects!" => \$main::opt_inuse_objects,
      "alloc_space!"   => \$main::opt_alloc_space,
@@ -1166,21 +1160,8 @@ sub PrintSymbolizedProfile {
  }
  print '---', "\n";
-  my $profile_marker;
-  if ($main::profile_type eq 'heap') {
-    $HEAP_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
-    $profile_marker = $&;
-  } elsif ($main::profile_type eq 'growth') {
-    $GROWTH_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
-    $profile_marker = $&;
-  } elsif ($main::profile_type eq 'contention') {
-    $CONTENTION_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
-    $profile_marker = $&;
-  } else { # elsif ($main::profile_type eq 'cpu')
-    $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
-    $profile_marker = $&;
-  }
+  $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
+  my $profile_marker = $&;
  print '--- ', $profile_marker, "\n";
  if (defined($main::collected_profile)) {
    # if used with remote fetch, simply dump the collected profile to output.
@@ -1190,12 +1171,6 @@ sub PrintSymbolizedProfile {
    }
    close(SRC);
  } else {
-    # --raw/http: For everything to work correctly for non-remote profiles, we
-    # would need to extend PrintProfileData() to handle all possible profile
-    # types, re-enable the code that is currently disabled in ReadCPUProfile()
-    # and FixCallerAddresses(), and remove the remote profile dumping code in
-    # the block above.
-    die "--raw/http: jeprof can only dump remote profiles for --raw\n";
    # dump a cpu-format profile to standard out
    PrintProfileData($profile);
  }
@@ -2846,43 +2821,6 @@ sub ExtractCalls {
  return $calls;
}
-sub FilterFrames {
-  my $symbols = shift;
-  my $profile = shift;
-
-  if ($main::opt_retain eq '' && $main::opt_exclude eq '') {
-    return $profile;
-  }
-
-  my $result = {};
-  foreach my $k (keys(%{$profile})) {
-    my $count = $profile->{$k};
-    my @addrs = split(/\n/, $k);
-    my @path = ();
-    foreach my $a (@addrs) {
-      my $sym;
-      if (exists($symbols->{$a})) {
-        $sym = $symbols->{$a}->[0];
-      } else {
-        $sym = $a;
-      }
-      if ($main::opt_retain ne '' && $sym !~ m/$main::opt_retain/) {
-        next;
-      }
-      if ($main::opt_exclude ne '' && $sym =~ m/$main::opt_exclude/) {
-        next;
-      }
-      push(@path, $a);
-    }
-    if (scalar(@path) > 0) {
-      my $reduced_path = join("\n", @path);
-      AddEntry($result, $reduced_path, $count);
-    }
-  }
-
-  return $result;
-}
-
sub RemoveUninterestingFrames {
  my $symbols = shift;
  my $profile = shift;
@@ -3027,9 +2965,6 @@ sub RemoveUninterestingFrames {
    my $reduced_path = join("\n", @path);
    AddEntry($result, $reduced_path, $count);
  }
-
-  $result = FilterFrames($symbols, $result);
-
  return $result;
}
@@ -3339,7 +3274,7 @@ sub ResolveRedirectionForCurl {
# Add a timeout flat to URL_FETCHER.  Returns a new list.
sub AddFetchTimeout {
  my $timeout = shift;
-  my @fetcher = @_;
+  my @fetcher = shift;
  if (defined($timeout)) {
    if (join(" ", @fetcher) =~ m/\bcurl -s/) {
      push(@fetcher, "--max-time", sprintf("%d", $timeout));
@@ -3385,27 +3320,6 @@ sub ReadSymbols {
  return $map;
}
-sub URLEncode {
-  my $str = shift;
-  $str =~ s/([^A-Za-z0-9\-_.!~*'()])/ sprintf "%%%02x", ord $1 /eg;
-  return $str;
-}
-
-sub AppendSymbolFilterParams {
-  my $url = shift;
-  my @params = ();
-  if ($main::opt_retain ne '') {
-    push(@params, sprintf("retain=%s", URLEncode($main::opt_retain)));
-  }
-  if ($main::opt_exclude ne '') {
-    push(@params, sprintf("exclude=%s", URLEncode($main::opt_exclude)));
-  }
-  if (scalar @params > 0) {
-    $url = sprintf("%s?%s", $url, join("&", @params));
-  }
-  return $url;
-}
-
# Fetches and processes symbols to prepare them for use in the profile output
# code.  If the optional 'symbol_map' arg is not given, fetches symbols from
# $SYMBOL_PAGE for all PC values found in profile.  Otherwise, the raw symbols
@@ -3430,11 +3344,9 @@ sub FetchSymbols {
  my $command_line;
  if (join(" ", @URL_FETCHER) =~ m/\bcurl -s/) {
    $url = ResolveRedirectionForCurl($url);
-    $url = AppendSymbolFilterParams($url);
    $command_line = ShellEscape(@URL_FETCHER, "-d", "\@$main::tmpfile_sym",
                                $url);
  } else {
-    $url = AppendSymbolFilterParams($url);
    $command_line = (ShellEscape(@URL_FETCHER, "--post", $url)
                     . " < " . ShellEscape($main::tmpfile_sym));
  }
@@ -3515,22 +3427,12 @@ sub FetchDynamicProfile {
    }
    $url .= sprintf("seconds=%d", $main::opt_seconds);
    $fetch_timeout = $main::opt_seconds * 1.01 + 60;
-    # Set $profile_type for consumption by PrintSymbolizedProfile.
-    $main::profile_type = 'cpu';
  } else {
    # For non-CPU profiles, we add a type-extension to
    # the target profile file name.
    my $suffix = $path;
    $suffix =~ s,/,.,g;
    $profile_file .= $suffix;
-    # Set $profile_type for consumption by PrintSymbolizedProfile.
-    if ($path =~ m/$HEAP_PAGE/) {
-      $main::profile_type = 'heap';
-    } elsif ($path =~ m/$GROWTH_PAGE/) {
-      $main::profile_type = 'growth';
-    } elsif ($path =~ m/$CONTENTION_PAGE/) {
-      $main::profile_type = 'contention';
-    }
  }

  my $profile_dir = $ENV{"JEPROF_TMPDIR"} || ($ENV{HOME} . "/jeprof");
@@ -3828,8 +3730,6 @@ sub ReadProfile {
  my $symbol_marker = $&;
  $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
  my $profile_marker = $&;
-  $HEAP_PAGE =~ m,[^/]+$,;    # matches everything after the last slash
-  my $heap_marker = $&;

  # Look at first line to see if it is a heap or a CPU profile.
  # CPU profile may start with no header at all, and just binary data
@@ -3856,13 +3756,7 @@ sub ReadProfile {
    $header = ReadProfileHeader(*PROFILE) || "";
  }
-  if ($header =~ m/^--- *($heap_marker|$growth_marker)/o) {
-    # Skip "--- ..." line for profile types that have their own headers.
-    $header = ReadProfileHeader(*PROFILE) || "";
-  }
-
  $main::profile_type = '';
  if ($header =~ m/^heap profile:.*$growth_marker/o) {
    $main::profile_type = 'growth';
    $result = ReadHeapProfile($prog, *PROFILE, $header);
@@ -3914,9 +3808,9 @@ sub ReadProfile {
# independent implementation.
sub FixCallerAddresses {
  my $stack = shift;
-  # --raw/http: Always subtract one from pc's, because PrintSymbolizedProfile()
-  # dumps unadjusted profiles.
-  {
+  if ($main::use_symbolized_profile) {
+    return $stack;
+  } else {
    $stack =~ /(\s)/;
    my $delimiter = $1;
    my @addrs = split(' ', $stack);
@@ -3984,7 +3878,12 @@ sub ReadCPUProfile {
    for (my $j = 0; $j < $d; $j++) {
      my $pc = $slots->get($i+$j);
      # Subtract one from caller pc so we map back to call instr.
-      $pc--;
+      # However, don't do this if we're reading a symbolized profile
+      # file, in which case the subtract-one was done when the file
+      # was written.
+      if ($j > 0 && !$main::use_symbolized_profile) {
+        $pc--;
+      }
      $pc = sprintf("%0*x", $address_length, $pc);
      $pcs->{$pc} = 1;
      push @k, $pc;

deps/jemalloc/config.guess vendored

@@ -1,8 +1,8 @@
#! /bin/sh
# Attempt to guess a canonical system name.
-#   Copyright 1992-2016 Free Software Foundation, Inc.
+#   Copyright 1992-2014 Free Software Foundation, Inc.

-timestamp='2016-10-02'
+timestamp='2014-03-23'

# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
@@ -24,12 +24,12 @@ timestamp='2016-10-02'
# program.  This Exception is an additional permission under section 7
# of the GNU General Public License, version 3 ("GPLv3").
#
-# Originally written by Per Bothner; maintained since 2000 by Ben Elliston.
+# Originally written by Per Bothner.
#
# You can get the latest version of this script from:
-# http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess
+# http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD
#
-# Please send patches to <config-patches@gnu.org>.
+# Please send patches with a ChangeLog entry to config-patches@gnu.org.

me=`echo "$0" | sed -e 's,.*/,,'`
@@ -50,7 +50,7 @@ version="\
GNU config.guess ($timestamp)

Originally written by Per Bothner.
-Copyright 1992-2016 Free Software Foundation, Inc.
+Copyright 1992-2014 Free Software Foundation, Inc.

This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
@@ -168,29 +168,19 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
	# Note: NetBSD doesn't particularly care about the vendor
	# portion of the name.  We always set it to "unknown".
	sysctl="sysctl -n hw.machine_arch"
-	UNAME_MACHINE_ARCH=`(uname -p 2>/dev/null || \
-	    /sbin/$sysctl 2>/dev/null || \
-	    /usr/sbin/$sysctl 2>/dev/null || \
-	    echo unknown)`
+	UNAME_MACHINE_ARCH=`(/sbin/$sysctl 2>/dev/null || \
+	    /usr/sbin/$sysctl 2>/dev/null || echo unknown)`
	case "${UNAME_MACHINE_ARCH}" in
	    armeb) machine=armeb-unknown ;;
	    arm*) machine=arm-unknown ;;
	    sh3el) machine=shl-unknown ;;
	    sh3eb) machine=sh-unknown ;;
	    sh5el) machine=sh5le-unknown ;;
-	    earmv*)
-		arch=`echo ${UNAME_MACHINE_ARCH} | sed -e 's,^e\(armv[0-9]\).*$,\1,'`
-		endian=`echo ${UNAME_MACHINE_ARCH} | sed -ne 's,^.*\(eb\)$,\1,p'`
-		machine=${arch}${endian}-unknown
-		;;
	    *) machine=${UNAME_MACHINE_ARCH}-unknown ;;
	esac
	# The Operating System including object format, if it has switched
-	# to ELF recently (or will in the future) and ABI.
+	# to ELF recently, or will in the future.
	case "${UNAME_MACHINE_ARCH}" in
-	    earm*)
-		os=netbsdelf
-		;;
	    arm*|i386|m68k|ns32k|sh3*|sparc|vax)
		eval $set_cc_for_build
		if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \
@@ -207,13 +197,6 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
		os=netbsd
		;;
	esac
-	# Determine ABI tags.
-	case "${UNAME_MACHINE_ARCH}" in
-	    earm*)
-		expr='s/^earmv[0-9]/-eabi/;s/eb$//'
-		abi=`echo ${UNAME_MACHINE_ARCH} | sed -e "$expr"`
-		;;
-	esac
	# The OS release
	# Debian GNU/NetBSD machines have a different userland, and
	# thus, need a distinct triplet. However, they do not need
@@ -224,13 +207,13 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
		release='-gnu'
		;;
	    *)
-		release=`echo ${UNAME_RELEASE} | sed -e 's/[-_].*//' | cut -d. -f1,2`
+		release=`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
		;;
	esac
	# Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM:
	# contains redundant information, the shorter form:
	# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used.
-	echo "${machine}-${os}${release}${abi}"
+	echo "${machine}-${os}${release}"
	exit ;;
    *:Bitrig:*:*)
	UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'`
@@ -240,10 +223,6 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
	UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'`
	echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE}
	exit ;;
-    *:LibertyBSD:*:*)
-	UNAME_MACHINE_ARCH=`arch | sed 's/^.*BSD\.//'`
-	echo ${UNAME_MACHINE_ARCH}-unknown-libertybsd${UNAME_RELEASE}
-	exit ;;
    *:ekkoBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-ekkobsd${UNAME_RELEASE}
	exit ;;
@@ -256,9 +235,6 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
    *:MirBSD:*:*)
	echo ${UNAME_MACHINE}-unknown-mirbsd${UNAME_RELEASE}
	exit ;;
-    *:Sortix:*:*)
-	echo ${UNAME_MACHINE}-unknown-sortix
-	exit ;;
    alpha:OSF1:*:*)
	case $UNAME_RELEASE in
	*4.0)
@@ -275,42 +251,42 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
	ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^  The alpha \(.*\) processor.*$/\1/p' | head -n 1`
	case "$ALPHA_CPU_TYPE" in
	    "EV4 (21064)")
-		UNAME_MACHINE=alpha ;;
+		UNAME_MACHINE="alpha" ;;
	    "EV4.5 (21064)")
-		UNAME_MACHINE=alpha ;;
+		UNAME_MACHINE="alpha" ;;
	    "LCA4 (21066/21068)")
-		UNAME_MACHINE=alpha ;;
+		UNAME_MACHINE="alpha" ;;
	    "EV5 (21164)")
-		UNAME_MACHINE=alphaev5 ;;
+		UNAME_MACHINE="alphaev5" ;;
	    "EV5.6 (21164A)")
-		UNAME_MACHINE=alphaev56 ;;
+		UNAME_MACHINE="alphaev56" ;;
	    "EV5.6 (21164PC)")
-		UNAME_MACHINE=alphapca56 ;;
+		UNAME_MACHINE="alphapca56" ;;
	    "EV5.7 (21164PC)")
-		UNAME_MACHINE=alphapca57 ;;
+		UNAME_MACHINE="alphapca57" ;;
	    "EV6 (21264)")
-		UNAME_MACHINE=alphaev6 ;;
+		UNAME_MACHINE="alphaev6" ;;
	    "EV6.7 (21264A)")
-		UNAME_MACHINE=alphaev67 ;;
+		UNAME_MACHINE="alphaev67" ;;
	    "EV6.8CB (21264C)")
-		UNAME_MACHINE=alphaev68 ;;
+		UNAME_MACHINE="alphaev68" ;;
	    "EV6.8AL (21264B)")
-		UNAME_MACHINE=alphaev68 ;;
+		UNAME_MACHINE="alphaev68" ;;
	    "EV6.8CX (21264D)")
-		UNAME_MACHINE=alphaev68 ;;
+		UNAME_MACHINE="alphaev68" ;;
	    "EV6.9A (21264/EV69A)")
-		UNAME_MACHINE=alphaev69 ;;
+		UNAME_MACHINE="alphaev69" ;;
	    "EV7 (21364)")
-		UNAME_MACHINE=alphaev7 ;;
+		UNAME_MACHINE="alphaev7" ;;
	    "EV7.9 (21364A)")
-		UNAME_MACHINE=alphaev79 ;;
+		UNAME_MACHINE="alphaev79" ;;
	esac
	# A Pn.n version is a patched version.
	# A Vn.n version is a released version.
	# A Tn.n version is a released field test version.
	# A Xn.n version is an unreleased experimental baselevel.
	# 1.2 uses "1.2" for uname -r.
-	echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz`
+	echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
	# Reset EXIT trap before exiting to avoid spurious non-zero exit code.
	exitcode=$?
	trap '' 0
@@ -383,16 +359,16 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
	exit ;;
    i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*)
	eval $set_cc_for_build
-	SUN_ARCH=i386
+	SUN_ARCH="i386"
	# If there is a compiler, see if it is configured for 64-bit objects.
	# Note that the Sun cc does not turn __LP64__ into 1 like gcc does.
	# This test works for both compilers.
-	if [ "$CC_FOR_BUILD" != no_compiler_found ]; then
+	if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then
	    if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \
-		(CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \
+		(CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \
		grep IS_64BIT_ARCH >/dev/null
	    then
-		SUN_ARCH=x86_64
+		SUN_ARCH="x86_64"
	    fi
	fi
	echo ${SUN_ARCH}-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
@@ -417,7 +393,7 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
	exit ;;
    sun*:*:4.2BSD:*)
	UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null`
-	test "x${UNAME_RELEASE}" = x && UNAME_RELEASE=3
+	test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3
	case "`/bin/arch`" in
	    sun3)
		echo m68k-sun-sunos${UNAME_RELEASE}
@@ -603,9 +579,8 @@ EOF
	else
		IBM_ARCH=powerpc
	fi
-	if [ -x /usr/bin/lslpp ] ; then
-		IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc |
-			   awk -F: '{ print $3 }' | sed s/[0-9]*$/0/`
+	if [ -x /usr/bin/oslevel ] ; then
+		IBM_REV=`/usr/bin/oslevel`
	else
		IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE}
	fi
@@ -642,13 +617,13 @@ EOF
	sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null`
	sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null`
	case "${sc_cpu_version}" in
-	  523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0
-	  528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1
+	  523) HP_ARCH="hppa1.0" ;; # CPU_PA_RISC1_0
+	  528) HP_ARCH="hppa1.1" ;; # CPU_PA_RISC1_1
	  532)                      # CPU_PA_RISC2_0
	    case "${sc_kernel_bits}" in
-	      32) HP_ARCH=hppa2.0n ;;
-	      64) HP_ARCH=hppa2.0w ;;
-	      '') HP_ARCH=hppa2.0 ;;   # HP-UX 10.20
+	      32) HP_ARCH="hppa2.0n" ;;
+	      64) HP_ARCH="hppa2.0w" ;;
+	      '') HP_ARCH="hppa2.0" ;;   # HP-UX 10.20
	    esac ;;
	esac
	fi
@@ -687,11 +662,11 @@ EOF
		exit (0);
	    }
EOF
-		(CCOPTS="" $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy`
+		(CCOPTS= $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy`
		test -z "$HP_ARCH" && HP_ARCH=hppa
	    fi ;;
	esac
-	if [ ${HP_ARCH} = hppa2.0w ]
+	if [ ${HP_ARCH} = "hppa2.0w" ]
	then
	    eval $set_cc_for_build
@@ -704,12 +679,12 @@ EOF
	    # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess
	    # => hppa64-hp-hpux11.23

-	    if echo __LP64__ | (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) |
+	    if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) |
		grep -q __LP64__
	    then
-		HP_ARCH=hppa2.0w
+		HP_ARCH="hppa2.0w"
	    else
-		HP_ARCH=hppa64
+		HP_ARCH="hppa64"
	    fi
	fi
	echo ${HP_ARCH}-hp-hpux${HPUX_REV}
@@ -814,14 +789,14 @@ EOF
	echo craynv-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
	exit ;;
    F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*)
-	FUJITSU_PROC=`uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz`
-	FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'`
+	FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
+	FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
	FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'`
	echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
	exit ;;
    5000:UNIX_System_V:4.*:*)
-	FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'`
-	FUJITSU_REL=`echo ${UNAME_RELEASE} | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/'`
+	FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
+	FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'`
	echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
	exit ;;
    i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*)
@@ -903,7 +878,7 @@ EOF
	exit ;;
    *:GNU/*:*:*)
	# other systems with GNU libc and userland
-	echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr "[:upper:]" "[:lower:]"``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-${LIBC}
+	echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-${LIBC}
	exit ;;
    i*86:Minix:*:*)
	echo ${UNAME_MACHINE}-pc-minix
@@ -926,7 +901,7 @@ EOF
	  EV68*) UNAME_MACHINE=alphaev68 ;;
	esac
	objdump --private-headers /bin/sh | grep -q ld.so.1
-	if test "$?" = 0 ; then LIBC=gnulibc1 ; fi
+	if test "$?" = 0 ; then LIBC="gnulibc1" ; fi
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    arc:Linux:*:* | arceb:Linux:*:*)
@@ -957,9 +932,6 @@ EOF
    crisv32:Linux:*:*)
	echo ${UNAME_MACHINE}-axis-linux-${LIBC}
	exit ;;
-    e2k:Linux:*:*)
-	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
-	exit ;;
    frv:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
@@ -972,9 +944,6 @@ EOF
    ia64:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
-    k1om:Linux:*:*)
-	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
-	exit ;;
    m32r*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
@@ -1000,9 +969,6 @@ EOF
	eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'`
	test x"${CPU}" != x && { echo "${CPU}-unknown-linux-${LIBC}"; exit; }
	;;
-    mips64el:Linux:*:*)
-	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
-	exit ;;
    openrisc*:Linux:*:*)
	echo or1k-unknown-linux-${LIBC}
	exit ;;
@@ -1035,9 +1001,6 @@ EOF
    ppcle:Linux:*:*)
	echo powerpcle-unknown-linux-${LIBC}
	exit ;;
-    riscv32:Linux:*:* | riscv64:Linux:*:*)
-	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
-	exit ;;
    s390:Linux:*:* | s390x:Linux:*:*)
	echo ${UNAME_MACHINE}-ibm-linux-${LIBC}
	exit ;;
@@ -1057,7 +1020,7 @@ EOF
	echo ${UNAME_MACHINE}-dec-linux-${LIBC}
	exit ;;
    x86_64:Linux:*:*)
-	echo ${UNAME_MACHINE}-pc-linux-${LIBC}
+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
	exit ;;
    xtensa*:Linux:*:*)
	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
@@ -1136,7 +1099,7 @@ EOF
	# uname -m prints for DJGPP always 'pc', but it prints nothing about
	# the processor, so we play safe by assuming i586.
	# Note: whatever this is, it MUST be the same as what config.sub
-	# prints for the "djgpp" host, or else GDB configure will decide that
+	# prints for the "djgpp" host, or else GDB configury will decide that
	# this is a cross-build.
	echo i586-pc-msdosdjgpp
	exit ;;
@@ -1285,9 +1248,6 @@ EOF
    SX-8R:SUPER-UX:*:*)
	echo sx8r-nec-superux${UNAME_RELEASE}
	exit ;;
-    SX-ACE:SUPER-UX:*:*)
-	echo sxace-nec-superux${UNAME_RELEASE}
-	exit ;;
    Power*:Rhapsody:*:*)
	echo powerpc-apple-rhapsody${UNAME_RELEASE}
exit ;; exit ;;
@ -1301,9 +1261,9 @@ EOF
UNAME_PROCESSOR=powerpc UNAME_PROCESSOR=powerpc
fi fi
if test `echo "$UNAME_RELEASE" | sed -e 's/\..*//'` -le 10 ; then if test `echo "$UNAME_RELEASE" | sed -e 's/\..*//'` -le 10 ; then
if [ "$CC_FOR_BUILD" != no_compiler_found ]; then if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then
if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \
(CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \
grep IS_64BIT_ARCH >/dev/null grep IS_64BIT_ARCH >/dev/null
then then
case $UNAME_PROCESSOR in case $UNAME_PROCESSOR in
@ -1325,7 +1285,7 @@ EOF
exit ;; exit ;;
*:procnto*:*:* | *:QNX:[0123456789]*:*) *:procnto*:*:* | *:QNX:[0123456789]*:*)
UNAME_PROCESSOR=`uname -p` UNAME_PROCESSOR=`uname -p`
if test "$UNAME_PROCESSOR" = x86; then if test "$UNAME_PROCESSOR" = "x86"; then
UNAME_PROCESSOR=i386 UNAME_PROCESSOR=i386
UNAME_MACHINE=pc UNAME_MACHINE=pc
fi fi
@ -1356,7 +1316,7 @@ EOF
# "uname -m" is not consistent, so use $cputype instead. 386 # "uname -m" is not consistent, so use $cputype instead. 386
# is converted to i386 for consistency with other x86 # is converted to i386 for consistency with other x86
# operating systems. # operating systems.
if test "$cputype" = 386; then if test "$cputype" = "386"; then
UNAME_MACHINE=i386 UNAME_MACHINE=i386
else else
UNAME_MACHINE="$cputype" UNAME_MACHINE="$cputype"
@ -1398,7 +1358,7 @@ EOF
echo i386-pc-xenix echo i386-pc-xenix
exit ;; exit ;;
i*86:skyos:*:*) i*86:skyos:*:*)
echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE} | sed -e 's/ .*$//'` echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE}` | sed -e 's/ .*$//'
exit ;; exit ;;
i*86:rdos:*:*) i*86:rdos:*:*)
echo ${UNAME_MACHINE}-pc-rdos echo ${UNAME_MACHINE}-pc-rdos
@ -1409,25 +1369,23 @@ EOF
x86_64:VMkernel:*:*) x86_64:VMkernel:*:*)
echo ${UNAME_MACHINE}-unknown-esx echo ${UNAME_MACHINE}-unknown-esx
exit ;; exit ;;
amd64:Isilon\ OneFS:*:*)
echo x86_64-unknown-onefs
exit ;;
esac esac
cat >&2 <<EOF cat >&2 <<EOF
$0: unable to guess system type $0: unable to guess system type
-This script (version $timestamp), has failed to recognize the
-operating system you are using. If your script is old, overwrite
-config.guess and config.sub with the latest versions from:
-  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess
-and
-  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub
-If $0 has already been updated, send the following data and any
-information you think might be pertinent to config-patches@gnu.org to
-provide the necessary information to handle your system.
+This script, last modified $timestamp, has failed to recognize
+the operating system you are using. It is advised that you
+download the most up to date version of the config scripts from
+  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD
+and
+  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD
+If the version you run ($0) is already up to date, please
+send the following data and any information you think might be
+pertinent to <config-patches@gnu.org> in order to provide the needed
+information to handle your system.

config.guess timestamp = $timestamp

[File: config.sub]
@ -1,8 +1,8 @@
#! /bin/sh #! /bin/sh
# Configuration validation subroutine script. # Configuration validation subroutine script.
# Copyright 1992-2016 Free Software Foundation, Inc. # Copyright 1992-2014 Free Software Foundation, Inc.
timestamp='2016-11-04' timestamp='2014-05-01'
# This file is free software; you can redistribute it and/or modify it # This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by # under the terms of the GNU General Public License as published by
@ -25,7 +25,7 @@ timestamp='2016-11-04'
# of the GNU General Public License, version 3 ("GPLv3"). # of the GNU General Public License, version 3 ("GPLv3").
# Please send patches to <config-patches@gnu.org>. # Please send patches with a ChangeLog entry to config-patches@gnu.org.
# #
# Configuration subroutine to validate and canonicalize a configuration type. # Configuration subroutine to validate and canonicalize a configuration type.
# Supply the specified configuration type as an argument. # Supply the specified configuration type as an argument.
@ -33,7 +33,7 @@ timestamp='2016-11-04'
# Otherwise, we print the canonical config type on stdout and succeed. # Otherwise, we print the canonical config type on stdout and succeed.
# You can get the latest version of this script from: # You can get the latest version of this script from:
# http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD
# This file is supposed to be the same for all GNU packages # This file is supposed to be the same for all GNU packages
# and recognize all the CPU types, system types and aliases # and recognize all the CPU types, system types and aliases
@ -53,7 +53,8 @@ timestamp='2016-11-04'
me=`echo "$0" | sed -e 's,.*/,,'` me=`echo "$0" | sed -e 's,.*/,,'`
usage="\ usage="\
Usage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS Usage: $0 [OPTION] CPU-MFR-OPSYS
$0 [OPTION] ALIAS
Canonicalize a configuration name. Canonicalize a configuration name.
@ -67,7 +68,7 @@ Report bugs and patches to <config-patches@gnu.org>."
version="\ version="\
GNU config.sub ($timestamp) GNU config.sub ($timestamp)
Copyright 1992-2016 Free Software Foundation, Inc. Copyright 1992-2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
@ -116,8 +117,8 @@ maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'`
case $maybe_os in case $maybe_os in
nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \ nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \
linux-musl* | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ linux-musl* | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \
knetbsd*-gnu* | netbsd*-gnu* | netbsd*-eabi* | \ knetbsd*-gnu* | netbsd*-gnu* | \
kopensolaris*-gnu* | cloudabi*-eabi* | \ kopensolaris*-gnu* | \
storm-chaos* | os2-emx* | rtmk-nova*) storm-chaos* | os2-emx* | rtmk-nova*)
os=-$maybe_os os=-$maybe_os
basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`
@ -254,13 +255,12 @@ case $basic_machine in
| arc | arceb \ | arc | arceb \
| arm | arm[bl]e | arme[lb] | armv[2-8] | armv[3-8][lb] | armv7[arm] \ | arm | arm[bl]e | arme[lb] | armv[2-8] | armv[3-8][lb] | armv7[arm] \
| avr | avr32 \ | avr | avr32 \
| ba \
| be32 | be64 \ | be32 | be64 \
| bfin \ | bfin \
| c4x | c8051 | clipper \ | c4x | c8051 | clipper \
| d10v | d30v | dlx | dsp16xx \ | d10v | d30v | dlx | dsp16xx \
| e2k | epiphany \ | epiphany \
| fido | fr30 | frv | ft32 \ | fido | fr30 | frv \
| h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \ | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \
| hexagon \ | hexagon \
| i370 | i860 | i960 | ia64 \ | i370 | i860 | i960 | ia64 \
@ -301,12 +301,10 @@ case $basic_machine in
| open8 | or1k | or1knd | or32 \ | open8 | or1k | or1knd | or32 \
| pdp10 | pdp11 | pj | pjl \ | pdp10 | pdp11 | pj | pjl \
| powerpc | powerpc64 | powerpc64le | powerpcle \ | powerpc | powerpc64 | powerpc64le | powerpcle \
| pru \
| pyramid \ | pyramid \
| riscv32 | riscv64 \
| rl78 | rx \ | rl78 | rx \
| score \ | score \
| sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[234]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \
| sh64 | sh64le \ | sh64 | sh64le \
| sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \
| sparcv8 | sparcv9 | sparcv9b | sparcv9v \ | sparcv8 | sparcv9 | sparcv9b | sparcv9v \
@ -314,7 +312,6 @@ case $basic_machine in
| tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \
| ubicom32 \ | ubicom32 \
| v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \ | v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \
| visium \
| we32k \ | we32k \
| x86 | xc16x | xstormy16 | xtensa \ | x86 | xc16x | xstormy16 | xtensa \
| z8k | z80) | z8k | z80)
@ -329,9 +326,6 @@ case $basic_machine in
c6x) c6x)
basic_machine=tic6x-unknown basic_machine=tic6x-unknown
;; ;;
leon|leon[3-9])
basic_machine=sparc-$basic_machine
;;
m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip) m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip)
basic_machine=$basic_machine-unknown basic_machine=$basic_machine-unknown
os=-none os=-none
@ -377,13 +371,12 @@ case $basic_machine in
| alphapca5[67]-* | alpha64pca5[67]-* | arc-* | arceb-* \ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* | arceb-* \
| arm-* | armbe-* | armle-* | armeb-* | armv*-* \ | arm-* | armbe-* | armle-* | armeb-* | armv*-* \
| avr-* | avr32-* \ | avr-* | avr32-* \
| ba-* \
| be32-* | be64-* \ | be32-* | be64-* \
| bfin-* | bs2000-* \ | bfin-* | bs2000-* \
| c[123]* | c30-* | [cjt]90-* | c4x-* \ | c[123]* | c30-* | [cjt]90-* | c4x-* \
| c8051-* | clipper-* | craynv-* | cydra-* \ | c8051-* | clipper-* | craynv-* | cydra-* \
| d10v-* | d30v-* | dlx-* \ | d10v-* | d30v-* | dlx-* \
| e2k-* | elxsi-* \ | elxsi-* \
| f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \ | f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \
| h8300-* | h8500-* \ | h8300-* | h8500-* \
| hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \ | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \
@ -429,15 +422,13 @@ case $basic_machine in
| orion-* \ | orion-* \
| pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
| powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \
| pru-* \
| pyramid-* \ | pyramid-* \
| riscv32-* | riscv64-* \
| rl78-* | romp-* | rs6000-* | rx-* \ | rl78-* | romp-* | rs6000-* | rx-* \
| sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \ | sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \
| shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \ | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \
| sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \ | sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \
| sparclite-* \ | sparclite-* \
| sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx*-* \ | sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx?-* \
| tahoe-* \ | tahoe-* \
| tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \
| tile*-* \ | tile*-* \
@ -445,7 +436,6 @@ case $basic_machine in
| ubicom32-* \ | ubicom32-* \
| v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \ | v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \
| vax-* \ | vax-* \
| visium-* \
| we32k-* \ | we32k-* \
| x86-* | x86_64-* | xc16x-* | xps100-* \ | x86-* | x86_64-* | xc16x-* | xps100-* \
| xstormy16-* | xtensa*-* \ | xstormy16-* | xtensa*-* \
@ -522,9 +512,6 @@ case $basic_machine in
basic_machine=i386-pc basic_machine=i386-pc
os=-aros os=-aros
;; ;;
asmjs)
basic_machine=asmjs-unknown
;;
aux) aux)
basic_machine=m68k-apple basic_machine=m68k-apple
os=-aux os=-aux
@ -645,14 +632,6 @@ case $basic_machine in
basic_machine=m68k-bull basic_machine=m68k-bull
os=-sysv3 os=-sysv3
;; ;;
e500v[12])
basic_machine=powerpc-unknown
os=$os"spe"
;;
e500v[12]-*)
basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'`
os=$os"spe"
;;
ebmon29k) ebmon29k)
basic_machine=a29k-amd basic_machine=a29k-amd
os=-ebmon os=-ebmon
@ -794,9 +773,6 @@ case $basic_machine in
basic_machine=m68k-isi basic_machine=m68k-isi
os=-sysv os=-sysv
;; ;;
leon-*|leon[3-9]-*)
basic_machine=sparc-`echo $basic_machine | sed 's/-.*//'`
;;
m68knommu) m68knommu)
basic_machine=m68k-unknown basic_machine=m68k-unknown
os=-linux os=-linux
@ -852,10 +828,6 @@ case $basic_machine in
basic_machine=powerpc-unknown basic_machine=powerpc-unknown
os=-morphos os=-morphos
;; ;;
moxiebox)
basic_machine=moxie-unknown
os=-moxiebox
;;
msdos) msdos)
basic_machine=i386-pc basic_machine=i386-pc
os=-msdos os=-msdos
@ -1032,7 +1004,7 @@ case $basic_machine in
ppc-* | ppcbe-*) ppc-* | ppcbe-*)
basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'` basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'`
;; ;;
ppcle | powerpclittle) ppcle | powerpclittle | ppc-le | powerpc-little)
basic_machine=powerpcle-unknown basic_machine=powerpcle-unknown
;; ;;
ppcle-* | powerpclittle-*) ppcle-* | powerpclittle-*)
@ -1042,7 +1014,7 @@ case $basic_machine in
;; ;;
ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'` ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'`
;; ;;
ppc64le | powerpc64little) ppc64le | powerpc64little | ppc64-le | powerpc64-little)
basic_machine=powerpc64le-unknown basic_machine=powerpc64le-unknown
;; ;;
ppc64le-* | powerpc64little-*) ppc64le-* | powerpc64little-*)
@ -1388,28 +1360,27 @@ case $os in
| -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \
| -sym* | -kopensolaris* | -plan9* \ | -sym* | -kopensolaris* | -plan9* \
| -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \ | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \
| -aos* | -aros* | -cloudabi* | -sortix* \ | -aos* | -aros* \
| -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \
| -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \
| -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \
| -bitrig* | -openbsd* | -solidbsd* | -libertybsd* \ | -bitrig* | -openbsd* | -solidbsd* \
| -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \
| -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \
| -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \
| -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \
| -chorusos* | -chorusrdb* | -cegcc* \ | -chorusos* | -chorusrdb* | -cegcc* \
| -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \
| -midipix* | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \ | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \
| -linux-newlib* | -linux-musl* | -linux-uclibc* \ | -linux-newlib* | -linux-musl* | -linux-uclibc* \
| -uxpv* | -beos* | -mpeix* | -udk* | -moxiebox* \ | -uxpv* | -beos* | -mpeix* | -udk* \
| -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \
| -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \
| -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \
| -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \
| -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \
| -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \
| -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es* \ | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es* | -tirtos*)
| -onefs* | -tirtos* | -phoenix* | -fuchsia*)
# Remember, each alternative MUST END IN *, to match a version number. # Remember, each alternative MUST END IN *, to match a version number.
;; ;;
-qnx*) -qnx*)
@ -1433,6 +1404,9 @@ case $os in
-mac*) -mac*)
os=`echo $os | sed -e 's|mac|macos|'` os=`echo $os | sed -e 's|mac|macos|'`
;; ;;
# Apple iOS
-ios*)
;;
-linux-dietlibc) -linux-dietlibc)
os=-linux-dietlibc os=-linux-dietlibc
;; ;;
@ -1541,8 +1515,6 @@ case $os in
;; ;;
-nacl*) -nacl*)
;; ;;
-ios)
;;
-none) -none)
;; ;;
*) *)

[File: deps/jemalloc/configure (vendored): 1391 changed lines, diff suppressed by the viewer as too large]

[File: configure.ac]
@ -1,8 +1,6 @@
dnl Process this file with autoconf to produce a configure script. dnl Process this file with autoconf to produce a configure script.
AC_INIT([Makefile.in]) AC_INIT([Makefile.in])
AC_CONFIG_AUX_DIR([build-aux])
dnl ============================================================================ dnl ============================================================================
dnl Custom macro definitions. dnl Custom macro definitions.
@ -118,7 +116,6 @@ dnl If CFLAGS isn't defined, set CFLAGS to something reasonable. Otherwise,
dnl just prevent autoconf from molesting CFLAGS. dnl just prevent autoconf from molesting CFLAGS.
CFLAGS=$CFLAGS CFLAGS=$CFLAGS
AC_PROG_CC AC_PROG_CC
if test "x$GCC" != "xyes" ; then if test "x$GCC" != "xyes" ; then
AC_CACHE_CHECK([whether compiler is MSVC], AC_CACHE_CHECK([whether compiler is MSVC],
[je_cv_msvc], [je_cv_msvc],
@ -132,58 +129,15 @@ if test "x$GCC" != "xyes" ; then
[je_cv_msvc=no])]) [je_cv_msvc=no])])
fi fi
dnl check if a cray prgenv wrapper compiler is being used
je_cv_cray_prgenv_wrapper=""
if test "x${PE_ENV}" != "x" ; then
case "${CC}" in
CC|cc)
je_cv_cray_prgenv_wrapper="yes"
;;
*)
;;
esac
fi
AC_CACHE_CHECK([whether compiler is cray],
[je_cv_cray],
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],
[
#ifndef _CRAYC
int fail[-1];
#endif
])],
[je_cv_cray=yes],
[je_cv_cray=no])])
if test "x${je_cv_cray}" = "xyes" ; then
AC_CACHE_CHECK([whether cray compiler version is 8.4],
[je_cv_cray_84],
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],
[
#if !(_RELEASE_MAJOR == 8 && _RELEASE_MINOR == 4)
int fail[-1];
#endif
])],
[je_cv_cray_84=yes],
[je_cv_cray_84=no])])
fi
if test "x$CFLAGS" = "x" ; then if test "x$CFLAGS" = "x" ; then
no_CFLAGS="yes" no_CFLAGS="yes"
if test "x$GCC" = "xyes" ; then if test "x$GCC" = "xyes" ; then
JE_CFLAGS_APPEND([-std=gnu11]) JE_CFLAGS_APPEND([-std=gnu99])
if test "x$je_cv_cflags_appended" = "x-std=gnu11" ; then if test "x$je_cv_cflags_appended" = "x-std=gnu99" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT]) AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT])
else
JE_CFLAGS_APPEND([-std=gnu99])
if test "x$je_cv_cflags_appended" = "x-std=gnu99" ; then
AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT])
fi
fi fi
JE_CFLAGS_APPEND([-Wall]) JE_CFLAGS_APPEND([-Wall])
JE_CFLAGS_APPEND([-Werror=declaration-after-statement]) JE_CFLAGS_APPEND([-Werror=declaration-after-statement])
JE_CFLAGS_APPEND([-Wshorten-64-to-32])
JE_CFLAGS_APPEND([-Wsign-compare])
JE_CFLAGS_APPEND([-pipe]) JE_CFLAGS_APPEND([-pipe])
JE_CFLAGS_APPEND([-g3]) JE_CFLAGS_APPEND([-g3])
elif test "x$je_cv_msvc" = "xyes" ; then elif test "x$je_cv_msvc" = "xyes" ; then
@ -194,21 +148,11 @@ if test "x$CFLAGS" = "x" ; then
JE_CFLAGS_APPEND([-FS]) JE_CFLAGS_APPEND([-FS])
CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat" CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat"
fi fi
if test "x$je_cv_cray" = "xyes" ; then
dnl cray compiler 8.4 has an inlining bug
if test "x$je_cv_cray_84" = "xyes" ; then
JE_CFLAGS_APPEND([-hipa2])
JE_CFLAGS_APPEND([-hnognu])
fi
if test "x$enable_cc_silence" != "xno" ; then
dnl ignore unreachable code warning
JE_CFLAGS_APPEND([-hnomessage=128])
dnl ignore redefinition of "malloc", "free", etc warning
JE_CFLAGS_APPEND([-hnomessage=1357])
fi
fi
fi fi
AC_SUBST([EXTRA_CFLAGS]) dnl Append EXTRA_CFLAGS to CFLAGS, if defined.
if test "x$EXTRA_CFLAGS" != "x" ; then
JE_CFLAGS_APPEND([$EXTRA_CFLAGS])
fi
AC_PROG_CPP AC_PROG_CPP
AC_C_BIGENDIAN([ac_cv_big_endian=1], [ac_cv_big_endian=0]) AC_C_BIGENDIAN([ac_cv_big_endian=1], [ac_cv_big_endian=0])
@ -220,18 +164,13 @@ if test "x${je_cv_msvc}" = "xyes" -a "x${ac_cv_header_inttypes_h}" = "xno"; then
CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat/C99" CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat/C99"
fi fi
if test "x${je_cv_msvc}" = "xyes" ; then AC_CHECK_SIZEOF([void *])
LG_SIZEOF_PTR=LG_SIZEOF_PTR_WIN if test "x${ac_cv_sizeof_void_p}" = "x8" ; then
AC_MSG_RESULT([Using a predefined value for sizeof(void *): 4 for 32-bit, 8 for 64-bit]) LG_SIZEOF_PTR=3
elif test "x${ac_cv_sizeof_void_p}" = "x4" ; then
LG_SIZEOF_PTR=2
else else
AC_CHECK_SIZEOF([void *]) AC_MSG_ERROR([Unsupported pointer size: ${ac_cv_sizeof_void_p}])
if test "x${ac_cv_sizeof_void_p}" = "x8" ; then
LG_SIZEOF_PTR=3
elif test "x${ac_cv_sizeof_void_p}" = "x4" ; then
LG_SIZEOF_PTR=2
else
AC_MSG_ERROR([Unsupported pointer size: ${ac_cv_sizeof_void_p}])
fi
fi fi
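Both branches compute LG_SIZEOF_PTR, the base-2 log of the pointer size, which jemalloc bakes in at build time for size-class math. A standalone sketch of the computation (not part of the diff):

    #include <stdio.h>

    int main(void) {
        /* log2(sizeof(void *)): 2 on 32-bit targets, 3 on 64-bit targets */
        unsigned lg = 0;
        size_t n = sizeof(void *);
        while (n > 1) {
            n >>= 1;
            lg++;
        }
        printf("LG_SIZEOF_PTR = %u\n", lg);
        return 0;
    }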
AC_DEFINE_UNQUOTED([LG_SIZEOF_PTR], [$LG_SIZEOF_PTR]) AC_DEFINE_UNQUOTED([LG_SIZEOF_PTR], [$LG_SIZEOF_PTR])
@ -255,16 +194,6 @@ else
fi fi
AC_DEFINE_UNQUOTED([LG_SIZEOF_LONG], [$LG_SIZEOF_LONG]) AC_DEFINE_UNQUOTED([LG_SIZEOF_LONG], [$LG_SIZEOF_LONG])
AC_CHECK_SIZEOF([long long])
if test "x${ac_cv_sizeof_long_long}" = "x8" ; then
LG_SIZEOF_LONG_LONG=3
elif test "x${ac_cv_sizeof_long_long}" = "x4" ; then
LG_SIZEOF_LONG_LONG=2
else
AC_MSG_ERROR([Unsupported long long size: ${ac_cv_sizeof_long_long}])
fi
AC_DEFINE_UNQUOTED([LG_SIZEOF_LONG_LONG], [$LG_SIZEOF_LONG_LONG])
AC_CHECK_SIZEOF([intmax_t]) AC_CHECK_SIZEOF([intmax_t])
if test "x${ac_cv_sizeof_intmax_t}" = "x16" ; then if test "x${ac_cv_sizeof_intmax_t}" = "x16" ; then
LG_SIZEOF_INTMAX_T=4 LG_SIZEOF_INTMAX_T=4
@ -282,22 +211,12 @@ dnl CPU-specific settings.
CPU_SPINWAIT="" CPU_SPINWAIT=""
case "${host_cpu}" in case "${host_cpu}" in
i686|x86_64) i686|x86_64)
if test "x${je_cv_msvc}" = "xyes" ; then AC_CACHE_VAL([je_cv_pause],
AC_CACHE_VAL([je_cv_pause_msvc], [JE_COMPILABLE([pause instruction], [],
[JE_COMPILABLE([pause instruction MSVC], [], [[__asm__ volatile("pause"); return 0;]],
[[_mm_pause(); return 0;]], [je_cv_pause])])
[je_cv_pause_msvc])]) if test "x${je_cv_pause}" = "xyes" ; then
if test "x${je_cv_pause_msvc}" = "xyes" ; then CPU_SPINWAIT='__asm__ volatile("pause")'
CPU_SPINWAIT='_mm_pause()'
fi
else
AC_CACHE_VAL([je_cv_pause],
[JE_COMPILABLE([pause instruction], [],
[[__asm__ volatile("pause"); return 0;]],
[je_cv_pause])])
if test "x${je_cv_pause}" = "xyes" ; then
CPU_SPINWAIT='__asm__ volatile("pause")'
fi
fi fi
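CPU_SPINWAIT expands to a pause instruction for use inside spin loops. A minimal sketch of how such a spin-wait is typically written, assuming gcc or clang on x86; the atomic flag is illustrative:

    #include <stdatomic.h>

    static atomic_int ready;  /* hypothetical flag set by another thread */

    static void spin_until_ready(void) {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            __asm__ volatile("pause");  /* eases contention on the core while spinning */
    }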
;; ;;
powerpc) powerpc)
@ -315,27 +234,17 @@ o="$ac_objext"
a="a" a="a"
exe="$ac_exeext" exe="$ac_exeext"
libprefix="lib" libprefix="lib"
link_whole_archive="0"
DSO_LDFLAGS='-shared -Wl,-soname,$(@F)' DSO_LDFLAGS='-shared -Wl,-soname,$(@F)'
RPATH='-Wl,-rpath,$(1)' RPATH='-Wl,-rpath,$(1)'
SOREV="${so}.${rev}" SOREV="${so}.${rev}"
PIC_CFLAGS='-fPIC -DPIC' PIC_CFLAGS='-fPIC -DPIC'
CTARGET='-o $@' CTARGET='-o $@'
LDTARGET='-o $@' LDTARGET='-o $@'
TEST_LD_MODE=
EXTRA_LDFLAGS= EXTRA_LDFLAGS=
ARFLAGS='crus' ARFLAGS='crus'
AROUT=' $@' AROUT=' $@'
CC_MM=1 CC_MM=1
if test "x$je_cv_cray_prgenv_wrapper" = "xyes" ; then
TEST_LD_MODE='-dynamic'
fi
if test "x${je_cv_cray}" = "xyes" ; then
CC_MM=
fi
AN_MAKEVAR([AR], [AC_PROG_AR]) AN_MAKEVAR([AR], [AC_PROG_AR])
AN_PROGRAM([ar], [AC_PROG_AR]) AN_PROGRAM([ar], [AC_PROG_AR])
AC_DEFUN([AC_PROG_AR], [AC_CHECK_TOOL(AR, ar, :)]) AC_DEFUN([AC_PROG_AR], [AC_CHECK_TOOL(AR, ar, :)])
@ -348,12 +257,13 @@ dnl
dnl Define cpp macros in CPPFLAGS, rather than doing AC_DEFINE(macro), since the dnl Define cpp macros in CPPFLAGS, rather than doing AC_DEFINE(macro), since the
dnl definitions need to be seen before any headers are included, which is a pain dnl definitions need to be seen before any headers are included, which is a pain
dnl to make happen otherwise. dnl to make happen otherwise.
CFLAGS="$CFLAGS"
default_munmap="1" default_munmap="1"
maps_coalesce="1" maps_coalesce="1"
case "${host}" in case "${host}" in
*-*-darwin* | *-*-ios*) *-*-darwin* | *-*-ios*)
CFLAGS="$CFLAGS"
abi="macho" abi="macho"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
RPATH="" RPATH=""
LD_PRELOAD_VAR="DYLD_INSERT_LIBRARIES" LD_PRELOAD_VAR="DYLD_INSERT_LIBRARIES"
so="dylib" so="dylib"
@ -364,37 +274,33 @@ case "${host}" in
sbrk_deprecated="1" sbrk_deprecated="1"
;; ;;
*-*-freebsd*) *-*-freebsd*)
CFLAGS="$CFLAGS"
abi="elf" abi="elf"
AC_DEFINE([JEMALLOC_SYSCTL_VM_OVERCOMMIT], [ ]) AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
force_lazy_lock="1" force_lazy_lock="1"
;; ;;
*-*-dragonfly*) *-*-dragonfly*)
CFLAGS="$CFLAGS"
abi="elf" abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
;; ;;
*-*-openbsd*) *-*-openbsd*)
CFLAGS="$CFLAGS"
abi="elf" abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
force_tls="0" force_tls="0"
;; ;;
*-*-bitrig*) *-*-bitrig*)
CFLAGS="$CFLAGS"
abi="elf" abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
;; ;;
*-*-linux-android) *-*-linux*)
dnl syscall(2) and secure_getenv(3) are exposed by _GNU_SOURCE. CFLAGS="$CFLAGS"
CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE" CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
abi="elf" abi="elf"
AC_DEFINE([JEMALLOC_HAS_ALLOCA_H]) AC_DEFINE([JEMALLOC_HAS_ALLOCA_H])
AC_DEFINE([JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY], [ ]) AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ])
AC_DEFINE([JEMALLOC_THREADED_INIT], [ ])
AC_DEFINE([JEMALLOC_C11ATOMICS])
force_tls="0"
default_munmap="0"
;;
*-*-linux* | *-*-kfreebsd*)
dnl syscall(2) and secure_getenv(3) are exposed by _GNU_SOURCE.
CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
abi="elf"
AC_DEFINE([JEMALLOC_HAS_ALLOCA_H])
AC_DEFINE([JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY], [ ])
AC_DEFINE([JEMALLOC_THREADED_INIT], [ ]) AC_DEFINE([JEMALLOC_THREADED_INIT], [ ])
AC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ]) AC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ])
default_munmap="0" default_munmap="0"
@ -408,12 +314,15 @@ case "${host}" in
#error aout #error aout
#endif #endif
]])], ]])],
[abi="elf"], [CFLAGS="$CFLAGS"; abi="elf"],
[abi="aout"]) [abi="aout"])
AC_MSG_RESULT([$abi]) AC_MSG_RESULT([$abi])
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
;; ;;
*-*-solaris2*) *-*-solaris2*)
CFLAGS="$CFLAGS"
abi="elf" abi="elf"
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
RPATH='-Wl,-R,$(1)' RPATH='-Wl,-R,$(1)'
dnl Solaris needs this for sigwait(). dnl Solaris needs this for sigwait().
CPPFLAGS="$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS" CPPFLAGS="$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS"
@ -432,6 +341,7 @@ case "${host}" in
*-*-mingw* | *-*-cygwin*) *-*-mingw* | *-*-cygwin*)
abi="pecoff" abi="pecoff"
force_tls="0" force_tls="0"
force_lazy_lock="1"
maps_coalesce="0" maps_coalesce="0"
RPATH="" RPATH=""
so="dll" so="dll"
@ -448,7 +358,6 @@ case "${host}" in
else else
importlib="${so}" importlib="${so}"
DSO_LDFLAGS="-shared" DSO_LDFLAGS="-shared"
link_whole_archive="1"
fi fi
a="lib" a="lib"
libprefix="" libprefix=""
@ -486,28 +395,17 @@ AC_SUBST([o])
AC_SUBST([a]) AC_SUBST([a])
AC_SUBST([exe]) AC_SUBST([exe])
AC_SUBST([libprefix]) AC_SUBST([libprefix])
AC_SUBST([link_whole_archive])
AC_SUBST([DSO_LDFLAGS]) AC_SUBST([DSO_LDFLAGS])
AC_SUBST([EXTRA_LDFLAGS]) AC_SUBST([EXTRA_LDFLAGS])
AC_SUBST([SOREV]) AC_SUBST([SOREV])
AC_SUBST([PIC_CFLAGS]) AC_SUBST([PIC_CFLAGS])
AC_SUBST([CTARGET]) AC_SUBST([CTARGET])
AC_SUBST([LDTARGET]) AC_SUBST([LDTARGET])
AC_SUBST([TEST_LD_MODE])
AC_SUBST([MKLIB]) AC_SUBST([MKLIB])
AC_SUBST([ARFLAGS]) AC_SUBST([ARFLAGS])
AC_SUBST([AROUT]) AC_SUBST([AROUT])
AC_SUBST([CC_MM]) AC_SUBST([CC_MM])
dnl Determine whether libm must be linked to use e.g. log(3).
AC_SEARCH_LIBS([log], [m], , [AC_MSG_ERROR([Missing math functions])])
if test "x$ac_cv_search_log" != "xnone required" ; then
LM="$ac_cv_search_log"
else
LM=
fi
AC_SUBST(LM)
JE_COMPILABLE([__attribute__ syntax], JE_COMPILABLE([__attribute__ syntax],
[static __attribute__((unused)) void foo(void){}], [static __attribute__((unused)) void foo(void){}],
[], [],
@ -521,7 +419,6 @@ fi
dnl Check for tls_model attribute support (clang 3.0 still lacks support). dnl Check for tls_model attribute support (clang 3.0 still lacks support).
SAVED_CFLAGS="${CFLAGS}" SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror]) JE_CFLAGS_APPEND([-Werror])
JE_CFLAGS_APPEND([-herror_on_warning])
JE_COMPILABLE([tls_model attribute], [], JE_COMPILABLE([tls_model attribute], [],
[static __thread int [static __thread int
__attribute__((tls_model("initial-exec"), unused)) foo; __attribute__((tls_model("initial-exec"), unused)) foo;
@ -537,7 +434,6 @@ fi
dnl Check for alloc_size attribute support. dnl Check for alloc_size attribute support.
SAVED_CFLAGS="${CFLAGS}" SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror]) JE_CFLAGS_APPEND([-Werror])
JE_CFLAGS_APPEND([-herror_on_warning])
JE_COMPILABLE([alloc_size attribute], [#include <stdlib.h>], JE_COMPILABLE([alloc_size attribute], [#include <stdlib.h>],
[void *foo(size_t size) __attribute__((alloc_size(1)));], [void *foo(size_t size) __attribute__((alloc_size(1)));],
[je_cv_alloc_size]) [je_cv_alloc_size])
@ -548,7 +444,6 @@ fi
dnl Check for format(gnu_printf, ...) attribute support. dnl Check for format(gnu_printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}" SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror]) JE_CFLAGS_APPEND([-Werror])
JE_CFLAGS_APPEND([-herror_on_warning])
JE_COMPILABLE([format(gnu_printf, ...) attribute], [#include <stdlib.h>], JE_COMPILABLE([format(gnu_printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(gnu_printf, 1, 2)));], [void *foo(const char *format, ...) __attribute__((format(gnu_printf, 1, 2)));],
[je_cv_format_gnu_printf]) [je_cv_format_gnu_printf])
@ -559,7 +454,6 @@ fi
dnl Check for format(printf, ...) attribute support. dnl Check for format(printf, ...) attribute support.
SAVED_CFLAGS="${CFLAGS}" SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror]) JE_CFLAGS_APPEND([-Werror])
JE_CFLAGS_APPEND([-herror_on_warning])
JE_COMPILABLE([format(printf, ...) attribute], [#include <stdlib.h>], JE_COMPILABLE([format(printf, ...) attribute], [#include <stdlib.h>],
[void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));], [void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));],
[je_cv_format_printf]) [je_cv_format_printf])
@ -681,15 +575,6 @@ AC_ARG_WITH([install_suffix],
install_suffix="$INSTALL_SUFFIX" install_suffix="$INSTALL_SUFFIX"
AC_SUBST([install_suffix]) AC_SUBST([install_suffix])
dnl Specify default malloc_conf.
AC_ARG_WITH([malloc_conf],
[AS_HELP_STRING([--with-malloc-conf=<malloc_conf>], [config.malloc_conf options string])],
[JEMALLOC_CONFIG_MALLOC_CONF="$with_malloc_conf"],
[JEMALLOC_CONFIG_MALLOC_CONF=""]
)
config_malloc_conf="$JEMALLOC_CONFIG_MALLOC_CONF"
AC_DEFINE_UNQUOTED([JEMALLOC_CONFIG_MALLOC_CONF], ["$config_malloc_conf"])
dnl Substitute @je_@ in jemalloc_protos.h.in, primarily to make generation of dnl Substitute @je_@ in jemalloc_protos.h.in, primarily to make generation of
dnl jemalloc_protos_jet.h easy. dnl jemalloc_protos_jet.h easy.
je_="je_" je_="je_"
@ -954,9 +839,9 @@ fi
AC_MSG_CHECKING([configured backtracing method]) AC_MSG_CHECKING([configured backtracing method])
AC_MSG_RESULT([$backtrace_method]) AC_MSG_RESULT([$backtrace_method])
if test "x$enable_prof" = "x1" ; then if test "x$enable_prof" = "x1" ; then
dnl Heap profiling uses the log(3) function. if test "x$abi" != "xpecoff"; then
if test "x$LM" != "x" ; then dnl Heap profiling uses the log(3) function.
LIBS="$LIBS $LM" LIBS="$LIBS -lm"
fi fi
AC_DEFINE([JEMALLOC_PROF], [ ]) AC_DEFINE([JEMALLOC_PROF], [ ])
@ -1125,28 +1010,11 @@ if test "x$enable_cache_oblivious" = "x1" ; then
fi fi
AC_SUBST([enable_cache_oblivious]) AC_SUBST([enable_cache_oblivious])
JE_COMPILABLE([a program using __builtin_unreachable], [
void foo (void) {
__builtin_unreachable();
}
], [
{
foo();
}
], [je_cv_gcc_builtin_unreachable])
if test "x${je_cv_gcc_builtin_unreachable}" = "xyes" ; then
AC_DEFINE([JEMALLOC_INTERNAL_UNREACHABLE], [__builtin_unreachable])
else
AC_DEFINE([JEMALLOC_INTERNAL_UNREACHABLE], [abort])
fi
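This removed block maps JEMALLOC_INTERNAL_UNREACHABLE to either an optimizer hint or a hard abort(). A sketch of what the builtin buys you, assuming gcc or clang:

    /* telling the optimizer a branch cannot happen lets it drop the check */
    int div_nonzero(int a, int b) {
        if (b == 0)
            __builtin_unreachable();  /* contract: callers never pass b == 0 */
        return a / b;
    }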
dnl ============================================================================ dnl ============================================================================
dnl Check for __builtin_ffsl(), then ffsl(3), and fail if neither are found. dnl Check for __builtin_ffsl(), then ffsl(3), and fail if neither are found.
dnl One of those two functions should (theoretically) exist on all platforms dnl One of those two functions should (theoretically) exist on all platforms
dnl that jemalloc currently has a chance of functioning on without modification. dnl that jemalloc currently has a chance of functioning on without modification.
dnl We additionally assume ffs[ll]() or __builtin_ffs[ll]() are defined if dnl We additionally assume ffs() or __builtin_ffs() are defined if
dnl ffsl() or __builtin_ffsl() are defined, respectively. dnl ffsl() or __builtin_ffsl() are defined, respectively.
JE_COMPILABLE([a program using __builtin_ffsl], [ JE_COMPILABLE([a program using __builtin_ffsl], [
#include <stdio.h> #include <stdio.h>
@ -1159,7 +1027,6 @@ JE_COMPILABLE([a program using __builtin_ffsl], [
} }
], [je_cv_gcc_builtin_ffsl]) ], [je_cv_gcc_builtin_ffsl])
if test "x${je_cv_gcc_builtin_ffsl}" = "xyes" ; then if test "x${je_cv_gcc_builtin_ffsl}" = "xyes" ; then
AC_DEFINE([JEMALLOC_INTERNAL_FFSLL], [__builtin_ffsll])
AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [__builtin_ffsl]) AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [__builtin_ffsl])
AC_DEFINE([JEMALLOC_INTERNAL_FFS], [__builtin_ffs]) AC_DEFINE([JEMALLOC_INTERNAL_FFS], [__builtin_ffs])
else else
@ -1174,7 +1041,6 @@ else
} }
], [je_cv_function_ffsl]) ], [je_cv_function_ffsl])
if test "x${je_cv_function_ffsl}" = "xyes" ; then if test "x${je_cv_function_ffsl}" = "xyes" ; then
AC_DEFINE([JEMALLOC_INTERNAL_FFSLL], [ffsll])
AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [ffsl]) AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [ffsl])
AC_DEFINE([JEMALLOC_INTERNAL_FFS], [ffs]) AC_DEFINE([JEMALLOC_INTERNAL_FFS], [ffs])
else else
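Both branches of the check above pin down ffsl semantics: the 1-based index of the least-significant set bit, or 0 when no bit is set. A quick illustration using the gcc/clang builtin the check probes for:

    #include <stdio.h>

    int main(void) {
        printf("%d\n", __builtin_ffsl(0x10L));  /* bit 4 is set: prints 5 */
        printf("%d\n", __builtin_ffsl(0L));     /* no bits set:  prints 0 */
        return 0;
    }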
@ -1234,7 +1100,7 @@ if test "x$LG_PAGE" = "xdetect"; then
if (f == NULL) { if (f == NULL) {
return 1; return 1;
} }
fprintf(f, "%d", result); fprintf(f, "%d\n", result);
fclose(f); fclose(f);
return 0; return 0;
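The probe runs a small program that writes the base-2 log of the system page size into a file, which configure then reads back as LG_PAGE. A standalone sketch of the same computation (sysconf name per POSIX):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        int lg = 0;
        while (page > 1) {  /* log2 of the page size */
            page >>= 1;
            lg++;
        }
        printf("%d\n", lg);  /* e.g. 12 for 4 KiB pages */
        return 0;
    }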
@ -1267,36 +1133,27 @@ dnl ============================================================================
dnl jemalloc configuration. dnl jemalloc configuration.
dnl dnl
AC_ARG_WITH([version], dnl Set VERSION if source directory is inside a git repository.
[AS_HELP_STRING([--with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid>], if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
[Version string])], dnl Pattern globs aren't powerful enough to match both single- and
[ dnl double-digit version numbers, so iterate over patterns to support up to
echo "${with_version}" | grep ['^[0-9]\+\.[0-9]\+\.[0-9]\+-[0-9]\+-g[0-9a-f]\+$'] 2>&1 1>/dev/null dnl version 99.99.99 without any accidental matches.
if test $? -ne 0 ; then rm -f "${objroot}VERSION"
AC_MSG_ERROR([${with_version} does not match <major>.<minor>.<bugfix>-<nrev>-g<gid>]) for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
'[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do
if test ! -e "${objroot}VERSION" ; then
(test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
if test $? -eq 0 ; then
mv "${objroot}VERSION.tmp" "${objroot}VERSION"
break
fi
fi fi
echo "$with_version" > "${objroot}VERSION" done
], [ fi
dnl Set VERSION if source directory is inside a git repository. rm -f "${objroot}VERSION.tmp"
if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
dnl Pattern globs aren't powerful enough to match both single- and
dnl double-digit version numbers, so iterate over patterns to support up
dnl to version 99.99.99 without any accidental matches.
for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
'[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9]' \
'[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do
(test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
if test $? -eq 0 ; then
mv "${objroot}VERSION.tmp" "${objroot}VERSION"
break
fi
done
fi
rm -f "${objroot}VERSION.tmp"
])
if test ! -e "${objroot}VERSION" ; then if test ! -e "${objroot}VERSION" ; then
if test ! -e "${srcroot}VERSION" ; then if test ! -e "${srcroot}VERSION" ; then
AC_MSG_RESULT( AC_MSG_RESULT(
@ -1329,101 +1186,17 @@ if test "x$abi" != "xpecoff" ; then
AC_CHECK_LIB([pthread], [pthread_create], [LIBS="$LIBS -lpthread"], AC_CHECK_LIB([pthread], [pthread_create], [LIBS="$LIBS -lpthread"],
[AC_SEARCH_LIBS([pthread_create], , , [AC_SEARCH_LIBS([pthread_create], , ,
AC_MSG_ERROR([libpthread is missing]))]) AC_MSG_ERROR([libpthread is missing]))])
JE_COMPILABLE([pthread_atfork(3)], [
#include <pthread.h>
], [
pthread_atfork((void *)0, (void *)0, (void *)0);
], [je_cv_pthread_atfork])
if test "x${je_cv_pthread_atfork}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_PTHREAD_ATFORK], [ ])
fi
fi fi
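The removed check probes pthread_atfork(3), which an allocator uses so a fork()ed child never inherits a mutex held by a thread that does not exist in the child. A sketch of the registration, with illustrative hook names:

    #include <pthread.h>

    static void prefork(void)         { /* acquire every allocator mutex */ }
    static void postfork_parent(void) { /* release them in the parent */ }
    static void postfork_child(void)  { /* reinitialize them in the child */ }

    static void install_fork_hooks(void) {
        pthread_atfork(prefork, postfork_parent, postfork_child);
    }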
CPPFLAGS="$CPPFLAGS -D_REENTRANT" CPPFLAGS="$CPPFLAGS -D_REENTRANT"
dnl Check whether clock_gettime(2) is in libc or librt. dnl Check whether clock_gettime(2) is in libc or librt. This function is only
AC_SEARCH_LIBS([clock_gettime], [rt]) dnl used in test code, so save the result to TESTLIBS to avoid poluting LIBS.
SAVED_LIBS="${LIBS}"
dnl Cray wrapper compiler often adds `-lrt` when using `-static`. Check with LIBS=
dnl `-dynamic` as well in case a user tries to dynamically link in jemalloc AC_SEARCH_LIBS([clock_gettime], [rt], [TESTLIBS="${LIBS}"])
if test "x$je_cv_cray_prgenv_wrapper" = "xyes" ; then AC_SUBST([TESTLIBS])
if test "$ac_cv_search_clock_gettime" != "-lrt"; then LIBS="${SAVED_LIBS}"
SAVED_CFLAGS="${CFLAGS}"
unset ac_cv_search_clock_gettime
JE_CFLAGS_APPEND([-dynamic])
AC_SEARCH_LIBS([clock_gettime], [rt])
CFLAGS="${SAVED_CFLAGS}"
fi
fi
dnl check for CLOCK_MONOTONIC_COARSE (Linux-specific).
JE_COMPILABLE([clock_gettime(CLOCK_MONOTONIC_COARSE, ...)], [
#include <time.h>
], [
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
], [je_cv_clock_monotonic_coarse])
if test "x${je_cv_clock_monotonic_coarse}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE])
fi
dnl check for CLOCK_MONOTONIC.
JE_COMPILABLE([clock_gettime(CLOCK_MONOTONIC, ...)], [
#include <unistd.h>
#include <time.h>
], [
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
#if !defined(_POSIX_MONOTONIC_CLOCK) || _POSIX_MONOTONIC_CLOCK < 0
# error _POSIX_MONOTONIC_CLOCK missing/invalid
#endif
], [je_cv_clock_monotonic])
if test "x${je_cv_clock_monotonic}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_CLOCK_MONOTONIC])
fi
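The removed probes detect monotonic clocks, which are immune to wall-clock adjustment and therefore suit internal timing. A minimal usage sketch; on older glibc this is exactly the call that may need -lrt, as checked earlier:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0)
            printf("%ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }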
dnl Check for mach_absolute_time().
JE_COMPILABLE([mach_absolute_time()], [
#include <mach/mach_time.h>
], [
mach_absolute_time();
], [je_cv_mach_absolute_time])
if test "x${je_cv_mach_absolute_time}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_MACH_ABSOLUTE_TIME])
fi
dnl Use syscall(2) (if available) by default.
AC_ARG_ENABLE([syscall],
[AS_HELP_STRING([--disable-syscall], [Disable use of syscall(2)])],
[if test "x$enable_syscall" = "xno" ; then
enable_syscall="0"
else
enable_syscall="1"
fi
],
[enable_syscall="1"]
)
if test "x$enable_syscall" = "x1" ; then
dnl Check if syscall(2) is usable. Treat warnings as errors, so that e.g. OS
dnl X 10.12's deprecation warning prevents use.
SAVED_CFLAGS="${CFLAGS}"
JE_CFLAGS_APPEND([-Werror])
JE_COMPILABLE([syscall(2)], [
#include <sys/syscall.h>
#include <unistd.h>
], [
syscall(SYS_write, 2, "hello", 5);
],
[je_cv_syscall])
CFLAGS="${SAVED_CFLAGS}"
if test "x$je_cv_syscall" = "xyes" ; then
AC_DEFINE([JEMALLOC_USE_SYSCALL], [ ])
fi
fi
dnl Check if the GNU-specific secure_getenv function exists. dnl Check if the GNU-specific secure_getenv function exists.
AC_CHECK_FUNC([secure_getenv], AC_CHECK_FUNC([secure_getenv],
@ -1479,17 +1252,9 @@ fi
], ],
[enable_lazy_lock=""] [enable_lazy_lock=""]
) )
if test "x${enable_lazy_lock}" = "x" ; then if test "x$enable_lazy_lock" = "x" -a "x${force_lazy_lock}" = "x1" ; then
if test "x${force_lazy_lock}" = "x1" ; then AC_MSG_RESULT([Forcing lazy-lock to avoid allocator/threading bootstrap issues])
AC_MSG_RESULT([Forcing lazy-lock to avoid allocator/threading bootstrap issues]) enable_lazy_lock="1"
enable_lazy_lock="1"
else
enable_lazy_lock="0"
fi
fi
if test "x${enable_lazy_lock}" = "x1" -a "x${abi}" = "xpecoff" ; then
AC_MSG_RESULT([Forcing no lazy-lock because thread creation monitoring is unimplemented])
enable_lazy_lock="0"
fi fi
if test "x$enable_lazy_lock" = "x1" ; then if test "x$enable_lazy_lock" = "x1" ; then
if test "x$abi" != "xpecoff" ; then if test "x$abi" != "xpecoff" ; then
@ -1500,6 +1265,8 @@ if test "x$enable_lazy_lock" = "x1" ; then
]) ])
fi fi
AC_DEFINE([JEMALLOC_LAZY_LOCK], [ ]) AC_DEFINE([JEMALLOC_LAZY_LOCK], [ ])
else
enable_lazy_lock="0"
fi fi
AC_SUBST([enable_lazy_lock]) AC_SUBST([enable_lazy_lock])
@ -1622,41 +1389,12 @@ dnl Check for madvise(2).
JE_COMPILABLE([madvise(2)], [ JE_COMPILABLE([madvise(2)], [
#include <sys/mman.h> #include <sys/mman.h>
], [ ], [
madvise((void *)0, 0, 0); {
madvise((void *)0, 0, 0);
}
], [je_cv_madvise]) ], [je_cv_madvise])
if test "x${je_cv_madvise}" = "xyes" ; then if test "x${je_cv_madvise}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_MADVISE], [ ]) AC_DEFINE([JEMALLOC_HAVE_MADVISE], [ ])
dnl Check for madvise(..., MADV_FREE).
JE_COMPILABLE([madvise(..., MADV_FREE)], [
#include <sys/mman.h>
], [
madvise((void *)0, 0, MADV_FREE);
], [je_cv_madv_free])
if test "x${je_cv_madv_free}" = "xyes" ; then
AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
fi
dnl Check for madvise(..., MADV_DONTNEED).
JE_COMPILABLE([madvise(..., MADV_DONTNEED)], [
#include <sys/mman.h>
], [
madvise((void *)0, 0, MADV_DONTNEED);
], [je_cv_madv_dontneed])
if test "x${je_cv_madv_dontneed}" = "xyes" ; then
AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ])
fi
dnl Check for madvise(..., MADV_[NO]HUGEPAGE).
JE_COMPILABLE([madvise(..., MADV_[[NO]]HUGEPAGE)], [
#include <sys/mman.h>
], [
madvise((void *)0, 0, MADV_HUGEPAGE);
madvise((void *)0, 0, MADV_NOHUGEPAGE);
], [je_cv_thp])
if test "x${je_cv_thp}" = "xyes" ; then
AC_DEFINE([JEMALLOC_THP], [ ])
fi
fi fi
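These checks pick a purge strategy: MADV_FREE reclaims pages lazily under memory pressure, while MADV_DONTNEED drops them immediately. A sketch of the resulting call, assuming addr/len describe a page-aligned mapping the caller owns:

    #include <sys/mman.h>

    static void purge_pages(void *addr, size_t len) {
    #if defined(MADV_FREE)
        madvise(addr, len, MADV_FREE);      /* lazy: reclaimed under pressure */
    #elif defined(MADV_DONTNEED)
        madvise(addr, len, MADV_DONTNEED);  /* eager: dropped immediately */
    #endif
    }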
dnl ============================================================================ dnl ============================================================================
@ -1716,25 +1454,6 @@ if test "x${je_cv_builtin_clz}" = "xyes" ; then
AC_DEFINE([JEMALLOC_HAVE_BUILTIN_CLZ], [ ]) AC_DEFINE([JEMALLOC_HAVE_BUILTIN_CLZ], [ ])
fi fi
dnl ============================================================================
dnl Check for os_unfair_lock operations as provided on Darwin.
JE_COMPILABLE([Darwin os_unfair_lock_*()], [
#include <os/lock.h>
#include <AvailabilityMacros.h>
], [
#if MAC_OS_X_VERSION_MIN_REQUIRED < 101200
#error "os_unfair_lock is not supported"
#else
os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;
os_unfair_lock_lock(&lock);
os_unfair_lock_unlock(&lock);
#endif
], [je_cv_os_unfair_lock])
if test "x${je_cv_os_unfair_lock}" = "xyes" ; then
AC_DEFINE([JEMALLOC_OS_UNFAIR_LOCK], [ ])
fi
dnl ============================================================================ dnl ============================================================================
dnl Check for spinlock(3) operations as provided on Darwin. dnl Check for spinlock(3) operations as provided on Darwin.
@ -1979,11 +1698,11 @@ AC_MSG_RESULT([])
AC_MSG_RESULT([CONFIG : ${CONFIG}]) AC_MSG_RESULT([CONFIG : ${CONFIG}])
AC_MSG_RESULT([CC : ${CC}]) AC_MSG_RESULT([CC : ${CC}])
AC_MSG_RESULT([CFLAGS : ${CFLAGS}]) AC_MSG_RESULT([CFLAGS : ${CFLAGS}])
AC_MSG_RESULT([EXTRA_CFLAGS : ${EXTRA_CFLAGS}])
AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}]) AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}]) AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}])
AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}]) AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}])
AC_MSG_RESULT([LIBS : ${LIBS}]) AC_MSG_RESULT([LIBS : ${LIBS}])
AC_MSG_RESULT([TESTLIBS : ${TESTLIBS}])
AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}]) AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}])
AC_MSG_RESULT([]) AC_MSG_RESULT([])
AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}]) AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}])
@ -2005,7 +1724,6 @@ AC_MSG_RESULT([JEMALLOC_PREFIX : ${JEMALLOC_PREFIX}])
AC_MSG_RESULT([JEMALLOC_PRIVATE_NAMESPACE]) AC_MSG_RESULT([JEMALLOC_PRIVATE_NAMESPACE])
AC_MSG_RESULT([ : ${JEMALLOC_PRIVATE_NAMESPACE}]) AC_MSG_RESULT([ : ${JEMALLOC_PRIVATE_NAMESPACE}])
AC_MSG_RESULT([install_suffix : ${install_suffix}]) AC_MSG_RESULT([install_suffix : ${install_suffix}])
AC_MSG_RESULT([malloc_conf : ${config_malloc_conf}])
AC_MSG_RESULT([autogen : ${enable_autogen}]) AC_MSG_RESULT([autogen : ${enable_autogen}])
AC_MSG_RESULT([cc-silence : ${enable_cc_silence}]) AC_MSG_RESULT([cc-silence : ${enable_cc_silence}])
AC_MSG_RESULT([debug : ${enable_debug}]) AC_MSG_RESULT([debug : ${enable_debug}])

[File: DocBook XSL stylesheet]
@ -1,5 +1,4 @@
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:import href="@XSLROOT@/html/docbook.xsl"/> <xsl:import href="@XSLROOT@/html/docbook.xsl"/>
<xsl:import href="@abs_srcroot@doc/stylesheet.xsl"/> <xsl:import href="@abs_srcroot@doc/stylesheet.xsl"/>
<xsl:output method="xml" encoding="utf-8"/>
</xsl:stylesheet> </xsl:stylesheet>

[Two file diffs suppressed by the viewer: one too large, one with over-long lines]

[File: jemalloc man page source (jemalloc.xml.in)]
@ -52,7 +52,7 @@
<title>LIBRARY</title> <title>LIBRARY</title>
<para>This manual describes jemalloc @jemalloc_version@. More information <para>This manual describes jemalloc @jemalloc_version@. More information
can be found at the <ulink can be found at the <ulink
url="http://jemalloc.net/">jemalloc website</ulink>.</para> url="http://www.canonware.com/jemalloc/">jemalloc website</ulink>.</para>
</refsect1> </refsect1>
<refsynopsisdiv> <refsynopsisdiv>
<title>SYNOPSIS</title> <title>SYNOPSIS</title>
@ -180,20 +180,20 @@
<refsect2> <refsect2>
<title>Standard API</title> <title>Standard API</title>
<para>The <function>malloc()</function> function allocates <para>The <function>malloc<parameter/></function> function allocates
<parameter>size</parameter> bytes of uninitialized memory. The allocated <parameter>size</parameter> bytes of uninitialized memory. The allocated
space is suitably aligned (after possible pointer coercion) for storage space is suitably aligned (after possible pointer coercion) for storage
of any type of object.</para> of any type of object.</para>
<para>The <function>calloc()</function> function allocates <para>The <function>calloc<parameter/></function> function allocates
space for <parameter>number</parameter> objects, each space for <parameter>number</parameter> objects, each
<parameter>size</parameter> bytes in length. The result is identical to <parameter>size</parameter> bytes in length. The result is identical to
calling <function>malloc()</function> with an argument of calling <function>malloc<parameter/></function> with an argument of
<parameter>number</parameter> * <parameter>size</parameter>, with the <parameter>number</parameter> * <parameter>size</parameter>, with the
exception that the allocated memory is explicitly initialized to zero exception that the allocated memory is explicitly initialized to zero
bytes.</para> bytes.</para>
<para>The <function>posix_memalign()</function> function <para>The <function>posix_memalign<parameter/></function> function
allocates <parameter>size</parameter> bytes of memory such that the allocates <parameter>size</parameter> bytes of memory such that the
allocation's base address is a multiple of allocation's base address is a multiple of
<parameter>alignment</parameter>, and returns the allocation in the value <parameter>alignment</parameter>, and returns the allocation in the value
@ -201,7 +201,7 @@
<parameter>alignment</parameter> must be a power of 2 at least as large as <parameter>alignment</parameter> must be a power of 2 at least as large as
<code language="C">sizeof(<type>void *</type>)</code>.</para> <code language="C">sizeof(<type>void *</type>)</code>.</para>
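For example, a 64-byte-aligned allocation via posix_memalign (64 is a power of 2 and at least sizeof(void *), as required); a sketch, not part of the manual:

    #include <stdlib.h>

    int main(void) {
        void *p = NULL;
        if (posix_memalign(&p, 64, 1024) == 0) {
            /* p is 64-byte aligned; release it like any other allocation */
            free(p);
        }
        return 0;
    }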
<para>The <function>aligned_alloc()</function> function <para>The <function>aligned_alloc<parameter/></function> function
allocates <parameter>size</parameter> bytes of memory such that the allocates <parameter>size</parameter> bytes of memory such that the
allocation's base address is a multiple of allocation's base address is a multiple of
<parameter>alignment</parameter>. The requested <parameter>alignment</parameter>. The requested
@ -209,7 +209,7 @@
undefined if <parameter>size</parameter> is not an integral multiple of undefined if <parameter>size</parameter> is not an integral multiple of
<parameter>alignment</parameter>.</para> <parameter>alignment</parameter>.</para>
<para>The <function>realloc()</function> function changes the <para>The <function>realloc<parameter/></function> function changes the
size of the previously allocated memory referenced by size of the previously allocated memory referenced by
<parameter>ptr</parameter> to <parameter>size</parameter> bytes. The <parameter>ptr</parameter> to <parameter>size</parameter> bytes. The
contents of the memory are unchanged up to the lesser of the new and old contents of the memory are unchanged up to the lesser of the new and old
@ -217,26 +217,26 @@
portion of the memory are undefined. Upon success, the memory referenced portion of the memory are undefined. Upon success, the memory referenced
by <parameter>ptr</parameter> is freed and a pointer to the newly by <parameter>ptr</parameter> is freed and a pointer to the newly
allocated memory is returned. Note that allocated memory is returned. Note that
<function>realloc()</function> may move the memory allocation, <function>realloc<parameter/></function> may move the memory allocation,
resulting in a different return value than <parameter>ptr</parameter>. resulting in a different return value than <parameter>ptr</parameter>.
If <parameter>ptr</parameter> is <constant>NULL</constant>, the If <parameter>ptr</parameter> is <constant>NULL</constant>, the
<function>realloc()</function> function behaves identically to <function>realloc<parameter/></function> function behaves identically to
<function>malloc()</function> for the specified size.</para> <function>malloc<parameter/></function> for the specified size.</para>
<para>The <function>free()</function> function causes the <para>The <function>free<parameter/></function> function causes the
allocated memory referenced by <parameter>ptr</parameter> to be made allocated memory referenced by <parameter>ptr</parameter> to be made
available for future allocations. If <parameter>ptr</parameter> is available for future allocations. If <parameter>ptr</parameter> is
<constant>NULL</constant>, no action occurs.</para> <constant>NULL</constant>, no action occurs.</para>
</refsect2> </refsect2>
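The realloc() caveat above, that the block may move and the original stays valid on failure, implies the usual two-pointer idiom:

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf = malloc(16);
        if (buf == NULL)
            return 1;
        strcpy(buf, "hello");
        char *tmp = realloc(buf, 64);  /* may return a different address */
        if (tmp == NULL) {
            free(buf);                 /* old block is untouched on failure */
            return 1;
        }
        buf = tmp;
        free(buf);
        return 0;
    }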
<refsect2> <refsect2>
<title>Non-standard API</title> <title>Non-standard API</title>
<para>The <function>mallocx()</function>, <para>The <function>mallocx<parameter/></function>,
<function>rallocx()</function>, <function>rallocx<parameter/></function>,
<function>xallocx()</function>, <function>xallocx<parameter/></function>,
<function>sallocx()</function>, <function>sallocx<parameter/></function>,
<function>dallocx()</function>, <function>dallocx<parameter/></function>,
<function>sdallocx()</function>, and <function>sdallocx<parameter/></function>, and
<function>nallocx()</function> functions all have a <function>nallocx<parameter/></function> functions all have a
<parameter>flags</parameter> argument that can be used to specify <parameter>flags</parameter> argument that can be used to specify
options. The functions only check the options that are contextually options. The functions only check the options that are contextually
relevant. Use bitwise or (<code language="C">|</code>) operations to relevant. Use bitwise or (<code language="C">|</code>) operations to
@ -307,19 +307,21 @@
</variablelist> </variablelist>
</para> </para>
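Combining flags with bitwise or, as the paragraph above describes; a sketch using jemalloc's non-standard API (build with -ljemalloc):

    #include <jemalloc/jemalloc.h>

    int main(void) {
        /* at least 100 bytes, 64-byte aligned, zero-initialized */
        void *p = mallocx(100, MALLOCX_ALIGN(64) | MALLOCX_ZERO);
        if (p != NULL)
            dallocx(p, 0);
        return 0;
    }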
<para>The <function>mallocx()</function> function allocates at <para>The <function>mallocx<parameter/></function> function allocates at
least <parameter>size</parameter> bytes of memory, and returns a pointer least <parameter>size</parameter> bytes of memory, and returns a pointer
to the base address of the allocation. Behavior is undefined if to the base address of the allocation. Behavior is undefined if
<parameter>size</parameter> is <constant>0</constant>.</para> <parameter>size</parameter> is <constant>0</constant>, or if request size
overflows due to size class and/or alignment constraints.</para>
<para>The <function>rallocx()</function> function resizes the <para>The <function>rallocx<parameter/></function> function resizes the
allocation at <parameter>ptr</parameter> to be at least allocation at <parameter>ptr</parameter> to be at least
<parameter>size</parameter> bytes, and returns a pointer to the base <parameter>size</parameter> bytes, and returns a pointer to the base
address of the resulting allocation, which may or may not have moved from address of the resulting allocation, which may or may not have moved from
its original location. Behavior is undefined if its original location. Behavior is undefined if
<parameter>size</parameter> is <constant>0</constant>.</para> <parameter>size</parameter> is <constant>0</constant>, or if request size
overflows due to size class and/or alignment constraints.</para>
<para>The <function>xallocx()</function> function resizes the <para>The <function>xallocx<parameter/></function> function resizes the
allocation at <parameter>ptr</parameter> in place to be at least allocation at <parameter>ptr</parameter> in place to be at least
<parameter>size</parameter> bytes, and returns the real size of the <parameter>size</parameter> bytes, and returns the real size of the
allocation. If <parameter>extra</parameter> is non-zero, an attempt is allocation. If <parameter>extra</parameter> is non-zero, an attempt is
@ -332,32 +334,32 @@
language="C">(<parameter>size</parameter> + <parameter>extra</parameter> language="C">(<parameter>size</parameter> + <parameter>extra</parameter>
&gt; <constant>SIZE_T_MAX</constant>)</code>.</para> &gt; <constant>SIZE_T_MAX</constant>)</code>.</para>
<para>The <function>sallocx()</function> function returns the <para>The <function>sallocx<parameter/></function> function returns the
real size of the allocation at <parameter>ptr</parameter>.</para> real size of the allocation at <parameter>ptr</parameter>.</para>
<para>The <function>dallocx()</function> function causes the <para>The <function>dallocx<parameter/></function> function causes the
memory referenced by <parameter>ptr</parameter> to be made available for memory referenced by <parameter>ptr</parameter> to be made available for
future allocations.</para> future allocations.</para>
<para>The <function>sdallocx()</function> function is an <para>The <function>sdallocx<parameter/></function> function is an
extension of <function>dallocx()</function> with a extension of <function>dallocx<parameter/></function> with a
<parameter>size</parameter> parameter to allow the caller to pass in the <parameter>size</parameter> parameter to allow the caller to pass in the
allocation size as an optimization. The minimum valid input size is the allocation size as an optimization. The minimum valid input size is the
original requested size of the allocation, and the maximum valid input original requested size of the allocation, and the maximum valid input
size is the corresponding value returned by size is the corresponding value returned by
<function>nallocx()</function> or <function>nallocx<parameter/></function> or
<function>sallocx()</function>.</para> <function>sallocx<parameter/></function>.</para>
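For illustration only, a minimal sketch of the sized-deallocation pattern described above; mallocx() and sdallocx() are the documented entry points, while the surrounding function is hypothetical:

#include <jemalloc/jemalloc.h>

/* Sketch: keep the request size at hand so deallocation can use
 * sdallocx() and skip the size lookup dallocx() would perform. */
static void
buffer_example(void)
{
	size_t sz = 4096;
	void *buf = mallocx(sz, 0);

	if (buf == NULL)
		return;
	/* ... use buf ... */
	sdallocx(buf, sz, 0);
}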
-<para>The <function>nallocx()</function> function allocates no
+<para>The <function>nallocx<parameter/></function> function allocates no
memory, but it performs the same size computation as the
-<function>mallocx()</function> function, and returns the real
+<function>mallocx<parameter/></function> function, and returns the real
size of the allocation that would result from the equivalent
-<function>mallocx()</function> function call, or
-<constant>0</constant> if the inputs exceed the maximum supported size
-class and/or alignment. Behavior is undefined if
-<parameter>size</parameter> is <constant>0</constant>.</para>
+<function>mallocx<parameter/></function> function call. Behavior is
+undefined if <parameter>size</parameter> is <constant>0</constant>, or if
+request size overflows due to size class and/or alignment
+constraints.</para>
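A small sketch of querying the would-be real size before allocating; the helper name and the 16-byte alignment are arbitrary examples, not part of the manual:

#include <jemalloc/jemalloc.h>

/* Sketch: nallocx() performs mallocx()'s size computation without
 * allocating. A zero result means the computation could not be
 * performed for these inputs. */
static size_t
real_size_for_request(void)
{
	return (nallocx(100, MALLOCX_ALIGN(16)));
}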
-<para>The <function>mallctl()</function> function provides a
+<para>The <function>mallctl<parameter/></function> function provides a
general interface for introspecting the memory allocator, as well as
setting modifiable parameters and triggering actions. The
period-separated <parameter>name</parameter> argument specifies a
@@ -372,12 +374,12 @@
<parameter>newlen</parameter>; otherwise pass <constant>NULL</constant>
and <constant>0</constant>.</para>
-<para>The <function>mallctlnametomib()</function> function
+<para>The <function>mallctlnametomib<parameter/></function> function
provides a way to avoid repeated name lookups for applications that
repeatedly query the same portion of the namespace, by translating a name
-to a <quote>Management Information Base</quote> (MIB) that can be passed
-repeatedly to <function>mallctlbymib()</function>. Upon
-successful return from <function>mallctlnametomib()</function>,
+to a &ldquo;Management Information Base&rdquo; (MIB) that can be passed
+repeatedly to <function>mallctlbymib<parameter/></function>. Upon
+successful return from <function>mallctlnametomib<parameter/></function>,
<parameter>mibp</parameter> contains an array of
<parameter>*miblenp</parameter> integers, where
<parameter>*miblenp</parameter> is the lesser of the number of components
@@ -406,44 +408,43 @@ for (i = 0; i < nbins; i++) {
	mib[2] = i;
	len = sizeof(bin_size);
-	mallctlbymib(mib, miblen, (void *)&bin_size, &len, NULL, 0);
+	mallctlbymib(mib, miblen, &bin_size, &len, NULL, 0);
	/* Do something with bin_size... */
}]]></programlisting></para>
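For context, the loop above is the tail of the MIB-caching pattern this paragraph documents; the complete fragment, reconstructed from the surrounding manual text, queries arenas.nbins and then each arenas.bin.<i>.size:

unsigned nbins, i;
size_t mib[4];
size_t len, miblen;

len = sizeof(nbins);
mallctl("arenas.nbins", &nbins, &len, NULL, 0);

miblen = 4;
mallctlnametomib("arenas.bin.0.size", mib, &miblen);
for (i = 0; i < nbins; i++) {
	size_t bin_size;

	mib[2] = i;
	len = sizeof(bin_size);
	mallctlbymib(mib, miblen, &bin_size, &len, NULL, 0);
	/* Do something with bin_size... */
}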
-<para>The <function>malloc_stats_print()</function> function writes
-summary statistics via the <parameter>write_cb</parameter> callback
-function pointer and <parameter>cbopaque</parameter> data passed to
-<parameter>write_cb</parameter>, or <function>malloc_message()</function>
-if <parameter>write_cb</parameter> is <constant>NULL</constant>. The
-statistics are presented in human-readable form unless <quote>J</quote> is
-specified as a character within the <parameter>opts</parameter> string, in
-which case the statistics are presented in <ulink
-url="http://www.json.org/">JSON format</ulink>. This function can be
-called repeatedly. General information that never changes during
-execution can be omitted by specifying <quote>g</quote> as a character
+<para>The <function>malloc_stats_print<parameter/></function> function
+writes human-readable summary statistics via the
+<parameter>write_cb</parameter> callback function pointer and
+<parameter>cbopaque</parameter> data passed to
+<parameter>write_cb</parameter>, or
+<function>malloc_message<parameter/></function> if
+<parameter>write_cb</parameter> is <constant>NULL</constant>. This
+function can be called repeatedly. General information that never
+changes during execution can be omitted by specifying "g" as a character
within the <parameter>opts</parameter> string. Note that
-<function>malloc_message()</function> uses the
-<function>mallctl*()</function> functions internally, so inconsistent
-statistics can be reported if multiple threads use these functions
-simultaneously. If <option>--enable-stats</option> is specified during
-configuration, <quote>m</quote> and <quote>a</quote> can be specified to
-omit merged arena and per arena statistics, respectively;
-<quote>b</quote>, <quote>l</quote>, and <quote>h</quote> can be specified
-to omit per size class statistics for bins, large objects, and huge
-objects, respectively. Unrecognized characters are silently ignored.
-Note that thread caching may prevent some statistics from being completely
-up to date, since extra locking would be required to merge counters that
-track thread cache operations.</para>
+<function>malloc_message<parameter/></function> uses the
+<function>mallctl*<parameter/></function> functions internally, so
+inconsistent statistics can be reported if multiple threads use these
+functions simultaneously. If <option>--enable-stats</option> is
+specified during configuration, &ldquo;m&rdquo; and &ldquo;a&rdquo; can
+be specified to omit merged arena and per arena statistics, respectively;
+&ldquo;b&rdquo;, &ldquo;l&rdquo;, and &ldquo;h&rdquo; can be specified to
+omit per size class statistics for bins, large objects, and huge objects,
+respectively. Unrecognized characters are silently ignored. Note that
+thread caching may prevent some statistics from being completely up to
+date, since extra locking would be required to merge counters that track
+thread cache operations.
+</para>
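A minimal sketch of a direct call, assuming only the signature documented here; "g" suppresses the never-changing general information:

#include <jemalloc/jemalloc.h>

/* Sketch: dump current allocator statistics to stderr. */
static void
dump_stats(void)
{
	malloc_stats_print(NULL, NULL, "g");
}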
-<para>The <function>malloc_usable_size()</function> function
+<para>The <function>malloc_usable_size<parameter/></function> function
returns the usable size of the allocation pointed to by
<parameter>ptr</parameter>. The return value may be larger than the size
that was requested during allocation. The
-<function>malloc_usable_size()</function> function is not a
-mechanism for in-place <function>realloc()</function>; rather
+<function>malloc_usable_size<parameter/></function> function is not a
+mechanism for in-place <function>realloc<parameter/></function>; rather
it is provided solely as a tool for introspection purposes. Any
discrepancy between the requested allocation size and the size reported
-by <function>malloc_usable_size()</function> should not be
+by <function>malloc_usable_size<parameter/></function> should not be
depended on, since such behavior is entirely implementation-dependent.
</para>
</refsect2>
@@ -454,20 +455,19 @@ for (i = 0; i < nbins; i++) {
routines, the allocator initializes its internals based in part on various
options that can be specified at compile- or run-time.</para>
-<para>The string specified via <option>--with-malloc-conf</option>, the
-string pointed to by the global variable <varname>malloc_conf</varname>, the
-<quote>name</quote> of the file referenced by the symbolic link named
-<filename class="symlink">/etc/malloc.conf</filename>, and the value of the
+<para>The string pointed to by the global variable
+<varname>malloc_conf</varname>, the &ldquo;name&rdquo; of the file
+referenced by the symbolic link named <filename
+class="symlink">/etc/malloc.conf</filename>, and the value of the
environment variable <envar>MALLOC_CONF</envar>, will be interpreted, in
that order, from left to right as options. Note that
<varname>malloc_conf</varname> may be read before
-<function>main()</function> is entered, so the declaration of
+<function>main<parameter/></function> is entered, so the declaration of
<varname>malloc_conf</varname> should specify an initializer that contains
-the final value to be read by jemalloc. <option>--with-malloc-conf</option>
-and <varname>malloc_conf</varname> are compile-time mechanisms, whereas
-<filename class="symlink">/etc/malloc.conf</filename> and
-<envar>MALLOC_CONF</envar> can be safely set any time prior to program
-invocation.</para>
+the final value to be read by jemalloc. <varname>malloc_conf</varname> is
+a compile-time setting, whereas <filename
+class="symlink">/etc/malloc.conf</filename> and <envar>MALLOC_CONF</envar>
+can be safely set any time prior to program invocation.</para>
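For example, a compile-time default could be embedded as follows; the option values are arbitrary illustrations, not recommendations:

/* Sketch: read by jemalloc before main() is entered. */
const char *malloc_conf = "narenas:4,lg_chunk:22";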
<para>An options string is a comma-separated list of option:value pairs.
There is one key corresponding to each <link
@@ -517,18 +517,23 @@ for (i = 0; i < nbins; i++) {
common case, but it increases memory usage and fragmentation, since a
bounded number of objects can remain allocated in each thread cache.</para>
-<para>Memory is conceptually broken into equal-sized chunks, where the chunk
-size is a power of two that is greater than the page size. Chunks are
-always aligned to multiples of the chunk size. This alignment makes it
-possible to find metadata for user objects very quickly. User objects are
-broken into three categories according to size: small, large, and huge.
-Multiple small and large objects can reside within a single chunk, whereas
-huge objects each have one or more chunks backing them. Each chunk that
-contains small and/or large objects tracks its contents as runs of
+<para>Memory is conceptually broken into equal-sized chunks, where the
+chunk size is a power of two that is greater than the page size. Chunks
+are always aligned to multiples of the chunk size. This alignment makes it
+possible to find metadata for user objects very quickly.</para>
+<para>User objects are broken into three categories according to size:
+small, large, and huge. Small and large objects are managed entirely by
+arenas; huge objects are additionally aggregated in a single data structure
+that is shared by all threads. Huge objects are typically used by
+applications infrequently enough that this single data structure is not a
+scalability issue.</para>
+<para>Each chunk that is managed by an arena tracks its contents as runs of
contiguous pages (unused, backing a set of small objects, or backing one
-large object). The combination of chunk alignment and chunk page maps makes
-it possible to determine all metadata regarding small and large allocations
-in constant time.</para>
+large object). The combination of chunk alignment and chunk page maps
+makes it possible to determine all metadata regarding small and large
+allocations in constant time.</para>
<para>Small objects are managed in groups by page runs. Each run maintains
a bitmap to track which regions are in use. Allocation requests that are no
@@ -541,8 +546,8 @@ for (i = 0; i < nbins; i++) {
are smaller than four times the page size, large size classes are smaller
than the chunk size (see the <link
linkend="opt.lg_chunk"><mallctl>opt.lg_chunk</mallctl></link> option), and
-huge size classes extend from the chunk size up to the largest size class
-that does not exceed <constant>PTRDIFF_MAX</constant>.</para>
+huge size classes extend from the chunk size up to one size class less than
+the full address space size.</para>
<para>Allocations are packed tightly together, which can be an issue for
multi-threaded applications. If you need to assure that allocations do not
@@ -550,14 +555,14 @@ for (i = 0; i < nbins; i++) {
nearest multiple of the cacheline size, or specify cacheline alignment when
allocating.</para>
-<para>The <function>realloc()</function>,
-<function>rallocx()</function>, and
-<function>xallocx()</function> functions may resize allocations
+<para>The <function>realloc<parameter/></function>,
+<function>rallocx<parameter/></function>, and
+<function>xallocx<parameter/></function> functions may resize allocations
without moving them under limited circumstances. Unlike the
-<function>*allocx()</function> API, the standard API does not
+<function>*allocx<parameter/></function> API, the standard API does not
officially round up the usable size of an allocation to the nearest size
class, so technically it is necessary to call
-<function>realloc()</function> to grow e.g. a 9-byte allocation to
+<function>realloc<parameter/></function> to grow e.g. a 9-byte allocation to
16 bytes, or shrink a 16-byte allocation to 9 bytes. Growth and shrinkage
trivially succeed in place as long as the pre-size and post-size both round
up to the same size class. No other API guarantees are made regarding
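A sketch of the resize-without-moving behavior, assuming the documented xallocx()/rallocx() semantics; the helper is invented and assumes newsz is non-zero:

#include <jemalloc/jemalloc.h>

/* Sketch: try to grow in place first; fall back to rallocx(),
 * which is allowed to move the allocation. */
static void *
grow(void *ptr, size_t newsz)
{
	if (xallocx(ptr, newsz, 0, 0) >= newsz)
		return (ptr);		/* grown in place */
	return (rallocx(ptr, newsz, 0));	/* may move */
}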
@@ -660,7 +665,7 @@ for (i = 0; i < nbins; i++) {
<entry>[1280 KiB, 1536 KiB, 1792 KiB]</entry>
</row>
<row>
-<entry morerows="8">Huge</entry>
+<entry morerows="6">Huge</entry>
<entry>256 KiB</entry>
<entry>[2 MiB]</entry>
</row>
@@ -688,14 +693,6 @@ for (i = 0; i < nbins; i++) {
<entry>...</entry>
<entry>...</entry>
</row>
-<row>
-<entry>512 PiB</entry>
-<entry>[2560 PiB, 3 EiB, 3584 PiB, 4 EiB]</entry>
-</row>
-<row>
-<entry>1 EiB</entry>
-<entry>[5 EiB, 6 EiB, 7 EiB]</entry>
-</row>
</tbody>
</tgroup>
</table>
@@ -703,7 +700,7 @@
<refsect1 id="mallctl_namespace">
<title>MALLCTL NAMESPACE</title>
<para>The following names are defined in the namespace accessible via the
-<function>mallctl*()</function> functions. Value types are
+<function>mallctl*<parameter/></function> functions. Value types are
specified in parentheses, their readable/writable statuses are encoded as
<literal>rw</literal>, <literal>r-</literal>, <literal>-w</literal>, or
<literal>--</literal>, and required build configuration flags follow, if
@@ -734,7 +731,7 @@ for (i = 0; i < nbins; i++) {
<literal>rw</literal>
</term>
<listitem><para>If a value is passed in, refresh the data from which
-the <function>mallctl*()</function> functions report values,
+the <function>mallctl*<parameter/></function> functions report values,
and increment the epoch. Return the current epoch. This is useful for
detecting whether another thread caused a refresh.</para></listitem>
</varlistentry>
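A sketch of the refresh idiom this entry implies, assuming the documented mallctl() signature:

#include <stdint.h>
#include <jemalloc/jemalloc.h>

/* Sketch: write any value to "epoch" to refresh cached statistics,
 * and read back the current epoch in the same call. */
static void
refresh_stats(void)
{
	uint64_t epoch = 1;
	size_t sz = sizeof(epoch);

	mallctl("epoch", &epoch, &sz, &epoch, sz);
}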
@@ -779,17 +776,6 @@ for (i = 0; i < nbins; i++) {
during build configuration.</para></listitem>
</varlistentry>
-<varlistentry id="config.malloc_conf">
-<term>
-<mallctl>config.malloc_conf</mallctl>
-(<type>const char *</type>)
-<literal>r-</literal>
-</term>
-<listitem><para>Embedded configure-time-specified run-time options
-string, empty unless <option>--with-malloc-conf</option> was specified
-during build configuration.</para></listitem>
-</varlistentry>
<varlistentry id="config.munmap">
<term>
<mallctl>config.munmap</mallctl>
@@ -918,12 +904,12 @@ for (i = 0; i < nbins; i++) {
settings are supported if
<citerefentry><refentrytitle>sbrk</refentrytitle>
<manvolnum>2</manvolnum></citerefentry> is supported by the operating
-system: <quote>disabled</quote>, <quote>primary</quote>, and
-<quote>secondary</quote>; otherwise only <quote>disabled</quote> is
-supported. The default is <quote>secondary</quote> if
+system: &ldquo;disabled&rdquo;, &ldquo;primary&rdquo;, and
+&ldquo;secondary&rdquo;; otherwise only &ldquo;disabled&rdquo; is
+supported. The default is &ldquo;secondary&rdquo; if
<citerefentry><refentrytitle>sbrk</refentrytitle>
<manvolnum>2</manvolnum></citerefentry> is supported by the operating
-system; <quote>disabled</quote> otherwise.
+system; &ldquo;disabled&rdquo; otherwise.
</para></listitem>
</varlistentry>
@@ -943,7 +929,7 @@ for (i = 0; i < nbins; i++) {
<varlistentry id="opt.narenas">
<term>
<mallctl>opt.narenas</mallctl>
-(<type>unsigned</type>)
+(<type>size_t</type>)
<literal>r-</literal>
</term>
<listitem><para>Maximum number of arenas to use for automatic
@@ -951,20 +937,6 @@ for (i = 0; i < nbins; i++) {
number of CPUs, or one if there is a single CPU.</para></listitem>
</varlistentry>
-<varlistentry id="opt.purge">
-<term>
-<mallctl>opt.purge</mallctl>
-(<type>const char *</type>)
-<literal>r-</literal>
-</term>
-<listitem><para>Purge mode is &ldquo;ratio&rdquo; (default) or
-&ldquo;decay&rdquo;. See <link
-linkend="opt.lg_dirty_mult"><mallctl>opt.lg_dirty_mult</mallctl></link>
-for details of the ratio mode. See <link
-linkend="opt.decay_time"><mallctl>opt.decay_time</mallctl></link> for
-details of the decay mode.</para></listitem>
-</varlistentry>
<varlistentry id="opt.lg_dirty_mult">
<term>
<mallctl>opt.lg_dirty_mult</mallctl>
@@ -987,26 +959,6 @@ for (i = 0; i < nbins; i++) {
for related dynamic control options.</para></listitem>
</varlistentry>
-<varlistentry id="opt.decay_time">
-<term>
-<mallctl>opt.decay_time</mallctl>
-(<type>ssize_t</type>)
-<literal>r-</literal>
-</term>
-<listitem><para>Approximate time in seconds from the creation of a set
-of unused dirty pages until an equivalent set of unused dirty pages is
-purged and/or reused. The pages are incrementally purged according to a
-sigmoidal decay curve that starts and ends with zero purge rate. A
-decay time of 0 causes all unused dirty pages to be purged immediately
-upon creation. A decay time of -1 disables purging. The default decay
-time is 10 seconds. See <link
-linkend="arenas.decay_time"><mallctl>arenas.decay_time</mallctl></link>
-and <link
-linkend="arena.i.decay_time"><mallctl>arena.&lt;i&gt;.decay_time</mallctl></link>
-for related dynamic control options.
-</para></listitem>
-</varlistentry>
<varlistentry id="opt.stats_print">
<term>
<mallctl>opt.stats_print</mallctl>
@@ -1014,19 +966,19 @@ for (i = 0; i < nbins; i++) {
<literal>r-</literal>
</term>
<listitem><para>Enable/disable statistics printing at exit. If
-enabled, the <function>malloc_stats_print()</function>
+enabled, the <function>malloc_stats_print<parameter/></function>
function is called at program exit via an
<citerefentry><refentrytitle>atexit</refentrytitle>
<manvolnum>3</manvolnum></citerefentry> function. If
<option>--enable-stats</option> is specified during configuration, this
has the potential to cause deadlock for a multi-threaded process that
exits while one or more threads are executing in the memory allocation
-functions. Furthermore, <function>atexit()</function> may
+functions. Furthermore, <function>atexit<parameter/></function> may
allocate memory during application initialization and then deadlock
internally when jemalloc in turn calls
-<function>atexit()</function>, so this option is not
+<function>atexit<parameter/></function>, so this option is not
universally usable (though the application can register its own
-<function>atexit()</function> function with equivalent
+<function>atexit<parameter/></function> function with equivalent
functionality). Therefore, this option should only be used with care;
it is primarily intended as a performance tuning aid during application
development. This option is disabled by default.</para></listitem>
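A sketch of the workaround the text allows for: registering the application's own atexit(3) hook early, instead of enabling opt.stats_print:

#include <stdlib.h>
#include <jemalloc/jemalloc.h>

/* Sketch: equivalent functionality registered by the application
 * itself, at a point where it controls initialization order. */
static void
stats_at_exit(void)
{
	malloc_stats_print(NULL, NULL, NULL);
}

int
main(void)
{
	atexit(stats_at_exit);
	/* ... application ... */
	return (0);
}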
@@ -1039,16 +991,15 @@ for (i = 0; i < nbins; i++) {
<literal>r-</literal>
[<option>--enable-fill</option>]
</term>
-<listitem><para>Junk filling. If set to <quote>alloc</quote>, each byte
-of uninitialized allocated memory will be initialized to
-<literal>0xa5</literal>. If set to <quote>free</quote>, all deallocated
-memory will be initialized to <literal>0x5a</literal>. If set to
-<quote>true</quote>, both allocated and deallocated memory will be
-initialized, and if set to <quote>false</quote>, junk filling will be
-disabled entirely. This is intended for debugging and will impact
-performance negatively. This option is <quote>false</quote> by default
-unless <option>--enable-debug</option> is specified during
-configuration, in which case it is <quote>true</quote> by default unless
+<listitem><para>Junk filling. If set to "alloc", each byte of
+uninitialized allocated memory will be initialized to
+<literal>0xa5</literal>. If set to "free", all deallocated memory will
+be initialized to <literal>0x5a</literal>. If set to "true", both
+allocated and deallocated memory will be initialized, and if set to
+"false", junk filling will be disabled entirely. This is intended for
+debugging and will impact performance negatively. This option is
+"false" by default unless <option>--enable-debug</option> is specified
+during configuration, in which case it is "true" by default unless
running inside <ulink
url="http://valgrind.org/">Valgrind</ulink>.</para></listitem>
</varlistentry>
@@ -1103,8 +1054,8 @@ for (i = 0; i < nbins; i++) {
<listitem><para>Zero filling enabled/disabled. If enabled, each byte
of uninitialized allocated memory will be initialized to 0. Note that
this initialization only happens once for each byte, so
-<function>realloc()</function> and
-<function>rallocx()</function> calls do not zero memory that
+<function>realloc<parameter/></function> and
+<function>rallocx<parameter/></function> calls do not zero memory that
was previously allocated. This is intended for debugging and will
impact performance negatively. This option is disabled by default.
</para></listitem>
@@ -1199,8 +1150,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
the <command>jeprof</command> command, which is based on the
<command>pprof</command> that is developed as part of the <ulink
url="http://code.google.com/p/gperftools/">gperftools
-package</ulink>. See <link linkend="heap_profile_format">HEAP PROFILE
-FORMAT</link> for heap profile format documentation.</para></listitem>
+package</ulink>.</para></listitem>
</varlistentry>
<varlistentry id="opt.prof_prefix">
@@ -1327,11 +1277,11 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.f.heap</filename>,
where <literal>&lt;prefix&gt;</literal> is controlled by the <link
linkend="opt.prof_prefix"><mallctl>opt.prof_prefix</mallctl></link>
-option. Note that <function>atexit()</function> may allocate
+option. Note that <function>atexit<parameter/></function> may allocate
memory during application initialization and then deadlock internally
-when jemalloc in turn calls <function>atexit()</function>, so
+when jemalloc in turn calls <function>atexit<parameter/></function>, so
this option is not universally usable (though the application can
-register its own <function>atexit()</function> function with
+register its own <function>atexit<parameter/></function> function with
equivalent functionality). This option is disabled by
default.</para></listitem>
</varlistentry>
@@ -1390,7 +1340,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<link
linkend="thread.allocated"><mallctl>thread.allocated</mallctl></link>
mallctl. This is useful for avoiding the overhead of repeated
-<function>mallctl*()</function> calls.</para></listitem>
+<function>mallctl*<parameter/></function> calls.</para></listitem>
</varlistentry>
<varlistentry id="thread.deallocated">
@@ -1417,7 +1367,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<link
linkend="thread.deallocated"><mallctl>thread.deallocated</mallctl></link>
mallctl. This is useful for avoiding the overhead of repeated
-<function>mallctl*()</function> calls.</para></listitem>
+<function>mallctl*<parameter/></function> calls.</para></listitem>
</varlistentry>
<varlistentry id="thread.tcache.enabled">
@@ -1468,8 +1418,8 @@ malloc_conf = "xmalloc:true";]]></programlisting>
can cause asynchronous string deallocation. Furthermore, each
invocation of this interface can only read or write; simultaneous
read/write is not supported due to string lifetime limitations. The
-name string must be nil-terminated and comprised only of characters in
-the sets recognized
+name string must be nil-terminated and comprised only of characters in the
+sets recognized
by <citerefentry><refentrytitle>isgraph</refentrytitle>
<manvolnum>3</manvolnum></citerefentry> and
<citerefentry><refentrytitle>isblank</refentrytitle>
@@ -1517,7 +1467,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
<listitem><para>Flush the specified thread-specific cache (tcache). The
same considerations apply to this interface as to <link
linkend="thread.tcache.flush"><mallctl>thread.tcache.flush</mallctl></link>,
except that the tcache will never be automatically discarded.
</para></listitem>
</varlistentry>
@@ -1539,44 +1489,12 @@ malloc_conf = "xmalloc:true";]]></programlisting>
(<type>void</type>)
<literal>--</literal>
</term>
-<listitem><para>Purge all unused dirty pages for arena &lt;i&gt;, or for
+<listitem><para>Purge unused dirty pages for arena &lt;i&gt;, or for
all arenas if &lt;i&gt; equals <link
linkend="arenas.narenas"><mallctl>arenas.narenas</mallctl></link>.
</para></listitem>
</varlistentry>
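A sketch of driving this control from application code, assuming the documented arenas.narenas/arena.<i>.purge semantics; the helper is hypothetical:

#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Sketch: purge every arena by targeting index == arenas.narenas. */
static void
purge_all_arenas(void)
{
	unsigned narenas;
	size_t sz = sizeof(narenas);
	char name[64];

	if (mallctl("arenas.narenas", &narenas, &sz, NULL, 0) != 0)
		return;
	snprintf(name, sizeof(name), "arena.%u.purge", narenas);
	mallctl(name, NULL, NULL, NULL, 0);
}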
<varlistentry id="arena.i.decay">
<term>
<mallctl>arena.&lt;i&gt;.decay</mallctl>
(<type>void</type>)
<literal>--</literal>
</term>
<listitem><para>Trigger decay-based purging of unused dirty pages for
arena &lt;i&gt;, or for all arenas if &lt;i&gt; equals <link
linkend="arenas.narenas"><mallctl>arenas.narenas</mallctl></link>.
The proportion of unused dirty pages to be purged depends on the current
time; see <link
linkend="opt.decay_time"><mallctl>opt.decay_time</mallctl></link> for
details.</para></listitem>
</varlistentry>
<varlistentry id="arena.i.reset">
<term>
<mallctl>arena.&lt;i&gt;.reset</mallctl>
(<type>void</type>)
<literal>--</literal>
</term>
<listitem><para>Discard all of the arena's extant allocations. This
interface can only be used with arenas created via <link
linkend="arenas.extend"><mallctl>arenas.extend</mallctl></link>. None
of the arena's discarded/cached allocations may accessed afterward. As
part of this requirement, all thread caches which were used to
allocate/deallocate in conjunction with the arena must be flushed
beforehand. This interface cannot be used if running inside Valgrind,
nor if the <link linkend="opt.quarantine">quarantine</link> size is
non-zero.</para></listitem>
</varlistentry>
<varlistentry id="arena.i.dss"> <varlistentry id="arena.i.dss">
<term> <term>
<mallctl>arena.&lt;i&gt;.dss</mallctl> <mallctl>arena.&lt;i&gt;.dss</mallctl>
@ -1605,22 +1523,6 @@ malloc_conf = "xmalloc:true";]]></programlisting>
for additional information.</para></listitem> for additional information.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="arena.i.decay_time">
<term>
<mallctl>arena.&lt;i&gt;.decay_time</mallctl>
(<type>ssize_t</type>)
<literal>rw</literal>
</term>
<listitem><para>Current per-arena approximate time in seconds from the
creation of a set of unused dirty pages until an equivalent set of
unused dirty pages is purged and/or reused. Each time this interface is
set, all currently unused dirty pages are considered to have fully
decayed, which causes immediate purging of all unused dirty pages unless
the decay time is set to -1 (i.e. purging disabled). See <link
linkend="opt.decay_time"><mallctl>opt.decay_time</mallctl></link> for
additional information.</para></listitem>
</varlistentry>
<varlistentry id="arena.i.chunk_hooks"> <varlistentry id="arena.i.chunk_hooks">
<term> <term>
<mallctl>arena.&lt;i&gt;.chunk_hooks</mallctl> <mallctl>arena.&lt;i&gt;.chunk_hooks</mallctl>
@ -1855,21 +1757,6 @@ typedef struct {
for additional information.</para></listitem> for additional information.</para></listitem>
</varlistentry> </varlistentry>
<varlistentry id="arenas.decay_time">
<term>
<mallctl>arenas.decay_time</mallctl>
(<type>ssize_t</type>)
<literal>rw</literal>
</term>
<listitem><para>Current default per-arena approximate time in seconds
from the creation of a set of unused dirty pages until an equivalent set
of unused dirty pages is purged and/or reused, used to initialize <link
linkend="arena.i.decay_time"><mallctl>arena.&lt;i&gt;.decay_time</mallctl></link>
during arena creation. See <link
linkend="opt.decay_time"><mallctl>opt.decay_time</mallctl></link> for
additional information.</para></listitem>
</varlistentry>
<varlistentry id="arenas.quantum"> <varlistentry id="arenas.quantum">
<term> <term>
<mallctl>arenas.quantum</mallctl> <mallctl>arenas.quantum</mallctl>
@ -2089,7 +1976,7 @@ typedef struct {
[<option>--enable-prof</option>] [<option>--enable-prof</option>]
</term> </term>
<listitem><para>Average number of bytes allocated between <listitem><para>Average number of bytes allocated between
interval-based profile dumps. See the inverval-based profile dumps. See the
<link <link
linkend="opt.lg_prof_interval"><mallctl>opt.lg_prof_interval</mallctl></link> linkend="opt.lg_prof_interval"><mallctl>opt.lg_prof_interval</mallctl></link>
option for additional information.</para></listitem> option for additional information.</para></listitem>
@@ -2188,25 +2075,6 @@ typedef struct {
linkend="stats.resident"><mallctl>stats.resident</mallctl></link>.</para></listitem>
</varlistentry>
-<varlistentry id="stats.retained">
-<term>
-<mallctl>stats.retained</mallctl>
-(<type>size_t</type>)
-<literal>r-</literal>
-[<option>--enable-stats</option>]
-</term>
-<listitem><para>Total number of bytes in virtual memory mappings that
-were retained rather than being returned to the operating system via
-e.g. <citerefentry><refentrytitle>munmap</refentrytitle>
-<manvolnum>2</manvolnum></citerefentry>. Retained virtual memory is
-typically untouched, decommitted, or purged, so it has no strongly
-associated physical memory (see <link
-linkend="arena.i.chunk_hooks">chunk hooks</link> for details). Retained
-memory is excluded from mapped memory statistics, e.g. <link
-linkend="stats.mapped"><mallctl>stats.mapped</mallctl></link>.
-</para></listitem>
-</varlistentry>
<varlistentry id="stats.arenas.i.dss">
<term>
<mallctl>stats.arenas.&lt;i&gt;.dss</mallctl>
@@ -2233,19 +2101,6 @@ typedef struct {
for details.</para></listitem>
</varlistentry>
-<varlistentry id="stats.arenas.i.decay_time">
-<term>
-<mallctl>stats.arenas.&lt;i&gt;.decay_time</mallctl>
-(<type>ssize_t</type>)
-<literal>r-</literal>
-</term>
-<listitem><para>Approximate time in seconds from the creation of a set
-of unused dirty pages until an equivalent set of unused dirty pages is
-purged and/or reused. See <link
-linkend="opt.decay_time"><mallctl>opt.decay_time</mallctl></link>
-for details.</para></listitem>
-</varlistentry>
<varlistentry id="stats.arenas.i.nthreads">
<term>
<mallctl>stats.arenas.&lt;i&gt;.nthreads</mallctl>
@@ -2287,18 +2142,6 @@ typedef struct {
<listitem><para>Number of mapped bytes.</para></listitem>
</varlistentry>
-<varlistentry id="stats.arenas.i.retained">
-<term>
-<mallctl>stats.arenas.&lt;i&gt;.retained</mallctl>
-(<type>size_t</type>)
-<literal>r-</literal>
-[<option>--enable-stats</option>]
-</term>
-<listitem><para>Number of retained bytes. See <link
-linkend="stats.retained"><mallctl>stats.retained</mallctl></link> for
-details.</para></listitem>
-</varlistentry>
<varlistentry id="stats.arenas.i.metadata.mapped">
<term>
<mallctl>stats.arenas.&lt;i&gt;.metadata.mapped</mallctl>
@@ -2680,53 +2523,6 @@ typedef struct {
</varlistentry>
</variablelist>
</refsect1>
-<refsect1 id="heap_profile_format">
-<title>HEAP PROFILE FORMAT</title>
-<para>Although the heap profiling functionality was originally designed to
-be compatible with the
-<command>pprof</command> command that is developed as part of the <ulink
-url="http://code.google.com/p/gperftools/">gperftools
-package</ulink>, the addition of per thread heap profiling functionality
-required a different heap profile format. The <command>jeprof</command>
-command is derived from <command>pprof</command>, with enhancements to
-support the heap profile format described here.</para>
-<para>In the following hypothetical heap profile, <constant>[...]</constant>
-indicates elision for the sake of compactness. <programlisting><![CDATA[
-heap_v2/524288
-  t*: 28106: 56637512 [0: 0]
-  [...]
-  t3: 352: 16777344 [0: 0]
-  [...]
-  t99: 17754: 29341640 [0: 0]
-  [...]
-@ 0x5f86da8 0x5f5a1dc [...] 0x29e4d4e 0xa200316 0xabb2988 [...]
-  t*: 13: 6688 [0: 0]
-  t3: 12: 6496 [0: 0]
-  t99: 1: 192 [0: 0]
-[...]
-MAPPED_LIBRARIES:
-[...]]]></programlisting> The following matches the above heap profile, but most
-tokens are replaced with <constant>&lt;description&gt;</constant> to indicate
-descriptions of the corresponding fields. <programlisting><![CDATA[
-<heap_profile_format_version>/<mean_sample_interval>
-  <aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
-  [...]
-  <thread_3_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
-  [...]
-  <thread_99_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
-  [...]
-@ <top_frame> <frame> [...] <frame> <frame> <frame> [...]
-  <backtrace_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
-  <backtrace_thread_3>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
-  <backtrace_thread_99>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]
-[...]
-MAPPED_LIBRARIES:
-</proc/<pid>/maps>]]></programlisting></para>
-</refsect1>
<refsect1 id="debugging_malloc_problems"> <refsect1 id="debugging_malloc_problems">
<title>DEBUGGING MALLOC PROBLEMS</title> <title>DEBUGGING MALLOC PROBLEMS</title>
<para>When debugging, it is a good idea to configure/build jemalloc with <para>When debugging, it is a good idea to configure/build jemalloc with
@ -2736,7 +2532,7 @@ MAPPED_LIBRARIES:
of run-time assertions that catch application errors such as double-free, of run-time assertions that catch application errors such as double-free,
write-after-free, etc.</para> write-after-free, etc.</para>
<para>Programs often accidentally depend on <quote>uninitialized</quote> <para>Programs often accidentally depend on &ldquo;uninitialized&rdquo;
memory actually being filled with zero bytes. Junk filling memory actually being filled with zero bytes. Junk filling
(see the <link linkend="opt.junk"><mallctl>opt.junk</mallctl></link> (see the <link linkend="opt.junk"><mallctl>opt.junk</mallctl></link>
option) tends to expose such bugs in the form of obviously incorrect option) tends to expose such bugs in the form of obviously incorrect
@ -2765,29 +2561,29 @@ MAPPED_LIBRARIES:
to override the function which emits the text strings forming the errors to override the function which emits the text strings forming the errors
and warnings if for some reason the <constant>STDERR_FILENO</constant> file and warnings if for some reason the <constant>STDERR_FILENO</constant> file
descriptor is not suitable for this. descriptor is not suitable for this.
<function>malloc_message()</function> takes the <function>malloc_message<parameter/></function> takes the
<parameter>cbopaque</parameter> pointer argument that is <parameter>cbopaque</parameter> pointer argument that is
<constant>NULL</constant> unless overridden by the arguments in a call to <constant>NULL</constant> unless overridden by the arguments in a call to
<function>malloc_stats_print()</function>, followed by a string <function>malloc_stats_print<parameter/></function>, followed by a string
pointer. Please note that doing anything which tries to allocate memory in pointer. Please note that doing anything which tries to allocate memory in
this function is likely to result in a crash or deadlock.</para> this function is likely to result in a crash or deadlock.</para>
<para>All messages are prefixed by <para>All messages are prefixed by
<quote><computeroutput>&lt;jemalloc&gt;: </computeroutput></quote>.</para> &ldquo;<computeroutput>&lt;jemalloc&gt;: </computeroutput>&rdquo;.</para>
</refsect1> </refsect1>
<refsect1 id="return_values"> <refsect1 id="return_values">
<title>RETURN VALUES</title> <title>RETURN VALUES</title>
<refsect2> <refsect2>
<title>Standard API</title> <title>Standard API</title>
<para>The <function>malloc()</function> and <para>The <function>malloc<parameter/></function> and
<function>calloc()</function> functions return a pointer to the <function>calloc<parameter/></function> functions return a pointer to the
allocated memory if successful; otherwise a <constant>NULL</constant> allocated memory if successful; otherwise a <constant>NULL</constant>
pointer is returned and <varname>errno</varname> is set to pointer is returned and <varname>errno</varname> is set to
<errorname>ENOMEM</errorname>.</para> <errorname>ENOMEM</errorname>.</para>
<para>The <function>posix_memalign()</function> function <para>The <function>posix_memalign<parameter/></function> function
returns the value 0 if successful; otherwise it returns an error value. returns the value 0 if successful; otherwise it returns an error value.
The <function>posix_memalign()</function> function will fail The <function>posix_memalign<parameter/></function> function will fail
if: if:
<variablelist> <variablelist>
<varlistentry> <varlistentry>
@ -2806,11 +2602,11 @@ MAPPED_LIBRARIES:
</variablelist> </variablelist>
</para> </para>
<para>The <function>aligned_alloc()</function> function returns <para>The <function>aligned_alloc<parameter/></function> function returns
a pointer to the allocated memory if successful; otherwise a a pointer to the allocated memory if successful; otherwise a
<constant>NULL</constant> pointer is returned and <constant>NULL</constant> pointer is returned and
<varname>errno</varname> is set. The <varname>errno</varname> is set. The
<function>aligned_alloc()</function> function will fail if: <function>aligned_alloc<parameter/></function> function will fail if:
<variablelist> <variablelist>
<varlistentry> <varlistentry>
<term><errorname>EINVAL</errorname></term> <term><errorname>EINVAL</errorname></term>
@ -2827,44 +2623,44 @@ MAPPED_LIBRARIES:
</variablelist> </variablelist>
</para> </para>
<para>The <function>realloc()</function> function returns a <para>The <function>realloc<parameter/></function> function returns a
pointer, possibly identical to <parameter>ptr</parameter>, to the pointer, possibly identical to <parameter>ptr</parameter>, to the
allocated memory if successful; otherwise a <constant>NULL</constant> allocated memory if successful; otherwise a <constant>NULL</constant>
pointer is returned, and <varname>errno</varname> is set to pointer is returned, and <varname>errno</varname> is set to
<errorname>ENOMEM</errorname> if the error was the result of an <errorname>ENOMEM</errorname> if the error was the result of an
allocation failure. The <function>realloc()</function> allocation failure. The <function>realloc<parameter/></function>
function always leaves the original buffer intact when an error occurs. function always leaves the original buffer intact when an error occurs.
</para> </para>
<para>The <function>free()</function> function returns no <para>The <function>free<parameter/></function> function returns no
value.</para> value.</para>
</refsect2> </refsect2>
<refsect2> <refsect2>
<title>Non-standard API</title> <title>Non-standard API</title>
<para>The <function>mallocx()</function> and <para>The <function>mallocx<parameter/></function> and
<function>rallocx()</function> functions return a pointer to <function>rallocx<parameter/></function> functions return a pointer to
the allocated memory if successful; otherwise a <constant>NULL</constant> the allocated memory if successful; otherwise a <constant>NULL</constant>
pointer is returned to indicate insufficient contiguous memory was pointer is returned to indicate insufficient contiguous memory was
available to service the allocation request. </para> available to service the allocation request. </para>
<para>The <function>xallocx()</function> function returns the <para>The <function>xallocx<parameter/></function> function returns the
real size of the resulting resized allocation pointed to by real size of the resulting resized allocation pointed to by
<parameter>ptr</parameter>, which is a value less than <parameter>ptr</parameter>, which is a value less than
<parameter>size</parameter> if the allocation could not be adequately <parameter>size</parameter> if the allocation could not be adequately
grown in place. </para> grown in place. </para>
<para>The <function>sallocx()</function> function returns the <para>The <function>sallocx<parameter/></function> function returns the
real size of the allocation pointed to by <parameter>ptr</parameter>. real size of the allocation pointed to by <parameter>ptr</parameter>.
</para> </para>
<para>The <function>nallocx()</function> returns the real size <para>The <function>nallocx<parameter/></function> returns the real size
that would result from a successful equivalent that would result from a successful equivalent
<function>mallocx()</function> function call, or zero if <function>mallocx<parameter/></function> function call, or zero if
insufficient memory is available to perform the size computation. </para> insufficient memory is available to perform the size computation. </para>
<para>The <function>mallctl()</function>, <para>The <function>mallctl<parameter/></function>,
<function>mallctlnametomib()</function>, and <function>mallctlnametomib<parameter/></function>, and
<function>mallctlbymib()</function> functions return 0 on <function>mallctlbymib<parameter/></function> functions return 0 on
success; otherwise they return an error value. The functions will fail success; otherwise they return an error value. The functions will fail
if: if:
<variablelist> <variablelist>
@ -2900,13 +2696,13 @@ MAPPED_LIBRARIES:
<term><errorname>EFAULT</errorname></term> <term><errorname>EFAULT</errorname></term>
<listitem><para>An interface with side effects failed in some way <listitem><para>An interface with side effects failed in some way
not directly related to <function>mallctl*()</function> not directly related to <function>mallctl*<parameter/></function>
read/write processing.</para></listitem> read/write processing.</para></listitem>
</varlistentry> </varlistentry>
</variablelist> </variablelist>
</para> </para>
<para>The <function>malloc_usable_size()</function> function <para>The <function>malloc_usable_size<parameter/></function> function
returns the usable size of the allocation pointed to by returns the usable size of the allocation pointed to by
<parameter>ptr</parameter>. </para> <parameter>ptr</parameter>. </para>
</refsect2> </refsect2>
@ -2954,13 +2750,13 @@ malloc_conf = "lg_chunk:24";]]></programlisting></para>
</refsect1> </refsect1>
<refsect1 id="standards"> <refsect1 id="standards">
<title>STANDARDS</title> <title>STANDARDS</title>
<para>The <function>malloc()</function>, <para>The <function>malloc<parameter/></function>,
<function>calloc()</function>, <function>calloc<parameter/></function>,
<function>realloc()</function>, and <function>realloc<parameter/></function>, and
<function>free()</function> functions conform to ISO/IEC <function>free<parameter/></function> functions conform to ISO/IEC
9899:1990 (<quote>ISO C90</quote>).</para> 9899:1990 (&ldquo;ISO C90&rdquo;).</para>
<para>The <function>posix_memalign()</function> function conforms <para>The <function>posix_memalign<parameter/></function> function conforms
to IEEE Std 1003.1-2001 (<quote>POSIX.1</quote>).</para> to IEEE Std 1003.1-2001 (&ldquo;POSIX.1&rdquo;).</para>
</refsect1> </refsect1>
</refentry> </refentry>

View File

@@ -1,10 +1,7 @@
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:param name="funcsynopsis.style">ansi</xsl:param>
-<xsl:param name="function.parens" select="0"/>
+<xsl:param name="function.parens" select="1"/>
-<xsl:template match="function">
-<xsl:call-template name="inline.monoseq"/>
-</xsl:template>
<xsl:template match="mallctl">
-<quote><xsl:call-template name="inline.monoseq"/></quote>
+"<xsl:call-template name="inline.monoseq"/>"
</xsl:template>
</xsl:stylesheet>

File diff suppressed because it is too large

View File

@@ -1,45 +0,0 @@
/*
* Define a custom assert() in order to reduce the chances of deadlock during
* assertion failure.
*/
#ifndef assert
#define assert(e) do { \
if (unlikely(config_debug && !(e))) { \
malloc_printf( \
"<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
__FILE__, __LINE__, #e); \
abort(); \
} \
} while (0)
#endif
#ifndef not_reached
#define not_reached() do { \
if (config_debug) { \
malloc_printf( \
"<jemalloc>: %s:%d: Unreachable code reached\n", \
__FILE__, __LINE__); \
abort(); \
} \
unreachable(); \
} while (0)
#endif
#ifndef not_implemented
#define not_implemented() do { \
if (config_debug) { \
malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
__FILE__, __LINE__); \
abort(); \
} \
} while (0)
#endif
#ifndef assert_not_implemented
#define assert_not_implemented(e) do { \
if (unlikely(config_debug && !(e))) \
not_implemented(); \
} while (0)
#endif

View File

@@ -28,8 +28,8 @@
 * callers.
 *
 * <t> atomic_read_<t>(<t> *p) { return (*p); }
- * <t> atomic_add_<t>(<t> *p, <t> x) { return (*p += x); }
- * <t> atomic_sub_<t>(<t> *p, <t> x) { return (*p -= x); }
+ * <t> atomic_add_<t>(<t> *p, <t> x) { return (*p + x); }
+ * <t> atomic_sub_<t>(<t> *p, <t> x) { return (*p - x); }
 * bool atomic_cas_<t>(<t> *p, <t> c, <t> s)
 * {
 *	if (*p != c)
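The commented semantics can be restated as a compilable sketch using GCC/Clang __sync builtins; this is not jemalloc's actual implementation, which is selected per platform:

#include <stdbool.h>

/* atomic_add returns the post-add value, i.e. (*p += x). */
static inline unsigned long
example_atomic_add(unsigned long *p, unsigned long x)
{
	return (__sync_add_and_fetch(p, x));
}

/* atomic_cas returns false on success and true on failure,
 * matching the pseudo-code above. */
static inline bool
example_atomic_cas(unsigned long *p, unsigned long c, unsigned long s)
{
	return (!__sync_bool_compare_and_swap(p, c, s));
}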

View File

@@ -9,13 +9,12 @@
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
-void	*base_alloc(tsdn_t *tsdn, size_t size);
-void	base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident,
-    size_t *mapped);
+void	*base_alloc(size_t size);
+void	base_stats_get(size_t *allocated, size_t *resident, size_t *mapped);
bool	base_boot(void);
-void	base_prefork(tsdn_t *tsdn);
-void	base_postfork_parent(tsdn_t *tsdn);
-void	base_postfork_child(tsdn_t *tsdn);
+void	base_prefork(void);
+void	base_postfork_parent(void);
+void	base_postfork_child(void);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/

View File

@@ -15,15 +15,6 @@ typedef unsigned long bitmap_t;
#define BITMAP_GROUP_NBITS (ZU(1) << LG_BITMAP_GROUP_NBITS)
#define BITMAP_GROUP_NBITS_MASK (BITMAP_GROUP_NBITS-1)
-/*
- * Do some analysis on how big the bitmap is before we use a tree. For a brute
- * force linear search, if we would have to call ffs_lu() more than 2^3 times,
- * use a tree instead.
- */
-#if LG_BITMAP_MAXBITS - LG_BITMAP_GROUP_NBITS > 3
-# define USE_TREE
-#endif
/* Number of groups required to store a given number of bits. */
#define BITMAP_BITS2GROUPS(nbits) \
	((nbits + BITMAP_GROUP_NBITS_MASK) >> LG_BITMAP_GROUP_NBITS)
@ -57,8 +48,6 @@ typedef unsigned long bitmap_t;
/* /*
* Maximum number of groups required to support LG_BITMAP_MAXBITS. * Maximum number of groups required to support LG_BITMAP_MAXBITS.
*/ */
#ifdef USE_TREE
#if LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS #if LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS
# define BITMAP_GROUPS_MAX BITMAP_GROUPS_1_LEVEL(BITMAP_MAXBITS) # define BITMAP_GROUPS_MAX BITMAP_GROUPS_1_LEVEL(BITMAP_MAXBITS)
#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 2 #elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 2
@ -76,12 +65,6 @@ typedef unsigned long bitmap_t;
(LG_BITMAP_MAXBITS / LG_SIZEOF_BITMAP) \ (LG_BITMAP_MAXBITS / LG_SIZEOF_BITMAP) \
+ !!(LG_BITMAP_MAXBITS % LG_SIZEOF_BITMAP) + !!(LG_BITMAP_MAXBITS % LG_SIZEOF_BITMAP)
#else /* USE_TREE */
#define BITMAP_GROUPS_MAX BITMAP_BITS2GROUPS(BITMAP_MAXBITS)
#endif /* USE_TREE */
#endif /* JEMALLOC_H_TYPES */ #endif /* JEMALLOC_H_TYPES */
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS #ifdef JEMALLOC_H_STRUCTS
@ -95,7 +78,6 @@ struct bitmap_info_s {
/* Logical number of bits in bitmap (stored at bottom level). */ /* Logical number of bits in bitmap (stored at bottom level). */
size_t nbits; size_t nbits;
#ifdef USE_TREE
/* Number of levels necessary for nbits. */ /* Number of levels necessary for nbits. */
unsigned nlevels; unsigned nlevels;
@ -104,10 +86,6 @@ struct bitmap_info_s {
* bottom to top (e.g. the bottom level is stored in levels[0]). * bottom to top (e.g. the bottom level is stored in levels[0]).
*/ */
bitmap_level_t levels[BITMAP_MAX_LEVELS+1]; bitmap_level_t levels[BITMAP_MAX_LEVELS+1];
#else /* USE_TREE */
/* Number of groups necessary for nbits. */
size_t ngroups;
#endif /* USE_TREE */
}; };
#endif /* JEMALLOC_H_STRUCTS */ #endif /* JEMALLOC_H_STRUCTS */
@ -115,8 +93,9 @@ struct bitmap_info_s {
#ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_H_EXTERNS
void bitmap_info_init(bitmap_info_t *binfo, size_t nbits); void bitmap_info_init(bitmap_info_t *binfo, size_t nbits);
size_t bitmap_info_ngroups(const bitmap_info_t *binfo);
size_t bitmap_size(size_t nbits);
void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo); void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo);
size_t bitmap_size(const bitmap_info_t *binfo);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/ /******************************************************************************/
@ -134,20 +113,10 @@ void bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit);
JEMALLOC_INLINE bool JEMALLOC_INLINE bool
bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo) bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo)
{ {
#ifdef USE_TREE unsigned rgoff = binfo->levels[binfo->nlevels].group_offset - 1;
size_t rgoff = binfo->levels[binfo->nlevels].group_offset - 1;
bitmap_t rg = bitmap[rgoff]; bitmap_t rg = bitmap[rgoff];
/* The bitmap is full iff the root group is 0. */ /* The bitmap is full iff the root group is 0. */
return (rg == 0); return (rg == 0);
#else
size_t i;
for (i = 0; i < binfo->ngroups; i++) {
if (bitmap[i] != 0)
return (false);
}
return (true);
#endif
} }
JEMALLOC_INLINE bool JEMALLOC_INLINE bool
@ -159,7 +128,7 @@ bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
assert(bit < binfo->nbits); assert(bit < binfo->nbits);
goff = bit >> LG_BITMAP_GROUP_NBITS; goff = bit >> LG_BITMAP_GROUP_NBITS;
g = bitmap[goff]; g = bitmap[goff];
return (!(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK)))); return (!(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))));
} }
JEMALLOC_INLINE void JEMALLOC_INLINE void
@ -174,11 +143,10 @@ bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
goff = bit >> LG_BITMAP_GROUP_NBITS; goff = bit >> LG_BITMAP_GROUP_NBITS;
gp = &bitmap[goff]; gp = &bitmap[goff];
g = *gp; g = *gp;
assert(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))); assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)));
g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g; *gp = g;
assert(bitmap_get(bitmap, binfo, bit)); assert(bitmap_get(bitmap, binfo, bit));
#ifdef USE_TREE
/* Propagate group state transitions up the tree. */ /* Propagate group state transitions up the tree. */
if (g == 0) { if (g == 0) {
unsigned i; unsigned i;
@ -187,14 +155,13 @@ bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
goff = bit >> LG_BITMAP_GROUP_NBITS; goff = bit >> LG_BITMAP_GROUP_NBITS;
gp = &bitmap[binfo->levels[i].group_offset + goff]; gp = &bitmap[binfo->levels[i].group_offset + goff];
g = *gp; g = *gp;
assert(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))); assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)));
g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g; *gp = g;
if (g != 0) if (g != 0)
break; break;
} }
} }
#endif
} }
/* sfu: set first unset. */ /* sfu: set first unset. */
@ -207,24 +174,15 @@ bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo)
assert(!bitmap_full(bitmap, binfo)); assert(!bitmap_full(bitmap, binfo));
#ifdef USE_TREE
i = binfo->nlevels - 1; i = binfo->nlevels - 1;
g = bitmap[binfo->levels[i].group_offset]; g = bitmap[binfo->levels[i].group_offset];
bit = ffs_lu(g) - 1; bit = jemalloc_ffsl(g) - 1;
while (i > 0) { while (i > 0) {
i--; i--;
g = bitmap[binfo->levels[i].group_offset + bit]; g = bitmap[binfo->levels[i].group_offset + bit];
bit = (bit << LG_BITMAP_GROUP_NBITS) + (ffs_lu(g) - 1); bit = (bit << LG_BITMAP_GROUP_NBITS) + (jemalloc_ffsl(g) - 1);
} }
#else
i = 0;
g = bitmap[0];
while ((bit = ffs_lu(g)) == 0) {
i++;
g = bitmap[i];
}
bit = (i << LG_BITMAP_GROUP_NBITS) + (bit - 1);
#endif
bitmap_set(bitmap, binfo, bit); bitmap_set(bitmap, binfo, bit);
return (bit); return (bit);
} }
@ -235,7 +193,7 @@ bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
size_t goff; size_t goff;
bitmap_t *gp; bitmap_t *gp;
bitmap_t g; bitmap_t g;
UNUSED bool propagate; bool propagate;
assert(bit < binfo->nbits); assert(bit < binfo->nbits);
assert(bitmap_get(bitmap, binfo, bit)); assert(bitmap_get(bitmap, binfo, bit));
@ -243,11 +201,10 @@ bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
gp = &bitmap[goff]; gp = &bitmap[goff];
g = *gp; g = *gp;
propagate = (g == 0); propagate = (g == 0);
assert((g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))) == 0); assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))) == 0);
g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g; *gp = g;
assert(!bitmap_get(bitmap, binfo, bit)); assert(!bitmap_get(bitmap, binfo, bit));
#ifdef USE_TREE
/* Propagate group state transitions up the tree. */ /* Propagate group state transitions up the tree. */
if (propagate) { if (propagate) {
unsigned i; unsigned i;
@ -257,15 +214,14 @@ bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
gp = &bitmap[binfo->levels[i].group_offset + goff]; gp = &bitmap[binfo->levels[i].group_offset + goff];
g = *gp; g = *gp;
propagate = (g == 0); propagate = (g == 0);
assert((g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))) assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)))
== 0); == 0);
g ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
*gp = g; *gp = g;
if (!propagate) if (!propagate)
break; break;
} }
} }
#endif /* USE_TREE */
} }
#endif #endif

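As a worked example of BITMAP_BITS2GROUPS: with 64-bit groups (LG_BITMAP_GROUP_NBITS == 6), 200 logical bits need (200 + 63) >> 6 == 4 groups. The non-tree path removed above is a plain linear scan; a minimal sketch under those assumptions, with __builtin_ffsll standing in for ffs_lu():

#include <stddef.h>

#define LG_GROUP_NBITS	6		/* 64-bit bitmap_t assumed */
#define GROUP_NBITS	(1U << LG_GROUP_NBITS)
#define BITS2GROUPS(nbits) (((nbits) + GROUP_NBITS - 1) >> LG_GROUP_NBITS)

/* Linear-scan "set first unset": a 1 bit marks a still-unset slot,
 * __builtin_ffsll() finds the lowest one, and the XOR clears it,
 * mirroring bitmap_set(). Caller guarantees the bitmap is not full. */
static size_t
bitmap_sfu_linear(unsigned long long *bitmap)
{
	size_t i = 0;
	int bit;

	while ((bit = __builtin_ffsll((long long)bitmap[i])) == 0)
		i++;	/* this group is exhausted (all zero) */
	bitmap[i] ^= 1ULL << (bit - 1);
	return ((i << LG_GROUP_NBITS) + (size_t)(bit - 1));
}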
View File

@ -48,30 +48,32 @@ extern size_t chunk_npages;
extern const chunk_hooks_t chunk_hooks_default; extern const chunk_hooks_t chunk_hooks_default;
chunk_hooks_t chunk_hooks_get(tsdn_t *tsdn, arena_t *arena); chunk_hooks_t chunk_hooks_get(arena_t *arena);
chunk_hooks_t chunk_hooks_set(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t chunk_hooks_set(arena_t *arena,
const chunk_hooks_t *chunk_hooks); const chunk_hooks_t *chunk_hooks);
bool chunk_register(tsdn_t *tsdn, const void *chunk, bool chunk_register(const void *chunk, const extent_node_t *node);
const extent_node_t *node);
void chunk_deregister(const void *chunk, const extent_node_t *node); void chunk_deregister(const void *chunk, const extent_node_t *node);
void *chunk_alloc_base(size_t size); void *chunk_alloc_base(size_t size);
void *chunk_alloc_cache(tsdn_t *tsdn, arena_t *arena, void *chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
chunk_hooks_t *chunk_hooks, void *new_addr, size_t size, size_t alignment, void *new_addr, size_t size, size_t alignment, bool *zero,
size_t *sn, bool *zero, bool *commit, bool dalloc_node); bool dalloc_node);
void *chunk_alloc_wrapper(tsdn_t *tsdn, arena_t *arena, void *chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
chunk_hooks_t *chunk_hooks, void *new_addr, size_t size, size_t alignment, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit);
size_t *sn, bool *zero, bool *commit); void chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
void chunk_dalloc_cache(tsdn_t *tsdn, arena_t *arena, void *chunk, size_t size, bool committed);
chunk_hooks_t *chunk_hooks, void *chunk, size_t size, size_t sn, void chunk_dalloc_arena(arena_t *arena, chunk_hooks_t *chunk_hooks,
bool committed); void *chunk, size_t size, bool zeroed, bool committed);
void chunk_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena, void chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
chunk_hooks_t *chunk_hooks, void *chunk, size_t size, size_t sn, void *chunk, size_t size, bool committed);
bool zeroed, bool committed); bool chunk_purge_arena(arena_t *arena, void *chunk, size_t offset,
bool chunk_purge_wrapper(tsdn_t *tsdn, arena_t *arena,
chunk_hooks_t *chunk_hooks, void *chunk, size_t size, size_t offset,
size_t length); size_t length);
bool chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
void *chunk, size_t size, size_t offset, size_t length);
bool chunk_boot(void); bool chunk_boot(void);
void chunk_prefork(void);
void chunk_postfork_parent(void);
void chunk_postfork_child(void);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/ /******************************************************************************/

View File

@ -23,11 +23,13 @@ extern const char *dss_prec_names[];
dss_prec_t chunk_dss_prec_get(void); dss_prec_t chunk_dss_prec_get(void);
bool chunk_dss_prec_set(dss_prec_t dss_prec); bool chunk_dss_prec_set(dss_prec_t dss_prec);
void *chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, void *chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size,
size_t size, size_t alignment, bool *zero, bool *commit); size_t alignment, bool *zero, bool *commit);
bool chunk_in_dss(void *chunk); bool chunk_in_dss(void *chunk);
bool chunk_dss_mergeable(void *chunk_a, void *chunk_b); bool chunk_dss_boot(void);
void chunk_dss_boot(void); void chunk_dss_prefork(void);
void chunk_dss_postfork_parent(void);
void chunk_dss_postfork_child(void);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/ /******************************************************************************/

View File

@ -9,8 +9,8 @@
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_H_EXTERNS
void *chunk_alloc_mmap(void *new_addr, size_t size, size_t alignment, void *chunk_alloc_mmap(size_t size, size_t alignment, bool *zero,
bool *zero, bool *commit); bool *commit);
bool chunk_dalloc_mmap(void *chunk, size_t size); bool chunk_dalloc_mmap(void *chunk, size_t size);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */

View File

@ -40,7 +40,9 @@ struct ckh_s {
#endif #endif
/* Used for pseudo-random number generation. */ /* Used for pseudo-random number generation. */
uint64_t prng_state; #define CKH_A 1103515241
#define CKH_C 12347
uint32_t prng_state;
/* Total number of items. */ /* Total number of items. */
size_t count; size_t count;
@ -72,7 +74,7 @@ bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data);
bool ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data); bool ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data);
bool ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key, bool ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key,
void **data); void **data);
bool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data); bool ckh_search(ckh_t *ckh, const void *seachkey, void **key, void **data);
void ckh_string_hash(const void *key, size_t r_hash[2]); void ckh_string_hash(const void *key, size_t r_hash[2]);
bool ckh_string_keycomp(const void *k1, const void *k2); bool ckh_string_keycomp(const void *k1, const void *k2);
void ckh_pointer_hash(const void *key, size_t r_hash[2]); void ckh_pointer_hash(const void *key, size_t r_hash[2]);

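The CKH_A/CKH_C pair in the old header are linear-congruential constants: each step advances the 32-bit prng_state as state * A + C (mod 2^32), giving cheap pseudo-random bits for choices such as which cuckoo bucket to evict. A sketch of that generator under those assumptions (the newer code replaces it with the shared 64-bit prng module):

#include <stdint.h>

#define CKH_A 1103515241
#define CKH_C 12347

static uint32_t prng_state;

/* One LCG step; the mod-2^32 wrap is unsigned overflow. */
static inline uint32_t
ckh_prng_next(void)
{
	prng_state = prng_state * CKH_A + CKH_C;
	return (prng_state);
}

/* e.g. flip a pseudo-random coin between two candidate buckets:
 *	size_t which = ckh_prng_next() & 1;
 */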
View File

@ -21,14 +21,13 @@ struct ctl_named_node_s {
/* If (nchildren == 0), this is a terminal node. */ /* If (nchildren == 0), this is a terminal node. */
unsigned nchildren; unsigned nchildren;
const ctl_node_t *children; const ctl_node_t *children;
int (*ctl)(tsd_t *, const size_t *, size_t, void *, int (*ctl)(const size_t *, size_t, void *, size_t *,
size_t *, void *, size_t); void *, size_t);
}; };
struct ctl_indexed_node_s { struct ctl_indexed_node_s {
struct ctl_node_s node; struct ctl_node_s node;
const ctl_named_node_t *(*index)(tsdn_t *, const size_t *, size_t, const ctl_named_node_t *(*index)(const size_t *, size_t, size_t);
size_t);
}; };
struct ctl_arena_stats_s { struct ctl_arena_stats_s {
@ -36,12 +35,8 @@ struct ctl_arena_stats_s {
unsigned nthreads; unsigned nthreads;
const char *dss; const char *dss;
ssize_t lg_dirty_mult; ssize_t lg_dirty_mult;
ssize_t decay_time;
size_t pactive; size_t pactive;
size_t pdirty; size_t pdirty;
/* The remainder are only populated if config_stats is true. */
arena_stats_t astats; arena_stats_t astats;
/* Aggregate stats for small size classes, based on bin stats. */ /* Aggregate stats for small size classes, based on bin stats. */
@ -61,7 +56,6 @@ struct ctl_stats_s {
size_t metadata; size_t metadata;
size_t resident; size_t resident;
size_t mapped; size_t mapped;
size_t retained;
unsigned narenas; unsigned narenas;
ctl_arena_stats_t *arenas; /* (narenas + 1) elements. */ ctl_arena_stats_t *arenas; /* (narenas + 1) elements. */
}; };
@ -70,17 +64,16 @@ struct ctl_stats_s {
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_H_EXTERNS
int ctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp, int ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp,
void *newp, size_t newlen); size_t newlen);
int ctl_nametomib(tsdn_t *tsdn, const char *name, size_t *mibp, int ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp);
size_t *miblenp);
int ctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp, int ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp,
size_t *oldlenp, void *newp, size_t newlen); void *newp, size_t newlen);
bool ctl_boot(void); bool ctl_boot(void);
void ctl_prefork(tsdn_t *tsdn); void ctl_prefork(void);
void ctl_postfork_parent(tsdn_t *tsdn); void ctl_postfork_parent(void);
void ctl_postfork_child(tsdn_t *tsdn); void ctl_postfork_child(void);
#define xmallctl(name, oldp, oldlenp, newp, newlen) do { \ #define xmallctl(name, oldp, oldlenp, newp, newlen) do { \
if (je_mallctl(name, oldp, oldlenp, newp, newlen) \ if (je_mallctl(name, oldp, oldlenp, newp, newlen) \

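For context, ctl_byname() and ctl_bymib() back the public mallctl() interface, and the xmallctl() wrapper above simply aborts if the lookup fails. Typical application-level use of the name-based path (public API, not the internal functions above):

#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
	size_t allocated, sz = sizeof(allocated);
	/* Read a statistic by name; mallctl() returns 0 on success. */
	if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0)
		printf("allocated: %zu bytes\n", allocated);
	return 0;
}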
View File

@ -18,20 +18,6 @@ struct extent_node_s {
/* Total region size. */ /* Total region size. */
size_t en_size; size_t en_size;
/*
* Serial number (potentially non-unique).
*
* In principle serial numbers can wrap around on 32-bit systems if
* JEMALLOC_MUNMAP is defined, but as long as comparison functions fall
* back on address comparison for equal serial numbers, stable (if
* imperfect) ordering is maintained.
*
* Serial numbers may not be unique even in the absence of wrap-around,
* e.g. when splitting an extent and assigning the same serial number to
* both resulting adjacent extents.
*/
size_t en_sn;
/* /*
* The zeroed flag is used by chunk recycling code to track whether * The zeroed flag is used by chunk recycling code to track whether
* memory is zero-filled. * memory is zero-filled.
@ -59,10 +45,10 @@ struct extent_node_s {
qr(extent_node_t) cc_link; qr(extent_node_t) cc_link;
union { union {
/* Linkage for the size/sn/address-ordered tree. */ /* Linkage for the size/address-ordered tree. */
rb_node(extent_node_t) szsnad_link; rb_node(extent_node_t) szad_link;
/* Linkage for arena's achunks, huge, and node_cache lists. */ /* Linkage for arena's huge and node_cache lists. */
ql_elm(extent_node_t) ql_link; ql_elm(extent_node_t) ql_link;
}; };
@ -75,7 +61,7 @@ typedef rb_tree(extent_node_t) extent_tree_t;
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_H_EXTERNS
rb_proto(, extent_tree_szsnad_, extent_tree_t, extent_node_t) rb_proto(, extent_tree_szad_, extent_tree_t, extent_node_t)
rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t) rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t)
@ -87,7 +73,6 @@ rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t)
arena_t *extent_node_arena_get(const extent_node_t *node); arena_t *extent_node_arena_get(const extent_node_t *node);
void *extent_node_addr_get(const extent_node_t *node); void *extent_node_addr_get(const extent_node_t *node);
size_t extent_node_size_get(const extent_node_t *node); size_t extent_node_size_get(const extent_node_t *node);
size_t extent_node_sn_get(const extent_node_t *node);
bool extent_node_zeroed_get(const extent_node_t *node); bool extent_node_zeroed_get(const extent_node_t *node);
bool extent_node_committed_get(const extent_node_t *node); bool extent_node_committed_get(const extent_node_t *node);
bool extent_node_achunk_get(const extent_node_t *node); bool extent_node_achunk_get(const extent_node_t *node);
@ -95,13 +80,12 @@ prof_tctx_t *extent_node_prof_tctx_get(const extent_node_t *node);
void extent_node_arena_set(extent_node_t *node, arena_t *arena); void extent_node_arena_set(extent_node_t *node, arena_t *arena);
void extent_node_addr_set(extent_node_t *node, void *addr); void extent_node_addr_set(extent_node_t *node, void *addr);
void extent_node_size_set(extent_node_t *node, size_t size); void extent_node_size_set(extent_node_t *node, size_t size);
void extent_node_sn_set(extent_node_t *node, size_t sn);
void extent_node_zeroed_set(extent_node_t *node, bool zeroed); void extent_node_zeroed_set(extent_node_t *node, bool zeroed);
void extent_node_committed_set(extent_node_t *node, bool committed); void extent_node_committed_set(extent_node_t *node, bool committed);
void extent_node_achunk_set(extent_node_t *node, bool achunk); void extent_node_achunk_set(extent_node_t *node, bool achunk);
void extent_node_prof_tctx_set(extent_node_t *node, prof_tctx_t *tctx); void extent_node_prof_tctx_set(extent_node_t *node, prof_tctx_t *tctx);
void extent_node_init(extent_node_t *node, arena_t *arena, void *addr, void extent_node_init(extent_node_t *node, arena_t *arena, void *addr,
size_t size, size_t sn, bool zeroed, bool committed); size_t size, bool zeroed, bool committed);
void extent_node_dirty_linkage_init(extent_node_t *node); void extent_node_dirty_linkage_init(extent_node_t *node);
void extent_node_dirty_insert(extent_node_t *node, void extent_node_dirty_insert(extent_node_t *node,
arena_runs_dirty_link_t *runs_dirty, extent_node_t *chunks_dirty); arena_runs_dirty_link_t *runs_dirty, extent_node_t *chunks_dirty);
@ -130,13 +114,6 @@ extent_node_size_get(const extent_node_t *node)
return (node->en_size); return (node->en_size);
} }
JEMALLOC_INLINE size_t
extent_node_sn_get(const extent_node_t *node)
{
return (node->en_sn);
}
JEMALLOC_INLINE bool JEMALLOC_INLINE bool
extent_node_zeroed_get(const extent_node_t *node) extent_node_zeroed_get(const extent_node_t *node)
{ {
@ -187,13 +164,6 @@ extent_node_size_set(extent_node_t *node, size_t size)
node->en_size = size; node->en_size = size;
} }
JEMALLOC_INLINE void
extent_node_sn_set(extent_node_t *node, size_t sn)
{
node->en_sn = sn;
}
JEMALLOC_INLINE void JEMALLOC_INLINE void
extent_node_zeroed_set(extent_node_t *node, bool zeroed) extent_node_zeroed_set(extent_node_t *node, bool zeroed)
{ {
@ -224,13 +194,12 @@ extent_node_prof_tctx_set(extent_node_t *node, prof_tctx_t *tctx)
JEMALLOC_INLINE void JEMALLOC_INLINE void
extent_node_init(extent_node_t *node, arena_t *arena, void *addr, size_t size, extent_node_init(extent_node_t *node, arena_t *arena, void *addr, size_t size,
size_t sn, bool zeroed, bool committed) bool zeroed, bool committed)
{ {
extent_node_arena_set(node, arena); extent_node_arena_set(node, arena);
extent_node_addr_set(node, addr); extent_node_addr_set(node, addr);
extent_node_size_set(node, size); extent_node_size_set(node, size);
extent_node_sn_set(node, sn);
extent_node_zeroed_set(node, zeroed); extent_node_zeroed_set(node, zeroed);
extent_node_committed_set(node, committed); extent_node_committed_set(node, committed);
extent_node_achunk_set(node, false); extent_node_achunk_set(node, false);

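The size/sn/address ordering described above implies a comparator that consults the serial number only on size ties and falls back on the address only when serial numbers also tie, which is what keeps the order stable across sn collisions and wrap-around. A sketch of that comparison logic, using a hypothetical node type in place of extent_node_t (this is not jemalloc's actual rb-tree comparator):

#include <stddef.h>
#include <stdint.h>

struct node { size_t size, sn; void *addr; };

static int
szsnad_cmp(const struct node *a, const struct node *b)
{
	if (a->size != b->size)
		return (a->size < b->size ? -1 : 1);
	if (a->sn != b->sn)
		return (a->sn < b->sn ? -1 : 1);	/* older first */
	if (a->addr != b->addr)
		return ((uintptr_t)a->addr < (uintptr_t)b->addr ? -1 : 1);
	return (0);
}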
View File

@ -1,6 +1,6 @@
/* /*
* The following hash function is based on MurmurHash3, placed into the public * The following hash function is based on MurmurHash3, placed into the public
* domain by Austin Appleby. See https://github.com/aappleby/smhasher for * domain by Austin Appleby. See http://code.google.com/p/smhasher/ for
* details. * details.
*/ */
/******************************************************************************/ /******************************************************************************/
@ -49,14 +49,6 @@ JEMALLOC_INLINE uint32_t
hash_get_block_32(const uint32_t *p, int i) hash_get_block_32(const uint32_t *p, int i)
{ {
/* Handle unaligned read. */
if (unlikely((uintptr_t)p & (sizeof(uint32_t)-1)) != 0) {
uint32_t ret;
memcpy(&ret, (uint8_t *)(p + i), sizeof(uint32_t));
return (ret);
}
return (p[i]); return (p[i]);
} }
@ -64,14 +56,6 @@ JEMALLOC_INLINE uint64_t
hash_get_block_64(const uint64_t *p, int i) hash_get_block_64(const uint64_t *p, int i)
{ {
/* Handle unaligned read. */
if (unlikely((uintptr_t)p & (sizeof(uint64_t)-1)) != 0) {
uint64_t ret;
memcpy(&ret, (uint8_t *)(p + i), sizeof(uint64_t));
return (ret);
}
return (p[i]); return (p[i]);
} }
@ -337,18 +321,13 @@ hash_x64_128(const void *key, const int len, const uint32_t seed,
JEMALLOC_INLINE void JEMALLOC_INLINE void
hash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2]) hash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2])
{ {
assert(len <= INT_MAX); /* Unfortunate implementation limitation. */
#if (LG_SIZEOF_PTR == 3 && !defined(JEMALLOC_BIG_ENDIAN)) #if (LG_SIZEOF_PTR == 3 && !defined(JEMALLOC_BIG_ENDIAN))
hash_x64_128(key, (int)len, seed, (uint64_t *)r_hash); hash_x64_128(key, len, seed, (uint64_t *)r_hash);
#else #else
{ uint64_t hashes[2];
uint64_t hashes[2]; hash_x86_128(key, len, seed, hashes);
hash_x86_128(key, (int)len, seed, hashes); r_hash[0] = (size_t)hashes[0];
r_hash[0] = (size_t)hashes[0]; r_hash[1] = (size_t)hashes[1];
r_hash[1] = (size_t)hashes[1];
}
#endif #endif
} }
#endif #endif

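The fallback removed above is the standard portable idiom for loading a word from a possibly misaligned address: memcpy() into a local, which compilers lower to a single load on architectures that tolerate misalignment. A self-contained sketch of the same idiom:

#include <stdint.h>
#include <string.h>

/* Read the i'th 32-bit block whether or not p is 4-byte aligned. */
static inline uint32_t
get_block_32(const uint32_t *p, int i)
{
	if (((uintptr_t)p & (sizeof(uint32_t) - 1)) != 0) {
		uint32_t ret;

		memcpy(&ret, (const uint8_t *)(p + i), sizeof(uint32_t));
		return (ret);
	}
	return (p[i]);
}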
View File

@ -9,23 +9,24 @@
/******************************************************************************/ /******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_H_EXTERNS
void *huge_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero); void *huge_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
void *huge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, tcache_t *tcache);
size_t alignment, bool zero); void *huge_palloc(tsd_t *tsd, arena_t *arena, size_t size, size_t alignment,
bool huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, bool zero, tcache_t *tcache);
size_t usize_min, size_t usize_max, bool zero); bool huge_ralloc_no_move(void *ptr, size_t oldsize, size_t usize_min,
size_t usize_max, bool zero);
void *huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, void *huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
size_t usize, size_t alignment, bool zero, tcache_t *tcache); size_t usize, size_t alignment, bool zero, tcache_t *tcache);
#ifdef JEMALLOC_JET #ifdef JEMALLOC_JET
typedef void (huge_dalloc_junk_t)(void *, size_t); typedef void (huge_dalloc_junk_t)(void *, size_t);
extern huge_dalloc_junk_t *huge_dalloc_junk; extern huge_dalloc_junk_t *huge_dalloc_junk;
#endif #endif
void huge_dalloc(tsdn_t *tsdn, void *ptr); void huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache);
arena_t *huge_aalloc(const void *ptr); arena_t *huge_aalloc(const void *ptr);
size_t huge_salloc(tsdn_t *tsdn, const void *ptr); size_t huge_salloc(const void *ptr);
prof_tctx_t *huge_prof_tctx_get(tsdn_t *tsdn, const void *ptr); prof_tctx_t *huge_prof_tctx_get(const void *ptr);
void huge_prof_tctx_set(tsdn_t *tsdn, const void *ptr, prof_tctx_t *tctx); void huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx);
void huge_prof_tctx_reset(tsdn_t *tsdn, const void *ptr); void huge_prof_tctx_reset(const void *ptr);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/ /******************************************************************************/

View File

@ -49,7 +49,6 @@ static const bool config_lazy_lock =
false false
#endif #endif
; ;
static const char * const config_malloc_conf = JEMALLOC_CONFIG_MALLOC_CONF;
static const bool config_prof = static const bool config_prof =
#ifdef JEMALLOC_PROF #ifdef JEMALLOC_PROF
true true
@ -161,10 +160,7 @@ static const bool config_cache_oblivious =
#include <malloc/malloc.h> #include <malloc/malloc.h>
#endif #endif
#include "jemalloc/internal/ph.h"
#ifndef __PGI
#define RB_COMPACT #define RB_COMPACT
#endif
#include "jemalloc/internal/rb.h" #include "jemalloc/internal/rb.h"
#include "jemalloc/internal/qr.h" #include "jemalloc/internal/qr.h"
#include "jemalloc/internal/ql.h" #include "jemalloc/internal/ql.h"
@ -187,9 +183,6 @@ static const bool config_cache_oblivious =
#include "jemalloc/internal/jemalloc_internal_macros.h" #include "jemalloc/internal/jemalloc_internal_macros.h"
/* Page size index type. */
typedef unsigned pszind_t;
/* Size class index type. */ /* Size class index type. */
typedef unsigned szind_t; typedef unsigned szind_t;
@ -239,7 +232,7 @@ typedef unsigned szind_t;
# ifdef __alpha__ # ifdef __alpha__
# define LG_QUANTUM 4 # define LG_QUANTUM 4
# endif # endif
# if (defined(__sparc64__) || defined(__sparcv9) || defined(__sparc_v9__)) # if (defined(__sparc64__) || defined(__sparcv9))
# define LG_QUANTUM 4 # define LG_QUANTUM 4
# endif # endif
# if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64)) # if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64))
@ -263,9 +256,6 @@ typedef unsigned szind_t;
# ifdef __powerpc__ # ifdef __powerpc__
# define LG_QUANTUM 4 # define LG_QUANTUM 4
# endif # endif
# ifdef __riscv__
# define LG_QUANTUM 4
# endif
# ifdef __s390__ # ifdef __s390__
# define LG_QUANTUM 4 # define LG_QUANTUM 4
# endif # endif
@ -327,17 +317,13 @@ typedef unsigned szind_t;
#define PAGE ((size_t)(1U << LG_PAGE)) #define PAGE ((size_t)(1U << LG_PAGE))
#define PAGE_MASK ((size_t)(PAGE - 1)) #define PAGE_MASK ((size_t)(PAGE - 1))
/* Return the page base address for the page containing address a. */
#define PAGE_ADDR2BASE(a) \
((void *)((uintptr_t)(a) & ~PAGE_MASK))
/* Return the smallest pagesize multiple that is >= s. */ /* Return the smallest pagesize multiple that is >= s. */
#define PAGE_CEILING(s) \ #define PAGE_CEILING(s) \
(((s) + PAGE_MASK) & ~PAGE_MASK) (((s) + PAGE_MASK) & ~PAGE_MASK)
/* Return the nearest aligned address at or below a. */ /* Return the nearest aligned address at or below a. */
#define ALIGNMENT_ADDR2BASE(a, alignment) \ #define ALIGNMENT_ADDR2BASE(a, alignment) \
((void *)((uintptr_t)(a) & ((~(alignment)) + 1))) ((void *)((uintptr_t)(a) & (-(alignment))))
/* Return the offset between a and the nearest aligned address at or below a. */ /* Return the offset between a and the nearest aligned address at or below a. */
#define ALIGNMENT_ADDR2OFFSET(a, alignment) \ #define ALIGNMENT_ADDR2OFFSET(a, alignment) \
@ -345,7 +331,7 @@ typedef unsigned szind_t;
/* Return the smallest alignment multiple that is >= s. */ /* Return the smallest alignment multiple that is >= s. */
#define ALIGNMENT_CEILING(s, alignment) \ #define ALIGNMENT_CEILING(s, alignment) \
(((s) + (alignment - 1)) & ((~(alignment)) + 1)) (((s) + (alignment - 1)) & (-(alignment)))
/* Declare a variable-length array. */ /* Declare a variable-length array. */
#if __STDC_VERSION__ < 199901L #if __STDC_VERSION__ < 199901L
@ -365,19 +351,14 @@ typedef unsigned szind_t;
# define VARIABLE_ARRAY(type, name, count) type name[(count)] # define VARIABLE_ARRAY(type, name, count) type name[(count)]
#endif #endif
#include "jemalloc/internal/nstime.h"
#include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/valgrind.h"
#include "jemalloc/internal/util.h" #include "jemalloc/internal/util.h"
#include "jemalloc/internal/atomic.h" #include "jemalloc/internal/atomic.h"
#include "jemalloc/internal/spin.h"
#include "jemalloc/internal/prng.h" #include "jemalloc/internal/prng.h"
#include "jemalloc/internal/ticker.h"
#include "jemalloc/internal/ckh.h" #include "jemalloc/internal/ckh.h"
#include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/size_classes.h"
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h" #include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h" #include "jemalloc/internal/ctl.h"
#include "jemalloc/internal/witness.h"
#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/tsd.h" #include "jemalloc/internal/tsd.h"
#include "jemalloc/internal/mb.h" #include "jemalloc/internal/mb.h"
@ -398,19 +379,14 @@ typedef unsigned szind_t;
/******************************************************************************/ /******************************************************************************/
#define JEMALLOC_H_STRUCTS #define JEMALLOC_H_STRUCTS
#include "jemalloc/internal/nstime.h"
#include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/valgrind.h"
#include "jemalloc/internal/util.h" #include "jemalloc/internal/util.h"
#include "jemalloc/internal/atomic.h" #include "jemalloc/internal/atomic.h"
#include "jemalloc/internal/spin.h"
#include "jemalloc/internal/prng.h" #include "jemalloc/internal/prng.h"
#include "jemalloc/internal/ticker.h"
#include "jemalloc/internal/ckh.h" #include "jemalloc/internal/ckh.h"
#include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/size_classes.h"
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h" #include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h" #include "jemalloc/internal/ctl.h"
#include "jemalloc/internal/witness.h"
#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/mb.h" #include "jemalloc/internal/mb.h"
#include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/bitmap.h"
@ -446,27 +422,13 @@ extern bool opt_redzone;
extern bool opt_utrace; extern bool opt_utrace;
extern bool opt_xmalloc; extern bool opt_xmalloc;
extern bool opt_zero; extern bool opt_zero;
extern unsigned opt_narenas; extern size_t opt_narenas;
extern bool in_valgrind; extern bool in_valgrind;
/* Number of CPUs. */ /* Number of CPUs. */
extern unsigned ncpus; extern unsigned ncpus;
/* Number of arenas used for automatic multiplexing of threads and arenas. */
extern unsigned narenas_auto;
/*
* Arenas that are used to service external requests. Not all elements of the
* arenas array are necessarily used; arenas are created lazily as needed.
*/
extern arena_t **arenas;
/*
* pind2sz_tab encodes the same information as could be computed by
* pind2sz_compute().
*/
extern size_t const pind2sz_tab[NPSIZES];
/* /*
* index2size_tab encodes the same information as could be computed (at * index2size_tab encodes the same information as could be computed (at
* unacceptable cost in some code paths) by index2size_compute(). * unacceptable cost in some code paths) by index2size_compute().
@ -485,35 +447,31 @@ void a0dalloc(void *ptr);
void *bootstrap_malloc(size_t size); void *bootstrap_malloc(size_t size);
void *bootstrap_calloc(size_t num, size_t size); void *bootstrap_calloc(size_t num, size_t size);
void bootstrap_free(void *ptr); void bootstrap_free(void *ptr);
arena_t *arenas_extend(unsigned ind);
arena_t *arena_init(unsigned ind);
unsigned narenas_total_get(void); unsigned narenas_total_get(void);
arena_t *arena_init(tsdn_t *tsdn, unsigned ind); arena_t *arena_get_hard(tsd_t *tsd, unsigned ind, bool init_if_missing);
arena_tdata_t *arena_tdata_get_hard(tsd_t *tsd, unsigned ind); arena_t *arena_choose_hard(tsd_t *tsd);
arena_t *arena_choose_hard(tsd_t *tsd, bool internal);
void arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind); void arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind);
unsigned arena_nbound(unsigned ind);
void thread_allocated_cleanup(tsd_t *tsd); void thread_allocated_cleanup(tsd_t *tsd);
void thread_deallocated_cleanup(tsd_t *tsd); void thread_deallocated_cleanup(tsd_t *tsd);
void iarena_cleanup(tsd_t *tsd);
void arena_cleanup(tsd_t *tsd); void arena_cleanup(tsd_t *tsd);
void arenas_tdata_cleanup(tsd_t *tsd); void arenas_cache_cleanup(tsd_t *tsd);
void narenas_tdata_cleanup(tsd_t *tsd); void narenas_cache_cleanup(tsd_t *tsd);
void arenas_tdata_bypass_cleanup(tsd_t *tsd); void arenas_cache_bypass_cleanup(tsd_t *tsd);
void jemalloc_prefork(void); void jemalloc_prefork(void);
void jemalloc_postfork_parent(void); void jemalloc_postfork_parent(void);
void jemalloc_postfork_child(void); void jemalloc_postfork_child(void);
#include "jemalloc/internal/nstime.h"
#include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/valgrind.h"
#include "jemalloc/internal/util.h" #include "jemalloc/internal/util.h"
#include "jemalloc/internal/atomic.h" #include "jemalloc/internal/atomic.h"
#include "jemalloc/internal/spin.h"
#include "jemalloc/internal/prng.h" #include "jemalloc/internal/prng.h"
#include "jemalloc/internal/ticker.h"
#include "jemalloc/internal/ckh.h" #include "jemalloc/internal/ckh.h"
#include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/size_classes.h"
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h" #include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h" #include "jemalloc/internal/ctl.h"
#include "jemalloc/internal/witness.h"
#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/mb.h" #include "jemalloc/internal/mb.h"
#include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/bitmap.h"
@ -534,21 +492,16 @@ void jemalloc_postfork_child(void);
/******************************************************************************/ /******************************************************************************/
#define JEMALLOC_H_INLINES #define JEMALLOC_H_INLINES
#include "jemalloc/internal/nstime.h"
#include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/valgrind.h"
#include "jemalloc/internal/util.h" #include "jemalloc/internal/util.h"
#include "jemalloc/internal/atomic.h" #include "jemalloc/internal/atomic.h"
#include "jemalloc/internal/spin.h"
#include "jemalloc/internal/prng.h" #include "jemalloc/internal/prng.h"
#include "jemalloc/internal/ticker.h"
#include "jemalloc/internal/ckh.h" #include "jemalloc/internal/ckh.h"
#include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/size_classes.h"
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h" #include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h" #include "jemalloc/internal/ctl.h"
#include "jemalloc/internal/tsd.h"
#include "jemalloc/internal/witness.h"
#include "jemalloc/internal/mutex.h" #include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/tsd.h"
#include "jemalloc/internal/mb.h" #include "jemalloc/internal/mb.h"
#include "jemalloc/internal/extent.h" #include "jemalloc/internal/extent.h"
#include "jemalloc/internal/base.h" #include "jemalloc/internal/base.h"
@ -558,11 +511,6 @@ void jemalloc_postfork_child(void);
#include "jemalloc/internal/huge.h" #include "jemalloc/internal/huge.h"
#ifndef JEMALLOC_ENABLE_INLINE #ifndef JEMALLOC_ENABLE_INLINE
pszind_t psz2ind(size_t psz);
size_t pind2sz_compute(pszind_t pind);
size_t pind2sz_lookup(pszind_t pind);
size_t pind2sz(pszind_t pind);
size_t psz2u(size_t psz);
szind_t size2index_compute(size_t size); szind_t size2index_compute(size_t size);
szind_t size2index_lookup(size_t size); szind_t size2index_lookup(size_t size);
szind_t size2index(size_t size); szind_t size2index(size_t size);
@ -573,121 +521,39 @@ size_t s2u_compute(size_t size);
size_t s2u_lookup(size_t size); size_t s2u_lookup(size_t size);
size_t s2u(size_t size); size_t s2u(size_t size);
size_t sa2u(size_t size, size_t alignment); size_t sa2u(size_t size, size_t alignment);
arena_t *arena_choose_impl(tsd_t *tsd, arena_t *arena, bool internal);
arena_t *arena_choose(tsd_t *tsd, arena_t *arena); arena_t *arena_choose(tsd_t *tsd, arena_t *arena);
arena_t *arena_ichoose(tsd_t *tsd, arena_t *arena); arena_t *arena_get(tsd_t *tsd, unsigned ind, bool init_if_missing,
arena_tdata_t *arena_tdata_get(tsd_t *tsd, unsigned ind,
bool refresh_if_missing); bool refresh_if_missing);
arena_t *arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing);
ticker_t *decay_ticker_get(tsd_t *tsd, unsigned ind);
#endif #endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_))
JEMALLOC_INLINE pszind_t
psz2ind(size_t psz)
{
if (unlikely(psz > HUGE_MAXCLASS))
return (NPSIZES);
{
pszind_t x = lg_floor((psz<<1)-1);
pszind_t shift = (x < LG_SIZE_CLASS_GROUP + LG_PAGE) ? 0 : x -
(LG_SIZE_CLASS_GROUP + LG_PAGE);
pszind_t grp = shift << LG_SIZE_CLASS_GROUP;
pszind_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_PAGE + 1) ?
LG_PAGE : x - LG_SIZE_CLASS_GROUP - 1;
size_t delta_inverse_mask = ZI(-1) << lg_delta;
pszind_t mod = ((((psz-1) & delta_inverse_mask) >> lg_delta)) &
((ZU(1) << LG_SIZE_CLASS_GROUP) - 1);
pszind_t ind = grp + mod;
return (ind);
}
}
JEMALLOC_INLINE size_t
pind2sz_compute(pszind_t pind)
{
{
size_t grp = pind >> LG_SIZE_CLASS_GROUP;
size_t mod = pind & ((ZU(1) << LG_SIZE_CLASS_GROUP) - 1);
size_t grp_size_mask = ~((!!grp)-1);
size_t grp_size = ((ZU(1) << (LG_PAGE +
(LG_SIZE_CLASS_GROUP-1))) << grp) & grp_size_mask;
size_t shift = (grp == 0) ? 1 : grp;
size_t lg_delta = shift + (LG_PAGE-1);
size_t mod_size = (mod+1) << lg_delta;
size_t sz = grp_size + mod_size;
return (sz);
}
}
JEMALLOC_INLINE size_t
pind2sz_lookup(pszind_t pind)
{
size_t ret = (size_t)pind2sz_tab[pind];
assert(ret == pind2sz_compute(pind));
return (ret);
}
JEMALLOC_INLINE size_t
pind2sz(pszind_t pind)
{
assert(pind < NPSIZES);
return (pind2sz_lookup(pind));
}
JEMALLOC_INLINE size_t
psz2u(size_t psz)
{
if (unlikely(psz > HUGE_MAXCLASS))
return (0);
{
size_t x = lg_floor((psz<<1)-1);
size_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_PAGE + 1) ?
LG_PAGE : x - LG_SIZE_CLASS_GROUP - 1;
size_t delta = ZU(1) << lg_delta;
size_t delta_mask = delta - 1;
size_t usize = (psz + delta_mask) & ~delta_mask;
return (usize);
}
}
JEMALLOC_INLINE szind_t JEMALLOC_INLINE szind_t
size2index_compute(size_t size) size2index_compute(size_t size)
{ {
if (unlikely(size > HUGE_MAXCLASS))
return (NSIZES);
#if (NTBINS != 0) #if (NTBINS != 0)
if (size <= (ZU(1) << LG_TINY_MAXCLASS)) { if (size <= (ZU(1) << LG_TINY_MAXCLASS)) {
szind_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1; size_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1;
szind_t lg_ceil = lg_floor(pow2_ceil_zu(size)); size_t lg_ceil = lg_floor(pow2_ceil(size));
return (lg_ceil < lg_tmin ? 0 : lg_ceil - lg_tmin); return (lg_ceil < lg_tmin ? 0 : lg_ceil - lg_tmin);
} }
#endif #endif
{ {
szind_t x = lg_floor((size<<1)-1); size_t x = unlikely(ZI(size) < 0) ? ((size<<1) ?
szind_t shift = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM) ? 0 : (ZU(1)<<(LG_SIZEOF_PTR+3)) : ((ZU(1)<<(LG_SIZEOF_PTR+3))-1))
: lg_floor((size<<1)-1);
size_t shift = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM) ? 0 :
x - (LG_SIZE_CLASS_GROUP + LG_QUANTUM); x - (LG_SIZE_CLASS_GROUP + LG_QUANTUM);
szind_t grp = shift << LG_SIZE_CLASS_GROUP; size_t grp = shift << LG_SIZE_CLASS_GROUP;
szind_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1) size_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1)
? LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1; ? LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1;
size_t delta_inverse_mask = ZI(-1) << lg_delta; size_t delta_inverse_mask = ZI(-1) << lg_delta;
szind_t mod = ((((size-1) & delta_inverse_mask) >> lg_delta)) & size_t mod = ((((size-1) & delta_inverse_mask) >> lg_delta)) &
((ZU(1) << LG_SIZE_CLASS_GROUP) - 1); ((ZU(1) << LG_SIZE_CLASS_GROUP) - 1);
szind_t index = NTBINS + grp + mod; size_t index = NTBINS + grp + mod;
return (index); return (index);
} }
} }
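To make the shared size-class arithmetic concrete, assume LG_PAGE == 12 and LG_SIZE_CLASS_GROUP == 2 (four classes per doubling, the usual jemalloc configuration) and trace psz2u(5000), which uses the same lg_floor((sz<<1)-1) scheme as size2index_compute() above:

/* x        = lg_floor((5000 << 1) - 1) = lg_floor(9999) = 13
 * lg_delta = (x < 2 + 12 + 1) ? LG_PAGE : ...         -> 12
 * delta    = 1 << 12 = 4096
 * usize    = (5000 + 4095) & ~4095 = 8192
 * i.e. a 5000-byte request rounds up to the 8 KiB page size class.
 */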
@ -698,7 +564,8 @@ size2index_lookup(size_t size)
assert(size <= LOOKUP_MAXCLASS); assert(size <= LOOKUP_MAXCLASS);
{ {
szind_t ret = (size2index_tab[(size-1) >> LG_TINY_MIN]); size_t ret = ((size_t)(size2index_tab[(size-1) >>
LG_TINY_MIN]));
assert(ret == size2index_compute(size)); assert(ret == size2index_compute(size));
return (ret); return (ret);
} }
@ -761,18 +628,18 @@ JEMALLOC_ALWAYS_INLINE size_t
s2u_compute(size_t size) s2u_compute(size_t size)
{ {
if (unlikely(size > HUGE_MAXCLASS))
return (0);
#if (NTBINS > 0) #if (NTBINS > 0)
if (size <= (ZU(1) << LG_TINY_MAXCLASS)) { if (size <= (ZU(1) << LG_TINY_MAXCLASS)) {
size_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1; size_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1;
size_t lg_ceil = lg_floor(pow2_ceil_zu(size)); size_t lg_ceil = lg_floor(pow2_ceil(size));
return (lg_ceil < lg_tmin ? (ZU(1) << lg_tmin) : return (lg_ceil < lg_tmin ? (ZU(1) << lg_tmin) :
(ZU(1) << lg_ceil)); (ZU(1) << lg_ceil));
} }
#endif #endif
{ {
size_t x = lg_floor((size<<1)-1); size_t x = unlikely(ZI(size) < 0) ? ((size<<1) ?
(ZU(1)<<(LG_SIZEOF_PTR+3)) : ((ZU(1)<<(LG_SIZEOF_PTR+3))-1))
: lg_floor((size<<1)-1);
size_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1) size_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1)
? LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1; ? LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1;
size_t delta = ZU(1) << lg_delta; size_t delta = ZU(1) << lg_delta;
@ -856,16 +723,17 @@ sa2u(size_t size, size_t alignment)
return (usize); return (usize);
} }
/* Huge size class. Beware of overflow. */ /* Huge size class. Beware of size_t overflow. */
if (unlikely(alignment > HUGE_MAXCLASS))
return (0);
/* /*
* We can't achieve subchunk alignment, so round up alignment to the * We can't achieve subchunk alignment, so round up alignment to the
* minimum that can actually be supported. * minimum that can actually be supported.
*/ */
alignment = CHUNK_CEILING(alignment); alignment = CHUNK_CEILING(alignment);
if (alignment == 0) {
/* size_t overflow. */
return (0);
}
/* Make sure result is a huge size class. */ /* Make sure result is a huge size class. */
if (size <= chunksize) if (size <= chunksize)
@ -891,84 +759,45 @@ sa2u(size_t size, size_t alignment)
/* Choose an arena based on a per-thread value. */ /* Choose an arena based on a per-thread value. */
JEMALLOC_INLINE arena_t * JEMALLOC_INLINE arena_t *
arena_choose_impl(tsd_t *tsd, arena_t *arena, bool internal) arena_choose(tsd_t *tsd, arena_t *arena)
{ {
arena_t *ret; arena_t *ret;
if (arena != NULL) if (arena != NULL)
return (arena); return (arena);
ret = internal ? tsd_iarena_get(tsd) : tsd_arena_get(tsd); if (unlikely((ret = tsd_arena_get(tsd)) == NULL))
if (unlikely(ret == NULL)) ret = arena_choose_hard(tsd);
ret = arena_choose_hard(tsd, internal);
return (ret); return (ret);
} }
JEMALLOC_INLINE arena_t * JEMALLOC_INLINE arena_t *
arena_choose(tsd_t *tsd, arena_t *arena) arena_get(tsd_t *tsd, unsigned ind, bool init_if_missing,
bool refresh_if_missing)
{ {
arena_t *arena;
arena_t **arenas_cache = tsd_arenas_cache_get(tsd);
return (arena_choose_impl(tsd, arena, false)); /* init_if_missing requires refresh_if_missing. */
} assert(!init_if_missing || refresh_if_missing);
JEMALLOC_INLINE arena_t * if (unlikely(arenas_cache == NULL)) {
arena_ichoose(tsd_t *tsd, arena_t *arena) /* arenas_cache hasn't been initialized yet. */
{ return (arena_get_hard(tsd, ind, init_if_missing));
return (arena_choose_impl(tsd, arena, true));
}
JEMALLOC_INLINE arena_tdata_t *
arena_tdata_get(tsd_t *tsd, unsigned ind, bool refresh_if_missing)
{
arena_tdata_t *tdata;
arena_tdata_t *arenas_tdata = tsd_arenas_tdata_get(tsd);
if (unlikely(arenas_tdata == NULL)) {
/* arenas_tdata hasn't been initialized yet. */
return (arena_tdata_get_hard(tsd, ind));
} }
if (unlikely(ind >= tsd_narenas_tdata_get(tsd))) { if (unlikely(ind >= tsd_narenas_cache_get(tsd))) {
/* /*
* ind is invalid, cache is old (too small), or tdata to be * ind is invalid, cache is old (too small), or arena to be
* initialized. * initialized.
*/ */
return (refresh_if_missing ? arena_tdata_get_hard(tsd, ind) : return (refresh_if_missing ? arena_get_hard(tsd, ind,
NULL); init_if_missing) : NULL);
} }
arena = arenas_cache[ind];
tdata = &arenas_tdata[ind]; if (likely(arena != NULL) || !refresh_if_missing)
if (likely(tdata != NULL) || !refresh_if_missing) return (arena);
return (tdata); return (arena_get_hard(tsd, ind, init_if_missing));
return (arena_tdata_get_hard(tsd, ind));
}
JEMALLOC_INLINE arena_t *
arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing)
{
arena_t *ret;
assert(ind <= MALLOCX_ARENA_MAX);
ret = arenas[ind];
if (unlikely(ret == NULL)) {
ret = atomic_read_p((void *)&arenas[ind]);
if (init_if_missing && unlikely(ret == NULL))
ret = arena_init(tsdn, ind);
}
return (ret);
}
JEMALLOC_INLINE ticker_t *
decay_ticker_get(tsd_t *tsd, unsigned ind)
{
arena_tdata_t *tdata;
tdata = arena_tdata_get(tsd, ind, true);
if (unlikely(tdata == NULL))
return (NULL);
return (&tdata->decay_ticker);
} }
#endif #endif
@ -989,27 +818,27 @@ decay_ticker_get(tsd_t *tsd, unsigned ind)
#ifndef JEMALLOC_ENABLE_INLINE #ifndef JEMALLOC_ENABLE_INLINE
arena_t *iaalloc(const void *ptr); arena_t *iaalloc(const void *ptr);
size_t isalloc(tsdn_t *tsdn, const void *ptr, bool demote); size_t isalloc(const void *ptr, bool demote);
void *iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero, void *iallocztm(tsd_t *tsd, size_t size, bool zero, tcache_t *tcache,
tcache_t *tcache, bool is_metadata, arena_t *arena, bool slow_path); bool is_metadata, arena_t *arena);
void *ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero, void *imalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena);
bool slow_path); void *imalloc(tsd_t *tsd, size_t size);
void *ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, void *icalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena);
void *icalloc(tsd_t *tsd, size_t size);
void *ipallocztm(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
tcache_t *tcache, bool is_metadata, arena_t *arena); tcache_t *tcache, bool is_metadata, arena_t *arena);
void *ipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, void *ipalloct(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
tcache_t *tcache, arena_t *arena); tcache_t *tcache, arena_t *arena);
void *ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero); void *ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero);
size_t ivsalloc(tsdn_t *tsdn, const void *ptr, bool demote); size_t ivsalloc(const void *ptr, bool demote);
size_t u2rz(size_t usize); size_t u2rz(size_t usize);
size_t p2rz(tsdn_t *tsdn, const void *ptr); size_t p2rz(const void *ptr);
void idalloctm(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool is_metadata, void idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata);
bool slow_path); void idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache);
void idalloc(tsd_t *tsd, void *ptr); void idalloc(tsd_t *tsd, void *ptr);
void iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path); void iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache);
void isdalloct(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache, void isdalloct(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
bool slow_path); void isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
void isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache,
bool slow_path);
void *iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, void *iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
size_t extra, size_t alignment, bool zero, tcache_t *tcache, size_t extra, size_t alignment, bool zero, tcache_t *tcache,
arena_t *arena); arena_t *arena);
@ -1017,8 +846,8 @@ void *iralloct(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
size_t alignment, bool zero, tcache_t *tcache, arena_t *arena); size_t alignment, bool zero, tcache_t *tcache, arena_t *arena);
void *iralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, void *iralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
size_t alignment, bool zero); size_t alignment, bool zero);
bool ixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, bool ixalloc(void *ptr, size_t oldsize, size_t size, size_t extra,
size_t extra, size_t alignment, bool zero); size_t alignment, bool zero);
#endif #endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_))
@ -1033,85 +862,100 @@ iaalloc(const void *ptr)
/* /*
* Typical usage: * Typical usage:
* tsdn_t *tsdn = [...]
* void *ptr = [...] * void *ptr = [...]
* size_t sz = isalloc(tsdn, ptr, config_prof); * size_t sz = isalloc(ptr, config_prof);
*/ */
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE size_t
isalloc(tsdn_t *tsdn, const void *ptr, bool demote) isalloc(const void *ptr, bool demote)
{ {
assert(ptr != NULL); assert(ptr != NULL);
/* Demotion only makes sense if config_prof is true. */ /* Demotion only makes sense if config_prof is true. */
assert(config_prof || !demote); assert(config_prof || !demote);
return (arena_salloc(tsdn, ptr, demote)); return (arena_salloc(ptr, demote));
} }
JEMALLOC_ALWAYS_INLINE void * JEMALLOC_ALWAYS_INLINE void *
iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero, tcache_t *tcache, iallocztm(tsd_t *tsd, size_t size, bool zero, tcache_t *tcache, bool is_metadata,
bool is_metadata, arena_t *arena, bool slow_path) arena_t *arena)
{ {
void *ret; void *ret;
assert(size != 0); assert(size != 0);
assert(!is_metadata || tcache == NULL);
assert(!is_metadata || arena == NULL || arena->ind < narenas_auto);
ret = arena_malloc(tsdn, arena, size, ind, zero, tcache, slow_path); ret = arena_malloc(tsd, arena, size, zero, tcache);
if (config_stats && is_metadata && likely(ret != NULL)) { if (config_stats && is_metadata && likely(ret != NULL)) {
arena_metadata_allocated_add(iaalloc(ret), arena_metadata_allocated_add(iaalloc(ret), isalloc(ret,
isalloc(tsdn, ret, config_prof));
}
return (ret);
}
JEMALLOC_ALWAYS_INLINE void *
ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero, bool slow_path)
{
return (iallocztm(tsd_tsdn(tsd), size, ind, zero, tcache_get(tsd, true),
false, NULL, slow_path));
}
JEMALLOC_ALWAYS_INLINE void *
ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
tcache_t *tcache, bool is_metadata, arena_t *arena)
{
void *ret;
assert(usize != 0);
assert(usize == sa2u(usize, alignment));
assert(!is_metadata || tcache == NULL);
assert(!is_metadata || arena == NULL || arena->ind < narenas_auto);
ret = arena_palloc(tsdn, arena, usize, alignment, zero, tcache);
assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret);
if (config_stats && is_metadata && likely(ret != NULL)) {
arena_metadata_allocated_add(iaalloc(ret), isalloc(tsdn, ret,
config_prof)); config_prof));
} }
return (ret); return (ret);
} }
JEMALLOC_ALWAYS_INLINE void * JEMALLOC_ALWAYS_INLINE void *
ipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero, imalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena)
{
return (iallocztm(tsd, size, false, tcache, false, arena));
}
JEMALLOC_ALWAYS_INLINE void *
imalloc(tsd_t *tsd, size_t size)
{
return (iallocztm(tsd, size, false, tcache_get(tsd, true), false, NULL));
}
JEMALLOC_ALWAYS_INLINE void *
icalloct(tsd_t *tsd, size_t size, tcache_t *tcache, arena_t *arena)
{
return (iallocztm(tsd, size, true, tcache, false, arena));
}
JEMALLOC_ALWAYS_INLINE void *
icalloc(tsd_t *tsd, size_t size)
{
return (iallocztm(tsd, size, true, tcache_get(tsd, true), false, NULL));
}
JEMALLOC_ALWAYS_INLINE void *
ipallocztm(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
tcache_t *tcache, bool is_metadata, arena_t *arena)
{
void *ret;
assert(usize != 0);
assert(usize == sa2u(usize, alignment));
ret = arena_palloc(tsd, arena, usize, alignment, zero, tcache);
assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret);
if (config_stats && is_metadata && likely(ret != NULL)) {
arena_metadata_allocated_add(iaalloc(ret), isalloc(ret,
config_prof));
}
return (ret);
}
JEMALLOC_ALWAYS_INLINE void *
ipalloct(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
tcache_t *tcache, arena_t *arena) tcache_t *tcache, arena_t *arena)
{ {
return (ipallocztm(tsdn, usize, alignment, zero, tcache, false, arena)); return (ipallocztm(tsd, usize, alignment, zero, tcache, false, arena));
} }
JEMALLOC_ALWAYS_INLINE void * JEMALLOC_ALWAYS_INLINE void *
ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero) ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero)
{ {
return (ipallocztm(tsd_tsdn(tsd), usize, alignment, zero, return (ipallocztm(tsd, usize, alignment, zero, tcache_get(tsd,
tcache_get(tsd, true), false, NULL)); NULL), false, NULL));
} }
JEMALLOC_ALWAYS_INLINE size_t JEMALLOC_ALWAYS_INLINE size_t
ivsalloc(tsdn_t *tsdn, const void *ptr, bool demote) ivsalloc(const void *ptr, bool demote)
{ {
extent_node_t *node; extent_node_t *node;
@ -1123,7 +967,7 @@ ivsalloc(tsdn_t *tsdn, const void *ptr, bool demote)
assert(extent_node_addr_get(node) == ptr || assert(extent_node_addr_get(node) == ptr ||
extent_node_achunk_get(node)); extent_node_achunk_get(node));
return (isalloc(tsdn, ptr, demote)); return (isalloc(ptr, demote));
} }
JEMALLOC_INLINE size_t
@@ -1141,62 +985,65 @@ u2rz(size_t usize)
}

JEMALLOC_INLINE size_t
-p2rz(tsdn_t *tsdn, const void *ptr)
+p2rz(const void *ptr)
{
-	size_t usize = isalloc(tsdn, ptr, false);
+	size_t usize = isalloc(ptr, false);

	return (u2rz(usize));
}

JEMALLOC_ALWAYS_INLINE void
-idalloctm(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool is_metadata,
-    bool slow_path)
+idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata)
{
	assert(ptr != NULL);
-	assert(!is_metadata || tcache == NULL);
-	assert(!is_metadata || iaalloc(ptr)->ind < narenas_auto);
	if (config_stats && is_metadata) {
-		arena_metadata_allocated_sub(iaalloc(ptr), isalloc(tsdn, ptr,
+		arena_metadata_allocated_sub(iaalloc(ptr), isalloc(ptr,
		    config_prof));
	}

-	arena_dalloc(tsdn, ptr, tcache, slow_path);
+	arena_dalloc(tsd, ptr, tcache);
+}
+
+JEMALLOC_ALWAYS_INLINE void
+idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache)
+{
+	idalloctm(tsd, ptr, tcache, false);
}

JEMALLOC_ALWAYS_INLINE void
idalloc(tsd_t *tsd, void *ptr)
{
-	idalloctm(tsd_tsdn(tsd), ptr, tcache_get(tsd, false), false, true);
+	idalloctm(tsd, ptr, tcache_get(tsd, false), false);
}

JEMALLOC_ALWAYS_INLINE void
-iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
+iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
{
-	if (slow_path && config_fill && unlikely(opt_quarantine))
+	if (config_fill && unlikely(opt_quarantine))
		quarantine(tsd, ptr);
	else
-		idalloctm(tsd_tsdn(tsd), ptr, tcache, false, slow_path);
+		idalloctm(tsd, ptr, tcache, false);
}

JEMALLOC_ALWAYS_INLINE void
-isdalloct(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache,
-    bool slow_path)
+isdalloct(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
{
-	arena_sdalloc(tsdn, ptr, size, tcache, slow_path);
+	arena_sdalloc(tsd, ptr, size, tcache);
}

JEMALLOC_ALWAYS_INLINE void
-isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache, bool slow_path)
+isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
{
-	if (slow_path && config_fill && unlikely(opt_quarantine))
+	if (config_fill && unlikely(opt_quarantine))
		quarantine(tsd, ptr);
	else
-		isdalloct(tsd_tsdn(tsd), ptr, size, tcache, slow_path);
+		isdalloct(tsd, ptr, size, tcache);
}
JEMALLOC_ALWAYS_INLINE void *
@@ -1207,18 +1054,17 @@ iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
	size_t usize, copysize;

	usize = sa2u(size + extra, alignment);
-	if (unlikely(usize == 0 || usize > HUGE_MAXCLASS))
+	if (usize == 0)
		return (NULL);
-	p = ipalloct(tsd_tsdn(tsd), usize, alignment, zero, tcache, arena);
+	p = ipalloct(tsd, usize, alignment, zero, tcache, arena);
	if (p == NULL) {
		if (extra == 0)
			return (NULL);
		/* Try again, without extra this time. */
		usize = sa2u(size, alignment);
-		if (unlikely(usize == 0 || usize > HUGE_MAXCLASS))
+		if (usize == 0)
			return (NULL);
-		p = ipalloct(tsd_tsdn(tsd), usize, alignment, zero, tcache,
-		    arena);
+		p = ipalloct(tsd, usize, alignment, zero, tcache, arena);
		if (p == NULL)
			return (NULL);
	}
@@ -1228,7 +1074,7 @@ iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
	 */
	copysize = (size < oldsize) ? size : oldsize;
	memcpy(p, ptr, copysize);
-	isqalloc(tsd, ptr, oldsize, tcache, true);
+	isqalloc(tsd, ptr, oldsize, tcache);
	return (p);
}
@@ -1264,8 +1110,8 @@ iralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment,
}

JEMALLOC_ALWAYS_INLINE bool
-ixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra,
-    size_t alignment, bool zero)
+ixalloc(void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment,
+    bool zero)
{
	assert(ptr != NULL);
@@ -1277,7 +1123,7 @@ ixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra,
		return (true);
	}

-	return (arena_ralloc_no_move(tsdn, ptr, oldsize, size, extra, zero));
+	return (arena_ralloc_no_move(ptr, oldsize, size, extra, zero));
}
#endif
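The fallback in iralloct_realign() above (ask for size + extra, then retry with exactly size before giving up) is a generally useful pattern. A minimal standalone sketch with plain malloc/free stand-ins; grow_with_extra is an invented name and overflow checks on size + extra are omitted, so this is illustrative, not jemalloc API:

#include <stdlib.h>
#include <string.h>

/*
 * Grow an allocation of oldsize bytes to size bytes, opportunistically
 * requesting size + extra and falling back to exactly size, mirroring
 * iralloct_realign()'s "try again, without extra this time" path.
 */
static void *
grow_with_extra(void *ptr, size_t oldsize, size_t size, size_t extra)
{
	void *p = malloc(size + extra);

	if (p == NULL) {
		if (extra == 0)
			return (NULL);
		/* Try again, without extra this time. */
		p = malloc(size);
		if (p == NULL)
			return (NULL);
	}
	/* Copy at most the smaller of the old and new sizes. */
	memcpy(p, ptr, (size < oldsize) ? size : oldsize);
	free(ptr);
	return (p);
}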


@@ -17,18 +17,7 @@
#  include <sys/uio.h>
# endif
# include <pthread.h>
-# ifdef JEMALLOC_OS_UNFAIR_LOCK
-#  include <os/lock.h>
-# endif
-# ifdef JEMALLOC_GLIBC_MALLOC_HOOK
-#  include <sched.h>
-# endif
# include <errno.h>
-# include <sys/time.h>
-# include <time.h>
-# ifdef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME
-#  include <mach/mach_time.h>
-# endif
#endif
#include <sys/types.h>


@@ -56,9 +56,9 @@
#undef JEMALLOC_HAVE_BUILTIN_CLZ

/*
- * Defined if os_unfair_lock_*() functions are available, as provided by Darwin.
+ * Defined if madvise(2) is available.
 */
-#undef JEMALLOC_OS_UNFAIR_LOCK
+#undef JEMALLOC_HAVE_MADVISE

/*
 * Defined if OSSpin*() functions are available, as provided by Darwin, and
@@ -66,9 +66,6 @@
 */
#undef JEMALLOC_OSSPIN

-/* Defined if syscall(2) is usable. */
-#undef JEMALLOC_USE_SYSCALL
-
/*
 * Defined if secure_getenv(3) is available.
 */
@@ -79,24 +76,6 @@
 */
#undef JEMALLOC_HAVE_ISSETUGID

-/* Defined if pthread_atfork(3) is available. */
-#undef JEMALLOC_HAVE_PTHREAD_ATFORK
-
-/*
- * Defined if clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is available.
- */
-#undef JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE
-
-/*
- * Defined if clock_gettime(CLOCK_MONOTONIC, ...) is available.
- */
-#undef JEMALLOC_HAVE_CLOCK_MONOTONIC
-
-/*
- * Defined if mach_absolute_time() is available.
- */
-#undef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME
-
/*
 * Defined if _malloc_thread_cleanup() exists.  At least in the case of
 * FreeBSD, pthread_key_create() allocates, which if used during malloc
@@ -210,16 +189,9 @@
#undef JEMALLOC_TLS

/*
- * Used to mark unreachable code to quiet "end of non-void" compiler warnings.
- * Don't use this directly; instead use unreachable() from util.h
+ * ffs()/ffsl() functions to use for bitmapping.  Don't use these directly;
+ * instead, use jemalloc_ffs() or jemalloc_ffsl() from util.h.
 */
-#undef JEMALLOC_INTERNAL_UNREACHABLE
-
-/*
- * ffs*() functions to use for bitmapping.  Don't use these directly; instead,
- * use ffs_*() from util.h.
- */
-#undef JEMALLOC_INTERNAL_FFSLL
#undef JEMALLOC_INTERNAL_FFSL
#undef JEMALLOC_INTERNAL_FFS
@@ -241,35 +213,18 @@
#undef JEMALLOC_ZONE
#undef JEMALLOC_ZONE_VERSION

-/*
- * Methods for determining whether the OS overcommits.
- * JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY: Linux's
- *                                         /proc/sys/vm.overcommit_memory file.
- * JEMALLOC_SYSCTL_VM_OVERCOMMIT: FreeBSD's vm.overcommit sysctl.
- */
-#undef JEMALLOC_SYSCTL_VM_OVERCOMMIT
-#undef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY
-
-/* Defined if madvise(2) is available. */
-#undef JEMALLOC_HAVE_MADVISE
-
/*
 * Methods for purging unused pages differ between operating systems.
 *
- * madvise(..., MADV_FREE) : This marks pages as being unused, such that they
- *                           will be discarded rather than swapped out.
- * madvise(..., MADV_DONTNEED) : This immediately discards pages, such that
- *                               new pages will be demand-zeroed if the
- *                               address region is later touched.
+ * madvise(..., MADV_DONTNEED) : On Linux, this immediately discards pages,
+ *                               such that new pages will be demand-zeroed if
+ *                               the address region is later touched.
+ * madvise(..., MADV_FREE) : On FreeBSD and Darwin, this marks pages as being
+ *                           unused, such that they will be discarded rather
+ *                           than swapped out.
 */
-#undef JEMALLOC_PURGE_MADVISE_FREE
#undef JEMALLOC_PURGE_MADVISE_DONTNEED
+#undef JEMALLOC_PURGE_MADVISE_FREE
-
-/*
- * Defined if transparent huge pages are supported via the MADV_[NO]HUGEPAGE
- * arguments to madvise(2).
- */
-#undef JEMALLOC_THP

/* Define if operating system has alloca.h header. */
#undef JEMALLOC_HAS_ALLOCA_H
@@ -286,9 +241,6 @@
/* sizeof(long) == 2^LG_SIZEOF_LONG. */
#undef LG_SIZEOF_LONG

-/* sizeof(long long) == 2^LG_SIZEOF_LONG_LONG. */
-#undef LG_SIZEOF_LONG_LONG
-
/* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */
#undef LG_SIZEOF_INTMAX_T
@@ -307,7 +259,4 @@
 */
#undef JEMALLOC_EXPORT

-/* config.malloc_conf options string. */
-#undef JEMALLOC_CONFIG_MALLOC_CONF
-
#endif /* JEMALLOC_INTERNAL_DEFS_H_ */
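The purge-method comment above maps directly onto a madvise(2) call. A minimal sketch of that dichotomy; purge_range is an invented helper, not jemalloc's pages_purge(), and the flag is resolved at compile time purely for illustration:

#include <stddef.h>
#include <sys/mman.h>

/*
 * Tell the kernel that [addr, addr + size) no longer holds useful data.
 * MADV_FREE lets the kernel reclaim the pages lazily; MADV_DONTNEED
 * discards them immediately, so later touches see demand-zeroed pages.
 * Returns the madvise(2) result (0 on success).
 */
static int
purge_range(void *addr, size_t size)
{
#if defined(MADV_FREE)
	return (madvise(addr, size, MADV_FREE));
#elif defined(MADV_DONTNEED)
	return (madvise(addr, size, MADV_DONTNEED));
#else
	(void)addr;
	(void)size;
	return (0);	/* No purging facility available; no-op. */
#endif
}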


@@ -42,7 +42,7 @@ mb_write(void)
	    : /* Inputs. */
	    : "memory" /* Clobbers. */
	    );
-#  else
+#else
	/*
	 * This is hopefully enough to keep the compiler from reordering
	 * instructions around this one.
@@ -52,7 +52,7 @@ mb_write(void)
	    : /* Inputs. */
	    : "memory" /* Clobbers. */
	    );
-#  endif
+#endif
}
#elif (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE void
@@ -104,9 +104,9 @@ mb_write(void)
{
	malloc_mutex_t mtx;

-	malloc_mutex_init(&mtx, "mb", WITNESS_RANK_OMIT);
-	malloc_mutex_lock(TSDN_NULL, &mtx);
-	malloc_mutex_unlock(TSDN_NULL, &mtx);
+	malloc_mutex_init(&mtx);
+	malloc_mutex_lock(&mtx);
+	malloc_mutex_unlock(&mtx);
}
#endif
#endif


@@ -5,25 +5,18 @@ typedef struct malloc_mutex_s malloc_mutex_t;

#ifdef _WIN32
#  define MALLOC_MUTEX_INITIALIZER
-#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))
-#  define MALLOC_MUTEX_INITIALIZER					\
-    {OS_UNFAIR_LOCK_INIT, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
#elif (defined(JEMALLOC_OSSPIN))
-#  define MALLOC_MUTEX_INITIALIZER {0, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
+#  define MALLOC_MUTEX_INITIALIZER {0}
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
-#  define MALLOC_MUTEX_INITIALIZER					\
-    {PTHREAD_MUTEX_INITIALIZER, NULL, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
+#  define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, NULL}
#else
#  if (defined(JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP) &&		\
      defined(PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP))
#    define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_ADAPTIVE_NP
-#    define MALLOC_MUTEX_INITIALIZER					\
-      {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP,				\
-       WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
+#    define MALLOC_MUTEX_INITIALIZER {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP}
#  else
#    define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_DEFAULT
-#    define MALLOC_MUTEX_INITIALIZER					\
-      {PTHREAD_MUTEX_INITIALIZER, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
+#    define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER}
#  endif
#endif
@@ -38,8 +31,6 @@ struct malloc_mutex_s {
#  else
	CRITICAL_SECTION	lock;
#  endif
-#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))
-	os_unfair_lock		lock;
#elif (defined(JEMALLOC_OSSPIN))
	OSSpinLock		lock;
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
@@ -48,7 +39,6 @@ struct malloc_mutex_s {
#else
	pthread_mutex_t		lock;
#endif
-	witness_t		witness;
};

#endif /* JEMALLOC_H_STRUCTS */
@@ -62,62 +52,52 @@ extern bool isthreaded;
#  define isthreaded true
#endif

-bool	malloc_mutex_init(malloc_mutex_t *mutex, const char *name,
-    witness_rank_t rank);
-void	malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex);
-void	malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex);
-void	malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex);
-bool	malloc_mutex_boot(void);
+bool	malloc_mutex_init(malloc_mutex_t *mutex);
+void	malloc_mutex_prefork(malloc_mutex_t *mutex);
+void	malloc_mutex_postfork_parent(malloc_mutex_t *mutex);
+void	malloc_mutex_postfork_child(malloc_mutex_t *mutex);
+bool	mutex_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
-void	malloc_mutex_lock(tsdn_t *tsdn, malloc_mutex_t *mutex);
-void	malloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex);
-void	malloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mutex);
-void	malloc_mutex_assert_not_owner(tsdn_t *tsdn, malloc_mutex_t *mutex);
+void	malloc_mutex_lock(malloc_mutex_t *mutex);
+void	malloc_mutex_unlock(malloc_mutex_t *mutex);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_))
JEMALLOC_INLINE void
-malloc_mutex_lock(tsdn_t *tsdn, malloc_mutex_t *mutex)
+malloc_mutex_lock(malloc_mutex_t *mutex)
{

	if (isthreaded) {
-		witness_assert_not_owner(tsdn, &mutex->witness);
#ifdef _WIN32
#  if _WIN32_WINNT >= 0x0600
		AcquireSRWLockExclusive(&mutex->lock);
#  else
		EnterCriticalSection(&mutex->lock);
#  endif
-#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))
-		os_unfair_lock_lock(&mutex->lock);
#elif (defined(JEMALLOC_OSSPIN))
		OSSpinLockLock(&mutex->lock);
#else
		pthread_mutex_lock(&mutex->lock);
#endif
-		witness_lock(tsdn, &mutex->witness);
	}
}

JEMALLOC_INLINE void
-malloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex)
+malloc_mutex_unlock(malloc_mutex_t *mutex)
{

	if (isthreaded) {
-		witness_unlock(tsdn, &mutex->witness);
#ifdef _WIN32
#  if _WIN32_WINNT >= 0x0600
		ReleaseSRWLockExclusive(&mutex->lock);
#  else
		LeaveCriticalSection(&mutex->lock);
#  endif
-#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))
-		os_unfair_lock_unlock(&mutex->lock);
#elif (defined(JEMALLOC_OSSPIN))
		OSSpinLockUnlock(&mutex->lock);
#else
@@ -125,22 +105,6 @@ malloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex)
#endif
	}
}
-
-JEMALLOC_INLINE void
-malloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mutex)
-{
-
-	if (isthreaded)
-		witness_assert_owner(tsdn, &mutex->witness);
-}
-
-JEMALLOC_INLINE void
-malloc_mutex_assert_not_owner(tsdn_t *tsdn, malloc_mutex_t *mutex)
-{
-
-	if (isthreaded)
-		witness_assert_not_owner(tsdn, &mutex->witness);
-}
#endif

#endif /* JEMALLOC_H_INLINES */
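For orientation, the removed (left-hand) prototypes above thread a tsdn_t and a witness through every mutex operation. A hypothetical call site, inferred only from those prototypes and from the mb_write() usage earlier in this diff; the example_* names are invented, and this assumes jemalloc's internal headers:

/*
 * Sketch: with the witness-augmented API, a mutex is registered with a
 * name and a lock-order rank at init time, and lock/unlock take a tsdn_t
 * so witness (lock-order) assertions can run for the current thread.
 */
static malloc_mutex_t	example_mtx;

static bool
example_boot(void)
{
	/* WITNESS_RANK_OMIT exempts this mutex from rank checking. */
	return (malloc_mutex_init(&example_mtx, "example", WITNESS_RANK_OMIT));
}

static void
example_locked_op(tsdn_t *tsdn)
{
	malloc_mutex_lock(tsdn, &example_mtx);
	/* ... critical section ... */
	malloc_mutex_unlock(tsdn, &example_mtx);
}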


@@ -1,48 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct nstime_s nstime_t;
/* Maximum supported number of seconds (~584 years). */
#define NSTIME_SEC_MAX KQU(18446744072)
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct nstime_s {
uint64_t ns;
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void nstime_init(nstime_t *time, uint64_t ns);
void nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec);
uint64_t nstime_ns(const nstime_t *time);
uint64_t nstime_sec(const nstime_t *time);
uint64_t nstime_nsec(const nstime_t *time);
void nstime_copy(nstime_t *time, const nstime_t *source);
int nstime_compare(const nstime_t *a, const nstime_t *b);
void nstime_add(nstime_t *time, const nstime_t *addend);
void nstime_subtract(nstime_t *time, const nstime_t *subtrahend);
void nstime_imultiply(nstime_t *time, uint64_t multiplier);
void nstime_idivide(nstime_t *time, uint64_t divisor);
uint64_t nstime_divide(const nstime_t *time, const nstime_t *divisor);
#ifdef JEMALLOC_JET
typedef bool (nstime_monotonic_t)(void);
extern nstime_monotonic_t *nstime_monotonic;
typedef bool (nstime_update_t)(nstime_t *);
extern nstime_update_t *nstime_update;
#else
bool nstime_monotonic(void);
bool nstime_update(nstime_t *time);
#endif
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
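The nstime API deleted above is small enough to read from its prototypes alone. A hypothetical interval measurement built from them; elapsed_ns_sketch is invented, and it assumes nstime_update() samples the clock into its argument while nstime_subtract() computes time -= subtrahend:

#include <stdint.h>

static uint64_t
elapsed_ns_sketch(void)
{
	nstime_t begin, end;

	nstime_init(&begin, 0);
	nstime_update(&begin);		/* Sample the clock. */
	/* ... timed work ... */
	nstime_init(&end, 0);
	nstime_update(&end);
	nstime_subtract(&end, &begin);	/* end -= begin. */
	return (nstime_ns(&end));
}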


@@ -9,16 +9,13 @@
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

-void	*pages_map(void *addr, size_t size, bool *commit);
+void	*pages_map(void *addr, size_t size);
void	pages_unmap(void *addr, size_t size);
void	*pages_trim(void *addr, size_t alloc_size, size_t leadsize,
-    size_t size, bool *commit);
+    size_t size);
bool	pages_commit(void *addr, size_t size);
bool	pages_decommit(void *addr, size_t size);
bool	pages_purge(void *addr, size_t size);
-bool	pages_huge(void *addr, size_t size);
-bool	pages_nohuge(void *addr, size_t size);
-void	pages_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/


@@ -1,345 +0,0 @@
/*
* A Pairing Heap implementation.
*
* "The Pairing Heap: A New Form of Self-Adjusting Heap"
* https://www.cs.cmu.edu/~sleator/papers/pairing-heaps.pdf
*
 * With an auxiliary two-pass list, described in a follow-on paper.
*
* "Pairing Heaps: Experiments and Analysis"
* http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.2988&rep=rep1&type=pdf
*
*******************************************************************************
*/
#ifndef PH_H_
#define PH_H_
/* Node structure. */
#define phn(a_type) \
struct { \
a_type *phn_prev; \
a_type *phn_next; \
a_type *phn_lchild; \
}
/* Root structure. */
#define ph(a_type) \
struct { \
a_type *ph_root; \
}
/* Internal utility macros. */
#define phn_lchild_get(a_type, a_field, a_phn) \
(a_phn->a_field.phn_lchild)
#define phn_lchild_set(a_type, a_field, a_phn, a_lchild) do { \
a_phn->a_field.phn_lchild = a_lchild; \
} while (0)
#define phn_next_get(a_type, a_field, a_phn) \
(a_phn->a_field.phn_next)
#define phn_prev_set(a_type, a_field, a_phn, a_prev) do { \
a_phn->a_field.phn_prev = a_prev; \
} while (0)
#define phn_prev_get(a_type, a_field, a_phn) \
(a_phn->a_field.phn_prev)
#define phn_next_set(a_type, a_field, a_phn, a_next) do { \
a_phn->a_field.phn_next = a_next; \
} while (0)
#define phn_merge_ordered(a_type, a_field, a_phn0, a_phn1, a_cmp) do { \
a_type *phn0child; \
\
assert(a_phn0 != NULL); \
assert(a_phn1 != NULL); \
assert(a_cmp(a_phn0, a_phn1) <= 0); \
\
phn_prev_set(a_type, a_field, a_phn1, a_phn0); \
phn0child = phn_lchild_get(a_type, a_field, a_phn0); \
phn_next_set(a_type, a_field, a_phn1, phn0child); \
if (phn0child != NULL) \
phn_prev_set(a_type, a_field, phn0child, a_phn1); \
phn_lchild_set(a_type, a_field, a_phn0, a_phn1); \
} while (0)
#define phn_merge(a_type, a_field, a_phn0, a_phn1, a_cmp, r_phn) do { \
if (a_phn0 == NULL) \
r_phn = a_phn1; \
else if (a_phn1 == NULL) \
r_phn = a_phn0; \
else if (a_cmp(a_phn0, a_phn1) < 0) { \
phn_merge_ordered(a_type, a_field, a_phn0, a_phn1, \
a_cmp); \
r_phn = a_phn0; \
} else { \
phn_merge_ordered(a_type, a_field, a_phn1, a_phn0, \
a_cmp); \
r_phn = a_phn1; \
} \
} while (0)
#define ph_merge_siblings(a_type, a_field, a_phn, a_cmp, r_phn) do { \
a_type *head = NULL; \
a_type *tail = NULL; \
a_type *phn0 = a_phn; \
a_type *phn1 = phn_next_get(a_type, a_field, phn0); \
\
/* \
* Multipass merge, wherein the first two elements of a FIFO \
* are repeatedly merged, and each result is appended to the \
* singly linked FIFO, until the FIFO contains only a single \
* element. We start with a sibling list but no reference to \
* its tail, so we do a single pass over the sibling list to \
* populate the FIFO. \
*/ \
if (phn1 != NULL) { \
a_type *phnrest = phn_next_get(a_type, a_field, phn1); \
if (phnrest != NULL) \
phn_prev_set(a_type, a_field, phnrest, NULL); \
phn_prev_set(a_type, a_field, phn0, NULL); \
phn_next_set(a_type, a_field, phn0, NULL); \
phn_prev_set(a_type, a_field, phn1, NULL); \
phn_next_set(a_type, a_field, phn1, NULL); \
phn_merge(a_type, a_field, phn0, phn1, a_cmp, phn0); \
head = tail = phn0; \
phn0 = phnrest; \
while (phn0 != NULL) { \
phn1 = phn_next_get(a_type, a_field, phn0); \
if (phn1 != NULL) { \
phnrest = phn_next_get(a_type, a_field, \
phn1); \
if (phnrest != NULL) { \
phn_prev_set(a_type, a_field, \
phnrest, NULL); \
} \
phn_prev_set(a_type, a_field, phn0, \
NULL); \
phn_next_set(a_type, a_field, phn0, \
NULL); \
phn_prev_set(a_type, a_field, phn1, \
NULL); \
phn_next_set(a_type, a_field, phn1, \
NULL); \
phn_merge(a_type, a_field, phn0, phn1, \
a_cmp, phn0); \
phn_next_set(a_type, a_field, tail, \
phn0); \
tail = phn0; \
phn0 = phnrest; \
} else { \
phn_next_set(a_type, a_field, tail, \
phn0); \
tail = phn0; \
phn0 = NULL; \
} \
} \
phn0 = head; \
phn1 = phn_next_get(a_type, a_field, phn0); \
if (phn1 != NULL) { \
while (true) { \
head = phn_next_get(a_type, a_field, \
phn1); \
assert(phn_prev_get(a_type, a_field, \
phn0) == NULL); \
phn_next_set(a_type, a_field, phn0, \
NULL); \
assert(phn_prev_get(a_type, a_field, \
phn1) == NULL); \
phn_next_set(a_type, a_field, phn1, \
NULL); \
phn_merge(a_type, a_field, phn0, phn1, \
a_cmp, phn0); \
if (head == NULL) \
break; \
phn_next_set(a_type, a_field, tail, \
phn0); \
tail = phn0; \
phn0 = head; \
phn1 = phn_next_get(a_type, a_field, \
phn0); \
} \
} \
} \
r_phn = phn0; \
} while (0)
#define ph_merge_aux(a_type, a_field, a_ph, a_cmp) do { \
a_type *phn = phn_next_get(a_type, a_field, a_ph->ph_root); \
if (phn != NULL) { \
phn_prev_set(a_type, a_field, a_ph->ph_root, NULL); \
phn_next_set(a_type, a_field, a_ph->ph_root, NULL); \
phn_prev_set(a_type, a_field, phn, NULL); \
ph_merge_siblings(a_type, a_field, phn, a_cmp, phn); \
assert(phn_next_get(a_type, a_field, phn) == NULL); \
phn_merge(a_type, a_field, a_ph->ph_root, phn, a_cmp, \
a_ph->ph_root); \
} \
} while (0)
#define ph_merge_children(a_type, a_field, a_phn, a_cmp, r_phn) do { \
a_type *lchild = phn_lchild_get(a_type, a_field, a_phn); \
if (lchild == NULL) \
r_phn = NULL; \
else { \
ph_merge_siblings(a_type, a_field, lchild, a_cmp, \
r_phn); \
} \
} while (0)
/*
* The ph_proto() macro generates function prototypes that correspond to the
* functions generated by an equivalently parameterized call to ph_gen().
*/
#define ph_proto(a_attr, a_prefix, a_ph_type, a_type) \
a_attr void a_prefix##new(a_ph_type *ph); \
a_attr bool a_prefix##empty(a_ph_type *ph); \
a_attr a_type *a_prefix##first(a_ph_type *ph); \
a_attr void a_prefix##insert(a_ph_type *ph, a_type *phn); \
a_attr a_type *a_prefix##remove_first(a_ph_type *ph); \
a_attr void a_prefix##remove(a_ph_type *ph, a_type *phn);
/*
* The ph_gen() macro generates a type-specific pairing heap implementation,
* based on the above cpp macros.
*/
#define ph_gen(a_attr, a_prefix, a_ph_type, a_type, a_field, a_cmp) \
a_attr void \
a_prefix##new(a_ph_type *ph) \
{ \
\
memset(ph, 0, sizeof(ph(a_type))); \
} \
a_attr bool \
a_prefix##empty(a_ph_type *ph) \
{ \
\
return (ph->ph_root == NULL); \
} \
a_attr a_type * \
a_prefix##first(a_ph_type *ph) \
{ \
\
if (ph->ph_root == NULL) \
return (NULL); \
ph_merge_aux(a_type, a_field, ph, a_cmp); \
return (ph->ph_root); \
} \
a_attr void \
a_prefix##insert(a_ph_type *ph, a_type *phn) \
{ \
\
memset(&phn->a_field, 0, sizeof(phn(a_type))); \
\
/* \
* Treat the root as an aux list during insertion, and lazily \
* merge during a_prefix##remove_first(). For elements that \
* are inserted, then removed via a_prefix##remove() before the \
* aux list is ever processed, this makes insert/remove \
* constant-time, whereas eager merging would make insert \
* O(log n). \
*/ \
if (ph->ph_root == NULL) \
ph->ph_root = phn; \
else { \
phn_next_set(a_type, a_field, phn, phn_next_get(a_type, \
a_field, ph->ph_root)); \
if (phn_next_get(a_type, a_field, ph->ph_root) != \
NULL) { \
phn_prev_set(a_type, a_field, \
phn_next_get(a_type, a_field, ph->ph_root), \
phn); \
} \
phn_prev_set(a_type, a_field, phn, ph->ph_root); \
phn_next_set(a_type, a_field, ph->ph_root, phn); \
} \
} \
a_attr a_type * \
a_prefix##remove_first(a_ph_type *ph) \
{ \
a_type *ret; \
\
if (ph->ph_root == NULL) \
return (NULL); \
ph_merge_aux(a_type, a_field, ph, a_cmp); \
\
ret = ph->ph_root; \
\
ph_merge_children(a_type, a_field, ph->ph_root, a_cmp, \
ph->ph_root); \
\
return (ret); \
} \
a_attr void \
a_prefix##remove(a_ph_type *ph, a_type *phn) \
{ \
a_type *replace, *parent; \
\
/* \
* We can delete from aux list without merging it, but we need \
* to merge if we are dealing with the root node. \
*/ \
if (ph->ph_root == phn) { \
ph_merge_aux(a_type, a_field, ph, a_cmp); \
if (ph->ph_root == phn) { \
ph_merge_children(a_type, a_field, ph->ph_root, \
a_cmp, ph->ph_root); \
return; \
} \
} \
\
/* Get parent (if phn is leftmost child) before mutating. */ \
if ((parent = phn_prev_get(a_type, a_field, phn)) != NULL) { \
if (phn_lchild_get(a_type, a_field, parent) != phn) \
parent = NULL; \
} \
/* Find a possible replacement node, and link to parent. */ \
ph_merge_children(a_type, a_field, phn, a_cmp, replace); \
/* Set next/prev for sibling linked list. */ \
if (replace != NULL) { \
if (parent != NULL) { \
phn_prev_set(a_type, a_field, replace, parent); \
phn_lchild_set(a_type, a_field, parent, \
replace); \
} else { \
phn_prev_set(a_type, a_field, replace, \
phn_prev_get(a_type, a_field, phn)); \
if (phn_prev_get(a_type, a_field, phn) != \
NULL) { \
phn_next_set(a_type, a_field, \
phn_prev_get(a_type, a_field, phn), \
replace); \
} \
} \
phn_next_set(a_type, a_field, replace, \
phn_next_get(a_type, a_field, phn)); \
if (phn_next_get(a_type, a_field, phn) != NULL) { \
phn_prev_set(a_type, a_field, \
phn_next_get(a_type, a_field, phn), \
replace); \
} \
} else { \
if (parent != NULL) { \
a_type *next = phn_next_get(a_type, a_field, \
phn); \
phn_lchild_set(a_type, a_field, parent, next); \
if (next != NULL) { \
phn_prev_set(a_type, a_field, next, \
parent); \
} \
} else { \
assert(phn_prev_get(a_type, a_field, phn) != \
NULL); \
phn_next_set(a_type, a_field, \
phn_prev_get(a_type, a_field, phn), \
phn_next_get(a_type, a_field, phn)); \
} \
if (phn_next_get(a_type, a_field, phn) != NULL) { \
phn_prev_set(a_type, a_field, \
phn_next_get(a_type, a_field, phn), \
phn_prev_get(a_type, a_field, phn)); \
} \
} \
}
#endif /* PH_H_ */
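Since ph.h is entirely macro-generated, a concrete instantiation may help; the node type, field name, and comparator below are invented for illustration and assume this header is included:

#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef struct job_s job_t;
struct job_s {
	uint64_t	prio;
	phn(job_t)	link;	/* Intrusive pairing-heap linkage. */
};

typedef ph(job_t) job_heap_t;

/* Lower prio sorts first; the generated code only needs cmp() <=> 0. */
static int
job_cmp(const job_t *a, const job_t *b)
{
	return ((a->prio > b->prio) - (a->prio < b->prio));
}

/*
 * Emits static job_heap_new(), job_heap_empty(), job_heap_first(),
 * job_heap_insert(), job_heap_remove_first(), and job_heap_remove().
 */
ph_gen(static, job_heap_, job_heap_t, job_t, link, job_cmp)

Insertion is O(1) because new nodes are parked on the root's auxiliary list; the multipass merge is deferred to the next first()/remove_first() call, as the comments in the macros above describe.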


@@ -3,15 +3,12 @@ a0get
a0malloc
arena_aalloc
arena_alloc_junk_small
-arena_basic_stats_merge
arena_bin_index
arena_bin_info
-arena_bitselm_get_const
-arena_bitselm_get_mutable
+arena_bitselm_get
arena_boot
arena_choose
arena_choose_hard
-arena_choose_impl
arena_chunk_alloc_huge
arena_chunk_cache_maybe_insert
arena_chunk_cache_maybe_remove
@@ -28,25 +25,18 @@ arena_dalloc_junk_small
arena_dalloc_large
arena_dalloc_large_junked_locked
arena_dalloc_small
-arena_decay_tick
-arena_decay_ticks
-arena_decay_time_default_get
-arena_decay_time_default_set
-arena_decay_time_get
-arena_decay_time_set
arena_dss_prec_get
arena_dss_prec_set
-arena_extent_sn_next
arena_get
-arena_ichoose
+arena_get_hard
arena_init
arena_lg_dirty_mult_default_get
arena_lg_dirty_mult_default_set
arena_lg_dirty_mult_get
arena_lg_dirty_mult_set
arena_malloc
-arena_malloc_hard
arena_malloc_large
+arena_malloc_small
arena_mapbits_allocated_get
arena_mapbits_binind_get
arena_mapbits_decommitted_get
@@ -57,6 +47,9 @@ arena_mapbits_large_binind_set
arena_mapbits_large_get
arena_mapbits_large_set
arena_mapbits_large_size_get
+arena_mapbitsp_get
+arena_mapbitsp_read
+arena_mapbitsp_write
arena_mapbits_size_decode
arena_mapbits_size_encode
arena_mapbits_small_runind_get
@@ -65,33 +58,23 @@ arena_mapbits_unallocated_set
arena_mapbits_unallocated_size_get
arena_mapbits_unallocated_size_set
arena_mapbits_unzeroed_get
-arena_mapbitsp_get_const
-arena_mapbitsp_get_mutable
-arena_mapbitsp_read
-arena_mapbitsp_write
arena_maxrun
arena_maybe_purge
arena_metadata_allocated_add
arena_metadata_allocated_get
arena_metadata_allocated_sub
arena_migrate
-arena_miscelm_get_const
-arena_miscelm_get_mutable
+arena_miscelm_get
arena_miscelm_to_pageind
arena_miscelm_to_rpages
-arena_nbound
arena_new
arena_node_alloc
arena_node_dalloc
-arena_nthreads_dec
-arena_nthreads_get
-arena_nthreads_inc
arena_palloc
arena_postfork_child
arena_postfork_parent
-arena_prefork0
-arena_prefork1
-arena_prefork2
-arena_prefork3
+arena_prefork
arena_prof_accum
arena_prof_accum_impl
arena_prof_accum_locked
@@ -100,25 +83,21 @@ arena_prof_tctx_get
arena_prof_tctx_reset
arena_prof_tctx_set
arena_ptr_small_binind_get
-arena_purge
+arena_purge_all
arena_quarantine_junk_small
arena_ralloc
arena_ralloc_junk_large
arena_ralloc_no_move
arena_rd_to_miscelm
arena_redzone_corruption
-arena_reset
arena_run_regind
arena_run_to_miscelm
arena_salloc
+arenas_cache_bypass_cleanup
+arenas_cache_cleanup
arena_sdalloc
arena_stats_merge
arena_tcache_fill_small
-arena_tdata_get
-arena_tdata_get_hard
-arenas
-arenas_tdata_bypass_cleanup
-arenas_tdata_cleanup
atomic_add_p
atomic_add_u
atomic_add_uint32
@@ -134,11 +113,6 @@ atomic_sub_u
atomic_sub_uint32
atomic_sub_uint64
atomic_sub_z
-atomic_write_p
-atomic_write_u
-atomic_write_uint32
-atomic_write_uint64
-atomic_write_z
base_alloc
base_boot
base_postfork_child
@@ -148,6 +122,7 @@ base_stats_get
bitmap_full
bitmap_get
bitmap_info_init
+bitmap_info_ngroups
bitmap_init
bitmap_set
bitmap_sfu
@@ -164,25 +139,32 @@ chunk_alloc_dss
chunk_alloc_mmap
chunk_alloc_wrapper
chunk_boot
+chunk_dalloc_arena
chunk_dalloc_cache
chunk_dalloc_mmap
chunk_dalloc_wrapper
chunk_deregister
chunk_dss_boot
-chunk_dss_mergeable
+chunk_dss_postfork_child
+chunk_dss_postfork_parent
chunk_dss_prec_get
chunk_dss_prec_set
+chunk_dss_prefork
chunk_hooks_default
chunk_hooks_get
chunk_hooks_set
chunk_in_dss
chunk_lookup
chunk_npages
+chunk_postfork_child
+chunk_postfork_parent
+chunk_prefork
+chunk_purge_arena
chunk_purge_wrapper
chunk_register
-chunks_rtree
chunksize
chunksize_mask
+chunks_rtree
ckh_count
ckh_delete
ckh_insert
@@ -201,7 +183,6 @@ ctl_nametomib
ctl_postfork_child
ctl_postfork_parent
ctl_prefork
-decay_ticker_get
dss_prec_names
extent_node_achunk_get
extent_node_achunk_set
@@ -209,8 +190,6 @@ extent_node_addr_get
extent_node_addr_set
extent_node_arena_get
extent_node_arena_set
-extent_node_committed_get
-extent_node_committed_set
extent_node_dirty_insert
extent_node_dirty_linkage_init
extent_node_dirty_remove
@@ -219,12 +198,8 @@ extent_node_prof_tctx_get
extent_node_prof_tctx_set
extent_node_size_get
extent_node_size_set
-extent_node_sn_get
-extent_node_sn_set
extent_node_zeroed_get
extent_node_zeroed_set
-extent_tree_ad_destroy
-extent_tree_ad_destroy_recurse
extent_tree_ad_empty
extent_tree_ad_first
extent_tree_ad_insert
@@ -242,31 +217,23 @@ extent_tree_ad_reverse_iter
extent_tree_ad_reverse_iter_recurse
extent_tree_ad_reverse_iter_start
extent_tree_ad_search
-extent_tree_szsnad_destroy
-extent_tree_szsnad_destroy_recurse
-extent_tree_szsnad_empty
-extent_tree_szsnad_first
-extent_tree_szsnad_insert
-extent_tree_szsnad_iter
-extent_tree_szsnad_iter_recurse
-extent_tree_szsnad_iter_start
-extent_tree_szsnad_last
-extent_tree_szsnad_new
-extent_tree_szsnad_next
-extent_tree_szsnad_nsearch
-extent_tree_szsnad_prev
-extent_tree_szsnad_psearch
-extent_tree_szsnad_remove
-extent_tree_szsnad_reverse_iter
-extent_tree_szsnad_reverse_iter_recurse
-extent_tree_szsnad_reverse_iter_start
-extent_tree_szsnad_search
+extent_tree_szad_empty
+extent_tree_szad_first
+extent_tree_szad_insert
+extent_tree_szad_iter
+extent_tree_szad_iter_recurse
+extent_tree_szad_iter_start
+extent_tree_szad_last
+extent_tree_szad_new
+extent_tree_szad_next
+extent_tree_szad_nsearch
+extent_tree_szad_prev
+extent_tree_szad_psearch
+extent_tree_szad_remove
+extent_tree_szad_reverse_iter
+extent_tree_szad_reverse_iter_recurse
+extent_tree_szad_reverse_iter_start
+extent_tree_szad_search
-ffs_llu
-ffs_lu
-ffs_u
-ffs_u32
-ffs_u64
-ffs_zu
get_errno
hash
hash_fmix_32
@@ -290,16 +257,19 @@ huge_ralloc
huge_ralloc_no_move
huge_salloc
iaalloc
-ialloc
iallocztm
-iarena_cleanup
+icalloc
+icalloct
idalloc
+idalloct
idalloctm
-in_valgrind
+imalloc
+imalloct
index2size
index2size_compute
index2size_lookup
index2size_tab
+in_valgrind
ipalloc
ipalloct
ipallocztm
@@ -318,11 +288,7 @@ jemalloc_postfork_parent
jemalloc_prefork
large_maxclass
lg_floor
-lg_prof_sample
malloc_cprintf
-malloc_mutex_assert_not_owner
-malloc_mutex_assert_owner
-malloc_mutex_boot
malloc_mutex_init
malloc_mutex_lock
malloc_mutex_postfork_child
@@ -344,29 +310,12 @@ malloc_write
map_bias
map_misc_offset
mb_write
-narenas_auto
-narenas_tdata_cleanup
+mutex_boot
+narenas_cache_cleanup
narenas_total_get
ncpus
nhbins
-nhclasses
-nlclasses
-nstime_add
-nstime_compare
-nstime_copy
-nstime_divide
-nstime_idivide
-nstime_imultiply
-nstime_init
-nstime_init2
-nstime_monotonic
-nstime_ns
-nstime_nsec
-nstime_sec
-nstime_subtract
-nstime_update
opt_abort
-opt_decay_time
opt_dss
opt_junk
opt_junk_alloc
@@ -385,7 +334,6 @@ opt_prof_gdump
opt_prof_leak
opt_prof_prefix
opt_prof_thread_active_init
-opt_purge
opt_quarantine
opt_redzone
opt_stats_print
@@ -394,32 +342,13 @@ opt_utrace
opt_xmalloc
opt_zero
p2rz
-pages_boot
pages_commit
pages_decommit
-pages_huge
pages_map
-pages_nohuge
pages_purge
pages_trim
pages_unmap
-pind2sz
-pind2sz_compute
-pind2sz_lookup
-pind2sz_tab
-pow2_ceil_u32
-pow2_ceil_u64
-pow2_ceil_zu
+pow2_ceil
-prng_lg_range_u32
-prng_lg_range_u64
-prng_lg_range_zu
-prng_range_u32
-prng_range_u64
-prng_range_zu
-prng_state_next_u32
-prng_state_next_u64
-prng_state_next_zu
-prof_active
prof_active_get
prof_active_get_unlocked
prof_active_set
@@ -429,7 +358,6 @@ prof_backtrace
prof_boot0
prof_boot1
prof_boot2
-prof_bt_count
prof_dump_header
prof_dump_open
prof_free
@@ -447,8 +375,7 @@ prof_malloc_sample_object
prof_mdump
prof_postfork_child
prof_postfork_parent
-prof_prefork0
-prof_prefork1
+prof_prefork
prof_realloc
prof_reset
prof_sample_accum_update
@@ -457,7 +384,6 @@ prof_tctx_get
prof_tctx_reset
prof_tctx_set
prof_tdata_cleanup
-prof_tdata_count
prof_tdata_get
prof_tdata_init
prof_tdata_reinit
@@ -467,13 +393,11 @@ prof_thread_active_init_set
prof_thread_active_set
prof_thread_name_get
prof_thread_name_set
-psz2ind
-psz2u
-purge_mode_names
quarantine
quarantine_alloc_hook
quarantine_alloc_hook_work
quarantine_cleanup
+register_zone
rtree_child_read
rtree_child_read_hard
rtree_child_tryread
@@ -489,8 +413,6 @@ rtree_subtree_read_hard
rtree_subtree_tryread
rtree_val_read
rtree_val_write
-run_quantize_ceil
-run_quantize_floor
s2u
s2u_compute
s2u_lookup
@@ -500,8 +422,6 @@ size2index
size2index_compute
size2index_lookup
size2index_tab
-spin_adaptive
-spin_init
stats_cactive
stats_cactive_add
stats_cactive_get
@@ -511,6 +431,8 @@ tcache_alloc_easy
tcache_alloc_large
tcache_alloc_small
tcache_alloc_small_hard
+tcache_arena_associate
+tcache_arena_dissociate
tcache_arena_reassociate
tcache_bin_flush_large
tcache_bin_flush_small
@@ -529,103 +451,49 @@ tcache_flush
tcache_get
tcache_get_hard
tcache_maxclass
-tcache_salloc
-tcache_stats_merge
tcaches
+tcache_salloc
tcaches_create
tcaches_destroy
tcaches_flush
tcaches_get
+tcache_stats_merge
thread_allocated_cleanup
thread_deallocated_cleanup
-ticker_copy
-ticker_init
-ticker_read
-ticker_tick
-ticker_ticks
tsd_arena_get
tsd_arena_set
-tsd_arenap_get
-tsd_arenas_tdata_bypass_get
-tsd_arenas_tdata_bypass_set
-tsd_arenas_tdata_bypassp_get
-tsd_arenas_tdata_get
-tsd_arenas_tdata_set
-tsd_arenas_tdatap_get
tsd_boot
tsd_boot0
tsd_boot1
tsd_booted
-tsd_booted_get
tsd_cleanup
tsd_cleanup_wrapper
tsd_fetch
-tsd_fetch_impl
tsd_get
-tsd_get_allocates
-tsd_iarena_get
-tsd_iarena_set
-tsd_iarenap_get
+tsd_wrapper_get
+tsd_wrapper_set
tsd_initialized
tsd_init_check_recursion
tsd_init_finish
tsd_init_head
-tsd_narenas_tdata_get
-tsd_narenas_tdata_set
-tsd_narenas_tdatap_get
-tsd_wrapper_get
-tsd_wrapper_set
tsd_nominal
-tsd_prof_tdata_get
-tsd_prof_tdata_set
-tsd_prof_tdatap_get
tsd_quarantine_get
tsd_quarantine_set
-tsd_quarantinep_get
tsd_set
tsd_tcache_enabled_get
tsd_tcache_enabled_set
-tsd_tcache_enabledp_get
tsd_tcache_get
tsd_tcache_set
-tsd_tcachep_get
-tsd_thread_allocated_get
-tsd_thread_allocated_set
-tsd_thread_allocatedp_get
-tsd_thread_deallocated_get
-tsd_thread_deallocated_set
-tsd_thread_deallocatedp_get
tsd_tls
tsd_tsd
-tsd_tsdn
-tsd_witness_fork_get
-tsd_witness_fork_set
-tsd_witness_forkp_get
-tsd_witnesses_get
-tsd_witnesses_set
-tsd_witnessesp_get
-tsdn_fetch
-tsdn_null
-tsdn_tsd
+tsd_prof_tdata_get
+tsd_prof_tdata_set
+tsd_thread_allocated_get
+tsd_thread_allocated_set
+tsd_thread_deallocated_get
+tsd_thread_deallocated_set
u2rz
valgrind_freelike_block
valgrind_make_mem_defined
valgrind_make_mem_noaccess
valgrind_make_mem_undefined
-witness_assert_lockless
-witness_assert_not_owner
-witness_assert_owner
-witness_fork_cleanup
-witness_init
-witness_lock
-witness_lock_error
-witness_lockless_error
-witness_not_owner_error
-witness_owner
-witness_owner_error
-witness_postfork_child
-witness_postfork_parent
-witness_prefork
-witness_unlock
-witnesses_cleanup
-zone_register


@@ -18,13 +18,31 @@
 * proportional to bit position.  For example, the lowest bit has a cycle of 2,
 * the next has a cycle of 4, etc.  For this reason, we prefer to use the upper
 * bits.
+ *
+ * Macro parameters:
+ *   uint32_t r          : Result.
+ *   unsigned lg_range   : (0..32], number of least significant bits to return.
+ *   uint32_t state      : Seed value.
+ *   const uint32_t a, c : See above discussion.
 */
+#define	prng32(r, lg_range, state, a, c) do {	\
+	assert((lg_range) > 0);			\
+	assert((lg_range) <= 32);		\
+						\
+	r = (state * (a)) + (c);		\
+	state = r;				\
+	r >>= (32 - (lg_range));		\
+} while (false)

-#define	PRNG_A_32	UINT32_C(1103515241)
-#define	PRNG_C_32	UINT32_C(12347)
-
-#define	PRNG_A_64	UINT64_C(6364136223846793005)
-#define	PRNG_C_64	UINT64_C(1442695040888963407)
+/* Same as prng32(), but 64 bits of pseudo-randomness, using uint64_t. */
+#define	prng64(r, lg_range, state, a, c) do {	\
+	assert((lg_range) > 0);			\
+	assert((lg_range) <= 64);		\
+						\
+	r = (state * (a)) + (c);		\
+	state = r;				\
+	r >>= (64 - (lg_range));		\
+} while (false)

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
@@ -38,170 +56,5 @@
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
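To make the macro contract concrete, here is a hypothetical wrapper around prng32() using the PRNG_A_32/PRNG_C_32 constants from the removed side of this hunk; draw8 is invented for illustration:

#include <assert.h>
#include <stdint.h>

/* Draw the top 8 bits of one LCG step, i.e. a value in [0..256). */
static uint32_t
draw8(uint32_t *seed)
{
	uint32_t r;
	uint32_t state = *seed;

	prng32(r, 8, state, UINT32_C(1103515241), UINT32_C(12347));
	*seed = state;	/* prng32() advanced the state in place. */
	return (r);
}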
#ifndef JEMALLOC_ENABLE_INLINE
uint32_t prng_state_next_u32(uint32_t state);
uint64_t prng_state_next_u64(uint64_t state);
size_t prng_state_next_zu(size_t state);
uint32_t prng_lg_range_u32(uint32_t *state, unsigned lg_range,
bool atomic);
uint64_t prng_lg_range_u64(uint64_t *state, unsigned lg_range);
size_t prng_lg_range_zu(size_t *state, unsigned lg_range, bool atomic);
uint32_t prng_range_u32(uint32_t *state, uint32_t range, bool atomic);
uint64_t prng_range_u64(uint64_t *state, uint64_t range);
size_t prng_range_zu(size_t *state, size_t range, bool atomic);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_PRNG_C_))
JEMALLOC_ALWAYS_INLINE uint32_t
prng_state_next_u32(uint32_t state)
{
return ((state * PRNG_A_32) + PRNG_C_32);
}
JEMALLOC_ALWAYS_INLINE uint64_t
prng_state_next_u64(uint64_t state)
{
return ((state * PRNG_A_64) + PRNG_C_64);
}
JEMALLOC_ALWAYS_INLINE size_t
prng_state_next_zu(size_t state)
{
#if LG_SIZEOF_PTR == 2
return ((state * PRNG_A_32) + PRNG_C_32);
#elif LG_SIZEOF_PTR == 3
return ((state * PRNG_A_64) + PRNG_C_64);
#else
#error Unsupported pointer size
#endif
}
JEMALLOC_ALWAYS_INLINE uint32_t
prng_lg_range_u32(uint32_t *state, unsigned lg_range, bool atomic)
{
uint32_t ret, state1;
assert(lg_range > 0);
assert(lg_range <= 32);
if (atomic) {
uint32_t state0;
do {
state0 = atomic_read_uint32(state);
state1 = prng_state_next_u32(state0);
} while (atomic_cas_uint32(state, state0, state1));
} else {
state1 = prng_state_next_u32(*state);
*state = state1;
}
ret = state1 >> (32 - lg_range);
return (ret);
}
/* 64-bit atomic operations cannot be supported on all relevant platforms. */
JEMALLOC_ALWAYS_INLINE uint64_t
prng_lg_range_u64(uint64_t *state, unsigned lg_range)
{
uint64_t ret, state1;
assert(lg_range > 0);
assert(lg_range <= 64);
state1 = prng_state_next_u64(*state);
*state = state1;
ret = state1 >> (64 - lg_range);
return (ret);
}
JEMALLOC_ALWAYS_INLINE size_t
prng_lg_range_zu(size_t *state, unsigned lg_range, bool atomic)
{
size_t ret, state1;
assert(lg_range > 0);
assert(lg_range <= ZU(1) << (3 + LG_SIZEOF_PTR));
if (atomic) {
size_t state0;
do {
state0 = atomic_read_z(state);
state1 = prng_state_next_zu(state0);
} while (atomic_cas_z(state, state0, state1));
} else {
state1 = prng_state_next_zu(*state);
*state = state1;
}
ret = state1 >> ((ZU(1) << (3 + LG_SIZEOF_PTR)) - lg_range);
return (ret);
}
JEMALLOC_ALWAYS_INLINE uint32_t
prng_range_u32(uint32_t *state, uint32_t range, bool atomic)
{
uint32_t ret;
unsigned lg_range;
assert(range > 1);
/* Compute the ceiling of lg(range). */
lg_range = ffs_u32(pow2_ceil_u32(range)) - 1;
/* Generate a result in [0..range) via repeated trial. */
do {
ret = prng_lg_range_u32(state, lg_range, atomic);
} while (ret >= range);
return (ret);
}
JEMALLOC_ALWAYS_INLINE uint64_t
prng_range_u64(uint64_t *state, uint64_t range)
{
uint64_t ret;
unsigned lg_range;
assert(range > 1);
/* Compute the ceiling of lg(range). */
lg_range = ffs_u64(pow2_ceil_u64(range)) - 1;
/* Generate a result in [0..range) via repeated trial. */
do {
ret = prng_lg_range_u64(state, lg_range);
} while (ret >= range);
return (ret);
}
JEMALLOC_ALWAYS_INLINE size_t
prng_range_zu(size_t *state, size_t range, bool atomic)
{
size_t ret;
unsigned lg_range;
assert(range > 1);
/* Compute the ceiling of lg(range). */
lg_range = ffs_u64(pow2_ceil_u64(range)) - 1;
/* Generate a result in [0..range) via repeated trial. */
do {
ret = prng_lg_range_zu(state, lg_range, atomic);
} while (ret >= range);
return (ret);
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
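The repeated-trial loops in the removed prng_range_*() functions above are ordinary rejection sampling. A standalone worked sketch for range = 100; range_100 is invented, and since pow2_ceil(100) = 128, lg_range = 7 and each 7-bit draw lands below 100 with probability 100/128:

#include <stdint.h>

static uint32_t
range_100(uint32_t *state)
{
	uint32_t r;

	do {
		/* One LCG step, then keep the top 7 bits. */
		*state = (*state * UINT32_C(1103515241)) + UINT32_C(12347);
		r = *state >> (32 - 7);
	} while (r >= 100);	/* Reject 100..127 and redraw. */
	return (r);
}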


@ -281,7 +281,7 @@ extern uint64_t prof_interval;
extern size_t lg_prof_sample; extern size_t lg_prof_sample;
void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated); void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated);
void prof_malloc_sample_object(tsdn_t *tsdn, const void *ptr, size_t usize, void prof_malloc_sample_object(const void *ptr, size_t usize,
prof_tctx_t *tctx); prof_tctx_t *tctx);
void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx); void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx);
void bt_init(prof_bt_t *bt, void **vec); void bt_init(prof_bt_t *bt, void **vec);
@ -293,33 +293,32 @@ size_t prof_bt_count(void);
const prof_cnt_t *prof_cnt_all(void); const prof_cnt_t *prof_cnt_all(void);
typedef int (prof_dump_open_t)(bool, const char *); typedef int (prof_dump_open_t)(bool, const char *);
extern prof_dump_open_t *prof_dump_open; extern prof_dump_open_t *prof_dump_open;
typedef bool (prof_dump_header_t)(tsdn_t *, bool, const prof_cnt_t *); typedef bool (prof_dump_header_t)(bool, const prof_cnt_t *);
extern prof_dump_header_t *prof_dump_header; extern prof_dump_header_t *prof_dump_header;
#endif #endif
void prof_idump(tsdn_t *tsdn); void prof_idump(void);
bool prof_mdump(tsd_t *tsd, const char *filename); bool prof_mdump(const char *filename);
void prof_gdump(tsdn_t *tsdn); void prof_gdump(void);
prof_tdata_t *prof_tdata_init(tsd_t *tsd); prof_tdata_t *prof_tdata_init(tsd_t *tsd);
prof_tdata_t *prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata); prof_tdata_t *prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata);
void prof_reset(tsd_t *tsd, size_t lg_sample); void prof_reset(tsd_t *tsd, size_t lg_sample);
void prof_tdata_cleanup(tsd_t *tsd); void prof_tdata_cleanup(tsd_t *tsd);
bool prof_active_get(tsdn_t *tsdn); const char *prof_thread_name_get(void);
bool prof_active_set(tsdn_t *tsdn, bool active); bool prof_active_get(void);
const char *prof_thread_name_get(tsd_t *tsd); bool prof_active_set(bool active);
int prof_thread_name_set(tsd_t *tsd, const char *thread_name); int prof_thread_name_set(tsd_t *tsd, const char *thread_name);
bool prof_thread_active_get(tsd_t *tsd); bool prof_thread_active_get(void);
bool prof_thread_active_set(tsd_t *tsd, bool active); bool prof_thread_active_set(bool active);
bool prof_thread_active_init_get(tsdn_t *tsdn); bool prof_thread_active_init_get(void);
bool prof_thread_active_init_set(tsdn_t *tsdn, bool active_init); bool prof_thread_active_init_set(bool active_init);
bool prof_gdump_get(tsdn_t *tsdn); bool prof_gdump_get(void);
bool prof_gdump_set(tsdn_t *tsdn, bool active); bool prof_gdump_set(bool active);
void prof_boot0(void); void prof_boot0(void);
void prof_boot1(void); void prof_boot1(void);
bool prof_boot2(tsd_t *tsd); bool prof_boot2(void);
void prof_prefork0(tsdn_t *tsdn); void prof_prefork(void);
void prof_prefork1(tsdn_t *tsdn); void prof_postfork_parent(void);
void prof_postfork_parent(tsdn_t *tsdn); void prof_postfork_child(void);
void prof_postfork_child(tsdn_t *tsdn);
void prof_sample_threshold_update(prof_tdata_t *tdata); void prof_sample_threshold_update(prof_tdata_t *tdata);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */
@ -330,17 +329,17 @@ void prof_sample_threshold_update(prof_tdata_t *tdata);
bool prof_active_get_unlocked(void); bool prof_active_get_unlocked(void);
bool prof_gdump_get_unlocked(void); bool prof_gdump_get_unlocked(void);
prof_tdata_t *prof_tdata_get(tsd_t *tsd, bool create); prof_tdata_t *prof_tdata_get(tsd_t *tsd, bool create);
prof_tctx_t *prof_tctx_get(tsdn_t *tsdn, const void *ptr);
void prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize,
prof_tctx_t *tctx);
void prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize,
const void *old_ptr, prof_tctx_t *tctx);
bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool commit, bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool commit,
prof_tdata_t **tdata_out); prof_tdata_t **tdata_out);
prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active,
bool update); bool update);
void prof_malloc(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *prof_tctx_get(const void *ptr);
void prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
void prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
prof_tctx_t *tctx); prof_tctx_t *tctx);
void prof_malloc_sample_object(const void *ptr, size_t usize,
prof_tctx_t *tctx);
void prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx);
void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize,
prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr, prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr,
size_t old_usize, prof_tctx_t *old_tctx); size_t old_usize, prof_tctx_t *old_tctx);
@ -398,34 +397,34 @@ prof_tdata_get(tsd_t *tsd, bool create)
} }
JEMALLOC_ALWAYS_INLINE prof_tctx_t * JEMALLOC_ALWAYS_INLINE prof_tctx_t *
prof_tctx_get(tsdn_t *tsdn, const void *ptr) prof_tctx_get(const void *ptr)
{ {
cassert(config_prof); cassert(config_prof);
assert(ptr != NULL); assert(ptr != NULL);
return (arena_prof_tctx_get(tsdn, ptr)); return (arena_prof_tctx_get(ptr));
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx) prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
{ {
cassert(config_prof); cassert(config_prof);
 	assert(ptr != NULL);

-	arena_prof_tctx_set(tsdn, ptr, usize, tctx);
+	arena_prof_tctx_set(ptr, usize, tctx);
 }

 JEMALLOC_ALWAYS_INLINE void
-prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize, const void *old_ptr,
+prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
     prof_tctx_t *old_tctx)
 {
 	cassert(config_prof);
 	assert(ptr != NULL);

-	arena_prof_tctx_reset(tsdn, ptr, usize, old_ptr, old_tctx);
+	arena_prof_tctx_reset(ptr, usize, old_ptr, old_tctx);
 }

 JEMALLOC_ALWAYS_INLINE bool
@@ -437,16 +436,16 @@ prof_sample_accum_update(tsd_t *tsd, size_t usize, bool update,
 	cassert(config_prof);

 	tdata = prof_tdata_get(tsd, true);
-	if (unlikely((uintptr_t)tdata <= (uintptr_t)PROF_TDATA_STATE_MAX))
+	if ((uintptr_t)tdata <= (uintptr_t)PROF_TDATA_STATE_MAX)
 		tdata = NULL;

 	if (tdata_out != NULL)
 		*tdata_out = tdata;

-	if (unlikely(tdata == NULL))
+	if (tdata == NULL)
 		return (true);

-	if (likely(tdata->bytes_until_sample >= usize)) {
+	if (tdata->bytes_until_sample >= usize) {
 		if (update)
 			tdata->bytes_until_sample -= usize;
 		return (true);
@@ -480,17 +479,17 @@ prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, bool update)
 }

 JEMALLOC_ALWAYS_INLINE void
-prof_malloc(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx)
+prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx)
 {
 	cassert(config_prof);
 	assert(ptr != NULL);
-	assert(usize == isalloc(tsdn, ptr, true));
+	assert(usize == isalloc(ptr, true));

 	if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
-		prof_malloc_sample_object(tsdn, ptr, usize, tctx);
+		prof_malloc_sample_object(ptr, usize, tctx);
 	else
-		prof_tctx_set(tsdn, ptr, usize, (prof_tctx_t *)(uintptr_t)1U);
+		prof_tctx_set(ptr, usize, (prof_tctx_t *)(uintptr_t)1U);
 }

 JEMALLOC_ALWAYS_INLINE void
@@ -504,7 +503,7 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
 	assert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U);

 	if (prof_active && !updated && ptr != NULL) {
-		assert(usize == isalloc(tsd_tsdn(tsd), ptr, true));
+		assert(usize == isalloc(ptr, true));
 		if (prof_sample_accum_update(tsd, usize, true, NULL)) {
 			/*
 			 * Don't sample.  The usize passed to prof_alloc_prep()
@@ -513,7 +512,6 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
 			 * though its actual usize was insufficient to cross the
 			 * sample threshold.
 			 */
-			prof_alloc_rollback(tsd, tctx, true);
 			tctx = (prof_tctx_t *)(uintptr_t)1U;
 		}
 	}
@@ -522,9 +520,9 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
 	old_sampled = ((uintptr_t)old_tctx > (uintptr_t)1U);

 	if (unlikely(sampled))
-		prof_malloc_sample_object(tsd_tsdn(tsd), ptr, usize, tctx);
+		prof_malloc_sample_object(ptr, usize, tctx);
 	else
-		prof_tctx_reset(tsd_tsdn(tsd), ptr, usize, old_ptr, old_tctx);
+		prof_tctx_reset(ptr, usize, old_ptr, old_tctx);

 	if (unlikely(old_sampled))
 		prof_free_sampled_object(tsd, old_usize, old_tctx);
@@ -533,10 +531,10 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
 JEMALLOC_ALWAYS_INLINE void
 prof_free(tsd_t *tsd, const void *ptr, size_t usize)
 {
-	prof_tctx_t *tctx = prof_tctx_get(tsd_tsdn(tsd), ptr);
+	prof_tctx_t *tctx = prof_tctx_get(ptr);

 	cassert(config_prof);
-	assert(usize == isalloc(tsd_tsdn(tsd), ptr, true));
+	assert(usize == isalloc(ptr, true));

 	if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
 		prof_free_sampled_object(tsd, usize, tctx);
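
Side note on prof_sample_accum_update() above: it is the core of jemalloc's statistical heap profiling. Each thread holds a byte counter drawn down by every allocation; an allocation that crosses zero gets sampled, and the counter is re-armed from the sample interval. A minimal sketch of that counter discipline, with sampler_t and sample_this_alloc() as hypothetical stand-ins for prof_tdata_t and the real function:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical per-thread state standing in for prof_tdata_t. */
    typedef struct {
        uint64_t bytes_until_sample; /* re-armed from the sample interval */
    } sampler_t;

    /* Returns true when an allocation of usize bytes should be sampled. */
    static bool
    sample_this_alloc(sampler_t *s, size_t usize)
    {
        if (s->bytes_until_sample >= usize) {
            s->bytes_until_sample -= usize; /* draw down; skip sampling */
            return (false);
        }
        return (true); /* threshold crossed: sample, then re-arm */
    }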

deps/jemalloc/include/jemalloc/internal/rb.h

@@ -42,6 +42,7 @@ struct {							\
 #define	rb_tree(a_type)						\
 struct {							\
     a_type *rbt_root;						\
+    a_type rbt_nil;						\
 }

 /* Left accessors. */
@@ -78,15 +79,6 @@ struct {							\
     (a_node)->a_field.rbn_right_red = (a_type *) (((intptr_t)	\
       (a_node)->a_field.rbn_right_red) & ((ssize_t)-2));	\
 } while (0)
-
-/* Node initializer. */
-#define	rbt_node_new(a_type, a_field, a_rbt, a_node) do {	\
-    /* Bookkeeping bit cannot be used by node pointer. */	\
-    assert(((uintptr_t)(a_node) & 0x1) == 0);			\
-    rbtn_left_set(a_type, a_field, (a_node), NULL);		\
-    rbtn_right_set(a_type, a_field, (a_node), NULL);		\
-    rbtn_red_set(a_type, a_field, (a_node));			\
-} while (0)
 #else
 /* Right accessors. */
 #define	rbtn_right_get(a_type, a_field, a_node)			\
@@ -107,26 +99,28 @@ struct {							\
 #define	rbtn_black_set(a_type, a_field, a_node) do {		\
     (a_node)->a_field.rbn_red = false;				\
 } while (0)
+#endif

 /* Node initializer. */
 #define	rbt_node_new(a_type, a_field, a_rbt, a_node) do {	\
-    rbtn_left_set(a_type, a_field, (a_node), NULL);		\
-    rbtn_right_set(a_type, a_field, (a_node), NULL);		\
+    rbtn_left_set(a_type, a_field, (a_node), &(a_rbt)->rbt_nil); \
+    rbtn_right_set(a_type, a_field, (a_node), &(a_rbt)->rbt_nil); \
     rbtn_red_set(a_type, a_field, (a_node));			\
 } while (0)
-#endif

 /* Tree initializer. */
 #define	rb_new(a_type, a_field, a_rbt) do {			\
-    (a_rbt)->rbt_root = NULL;					\
+    (a_rbt)->rbt_root = &(a_rbt)->rbt_nil;			\
+    rbt_node_new(a_type, a_field, a_rbt, &(a_rbt)->rbt_nil);	\
+    rbtn_black_set(a_type, a_field, &(a_rbt)->rbt_nil);		\
 } while (0)

 /* Internal utility macros. */
 #define	rbtn_first(a_type, a_field, a_rbt, a_root, r_node) do {	\
     (r_node) = (a_root);					\
-    if ((r_node) != NULL) {					\
+    if ((r_node) != &(a_rbt)->rbt_nil) {			\
 	for (;							\
-	    rbtn_left_get(a_type, a_field, (r_node)) != NULL;	\
+	    rbtn_left_get(a_type, a_field, (r_node)) != &(a_rbt)->rbt_nil;\
 	    (r_node) = rbtn_left_get(a_type, a_field, (r_node))) { \
 	}							\
     }								\
@@ -134,9 +128,10 @@ struct {							\
 #define	rbtn_last(a_type, a_field, a_rbt, a_root, r_node) do {	\
     (r_node) = (a_root);					\
-    if ((r_node) != NULL) {					\
-	for (; rbtn_right_get(a_type, a_field, (r_node)) != NULL; \
-	    (r_node) = rbtn_right_get(a_type, a_field, (r_node))) { \
+    if ((r_node) != &(a_rbt)->rbt_nil) {			\
+	for (; rbtn_right_get(a_type, a_field, (r_node)) !=	\
+	  &(a_rbt)->rbt_nil; (r_node) = rbtn_right_get(a_type, a_field, \
+	  (r_node))) {						\
 	}							\
     }								\
 } while (0)
@@ -174,11 +169,11 @@ a_prefix##next(a_rbt_type *rbtree, a_type *node); \
 a_attr a_type *							\
 a_prefix##prev(a_rbt_type *rbtree, a_type *node);		\
 a_attr a_type *							\
-a_prefix##search(a_rbt_type *rbtree, const a_type *key);	\
+a_prefix##search(a_rbt_type *rbtree, a_type *key);		\
 a_attr a_type *							\
-a_prefix##nsearch(a_rbt_type *rbtree, const a_type *key);	\
+a_prefix##nsearch(a_rbt_type *rbtree, a_type *key);		\
 a_attr a_type *							\
-a_prefix##psearch(a_rbt_type *rbtree, const a_type *key);	\
+a_prefix##psearch(a_rbt_type *rbtree, a_type *key);		\
 a_attr void							\
 a_prefix##insert(a_rbt_type *rbtree, a_type *node);		\
 a_attr void							\
@@ -188,10 +183,7 @@ a_prefix##iter(a_rbt_type *rbtree, a_type *start, a_type *(*cb)( \
     a_rbt_type *, a_type *, void *), void *arg);		\
 a_attr a_type *							\
 a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start,	\
-    a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg);	\
-a_attr void							\
-a_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *), \
-    void *arg);
+    a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg);

 /*
  * The rb_gen() macro generates a type-specific red-black tree implementation,
@@ -262,7 +254,7 @@ a_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *), \
  *       last/first.
  *
  *   static ex_node_t *
- *   ex_search(ex_t *tree, const ex_node_t *key);
+ *   ex_search(ex_t *tree, ex_node_t *key);
  *       Description: Search for node that matches key.
  *       Args:
  *         tree: Pointer to an initialized red-black tree object.
@@ -270,9 +262,9 @@ a_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *), \
 *       Ret: Node in tree that matches key, or NULL if no match.
 *
 *   static ex_node_t *
- *   ex_nsearch(ex_t *tree, const ex_node_t *key);
+ *   ex_nsearch(ex_t *tree, ex_node_t *key);
 *   static ex_node_t *
- *   ex_psearch(ex_t *tree, const ex_node_t *key);
+ *   ex_psearch(ex_t *tree, ex_node_t *key);
 *       Description: Search for node that matches key.  If no match is found,
 *           return what would be key's successor/predecessor, were
 *           key in tree.
@@ -320,20 +312,6 @@ a_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *), \
 *         arg : Opaque pointer passed to cb().
 *       Ret: NULL if iteration completed, or the non-NULL callback return value
 *           that caused termination of the iteration.
- *
- *   static void
- *   ex_destroy(ex_t *tree, void (*cb)(ex_node_t *, void *), void *arg);
- *       Description: Iterate over the tree with post-order traversal, remove
- *           each node, and run the callback if non-null.  This is
- *           used for destroying a tree without paying the cost to
- *           rebalance it.  The tree must not be otherwise altered
- *           during traversal.
- *       Args:
- *         tree: Pointer to an initialized red-black tree object.
- *         cb : Callback function, which, if non-null, is called for each node
- *             during iteration.  There is no way to stop iteration once it
- *             has begun.
- *         arg : Opaque pointer passed to cb().
 */
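
The comment block above documents the API that rb_gen() emits. A compile-level sketch of the documented usage, following the ex_* naming from that comment (the key field and comparison function are illustrative):

    /* Intrusive node: the rb_node() field carries the left/right/color links. */
    typedef struct ex_node_s ex_node_t;
    struct ex_node_s {
        rb_node(ex_node_t) ex_link;
        int key;
    };
    typedef rb_tree(ex_node_t) ex_t;

    static int
    ex_cmp(ex_node_t *a, ex_node_t *b)
    {
        return ((a->key > b->key) - (a->key < b->key));
    }

    /* Emits static ex_new(), ex_insert(), ex_search(), ex_iter(), ... */
    rb_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp)

After rb_gen() expands, a caller initializes with ex_new(&tree), links nodes with ex_insert(), and looks them up with ex_search()/ex_nsearch()/ex_psearch() as described above.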
 #define	rb_gen(a_attr, a_prefix, a_rbt_type, a_type, a_field, a_cmp) \
 a_attr void							\
@@ -342,30 +320,36 @@ a_prefix##new(a_rbt_type *rbtree) { \
 }								\
 a_attr bool							\
 a_prefix##empty(a_rbt_type *rbtree) {				\
-    return (rbtree->rbt_root == NULL);				\
+    return (rbtree->rbt_root == &rbtree->rbt_nil);		\
 }								\
 a_attr a_type *							\
 a_prefix##first(a_rbt_type *rbtree) {				\
     a_type *ret;						\
     rbtn_first(a_type, a_field, rbtree, rbtree->rbt_root, ret); \
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = NULL;						\
+    }								\
     return (ret);						\
 }								\
 a_attr a_type *							\
 a_prefix##last(a_rbt_type *rbtree) {				\
     a_type *ret;						\
     rbtn_last(a_type, a_field, rbtree, rbtree->rbt_root, ret);	\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = NULL;						\
+    }								\
     return (ret);						\
 }								\
 a_attr a_type *							\
 a_prefix##next(a_rbt_type *rbtree, a_type *node) {		\
     a_type *ret;						\
-    if (rbtn_right_get(a_type, a_field, node) != NULL) {	\
+    if (rbtn_right_get(a_type, a_field, node) != &rbtree->rbt_nil) { \
 	rbtn_first(a_type, a_field, rbtree, rbtn_right_get(a_type, \
 	    a_field, node), ret);				\
     } else {							\
 	a_type *tnode = rbtree->rbt_root;			\
-	assert(tnode != NULL);					\
-	ret = NULL;						\
+	assert(tnode != &rbtree->rbt_nil);			\
+	ret = &rbtree->rbt_nil;					\
 	while (true) {						\
 	    int cmp = (a_cmp)(node, tnode);			\
 	    if (cmp < 0) {					\
@@ -376,21 +360,24 @@ a_prefix##next(a_rbt_type *rbtree, a_type *node) { \
 	    } else {						\
 		break;						\
 	    }							\
-	    assert(tnode != NULL);				\
+	    assert(tnode != &rbtree->rbt_nil);			\
 	}							\
     }								\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = (NULL);						\
+    }								\
     return (ret);						\
 }								\
 a_attr a_type *							\
 a_prefix##prev(a_rbt_type *rbtree, a_type *node) {		\
     a_type *ret;						\
-    if (rbtn_left_get(a_type, a_field, node) != NULL) {		\
+    if (rbtn_left_get(a_type, a_field, node) != &rbtree->rbt_nil) { \
 	rbtn_last(a_type, a_field, rbtree, rbtn_left_get(a_type, \
 	    a_field, node), ret);				\
     } else {							\
 	a_type *tnode = rbtree->rbt_root;			\
-	assert(tnode != NULL);					\
-	ret = NULL;						\
+	assert(tnode != &rbtree->rbt_nil);			\
+	ret = &rbtree->rbt_nil;					\
 	while (true) {						\
 	    int cmp = (a_cmp)(node, tnode);			\
 	    if (cmp < 0) {					\
@@ -401,17 +388,20 @@ a_prefix##prev(a_rbt_type *rbtree, a_type *node) { \
 	    } else {						\
 		break;						\
 	    }							\
-	    assert(tnode != NULL);				\
+	    assert(tnode != &rbtree->rbt_nil);			\
 	}							\
     }								\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = (NULL);						\
+    }								\
     return (ret);						\
 }								\
 a_attr a_type *							\
-a_prefix##search(a_rbt_type *rbtree, const a_type *key) {	\
+a_prefix##search(a_rbt_type *rbtree, a_type *key) {		\
     a_type *ret;						\
     int cmp;							\
     ret = rbtree->rbt_root;					\
-    while (ret != NULL						\
+    while (ret != &rbtree->rbt_nil				\
       && (cmp = (a_cmp)(key, ret)) != 0) {			\
 	if (cmp < 0) {						\
 	    ret = rbtn_left_get(a_type, a_field, ret);		\
@@ -419,14 +409,17 @@ a_prefix##search(a_rbt_type *rbtree, const a_type *key) { \
 	    ret = rbtn_right_get(a_type, a_field, ret);		\
 	}							\
     }								\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = (NULL);						\
+    }								\
     return (ret);						\
 }								\
 a_attr a_type *							\
-a_prefix##nsearch(a_rbt_type *rbtree, const a_type *key) {	\
+a_prefix##nsearch(a_rbt_type *rbtree, a_type *key) {		\
     a_type *ret;						\
     a_type *tnode = rbtree->rbt_root;				\
-    ret = NULL;							\
-    while (tnode != NULL) {					\
+    ret = &rbtree->rbt_nil;					\
+    while (tnode != &rbtree->rbt_nil) {				\
 	int cmp = (a_cmp)(key, tnode);				\
 	if (cmp < 0) {						\
 	    ret = tnode;					\
@@ -438,14 +431,17 @@ a_prefix##nsearch(a_rbt_type *rbtree, const a_type *key) { \
 	    break;						\
 	}							\
     }								\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = (NULL);						\
+    }								\
     return (ret);						\
 }								\
 a_attr a_type *							\
-a_prefix##psearch(a_rbt_type *rbtree, const a_type *key) {	\
+a_prefix##psearch(a_rbt_type *rbtree, a_type *key) {		\
     a_type *ret;						\
     a_type *tnode = rbtree->rbt_root;				\
-    ret = NULL;							\
-    while (tnode != NULL) {					\
+    ret = &rbtree->rbt_nil;					\
+    while (tnode != &rbtree->rbt_nil) {				\
 	int cmp = (a_cmp)(key, tnode);				\
 	if (cmp < 0) {						\
 	    tnode = rbtn_left_get(a_type, a_field, tnode);	\
@@ -457,6 +453,9 @@ a_prefix##psearch(a_rbt_type *rbtree, const a_type *key) { \
 	    break;						\
 	}							\
     }								\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = (NULL);						\
+    }								\
     return (ret);						\
 }								\
 a_attr void							\
@@ -468,7 +467,7 @@ a_prefix##insert(a_rbt_type *rbtree, a_type *node) { \
     rbt_node_new(a_type, a_field, rbtree, node);		\
     /* Wind. */							\
     path->node = rbtree->rbt_root;				\
-    for (pathp = path; pathp->node != NULL; pathp++) {		\
+    for (pathp = path; pathp->node != &rbtree->rbt_nil; pathp++) { \
 	int cmp = pathp->cmp = a_cmp(node, pathp->node);	\
 	assert(cmp != 0);					\
 	if (cmp < 0) {						\
@@ -488,8 +487,7 @@ a_prefix##insert(a_rbt_type *rbtree, a_type *node) { \
 	    rbtn_left_set(a_type, a_field, cnode, left);	\
 	    if (rbtn_red_get(a_type, a_field, left)) {		\
 		a_type *leftleft = rbtn_left_get(a_type, a_field, left);\
-		if (leftleft != NULL && rbtn_red_get(a_type, a_field, \
-		    leftleft)) {				\
+		if (rbtn_red_get(a_type, a_field, leftleft)) {	\
 		    /* Fix up 4-node. */			\
 		    a_type *tnode;				\
 		    rbtn_black_set(a_type, a_field, leftleft);	\
@@ -504,8 +502,7 @@ a_prefix##insert(a_rbt_type *rbtree, a_type *node) { \
 	    rbtn_right_set(a_type, a_field, cnode, right);	\
 	    if (rbtn_red_get(a_type, a_field, right)) {		\
 		a_type *left = rbtn_left_get(a_type, a_field, cnode); \
-		if (left != NULL && rbtn_red_get(a_type, a_field, \
-		    left)) {					\
+		if (rbtn_red_get(a_type, a_field, left)) {	\
 		    /* Split 4-node. */				\
 		    rbtn_black_set(a_type, a_field, left);	\
 		    rbtn_black_set(a_type, a_field, right);	\
@@ -538,7 +535,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
     /* Wind. */							\
     nodep = NULL; /* Silence compiler warning. */		\
     path->node = rbtree->rbt_root;				\
-    for (pathp = path; pathp->node != NULL; pathp++) {		\
+    for (pathp = path; pathp->node != &rbtree->rbt_nil; pathp++) { \
 	int cmp = pathp->cmp = a_cmp(node, pathp->node);	\
 	if (cmp < 0) {						\
 	    pathp[1].node = rbtn_left_get(a_type, a_field,	\
@@ -550,7 +547,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 	    /* Find node's successor, in preparation for swap. */ \
 	    pathp->cmp = 1;					\
 	    nodep = pathp;					\
-	    for (pathp++; pathp->node != NULL;			\
+	    for (pathp++; pathp->node != &rbtree->rbt_nil;	\
 		pathp++) {					\
 		pathp->cmp = -1;				\
 		pathp[1].node = rbtn_left_get(a_type, a_field,	\
@@ -593,7 +590,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 	}							\
     } else {							\
 	a_type *left = rbtn_left_get(a_type, a_field, node);	\
-	if (left != NULL) {					\
+	if (left != &rbtree->rbt_nil) {				\
 	    /* node has no successor, but it has a left child. */\
 	    /* Splice node out, without losing the left child. */\
 	    assert(!rbtn_red_get(a_type, a_field, node));	\
@@ -613,32 +610,33 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 	    return;						\
 	} else if (pathp == path) {				\
 	    /* The tree only contained one node. */		\
-	    rbtree->rbt_root = NULL;				\
+	    rbtree->rbt_root = &rbtree->rbt_nil;		\
 	    return;						\
 	}							\
     }								\
     if (rbtn_red_get(a_type, a_field, pathp->node)) {		\
 	/* Prune red node, which requires no fixup. */		\
 	assert(pathp[-1].cmp < 0);				\
-	rbtn_left_set(a_type, a_field, pathp[-1].node, NULL);	\
+	rbtn_left_set(a_type, a_field, pathp[-1].node,		\
+	    &rbtree->rbt_nil);					\
 	return;							\
     }								\
     /* The node to be pruned is black, so unwind until balance is */\
     /* restored. */\
-    pathp->node = NULL;						\
+    pathp->node = &rbtree->rbt_nil;				\
     for (pathp--; (uintptr_t)pathp >= (uintptr_t)path; pathp--) { \
 	assert(pathp->cmp != 0);				\
 	if (pathp->cmp < 0) {					\
 	    rbtn_left_set(a_type, a_field, pathp->node,		\
 		pathp[1].node);					\
+	    assert(!rbtn_red_get(a_type, a_field, pathp[1].node)); \
 	    if (rbtn_red_get(a_type, a_field, pathp->node)) {	\
 		a_type *right = rbtn_right_get(a_type, a_field,	\
 		    pathp->node);				\
 		a_type *rightleft = rbtn_left_get(a_type, a_field, \
 		    right);					\
 		a_type *tnode;					\
-		if (rightleft != NULL && rbtn_red_get(a_type, a_field, \
-		    rightleft)) {				\
+		if (rbtn_red_get(a_type, a_field, rightleft)) {	\
 		    /* In the following diagrams, ||, //, and \\ */\
 		    /* indicate the path to the removed node. */\
 		    /*   */\
@@ -681,8 +679,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 		    pathp->node);				\
 		a_type *rightleft = rbtn_left_get(a_type, a_field, \
 		    right);					\
-		if (rightleft != NULL && rbtn_red_get(a_type, a_field, \
-		    rightleft)) {				\
+		if (rbtn_red_get(a_type, a_field, rightleft)) {	\
 		    /*      || */\
 		    /*    pathp(b) */\
 		    /*   //        \ */\
@@ -736,8 +733,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 		    left);					\
 		a_type *leftrightleft = rbtn_left_get(a_type, a_field, \
 		    leftright);					\
-		if (leftrightleft != NULL && rbtn_red_get(a_type, \
-		    a_field, leftrightleft)) {			\
+		if (rbtn_red_get(a_type, a_field, leftrightleft)) { \
 		    /*      || */\
 		    /*    pathp(b) */\
 		    /*   /        \\ */\
@@ -763,7 +759,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 		    /* (b)        */\
 		    /*   /        */\
 		    /* (b)        */\
-		    assert(leftright != NULL);			\
+		    assert(leftright != &rbtree->rbt_nil);	\
 		    rbtn_red_set(a_type, a_field, leftright);	\
 		    rbtn_rotate_right(a_type, a_field, pathp->node, \
 			tnode);					\
@@ -786,8 +782,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 		return;						\
 	    } else if (rbtn_red_get(a_type, a_field, pathp->node)) { \
 		a_type *leftleft = rbtn_left_get(a_type, a_field, left);\
-		if (leftleft != NULL && rbtn_red_get(a_type, a_field, \
-		    leftleft)) {				\
+		if (rbtn_red_get(a_type, a_field, leftleft)) {	\
 		    /*        || */\
 		    /*      pathp(r) */\
 		    /*     /        \\ */\
@@ -825,8 +820,7 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 		}						\
 	    } else {						\
 		a_type *leftleft = rbtn_left_get(a_type, a_field, left);\
-		if (leftleft != NULL && rbtn_red_get(a_type, a_field, \
-		    leftleft)) {				\
+		if (rbtn_red_get(a_type, a_field, leftleft)) {	\
 		    /*               || */\
 		    /*             pathp(b) */\
 		    /*           /        \\ */\
@@ -872,13 +866,13 @@ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \
 a_attr a_type *							\
 a_prefix##iter_recurse(a_rbt_type *rbtree, a_type *node,	\
   a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) {	\
-    if (node == NULL) {						\
-	return (NULL);						\
+    if (node == &rbtree->rbt_nil) {				\
+	return (&rbtree->rbt_nil);				\
     } else {							\
 	a_type *ret;						\
 	if ((ret = a_prefix##iter_recurse(rbtree, rbtn_left_get(a_type, \
-	  a_field, node), cb, arg)) != NULL || (ret = cb(rbtree, node, \
-	  arg)) != NULL) {					\
+	  a_field, node), cb, arg)) != &rbtree->rbt_nil		\
+	  || (ret = cb(rbtree, node, arg)) != NULL) {		\
 	    return (ret);					\
 	}							\
 	return (a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type, \
@@ -892,8 +886,8 @@ a_prefix##iter_start(a_rbt_type *rbtree, a_type *start, a_type *node, \
 	if (cmp < 0) {						\
 	    a_type *ret;					\
 	    if ((ret = a_prefix##iter_start(rbtree, start,	\
-	      rbtn_left_get(a_type, a_field, node), cb, arg)) != NULL || \
-	      (ret = cb(rbtree, node, arg)) != NULL) {		\
+	      rbtn_left_get(a_type, a_field, node), cb, arg)) != \
+	      &rbtree->rbt_nil || (ret = cb(rbtree, node, arg)) != NULL) { \
 		return (ret);					\
 	    }							\
 	    return (a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type, \
@@ -920,18 +914,21 @@ a_prefix##iter(a_rbt_type *rbtree, a_type *start, a_type *(*cb)( \
     } else {							\
 	ret = a_prefix##iter_recurse(rbtree, rbtree->rbt_root, cb, arg);\
     }								\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = NULL;						\
+    }								\
     return (ret);						\
 }								\
 a_attr a_type *							\
 a_prefix##reverse_iter_recurse(a_rbt_type *rbtree, a_type *node, \
   a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) {	\
-    if (node == NULL) {						\
-	return (NULL);						\
+    if (node == &rbtree->rbt_nil) {				\
+	return (&rbtree->rbt_nil);				\
     } else {							\
 	a_type *ret;						\
 	if ((ret = a_prefix##reverse_iter_recurse(rbtree,	\
-	  rbtn_right_get(a_type, a_field, node), cb, arg)) != NULL || \
-	  (ret = cb(rbtree, node, arg)) != NULL) {		\
+	  rbtn_right_get(a_type, a_field, node), cb, arg)) !=	\
+	  &rbtree->rbt_nil || (ret = cb(rbtree, node, arg)) != NULL) { \
 	    return (ret);					\
 	}							\
 	return (a_prefix##reverse_iter_recurse(rbtree,		\
@@ -946,8 +943,8 @@ a_prefix##reverse_iter_start(a_rbt_type *rbtree, a_type *start, \
 	if (cmp > 0) {						\
 	    a_type *ret;					\
 	    if ((ret = a_prefix##reverse_iter_start(rbtree, start, \
-	      rbtn_right_get(a_type, a_field, node), cb, arg)) != NULL || \
-	      (ret = cb(rbtree, node, arg)) != NULL) {		\
+	      rbtn_right_get(a_type, a_field, node), cb, arg)) != \
+	      &rbtree->rbt_nil || (ret = cb(rbtree, node, arg)) != NULL) { \
 		return (ret);					\
 	    }							\
 	    return (a_prefix##reverse_iter_recurse(rbtree,	\
@@ -975,29 +972,10 @@ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
 	ret = a_prefix##reverse_iter_recurse(rbtree, rbtree->rbt_root, \
 	    cb, arg);						\
     }								\
+    if (ret == &rbtree->rbt_nil) {				\
+	ret = NULL;						\
+    }								\
     return (ret);						\
-}								\
-a_attr void							\
-a_prefix##destroy_recurse(a_rbt_type *rbtree, a_type *node, void (*cb)( \
-  a_type *, void *), void *arg) {				\
-    if (node == NULL) {						\
-	return;							\
-    }								\
-    a_prefix##destroy_recurse(rbtree, rbtn_left_get(a_type, a_field, \
-      node), cb, arg);						\
-    rbtn_left_set(a_type, a_field, (node), NULL);		\
-    a_prefix##destroy_recurse(rbtree, rbtn_right_get(a_type, a_field, \
-      node), cb, arg);						\
-    rbtn_right_set(a_type, a_field, (node), NULL);		\
-    if (cb) {							\
-	cb(node, arg);						\
-    }								\
-}								\
-a_attr void							\
-a_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *), \
-    void *arg) {						\
-    a_prefix##destroy_recurse(rbtree, rbtree->rbt_root, cb, arg); \
-    rbtree->rbt_root = NULL;					\
 }

 #endif	/* RB_H_ */
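
The recurring substitution throughout these hunks is the point of the change: the removed version uses NULL as the leaf pointer, while the added version restores an embedded rbt_nil sentinel. With a sentinel, a leaf's child is still a dereferenceable black node, so color tests need no NULL guard; with NULL leaves every such test grows one, which is exactly the "leftleft != NULL && ..." pattern removed above. A hypothetical illustration of the two conventions:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct node_s node_t;
    struct node_s { node_t *left, *right; bool red; };

    /* Sentinel convention: children always point at a real, black node. */
    static node_t nil = { &nil, &nil, false };
    static bool
    left_child_is_red_sentinel(node_t *n) { return (n->left->red); }

    /* NULL convention: the same test must guard the dereference. */
    static bool
    left_child_is_red_null(node_t *n) { return (n->left != NULL && n->left->red); }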

deps/jemalloc/include/jemalloc/internal/rtree.h

@@ -15,10 +15,9 @@ typedef struct rtree_s rtree_t;
  * machine address width.
  */
 #define	LG_RTREE_BITS_PER_LEVEL	4
-#define	RTREE_BITS_PER_LEVEL	(1U << LG_RTREE_BITS_PER_LEVEL)
+#define	RTREE_BITS_PER_LEVEL	(ZU(1) << LG_RTREE_BITS_PER_LEVEL)
-/* Maximum rtree height. */
 #define	RTREE_HEIGHT_MAX					\
-    ((1U << (LG_SIZEOF_PTR+3)) / RTREE_BITS_PER_LEVEL)
+    ((ZU(1) << (LG_SIZEOF_PTR+3)) / RTREE_BITS_PER_LEVEL)

 /* Used for two-stage lock-free node initialization. */
 #define	RTREE_NODE_INITIALIZING	((rtree_node_elm_t *)0x1)
@@ -112,25 +111,22 @@ unsigned	rtree_start_level(rtree_t *rtree, uintptr_t key);
 uintptr_t	rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level);
 bool	rtree_node_valid(rtree_node_elm_t *node);
-rtree_node_elm_t	*rtree_child_tryread(rtree_node_elm_t *elm,
-    bool dependent);
+rtree_node_elm_t	*rtree_child_tryread(rtree_node_elm_t *elm);
 rtree_node_elm_t	*rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm,
-    unsigned level, bool dependent);
+    unsigned level);
 extent_node_t	*rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm,
     bool dependent);
 void	rtree_val_write(rtree_t *rtree, rtree_node_elm_t *elm,
     const extent_node_t *val);
-rtree_node_elm_t	*rtree_subtree_tryread(rtree_t *rtree, unsigned level,
-    bool dependent);
-rtree_node_elm_t	*rtree_subtree_read(rtree_t *rtree, unsigned level,
-    bool dependent);
+rtree_node_elm_t	*rtree_subtree_tryread(rtree_t *rtree, unsigned level);
+rtree_node_elm_t	*rtree_subtree_read(rtree_t *rtree, unsigned level);
 extent_node_t	*rtree_get(rtree_t *rtree, uintptr_t key, bool dependent);
 bool	rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val);
 #endif

 #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_RTREE_C_))
-JEMALLOC_ALWAYS_INLINE unsigned
+JEMALLOC_INLINE unsigned
 rtree_start_level(rtree_t *rtree, uintptr_t key)
 {
 	unsigned start_level;
@@ -144,7 +140,7 @@ rtree_start_level(rtree_t *rtree, uintptr_t key)
 	return (start_level);
 }

-JEMALLOC_ALWAYS_INLINE uintptr_t
+JEMALLOC_INLINE uintptr_t
 rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level)
 {
@@ -153,40 +149,37 @@ rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level)
 	    rtree->levels[level].bits) - 1));
 }

-JEMALLOC_ALWAYS_INLINE bool
+JEMALLOC_INLINE bool
 rtree_node_valid(rtree_node_elm_t *node)
 {
 	return ((uintptr_t)node > (uintptr_t)RTREE_NODE_INITIALIZING);
 }

-JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
-rtree_child_tryread(rtree_node_elm_t *elm, bool dependent)
+JEMALLOC_INLINE rtree_node_elm_t *
+rtree_child_tryread(rtree_node_elm_t *elm)
 {
 	rtree_node_elm_t *child;

 	/* Double-checked read (first read may be stale. */
 	child = elm->child;
-	if (!dependent && !rtree_node_valid(child))
+	if (!rtree_node_valid(child))
 		child = atomic_read_p(&elm->pun);
-	assert(!dependent || child != NULL);
 	return (child);
 }

-JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
-rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level,
-    bool dependent)
+JEMALLOC_INLINE rtree_node_elm_t *
+rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level)
 {
 	rtree_node_elm_t *child;

-	child = rtree_child_tryread(elm, dependent);
-	if (!dependent && unlikely(!rtree_node_valid(child)))
+	child = rtree_child_tryread(elm);
+	if (unlikely(!rtree_node_valid(child)))
 		child = rtree_child_read_hard(rtree, elm, level);
-	assert(!dependent || child != NULL);
 	return (child);
 }

-JEMALLOC_ALWAYS_INLINE extent_node_t *
+JEMALLOC_INLINE extent_node_t *
 rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm, bool dependent)
 {
@@ -215,119 +208,54 @@ rtree_val_write(rtree_t *rtree, rtree_node_elm_t *elm, const extent_node_t *val)
 	atomic_write_p(&elm->pun, val);
 }

-JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
-rtree_subtree_tryread(rtree_t *rtree, unsigned level, bool dependent)
+JEMALLOC_INLINE rtree_node_elm_t *
+rtree_subtree_tryread(rtree_t *rtree, unsigned level)
 {
 	rtree_node_elm_t *subtree;

 	/* Double-checked read (first read may be stale. */
 	subtree = rtree->levels[level].subtree;
-	if (!dependent && unlikely(!rtree_node_valid(subtree)))
+	if (!rtree_node_valid(subtree))
 		subtree = atomic_read_p(&rtree->levels[level].subtree_pun);
-	assert(!dependent || subtree != NULL);
 	return (subtree);
 }

-JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
-rtree_subtree_read(rtree_t *rtree, unsigned level, bool dependent)
+JEMALLOC_INLINE rtree_node_elm_t *
+rtree_subtree_read(rtree_t *rtree, unsigned level)
 {
 	rtree_node_elm_t *subtree;

-	subtree = rtree_subtree_tryread(rtree, level, dependent);
-	if (!dependent && unlikely(!rtree_node_valid(subtree)))
+	subtree = rtree_subtree_tryread(rtree, level);
+	if (unlikely(!rtree_node_valid(subtree)))
 		subtree = rtree_subtree_read_hard(rtree, level);
-	assert(!dependent || subtree != NULL);
 	return (subtree);
 }

-JEMALLOC_ALWAYS_INLINE extent_node_t *
+JEMALLOC_INLINE extent_node_t *
 rtree_get(rtree_t *rtree, uintptr_t key, bool dependent)
 {
 	uintptr_t subkey;
-	unsigned start_level;
-	rtree_node_elm_t *node;
+	unsigned i, start_level;
+	rtree_node_elm_t *node, *child;

 	start_level = rtree_start_level(rtree, key);

-	node = rtree_subtree_tryread(rtree, start_level, dependent);
-#define	RTREE_GET_BIAS	(RTREE_HEIGHT_MAX - rtree->height)
-	switch (start_level + RTREE_GET_BIAS) {
-#define	RTREE_GET_SUBTREE(level)				\
-	case level:						\
-		assert(level < (RTREE_HEIGHT_MAX-1));		\
-		if (!dependent && unlikely(!rtree_node_valid(node))) \
-			return (NULL);				\
-		subkey = rtree_subkey(rtree, key, level -	\
-		    RTREE_GET_BIAS);				\
-		node = rtree_child_tryread(&node[subkey], dependent); \
-		/* Fall through. */
-#define	RTREE_GET_LEAF(level)					\
-	case level:						\
-		assert(level == (RTREE_HEIGHT_MAX-1));		\
-		if (!dependent && unlikely(!rtree_node_valid(node))) \
-			return (NULL);				\
-		subkey = rtree_subkey(rtree, key, level -	\
-		    RTREE_GET_BIAS);				\
-		/*						\
-		 * node is a leaf, so it contains values rather than \
-		 * child pointers.				\
-		 */						\
-		return (rtree_val_read(rtree, &node[subkey],	\
-		    dependent));
-#if RTREE_HEIGHT_MAX > 1
-	RTREE_GET_SUBTREE(0)
-#endif
-#if RTREE_HEIGHT_MAX > 2
-	RTREE_GET_SUBTREE(1)
-#endif
-#if RTREE_HEIGHT_MAX > 3
-	RTREE_GET_SUBTREE(2)
-#endif
-#if RTREE_HEIGHT_MAX > 4
-	RTREE_GET_SUBTREE(3)
-#endif
-#if RTREE_HEIGHT_MAX > 5
-	RTREE_GET_SUBTREE(4)
-#endif
-#if RTREE_HEIGHT_MAX > 6
-	RTREE_GET_SUBTREE(5)
-#endif
-#if RTREE_HEIGHT_MAX > 7
-	RTREE_GET_SUBTREE(6)
-#endif
-#if RTREE_HEIGHT_MAX > 8
-	RTREE_GET_SUBTREE(7)
-#endif
-#if RTREE_HEIGHT_MAX > 9
-	RTREE_GET_SUBTREE(8)
-#endif
-#if RTREE_HEIGHT_MAX > 10
-	RTREE_GET_SUBTREE(9)
-#endif
-#if RTREE_HEIGHT_MAX > 11
-	RTREE_GET_SUBTREE(10)
-#endif
-#if RTREE_HEIGHT_MAX > 12
-	RTREE_GET_SUBTREE(11)
-#endif
-#if RTREE_HEIGHT_MAX > 13
-	RTREE_GET_SUBTREE(12)
-#endif
-#if RTREE_HEIGHT_MAX > 14
-	RTREE_GET_SUBTREE(13)
-#endif
-#if RTREE_HEIGHT_MAX > 15
-	RTREE_GET_SUBTREE(14)
-#endif
-#if RTREE_HEIGHT_MAX > 16
-#  error Unsupported RTREE_HEIGHT_MAX
-#endif
-	RTREE_GET_LEAF(RTREE_HEIGHT_MAX-1)
-#undef RTREE_GET_SUBTREE
-#undef RTREE_GET_LEAF
-	default: not_reached();
+	for (i = start_level, node = rtree_subtree_tryread(rtree, start_level);
+	    /**/; i++, node = child) {
+		if (!dependent && unlikely(!rtree_node_valid(node)))
+			return (NULL);
+		subkey = rtree_subkey(rtree, key, i);
+		if (i == rtree->height - 1) {
+			/*
+			 * node is a leaf, so it contains values rather than
+			 * child pointers.
+			 */
+			return (rtree_val_read(rtree, &node[subkey],
+			    dependent));
+		}
+		assert(i < rtree->height - 1);
+		child = rtree_child_tryread(&node[subkey]);
 	}
-#undef RTREE_GET_BIAS
 	not_reached();
 }
@@ -340,7 +268,7 @@ rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val)
 	start_level = rtree_start_level(rtree, key);

-	node = rtree_subtree_read(rtree, start_level, false);
+	node = rtree_subtree_read(rtree, start_level);
 	if (node == NULL)
 		return (true);
 	for (i = start_level; /**/; i++, node = child) {
@@ -354,7 +282,7 @@ rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val)
 			return (false);
 		}
 		assert(i + 1 < rtree->height);
-		child = rtree_child_read(rtree, &node[subkey], i, false);
+		child = rtree_child_read(rtree, &node[subkey], i);
 		if (child == NULL)
 			return (true);
 	}
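
The *_tryread()/*_read() pairs above follow a double-checked read, as their comments note: a plain load first (cheap, possibly stale), an atomic re-read only if the result looks uninitialized, and a locked slow path (*_read_hard()) as the final fallback. A generic sketch of the pattern in C11 atomics; the names and the slow-path function are hypothetical, not jemalloc's API:

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct node_s node_t;
    struct node_s {
        node_t *_Atomic child; /* NULL until initialized */
    };

    node_t *child_init_locked(node_t *parent); /* hypothetical slow path */

    static node_t *
    child_read(node_t *parent)
    {
        /* First read may be stale; a non-NULL result is safe to use. */
        node_t *child = atomic_load_explicit(&parent->child,
            memory_order_relaxed);
        if (child == NULL) {
            /* Second, synchronized read before taking the slow path. */
            child = atomic_load_explicit(&parent->child,
                memory_order_acquire);
        }
        if (child == NULL)
            child = child_init_locked(parent);
        return (child);
    }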

deps/jemalloc/include/jemalloc/internal/size_classes.sh

@@ -48,21 +48,6 @@ size_class() {
 	lg_p=$5
 	lg_kmax=$6

-	if [ ${lg_delta} -ge ${lg_p} ] ; then
-		psz="yes"
-	else
-		pow2 ${lg_p}; p=${pow2_result}
-		pow2 ${lg_grp}; grp=${pow2_result}
-		pow2 ${lg_delta}; delta=${pow2_result}
-		sz=$((${grp} + ${delta} * ${ndelta}))
-		npgs=$((${sz} / ${p}))
-		if [ ${sz} -eq $((${npgs} * ${p})) ] ; then
-			psz="yes"
-		else
-			psz="no"
-		fi
-	fi
 	lg ${ndelta}; lg_ndelta=${lg_result}; pow2 ${lg_ndelta}
 	if [ ${pow2_result} -lt ${ndelta} ] ; then
 		rem="yes"
@@ -89,15 +74,14 @@ size_class() {
 	else
 		lg_delta_lookup="no"
 	fi
-	printf '    SC(%3d, %6d, %8d, %6d, %3s, %3s, %2s) \\\n' ${index} ${lg_grp} ${lg_delta} ${ndelta} ${psz} ${bin} ${lg_delta_lookup}
+	printf '    SC(%3d, %6d, %8d, %6d, %3s, %2s) \\\n' ${index} ${lg_grp} ${lg_delta} ${ndelta} ${bin} ${lg_delta_lookup}
 	# Defined upon return:
-	# - psz ("yes" or "no")
-	# - bin ("yes" or "no")
 	# - lg_delta_lookup (${lg_delta} or "no")
+	# - bin ("yes" or "no")
 }

 sep_line() {
 	echo "                                               \\"
 }

 size_classes() {
@@ -111,13 +95,12 @@ size_classes() {
 	pow2 ${lg_g}; g=${pow2_result}

 	echo "#define	SIZE_CLASSES \\"
-	echo "  /* index, lg_grp, lg_delta, ndelta, psz, bin, lg_delta_lookup */ \\"
+	echo "  /* index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup */ \\"

 	ntbins=0
 	nlbins=0
 	lg_tiny_maxclass='"NA"'
 	nbins=0
-	npsizes=0

 	# Tiny size classes.
 	ndelta=0
@@ -129,9 +112,6 @@ size_classes() {
 		if [ ${lg_delta_lookup} != "no" ] ; then
 			nlbins=$((${index} + 1))
 		fi
-		if [ ${psz} = "yes" ] ; then
-			npsizes=$((${npsizes} + 1))
-		fi
 		if [ ${bin} != "no" ] ; then
 			nbins=$((${index} + 1))
 		fi
@@ -153,25 +133,19 @@ size_classes() {
 		index=$((${index} + 1))
 		lg_grp=$((${lg_grp} + 1))
 		lg_delta=$((${lg_delta} + 1))
-		if [ ${psz} = "yes" ] ; then
-			npsizes=$((${npsizes} + 1))
-		fi
 	fi
 	while [ ${ndelta} -lt ${g} ] ; do
 		size_class ${index} ${lg_grp} ${lg_delta} ${ndelta} ${lg_p} ${lg_kmax}
 		index=$((${index} + 1))
 		ndelta=$((${ndelta} + 1))
-		if [ ${psz} = "yes" ] ; then
-			npsizes=$((${npsizes} + 1))
-		fi
 	done

 	# All remaining groups.
 	lg_grp=$((${lg_grp} + ${lg_g}))
-	while [ ${lg_grp} -lt $((${ptr_bits} - 1)) ] ; do
+	while [ ${lg_grp} -lt ${ptr_bits} ] ; do
 		sep_line
 		ndelta=1
-		if [ ${lg_grp} -eq $((${ptr_bits} - 2)) ] ; then
+		if [ ${lg_grp} -eq $((${ptr_bits} - 1)) ] ; then
 			ndelta_limit=$((${g} - 1))
 		else
 			ndelta_limit=${g}
@@ -183,9 +157,6 @@ size_classes() {
 			# Final written value is correct:
 			lookup_maxclass="((((size_t)1) << ${lg_grp}) + (((size_t)${ndelta}) << ${lg_delta}))"
 		fi
-		if [ ${psz} = "yes" ] ; then
-			npsizes=$((${npsizes} + 1))
-		fi
 		if [ ${bin} != "no" ] ; then
 			nbins=$((${index} + 1))
 			# Final written value is correct:
@@ -212,7 +183,6 @@ size_classes() {
 	# - nlbins
 	# - nbins
 	# - nsizes
-	# - npsizes
 	# - lg_tiny_maxclass
 	# - lookup_maxclass
 	# - small_maxclass
@@ -230,13 +200,13 @@ cat <<EOF
 * be defined prior to inclusion, and it in turn defines:
 *
 *   LG_SIZE_CLASS_GROUP: Lg of size class count for each size doubling.
- *   SIZE_CLASSES: Complete table of SC(index, lg_grp, lg_delta, ndelta, psz,
- *                 bin, lg_delta_lookup) tuples.
+ *   SIZE_CLASSES: Complete table of
+ *                 SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup)
+ *                 tuples.
 *     index: Size class index.
 *     lg_grp: Lg group base size (no deltas added).
 *     lg_delta: Lg delta to previous size class.
 *     ndelta: Delta multiplier.  size == 1<<lg_grp + ndelta<<lg_delta
- *     psz: 'yes' if a multiple of the page size, 'no' otherwise.
 *     bin: 'yes' if a small bin size class, 'no' otherwise.
 *     lg_delta_lookup: Same as lg_delta if a lookup table size class, 'no'
 *                      otherwise.
@@ -244,7 +214,6 @@ cat <<EOF
 *   NLBINS: Number of bins supported by the lookup table.
 *   NBINS: Number of small size class bins.
 *   NSIZES: Number of size classes.
- *   NPSIZES: Number of size classes that are a multiple of (1U << LG_PAGE).
 *   LG_TINY_MAXCLASS: Lg of maximum tiny size class.
 *   LOOKUP_MAXCLASS: Maximum size class included in lookup table.
 *   SMALL_MAXCLASS: Maximum small size class.
@@ -269,7 +238,6 @@ for lg_z in ${lg_zarr} ; do
 	echo "#define	NLBINS			${nlbins}"
 	echo "#define	NBINS			${nbins}"
 	echo "#define	NSIZES			${nsizes}"
-	echo "#define	NPSIZES			${npsizes}"
 	echo "#define	LG_TINY_MAXCLASS	${lg_tiny_maxclass}"
 	echo "#define	LOOKUP_MAXCLASS		${lookup_maxclass}"
 	echo "#define	SMALL_MAXCLASS		${small_maxclass}"

deps/jemalloc/include/jemalloc/internal/smoothstep.h

@@ -1,246 +0,0 @@
/*
* This file was generated by the following command:
* sh smoothstep.sh smoother 200 24 3 15
*/
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
/*
* This header defines a precomputed table based on the smoothstep family of
* sigmoidal curves (https://en.wikipedia.org/wiki/Smoothstep) that grow from 0
* to 1 in 0 <= x <= 1. The table is stored as integer fixed point values so
* that floating point math can be avoided.
*
* 3 2
* smoothstep(x) = -2x + 3x
*
* 5 4 3
* smootherstep(x) = 6x - 15x + 10x
*
* 7 6 5 4
* smootheststep(x) = -20x + 70x - 84x + 35x
*/
#define SMOOTHSTEP_VARIANT "smoother"
#define SMOOTHSTEP_NSTEPS 200
#define SMOOTHSTEP_BFP 24
#define SMOOTHSTEP \
/* STEP(step, h, x, y) */ \
STEP( 1, UINT64_C(0x0000000000000014), 0.005, 0.000001240643750) \
STEP( 2, UINT64_C(0x00000000000000a5), 0.010, 0.000009850600000) \
STEP( 3, UINT64_C(0x0000000000000229), 0.015, 0.000032995181250) \
STEP( 4, UINT64_C(0x0000000000000516), 0.020, 0.000077619200000) \
STEP( 5, UINT64_C(0x00000000000009dc), 0.025, 0.000150449218750) \
STEP( 6, UINT64_C(0x00000000000010e8), 0.030, 0.000257995800000) \
STEP( 7, UINT64_C(0x0000000000001aa4), 0.035, 0.000406555756250) \
STEP( 8, UINT64_C(0x0000000000002777), 0.040, 0.000602214400000) \
STEP( 9, UINT64_C(0x00000000000037c2), 0.045, 0.000850847793750) \
STEP( 10, UINT64_C(0x0000000000004be6), 0.050, 0.001158125000000) \
STEP( 11, UINT64_C(0x000000000000643c), 0.055, 0.001529510331250) \
STEP( 12, UINT64_C(0x000000000000811f), 0.060, 0.001970265600000) \
STEP( 13, UINT64_C(0x000000000000a2e2), 0.065, 0.002485452368750) \
STEP( 14, UINT64_C(0x000000000000c9d8), 0.070, 0.003079934200000) \
STEP( 15, UINT64_C(0x000000000000f64f), 0.075, 0.003758378906250) \
STEP( 16, UINT64_C(0x0000000000012891), 0.080, 0.004525260800000) \
STEP( 17, UINT64_C(0x00000000000160e7), 0.085, 0.005384862943750) \
STEP( 18, UINT64_C(0x0000000000019f95), 0.090, 0.006341279400000) \
STEP( 19, UINT64_C(0x000000000001e4dc), 0.095, 0.007398417481250) \
STEP( 20, UINT64_C(0x00000000000230fc), 0.100, 0.008560000000000) \
STEP( 21, UINT64_C(0x0000000000028430), 0.105, 0.009829567518750) \
STEP( 22, UINT64_C(0x000000000002deb0), 0.110, 0.011210480600000) \
STEP( 23, UINT64_C(0x00000000000340b1), 0.115, 0.012705922056250) \
STEP( 24, UINT64_C(0x000000000003aa67), 0.120, 0.014318899200000) \
STEP( 25, UINT64_C(0x0000000000041c00), 0.125, 0.016052246093750) \
STEP( 26, UINT64_C(0x00000000000495a8), 0.130, 0.017908625800000) \
STEP( 27, UINT64_C(0x000000000005178b), 0.135, 0.019890532631250) \
STEP( 28, UINT64_C(0x000000000005a1cf), 0.140, 0.022000294400000) \
STEP( 29, UINT64_C(0x0000000000063498), 0.145, 0.024240074668750) \
STEP( 30, UINT64_C(0x000000000006d009), 0.150, 0.026611875000000) \
STEP( 31, UINT64_C(0x000000000007743f), 0.155, 0.029117537206250) \
STEP( 32, UINT64_C(0x0000000000082157), 0.160, 0.031758745600000) \
STEP( 33, UINT64_C(0x000000000008d76b), 0.165, 0.034537029243750) \
STEP( 34, UINT64_C(0x0000000000099691), 0.170, 0.037453764200000) \
STEP( 35, UINT64_C(0x00000000000a5edf), 0.175, 0.040510175781250) \
STEP( 36, UINT64_C(0x00000000000b3067), 0.180, 0.043707340800000) \
STEP( 37, UINT64_C(0x00000000000c0b38), 0.185, 0.047046189818750) \
STEP( 38, UINT64_C(0x00000000000cef5e), 0.190, 0.050527509400000) \
STEP( 39, UINT64_C(0x00000000000ddce6), 0.195, 0.054151944356250) \
STEP( 40, UINT64_C(0x00000000000ed3d8), 0.200, 0.057920000000000) \
STEP( 41, UINT64_C(0x00000000000fd439), 0.205, 0.061832044393750) \
STEP( 42, UINT64_C(0x000000000010de0e), 0.210, 0.065888310600000) \
STEP( 43, UINT64_C(0x000000000011f158), 0.215, 0.070088898931250) \
STEP( 44, UINT64_C(0x0000000000130e17), 0.220, 0.074433779200000) \
STEP( 45, UINT64_C(0x0000000000143448), 0.225, 0.078922792968750) \
STEP( 46, UINT64_C(0x00000000001563e7), 0.230, 0.083555655800000) \
STEP( 47, UINT64_C(0x0000000000169cec), 0.235, 0.088331959506250) \
STEP( 48, UINT64_C(0x000000000017df4f), 0.240, 0.093251174400000) \
STEP( 49, UINT64_C(0x0000000000192b04), 0.245, 0.098312651543750) \
STEP( 50, UINT64_C(0x00000000001a8000), 0.250, 0.103515625000000) \
STEP( 51, UINT64_C(0x00000000001bde32), 0.255, 0.108859214081250) \
STEP( 52, UINT64_C(0x00000000001d458b), 0.260, 0.114342425600000) \
STEP( 53, UINT64_C(0x00000000001eb5f8), 0.265, 0.119964156118750) \
STEP( 54, UINT64_C(0x0000000000202f65), 0.270, 0.125723194200000) \
STEP( 55, UINT64_C(0x000000000021b1bb), 0.275, 0.131618222656250) \
STEP( 56, UINT64_C(0x0000000000233ce3), 0.280, 0.137647820800000) \
STEP( 57, UINT64_C(0x000000000024d0c3), 0.285, 0.143810466693750) \
STEP( 58, UINT64_C(0x0000000000266d40), 0.290, 0.150104539400000) \
STEP( 59, UINT64_C(0x000000000028123d), 0.295, 0.156528321231250) \
STEP( 60, UINT64_C(0x000000000029bf9c), 0.300, 0.163080000000000) \
STEP( 61, UINT64_C(0x00000000002b753d), 0.305, 0.169757671268750) \
STEP( 62, UINT64_C(0x00000000002d32fe), 0.310, 0.176559340600000) \
STEP( 63, UINT64_C(0x00000000002ef8bc), 0.315, 0.183482925806250) \
STEP( 64, UINT64_C(0x000000000030c654), 0.320, 0.190526259200000) \
STEP( 65, UINT64_C(0x0000000000329b9f), 0.325, 0.197687089843750) \
STEP( 66, UINT64_C(0x0000000000347875), 0.330, 0.204963085800000) \
STEP( 67, UINT64_C(0x0000000000365cb0), 0.335, 0.212351836381250) \
STEP( 68, UINT64_C(0x0000000000384825), 0.340, 0.219850854400000) \
STEP( 69, UINT64_C(0x00000000003a3aa8), 0.345, 0.227457578418750) \
STEP( 70, UINT64_C(0x00000000003c340f), 0.350, 0.235169375000000) \
STEP( 71, UINT64_C(0x00000000003e342b), 0.355, 0.242983540956250) \
STEP( 72, UINT64_C(0x0000000000403ace), 0.360, 0.250897305600000) \
STEP( 73, UINT64_C(0x00000000004247c8), 0.365, 0.258907832993750) \
STEP( 74, UINT64_C(0x0000000000445ae9), 0.370, 0.267012224200000) \
STEP( 75, UINT64_C(0x0000000000467400), 0.375, 0.275207519531250) \
STEP( 76, UINT64_C(0x00000000004892d8), 0.380, 0.283490700800000) \
STEP( 77, UINT64_C(0x00000000004ab740), 0.385, 0.291858693568750) \
STEP( 78, UINT64_C(0x00000000004ce102), 0.390, 0.300308369400000) \
STEP( 79, UINT64_C(0x00000000004f0fe9), 0.395, 0.308836548106250) \
STEP( 80, UINT64_C(0x00000000005143bf), 0.400, 0.317440000000000) \
STEP( 81, UINT64_C(0x0000000000537c4d), 0.405, 0.326115448143750) \
STEP( 82, UINT64_C(0x000000000055b95b), 0.410, 0.334859570600000) \
STEP( 83, UINT64_C(0x000000000057fab1), 0.415, 0.343669002681250) \
STEP( 84, UINT64_C(0x00000000005a4015), 0.420, 0.352540339200000) \
STEP( 85, UINT64_C(0x00000000005c894e), 0.425, 0.361470136718750) \
STEP( 86, UINT64_C(0x00000000005ed622), 0.430, 0.370454915800000) \
STEP( 87, UINT64_C(0x0000000000612655), 0.435, 0.379491163256250) \
STEP( 88, UINT64_C(0x00000000006379ac), 0.440, 0.388575334400000) \
STEP( 89, UINT64_C(0x000000000065cfeb), 0.445, 0.397703855293750) \
STEP( 90, UINT64_C(0x00000000006828d6), 0.450, 0.406873125000000) \
STEP( 91, UINT64_C(0x00000000006a842f), 0.455, 0.416079517831250) \
STEP( 92, UINT64_C(0x00000000006ce1bb), 0.460, 0.425319385600000) \
STEP( 93, UINT64_C(0x00000000006f413a), 0.465, 0.434589059868750) \
STEP( 94, UINT64_C(0x000000000071a270), 0.470, 0.443884854200000) \
STEP( 95, UINT64_C(0x000000000074051d), 0.475, 0.453203066406250) \
STEP( 96, UINT64_C(0x0000000000766905), 0.480, 0.462539980800000) \
STEP( 97, UINT64_C(0x000000000078cde7), 0.485, 0.471891870443750) \
STEP( 98, UINT64_C(0x00000000007b3387), 0.490, 0.481254999400000) \
STEP( 99, UINT64_C(0x00000000007d99a4), 0.495, 0.490625624981250) \
STEP( 100, UINT64_C(0x0000000000800000), 0.500, 0.500000000000000) \
STEP( 101, UINT64_C(0x000000000082665b), 0.505, 0.509374375018750) \
STEP( 102, UINT64_C(0x000000000084cc78), 0.510, 0.518745000600000) \
STEP( 103, UINT64_C(0x0000000000873218), 0.515, 0.528108129556250) \
STEP( 104, UINT64_C(0x00000000008996fa), 0.520, 0.537460019200000) \
STEP( 105, UINT64_C(0x00000000008bfae2), 0.525, 0.546796933593750) \
STEP( 106, UINT64_C(0x00000000008e5d8f), 0.530, 0.556115145800000) \
STEP( 107, UINT64_C(0x000000000090bec5), 0.535, 0.565410940131250) \
STEP( 108, UINT64_C(0x0000000000931e44), 0.540, 0.574680614400000) \
STEP( 109, UINT64_C(0x0000000000957bd0), 0.545, 0.583920482168750) \
STEP( 110, UINT64_C(0x000000000097d729), 0.550, 0.593126875000000) \
STEP( 111, UINT64_C(0x00000000009a3014), 0.555, 0.602296144706250) \
STEP( 112, UINT64_C(0x00000000009c8653), 0.560, 0.611424665600000) \
STEP( 113, UINT64_C(0x00000000009ed9aa), 0.565, 0.620508836743750) \
STEP( 114, UINT64_C(0x0000000000a129dd), 0.570, 0.629545084200000) \
STEP( 115, UINT64_C(0x0000000000a376b1), 0.575, 0.638529863281250) \
STEP( 116, UINT64_C(0x0000000000a5bfea), 0.580, 0.647459660800000) \
STEP( 117, UINT64_C(0x0000000000a8054e), 0.585, 0.656330997318750) \
STEP( 118, UINT64_C(0x0000000000aa46a4), 0.590, 0.665140429400000) \
STEP( 119, UINT64_C(0x0000000000ac83b2), 0.595, 0.673884551856250) \
STEP( 120, UINT64_C(0x0000000000aebc40), 0.600, 0.682560000000000) \
STEP( 121, UINT64_C(0x0000000000b0f016), 0.605, 0.691163451893750) \
STEP( 122, UINT64_C(0x0000000000b31efd), 0.610, 0.699691630600000) \
STEP( 123, UINT64_C(0x0000000000b548bf), 0.615, 0.708141306431250) \
STEP( 124, UINT64_C(0x0000000000b76d27), 0.620, 0.716509299200000) \
STEP( 125, UINT64_C(0x0000000000b98c00), 0.625, 0.724792480468750) \
STEP( 126, UINT64_C(0x0000000000bba516), 0.630, 0.732987775800000) \
STEP( 127, UINT64_C(0x0000000000bdb837), 0.635, 0.741092167006250) \
STEP( 128, UINT64_C(0x0000000000bfc531), 0.640, 0.749102694400000) \
STEP( 129, UINT64_C(0x0000000000c1cbd4), 0.645, 0.757016459043750) \
STEP( 130, UINT64_C(0x0000000000c3cbf0), 0.650, 0.764830625000000) \
STEP( 131, UINT64_C(0x0000000000c5c557), 0.655, 0.772542421581250) \
STEP( 132, UINT64_C(0x0000000000c7b7da), 0.660, 0.780149145600000) \
STEP( 133, UINT64_C(0x0000000000c9a34f), 0.665, 0.787648163618750) \
STEP( 134, UINT64_C(0x0000000000cb878a), 0.670, 0.795036914200000) \
STEP( 135, UINT64_C(0x0000000000cd6460), 0.675, 0.802312910156250) \
STEP( 136, UINT64_C(0x0000000000cf39ab), 0.680, 0.809473740800000) \
STEP( 137, UINT64_C(0x0000000000d10743), 0.685, 0.816517074193750) \
STEP( 138, UINT64_C(0x0000000000d2cd01), 0.690, 0.823440659400000) \
STEP( 139, UINT64_C(0x0000000000d48ac2), 0.695, 0.830242328731250) \
STEP( 140, UINT64_C(0x0000000000d64063), 0.700, 0.836920000000000) \
STEP( 141, UINT64_C(0x0000000000d7edc2), 0.705, 0.843471678768750) \
STEP( 142, UINT64_C(0x0000000000d992bf), 0.710, 0.849895460600000) \
STEP( 143, UINT64_C(0x0000000000db2f3c), 0.715, 0.856189533306250) \
STEP( 144, UINT64_C(0x0000000000dcc31c), 0.720, 0.862352179200000) \
STEP( 145, UINT64_C(0x0000000000de4e44), 0.725, 0.868381777343750) \
STEP( 146, UINT64_C(0x0000000000dfd09a), 0.730, 0.874276805800000) \
STEP( 147, UINT64_C(0x0000000000e14a07), 0.735, 0.880035843881250) \
STEP( 148, UINT64_C(0x0000000000e2ba74), 0.740, 0.885657574400000) \
STEP( 149, UINT64_C(0x0000000000e421cd), 0.745, 0.891140785918750) \
STEP( 150, UINT64_C(0x0000000000e58000), 0.750, 0.896484375000000) \
STEP( 151, UINT64_C(0x0000000000e6d4fb), 0.755, 0.901687348456250) \
STEP( 152, UINT64_C(0x0000000000e820b0), 0.760, 0.906748825600000) \
STEP( 153, UINT64_C(0x0000000000e96313), 0.765, 0.911668040493750) \
STEP( 154, UINT64_C(0x0000000000ea9c18), 0.770, 0.916444344200000) \
STEP( 155, UINT64_C(0x0000000000ebcbb7), 0.775, 0.921077207031250) \
STEP( 156, UINT64_C(0x0000000000ecf1e8), 0.780, 0.925566220800000) \
STEP( 157, UINT64_C(0x0000000000ee0ea7), 0.785, 0.929911101068750) \
STEP( 158, UINT64_C(0x0000000000ef21f1), 0.790, 0.934111689400000) \
STEP( 159, UINT64_C(0x0000000000f02bc6), 0.795, 0.938167955606250) \
STEP( 160, UINT64_C(0x0000000000f12c27), 0.800, 0.942080000000000) \
STEP( 161, UINT64_C(0x0000000000f22319), 0.805, 0.945848055643750) \
STEP( 162, UINT64_C(0x0000000000f310a1), 0.810, 0.949472490600000) \
STEP( 163, UINT64_C(0x0000000000f3f4c7), 0.815, 0.952953810181250) \
STEP( 164, UINT64_C(0x0000000000f4cf98), 0.820, 0.956292659200000) \
STEP( 165, UINT64_C(0x0000000000f5a120), 0.825, 0.959489824218750) \
STEP( 166, UINT64_C(0x0000000000f6696e), 0.830, 0.962546235800000) \
STEP( 167, UINT64_C(0x0000000000f72894), 0.835, 0.965462970756250) \
STEP( 168, UINT64_C(0x0000000000f7dea8), 0.840, 0.968241254400000) \
STEP( 169, UINT64_C(0x0000000000f88bc0), 0.845, 0.970882462793750) \
STEP( 170, UINT64_C(0x0000000000f92ff6), 0.850, 0.973388125000000) \
STEP( 171, UINT64_C(0x0000000000f9cb67), 0.855, 0.975759925331250) \
STEP( 172, UINT64_C(0x0000000000fa5e30), 0.860, 0.977999705600000) \
STEP( 173, UINT64_C(0x0000000000fae874), 0.865, 0.980109467368750) \
STEP( 174, UINT64_C(0x0000000000fb6a57), 0.870, 0.982091374200000) \
STEP( 175, UINT64_C(0x0000000000fbe400), 0.875, 0.983947753906250) \
STEP( 176, UINT64_C(0x0000000000fc5598), 0.880, 0.985681100800000) \
STEP( 177, UINT64_C(0x0000000000fcbf4e), 0.885, 0.987294077943750) \
STEP( 178, UINT64_C(0x0000000000fd214f), 0.890, 0.988789519400000) \
STEP( 179, UINT64_C(0x0000000000fd7bcf), 0.895, 0.990170432481250) \
STEP( 180, UINT64_C(0x0000000000fdcf03), 0.900, 0.991440000000000) \
STEP( 181, UINT64_C(0x0000000000fe1b23), 0.905, 0.992601582518750) \
STEP( 182, UINT64_C(0x0000000000fe606a), 0.910, 0.993658720600000) \
STEP( 183, UINT64_C(0x0000000000fe9f18), 0.915, 0.994615137056250) \
STEP( 184, UINT64_C(0x0000000000fed76e), 0.920, 0.995474739200000) \
STEP( 185, UINT64_C(0x0000000000ff09b0), 0.925, 0.996241621093750) \
STEP( 186, UINT64_C(0x0000000000ff3627), 0.930, 0.996920065800000) \
STEP( 187, UINT64_C(0x0000000000ff5d1d), 0.935, 0.997514547631250) \
STEP( 188, UINT64_C(0x0000000000ff7ee0), 0.940, 0.998029734400000) \
STEP( 189, UINT64_C(0x0000000000ff9bc3), 0.945, 0.998470489668750) \
STEP( 190, UINT64_C(0x0000000000ffb419), 0.950, 0.998841875000000) \
STEP( 191, UINT64_C(0x0000000000ffc83d), 0.955, 0.999149152206250) \
STEP( 192, UINT64_C(0x0000000000ffd888), 0.960, 0.999397785600000) \
STEP( 193, UINT64_C(0x0000000000ffe55b), 0.965, 0.999593444243750) \
STEP( 194, UINT64_C(0x0000000000ffef17), 0.970, 0.999742004200000) \
STEP( 195, UINT64_C(0x0000000000fff623), 0.975, 0.999849550781250) \
STEP( 196, UINT64_C(0x0000000000fffae9), 0.980, 0.999922380800000) \
STEP( 197, UINT64_C(0x0000000000fffdd6), 0.985, 0.999967004818750) \
STEP( 198, UINT64_C(0x0000000000ffff5a), 0.990, 0.999990149400000) \
STEP( 199, UINT64_C(0x0000000000ffffeb), 0.995, 0.999998759356250) \
STEP( 200, UINT64_C(0x0000000001000000), 1.000, 1.000000000000000) \
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
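Sanity check on the table above: it appears to encode the "smoother" variant with nsteps = 200 and a 24-bit binary fixed point (the final entry is 0x0000000001000000 = 2^24). For example, row 160 has x = 0.800, and smootherstep(0.8) = 6*(0.8)^5 - 15*(0.8)^4 + 10*(0.8)^3 = 1.96608 - 6.144 + 5.12 = 0.94208, which matches the y column; the h column is the truncated fixed-point image, floor(0.94208 * 2^24) = 15805479 = 0x0000000000f12c27.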

View File


@ -1,115 +0,0 @@
#!/bin/sh
#
# Generate a discrete lookup table for a sigmoid function in the smoothstep
# family (https://en.wikipedia.org/wiki/Smoothstep), where the lookup table
# entries correspond to x in [1/nsteps, 2/nsteps, ..., nsteps/nsteps]. Encode
# the entries using a binary fixed point representation.
#
# Usage: smoothstep.sh <variant> <nsteps> <bfp> <xprec> <yprec>
#
# <variant> is in {smooth, smoother, smoothest}.
# <nsteps> must be greater than zero.
# <bfp> must be in [0..62]; reasonable values are roughly [10..30].
# <xprec> is x decimal precision.
# <yprec> is y decimal precision.
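#
# Example (assumed: inferred from the SMOOTHSTEP_* constants and column
# precisions in the generated header above, not stated anywhere in this
# script): the 200-entry, 24-bit "smoother" table would be regenerated by
#
#   sh smoothstep.sh smoother 200 24 3 15
#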
#set -x
cmd="sh smoothstep.sh $*"
variant=$1
nsteps=$2
bfp=$3
xprec=$4
yprec=$5
case "${variant}" in
smooth)
;;
smoother)
;;
smoothest)
;;
*)
echo "Unsupported variant"
exit 1
;;
esac
smooth() {
step=$1
y=`echo ${yprec} k ${step} ${nsteps} / sx _2 lx 3 ^ '*' 3 lx 2 ^ '*' + p | dc | tr -d '\\\\\n' | sed -e 's#^\.#0.#g'`
h=`echo ${yprec} k 2 ${bfp} ^ ${y} '*' p | dc | tr -d '\\\\\n' | sed -e 's#^\.#0.#g' | tr '.' ' ' | awk '{print $1}' `
}
smoother() {
step=$1
y=`echo ${yprec} k ${step} ${nsteps} / sx 6 lx 5 ^ '*' _15 lx 4 ^ '*' + 10 lx 3 ^ '*' + p | dc | tr -d '\\\\\n' | sed -e 's#^\.#0.#g'`
h=`echo ${yprec} k 2 ${bfp} ^ ${y} '*' p | dc | tr -d '\\\\\n' | sed -e 's#^\.#0.#g' | tr '.' ' ' | awk '{print $1}' `
}
smoothest() {
step=$1
y=`echo ${yprec} k ${step} ${nsteps} / sx _20 lx 7 ^ '*' 70 lx 6 ^ '*' + _84 lx 5 ^ '*' + 35 lx 4 ^ '*' + p | dc | tr -d '\\\\\n' | sed -e 's#^\.#0.#g'`
h=`echo ${yprec} k 2 ${bfp} ^ ${y} '*' p | dc | tr -d '\\\\\n' | sed -e 's#^\.#0.#g' | tr '.' ' ' | awk '{print $1}' `
}
cat <<EOF
/*
* This file was generated by the following command:
* $cmd
*/
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
/*
* This header defines a precomputed table based on the smoothstep family of
* sigmoidal curves (https://en.wikipedia.org/wiki/Smoothstep) that grow from 0
* to 1 in 0 <= x <= 1. The table is stored as integer fixed point values so
* that floating point math can be avoided.
*
* 3 2
* smoothstep(x) = -2x + 3x
*
* 5 4 3
* smootherstep(x) = 6x - 15x + 10x
*
* 7 6 5 4
* smootheststep(x) = -20x + 70x - 84x + 35x
*/
#define SMOOTHSTEP_VARIANT "${variant}"
#define SMOOTHSTEP_NSTEPS ${nsteps}
#define SMOOTHSTEP_BFP ${bfp}
#define SMOOTHSTEP \\
/* STEP(step, h, x, y) */ \\
EOF
s=1
while [ $s -le $nsteps ] ; do
$variant ${s}
x=`echo ${xprec} k ${s} ${nsteps} / p | dc | tr -d '\\\\\n' | sed -e 's#^\.#0.#g'`
printf ' STEP(%4d, UINT64_C(0x%016x), %s, %s) \\\n' ${s} ${h} ${x} ${y}
s=$((s+1))
done
echo
cat <<EOF
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
EOF

View File


@ -1,51 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct spin_s spin_t;
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct spin_s {
unsigned iteration;
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
void spin_init(spin_t *spin);
void spin_adaptive(spin_t *spin);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_SPIN_C_))
JEMALLOC_INLINE void
spin_init(spin_t *spin)
{
spin->iteration = 0;
}
JEMALLOC_INLINE void
spin_adaptive(spin_t *spin)
{
volatile uint64_t i;
for (i = 0; i < (KQU(1) << spin->iteration); i++)
CPU_SPINWAIT;
if (spin->iteration < 63)
spin->iteration++;
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
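spin_adaptive() implements bounded exponential backoff: the i-th call busy-waits for 2^i CPU_SPINWAIT pauses, and the exponent is capped at 63 so the shift cannot overflow. A minimal sketch of a hypothetical caller (the flag and its predicate are illustrative, not part of jemalloc):

    /* Hypothetical polling loop built on the spin API above. */
    spin_t spinner;
    spin_init(&spinner);                   /* iteration = 0 */
    while (!flag_is_set(&resource_ready))  /* assumed predicate */
        spin_adaptive(&spinner);           /* waits 1, 2, 4, 8, ... pauses */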

View File


@ -102,14 +102,6 @@ struct arena_stats_s {
 	/* Number of bytes currently mapped. */
 	size_t		mapped;
-	/*
-	 * Number of bytes currently retained as a side effect of munmap() being
-	 * disabled/bypassed.  Retained bytes are technically mapped (though
-	 * always decommitted or purged), but they are excluded from the mapped
-	 * statistic (above).
-	 */
-	size_t		retained;
 	/*
 	 * Total number of purge sweeps, total number of madvise calls made,
 	 * and total pages purged in order to keep dirty unused memory under
@ -176,9 +168,6 @@ JEMALLOC_INLINE void
 stats_cactive_add(size_t size)
 {
-	assert(size > 0);
-	assert((size & chunksize_mask) == 0);
 	atomic_add_z(&stats_cactive, size);
 }
@ -186,9 +175,6 @@ JEMALLOC_INLINE void
 stats_cactive_sub(size_t size)
 {
-	assert(size > 0);
-	assert((size & chunksize_mask) == 0);
 	atomic_sub_z(&stats_cactive, size);
 }
 #endif

View File


@ -70,20 +70,13 @@ struct tcache_bin_s {
 	int		low_water;	/* Min # cached since last GC. */
 	unsigned	lg_fill_div;	/* Fill (ncached_max >> lg_fill_div). */
 	unsigned	ncached;	/* # of cached objects. */
-	/*
-	 * To make use of adjacent cacheline prefetch, the items in the avail
-	 * stack goes to higher address for newer allocations.  avail points
-	 * just above the available space, which means that
-	 * avail[-ncached, ... -1] are available items and the lowest item will
-	 * be allocated first.
-	 */
 	void		**avail;	/* Stack of available objects. */
 };
 struct tcache_s {
 	ql_elm(tcache_t) link;		/* Used for aggregating stats. */
 	uint64_t	prof_accumbytes;/* Cleared after arena_prof_accum(). */
-	ticker_t	gc_ticker;	/* Drives incremental GC. */
+	unsigned	ev_cnt;		/* Event count since incremental GC. */
 	szind_t		next_gc_bin;	/* Next bin to GC. */
 	tcache_bin_t	tbins[1];	/* Dynamically sized. */
 	/*
@ -115,7 +108,7 @@ extern tcache_bin_info_t *tcache_bin_info;
  * Number of tcache bins.  There are NBINS small-object bins, plus 0 or more
  * large-object bins.
  */
-extern unsigned	nhbins;
+extern size_t	nhbins;
 /* Maximum cached size class. */
 extern size_t	tcache_maxclass;
@ -130,25 +123,27 @@ extern size_t tcache_maxclass;
  */
 extern tcaches_t	*tcaches;
-size_t	tcache_salloc(tsdn_t *tsdn, const void *ptr);
+size_t	tcache_salloc(const void *ptr);
 void	tcache_event_hard(tsd_t *tsd, tcache_t *tcache);
-void	*tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache,
-    tcache_bin_t *tbin, szind_t binind, bool *tcache_success);
+void	*tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
+    tcache_bin_t *tbin, szind_t binind);
 void	tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
    szind_t binind, unsigned rem);
 void	tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
    unsigned rem, tcache_t *tcache);
-void	tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache,
-    arena_t *oldarena, arena_t *newarena);
+void	tcache_arena_associate(tcache_t *tcache, arena_t *arena);
+void	tcache_arena_reassociate(tcache_t *tcache, arena_t *oldarena,
+    arena_t *newarena);
+void	tcache_arena_dissociate(tcache_t *tcache, arena_t *arena);
 tcache_t *tcache_get_hard(tsd_t *tsd);
-tcache_t *tcache_create(tsdn_t *tsdn, arena_t *arena);
+tcache_t *tcache_create(tsd_t *tsd, arena_t *arena);
 void	tcache_cleanup(tsd_t *tsd);
 void	tcache_enabled_cleanup(tsd_t *tsd);
-void	tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena);
+void	tcache_stats_merge(tcache_t *tcache, arena_t *arena);
 bool	tcaches_create(tsd_t *tsd, unsigned *r_ind);
 void	tcaches_flush(tsd_t *tsd, unsigned ind);
 void	tcaches_destroy(tsd_t *tsd, unsigned ind);
-bool	tcache_boot(tsdn_t *tsdn);
+bool	tcache_boot(void);
 #endif /* JEMALLOC_H_EXTERNS */
 /******************************************************************************/
@ -160,15 +155,15 @@ void tcache_flush(void);
 bool	tcache_enabled_get(void);
 tcache_t *tcache_get(tsd_t *tsd, bool create);
 void	tcache_enabled_set(bool enabled);
-void	*tcache_alloc_easy(tcache_bin_t *tbin, bool *tcache_success);
+void	*tcache_alloc_easy(tcache_bin_t *tbin);
 void	*tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
-    size_t size, szind_t ind, bool zero, bool slow_path);
+    size_t size, bool zero);
 void	*tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
-    size_t size, szind_t ind, bool zero, bool slow_path);
+    size_t size, bool zero);
 void	tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr,
-    szind_t binind, bool slow_path);
+    szind_t binind);
 void	tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr,
-    size_t size, bool slow_path);
+    size_t size);
 tcache_t *tcaches_get(tsd_t *tsd, unsigned ind);
 #endif
@ -245,74 +240,51 @@ tcache_event(tsd_t *tsd, tcache_t *tcache)
 	if (TCACHE_GC_INCR == 0)
 		return;
-	if (unlikely(ticker_tick(&tcache->gc_ticker)))
+	tcache->ev_cnt++;
+	assert(tcache->ev_cnt <= TCACHE_GC_INCR);
+	if (unlikely(tcache->ev_cnt == TCACHE_GC_INCR))
 		tcache_event_hard(tsd, tcache);
 }
 JEMALLOC_ALWAYS_INLINE void *
-tcache_alloc_easy(tcache_bin_t *tbin, bool *tcache_success)
+tcache_alloc_easy(tcache_bin_t *tbin)
 {
 	void *ret;
 	if (unlikely(tbin->ncached == 0)) {
 		tbin->low_water = -1;
-		*tcache_success = false;
 		return (NULL);
 	}
-	/*
-	 * tcache_success (instead of ret) should be checked upon the return of
-	 * this function.  We avoid checking (ret == NULL) because there is
-	 * never a null stored on the avail stack (which is unknown to the
-	 * compiler), and eagerly checking ret would cause pipeline stall
-	 * (waiting for the cacheline).
-	 */
-	*tcache_success = true;
-	ret = *(tbin->avail - tbin->ncached);
 	tbin->ncached--;
 	if (unlikely((int)tbin->ncached < tbin->low_water))
 		tbin->low_water = tbin->ncached;
+	ret = tbin->avail[tbin->ncached];
 	return (ret);
 }
 JEMALLOC_ALWAYS_INLINE void *
 tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
-    szind_t binind, bool zero, bool slow_path)
+    bool zero)
 {
 	void *ret;
+	szind_t binind;
+	size_t usize;
 	tcache_bin_t *tbin;
-	bool tcache_success;
-	size_t usize JEMALLOC_CC_SILENCE_INIT(0);
+	binind = size2index(size);
 	assert(binind < NBINS);
 	tbin = &tcache->tbins[binind];
-	ret = tcache_alloc_easy(tbin, &tcache_success);
-	assert(tcache_success == (ret != NULL));
-	if (unlikely(!tcache_success)) {
-		bool tcache_hard_success;
-		arena = arena_choose(tsd, arena);
-		if (unlikely(arena == NULL))
-			return (NULL);
-		ret = tcache_alloc_small_hard(tsd_tsdn(tsd), arena, tcache,
-		    tbin, binind, &tcache_hard_success);
-		if (tcache_hard_success == false)
+	usize = index2size(binind);
+	ret = tcache_alloc_easy(tbin);
+	if (unlikely(ret == NULL)) {
+		ret = tcache_alloc_small_hard(tsd, arena, tcache, tbin, binind);
+		if (ret == NULL)
 			return (NULL);
 	}
-	assert(ret);
-	/*
-	 * Only compute usize if required.  The checks in the following if
-	 * statement are all static.
-	 */
-	if (config_prof || (slow_path && config_fill) || unlikely(zero)) {
-		usize = index2size(binind);
-		assert(tcache_salloc(tsd_tsdn(tsd), ret) == usize);
-	}
+	assert(tcache_salloc(ret) == usize);
 	if (likely(!zero)) {
-		if (slow_path && config_fill) {
+		if (config_fill) {
 			if (unlikely(opt_junk_alloc)) {
 				arena_alloc_junk_small(ret,
 				    &arena_bin_info[binind], false);
@ -320,7 +292,7 @@ tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
 			memset(ret, 0, usize);
 		}
 	} else {
-		if (slow_path && config_fill && unlikely(opt_junk_alloc)) {
+		if (config_fill && unlikely(opt_junk_alloc)) {
 			arena_alloc_junk_small(ret, &arena_bin_info[binind],
 			    true);
 		}
@ -337,38 +309,28 @@ tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
 JEMALLOC_ALWAYS_INLINE void *
 tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
-    szind_t binind, bool zero, bool slow_path)
+    bool zero)
 {
 	void *ret;
+	szind_t binind;
+	size_t usize;
 	tcache_bin_t *tbin;
-	bool tcache_success;
+	binind = size2index(size);
+	usize = index2size(binind);
+	assert(usize <= tcache_maxclass);
 	assert(binind < nhbins);
 	tbin = &tcache->tbins[binind];
-	ret = tcache_alloc_easy(tbin, &tcache_success);
-	assert(tcache_success == (ret != NULL));
-	if (unlikely(!tcache_success)) {
+	ret = tcache_alloc_easy(tbin);
+	if (unlikely(ret == NULL)) {
 		/*
 		 * Only allocate one large object at a time, because it's quite
 		 * expensive to create one and not use it.
 		 */
-		arena = arena_choose(tsd, arena);
-		if (unlikely(arena == NULL))
-			return (NULL);
-		ret = arena_malloc_large(tsd_tsdn(tsd), arena, binind, zero);
+		ret = arena_malloc_large(arena, usize, zero);
 		if (ret == NULL)
 			return (NULL);
 	} else {
-		size_t usize JEMALLOC_CC_SILENCE_INIT(0);
-		/* Only compute usize on demand */
-		if (config_prof || (slow_path && config_fill) ||
-		    unlikely(zero)) {
-			usize = index2size(binind);
-			assert(usize <= tcache_maxclass);
-		}
 		if (config_prof && usize == LARGE_MINCLASS) {
 			arena_chunk_t *chunk =
 			    (arena_chunk_t *)CHUNK_ADDR2BASE(ret);
@ -378,11 +340,10 @@ tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
 			    BININD_INVALID);
 		}
 		if (likely(!zero)) {
-			if (slow_path && config_fill) {
-				if (unlikely(opt_junk_alloc)) {
-					memset(ret, JEMALLOC_ALLOC_JUNK,
-					    usize);
-				} else if (unlikely(opt_zero))
+			if (config_fill) {
+				if (unlikely(opt_junk_alloc))
+					memset(ret, 0xa5, usize);
+				else if (unlikely(opt_zero))
 					memset(ret, 0, usize);
 			}
 		} else
@ -399,15 +360,14 @@ tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
 }
 JEMALLOC_ALWAYS_INLINE void
-tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind,
-    bool slow_path)
+tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind)
 {
 	tcache_bin_t *tbin;
 	tcache_bin_info_t *tbin_info;
-	assert(tcache_salloc(tsd_tsdn(tsd), ptr) <= SMALL_MAXCLASS);
+	assert(tcache_salloc(ptr) <= SMALL_MAXCLASS);
-	if (slow_path && config_fill && unlikely(opt_junk_free))
+	if (config_fill && unlikely(opt_junk_free))
 		arena_dalloc_junk_small(ptr, &arena_bin_info[binind]);
 	tbin = &tcache->tbins[binind];
@ -417,27 +377,26 @@ tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind,
 		    (tbin_info->ncached_max >> 1));
 	}
 	assert(tbin->ncached < tbin_info->ncached_max);
+	tbin->avail[tbin->ncached] = ptr;
 	tbin->ncached++;
-	*(tbin->avail - tbin->ncached) = ptr;
 	tcache_event(tsd, tcache);
 }
 JEMALLOC_ALWAYS_INLINE void
-tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size,
-    bool slow_path)
+tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size)
 {
 	szind_t binind;
 	tcache_bin_t *tbin;
 	tcache_bin_info_t *tbin_info;
 	assert((size & PAGE_MASK) == 0);
-	assert(tcache_salloc(tsd_tsdn(tsd), ptr) > SMALL_MAXCLASS);
-	assert(tcache_salloc(tsd_tsdn(tsd), ptr) <= tcache_maxclass);
+	assert(tcache_salloc(ptr) > SMALL_MAXCLASS);
+	assert(tcache_salloc(ptr) <= tcache_maxclass);
 	binind = size2index(size);
-	if (slow_path && config_fill && unlikely(opt_junk_free))
+	if (config_fill && unlikely(opt_junk_free))
 		arena_dalloc_junk_large(ptr, size);
 	tbin = &tcache->tbins[binind];
@ -447,8 +406,8 @@ tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size,
 		    (tbin_info->ncached_max >> 1), tcache);
 	}
 	assert(tbin->ncached < tbin_info->ncached_max);
+	tbin->avail[tbin->ncached] = ptr;
 	tbin->ncached++;
-	*(tbin->avail - tbin->ncached) = ptr;
 	tcache_event(tsd, tcache);
 }
@ -457,10 +416,8 @@ JEMALLOC_ALWAYS_INLINE tcache_t *
 tcaches_get(tsd_t *tsd, unsigned ind)
 {
 	tcaches_t *elm = &tcaches[ind];
-	if (unlikely(elm->tcache == NULL)) {
-		elm->tcache = tcache_create(tsd_tsdn(tsd), arena_choose(tsd,
-		    NULL));
-	}
+	if (unlikely(elm->tcache == NULL))
+		elm->tcache = tcache_create(tsd, arena_choose(tsd, NULL));
 	return (elm->tcache);
 }
 #endif
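The avail-stack hunks above are easier to follow side by side: the removed lines keep avail pointing just past the storage and address live items with negative offsets, while the added lines index from the array base. A standalone sketch of the two conventions (illustrative only; NSLOTS, n, and p are hypothetical):

    void *slots[NSLOTS], *p;
    unsigned n = 0;                 /* plays the role of ncached */
    /* Added-line convention: avail is the array base. */
    void **avail = slots;
    avail[n++] = p;                 /* push */
    p = avail[--n];                 /* pop */
    /* Removed-line convention: avail points just past the storage, and
     * live items occupy avail[-n .. -1]. */
    void **avail2 = slots + NSLOTS;
    *(avail2 - ++n) = p;            /* push */
    p = *(avail2 - n--);            /* pop */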

View File


@ -1,75 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct ticker_s ticker_t;
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct ticker_s {
int32_t tick;
int32_t nticks;
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
void ticker_init(ticker_t *ticker, int32_t nticks);
void ticker_copy(ticker_t *ticker, const ticker_t *other);
int32_t ticker_read(const ticker_t *ticker);
bool ticker_ticks(ticker_t *ticker, int32_t nticks);
bool ticker_tick(ticker_t *ticker);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TICKER_C_))
JEMALLOC_INLINE void
ticker_init(ticker_t *ticker, int32_t nticks)
{
ticker->tick = nticks;
ticker->nticks = nticks;
}
JEMALLOC_INLINE void
ticker_copy(ticker_t *ticker, const ticker_t *other)
{
*ticker = *other;
}
JEMALLOC_INLINE int32_t
ticker_read(const ticker_t *ticker)
{
return (ticker->tick);
}
JEMALLOC_INLINE bool
ticker_ticks(ticker_t *ticker, int32_t nticks)
{
if (unlikely(ticker->tick < nticks)) {
ticker->tick = ticker->nticks;
return (true);
}
ticker->tick -= nticks;
return (false);
}
JEMALLOC_INLINE bool
ticker_tick(ticker_t *ticker)
{
return (ticker_ticks(ticker, 1));
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
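ticker_t is a simple countdown: ticker_ticks() subtracts nticks from tick and reports false until the counter would go negative, at which point it reloads tick to the configured period and reports true. A hypothetical caller that performs maintenance about once per 100 events (the event/maintenance functions are illustrative):

    ticker_t t;
    ticker_init(&t, 100);
    for (;;) {
        handle_event();              /* assumed per-event work */
        if (ticker_tick(&t))
            periodic_maintenance();  /* fires roughly every 100 events */
    }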

View File


@ -13,9 +13,6 @@ typedef struct tsd_init_head_s tsd_init_head_t;
 #endif
 typedef struct tsd_s tsd_t;
-typedef struct tsdn_s tsdn_t;
-#define	TSDN_NULL	((tsdn_t *)0)
 typedef enum {
 	tsd_state_uninitialized,
@ -47,8 +44,7 @@ typedef enum {
 * The result is a set of generated functions, e.g.:
 *
 *   bool example_tsd_boot(void) {...}
-*   bool example_tsd_booted_get(void) {...}
-*   example_t *example_tsd_get(bool init) {...}
+*   example_t *example_tsd_get() {...}
 *   void example_tsd_set(example_t *val) {...}
 *
 * Note that all of the functions deal in terms of (a_type *) rather than
@ -102,10 +98,8 @@ a_attr void \
 a_name##tsd_boot1(void); \
 a_attr bool \
 a_name##tsd_boot(void); \
-a_attr bool \
-a_name##tsd_booted_get(void); \
 a_attr a_type * \
-a_name##tsd_get(bool init); \
+a_name##tsd_get(void); \
 a_attr void \
 a_name##tsd_set(a_type *val);
@ -207,21 +201,9 @@ a_name##tsd_boot(void) \
 	\
 	return (a_name##tsd_boot0()); \
 } \
-a_attr bool \
-a_name##tsd_booted_get(void) \
-{ \
-	\
-	return (a_name##tsd_booted); \
-} \
-a_attr bool \
-a_name##tsd_get_allocates(void) \
-{ \
-	\
-	return (false); \
-} \
 /* Get/set. */ \
 a_attr a_type * \
-a_name##tsd_get(bool init) \
+a_name##tsd_get(void) \
 { \
 	\
 	assert(a_name##tsd_booted); \
@ -264,21 +246,9 @@ a_name##tsd_boot(void) \
 	\
 	return (a_name##tsd_boot0()); \
 } \
-a_attr bool \
-a_name##tsd_booted_get(void) \
-{ \
-	\
-	return (a_name##tsd_booted); \
-} \
-a_attr bool \
-a_name##tsd_get_allocates(void) \
-{ \
-	\
-	return (false); \
-} \
 /* Get/set. */ \
 a_attr a_type * \
-a_name##tsd_get(bool init) \
+a_name##tsd_get(void) \
 { \
 	\
 	assert(a_name##tsd_booted); \
@ -337,14 +307,14 @@ a_name##tsd_wrapper_set(a_name##tsd_wrapper_t *wrapper) \
 	} \
 } \
 a_attr a_name##tsd_wrapper_t * \
-a_name##tsd_wrapper_get(bool init) \
+a_name##tsd_wrapper_get(void) \
 { \
 	DWORD error = GetLastError(); \
 	a_name##tsd_wrapper_t *wrapper = (a_name##tsd_wrapper_t *) \
 	    TlsGetValue(a_name##tsd_tsd); \
 	SetLastError(error); \
 	\
-	if (init && unlikely(wrapper == NULL)) { \
+	if (unlikely(wrapper == NULL)) { \
 		wrapper = (a_name##tsd_wrapper_t *) \
 		    malloc_tsd_malloc(sizeof(a_name##tsd_wrapper_t)); \
 		if (wrapper == NULL) { \
@ -398,28 +368,14 @@ a_name##tsd_boot(void) \
 	a_name##tsd_boot1(); \
 	return (false); \
 } \
-a_attr bool \
-a_name##tsd_booted_get(void) \
-{ \
-	\
-	return (a_name##tsd_booted); \
-} \
-a_attr bool \
-a_name##tsd_get_allocates(void) \
-{ \
-	\
-	return (true); \
-} \
 /* Get/set. */ \
 a_attr a_type * \
-a_name##tsd_get(bool init) \
+a_name##tsd_get(void) \
 { \
 	a_name##tsd_wrapper_t *wrapper; \
 	\
 	assert(a_name##tsd_booted); \
-	wrapper = a_name##tsd_wrapper_get(init); \
-	if (a_name##tsd_get_allocates() && !init && wrapper == NULL) \
-		return (NULL); \
+	wrapper = a_name##tsd_wrapper_get(); \
 	return (&wrapper->val); \
 } \
 a_attr void \
@ -428,7 +384,7 @@ a_name##tsd_set(a_type *val) \
 	a_name##tsd_wrapper_t *wrapper; \
 	\
 	assert(a_name##tsd_booted); \
-	wrapper = a_name##tsd_wrapper_get(true); \
+	wrapper = a_name##tsd_wrapper_get(); \
 	wrapper->val = *(val); \
 	if (a_cleanup != malloc_tsd_no_cleanup) \
 		wrapper->initialized = true; \
@ -472,12 +428,12 @@ a_name##tsd_wrapper_set(a_name##tsd_wrapper_t *wrapper) \
 	} \
 } \
 a_attr a_name##tsd_wrapper_t * \
-a_name##tsd_wrapper_get(bool init) \
+a_name##tsd_wrapper_get(void) \
 { \
 	a_name##tsd_wrapper_t *wrapper = (a_name##tsd_wrapper_t *) \
 	    pthread_getspecific(a_name##tsd_tsd); \
 	\
-	if (init && unlikely(wrapper == NULL)) { \
+	if (unlikely(wrapper == NULL)) { \
 		tsd_init_block_t block; \
 		wrapper = tsd_init_check_recursion( \
 		    &a_name##tsd_init_head, &block); \
@ -534,28 +490,14 @@ a_name##tsd_boot(void) \
 	a_name##tsd_boot1(); \
 	return (false); \
 } \
-a_attr bool \
-a_name##tsd_booted_get(void) \
-{ \
-	\
-	return (a_name##tsd_booted); \
-} \
-a_attr bool \
-a_name##tsd_get_allocates(void) \
-{ \
-	\
-	return (true); \
-} \
 /* Get/set. */ \
 a_attr a_type * \
-a_name##tsd_get(bool init) \
+a_name##tsd_get(void) \
 { \
 	a_name##tsd_wrapper_t *wrapper; \
 	\
 	assert(a_name##tsd_booted); \
-	wrapper = a_name##tsd_wrapper_get(init); \
-	if (a_name##tsd_get_allocates() && !init && wrapper == NULL) \
-		return (NULL); \
+	wrapper = a_name##tsd_wrapper_get(); \
 	return (&wrapper->val); \
 } \
 a_attr void \
@ -564,7 +506,7 @@ a_name##tsd_set(a_type *val) \
 	a_name##tsd_wrapper_t *wrapper; \
 	\
 	assert(a_name##tsd_booted); \
-	wrapper = a_name##tsd_wrapper_get(true); \
+	wrapper = a_name##tsd_wrapper_get(); \
 	wrapper->val = *(val); \
 	if (a_cleanup != malloc_tsd_no_cleanup) \
 		wrapper->initialized = true; \
@ -594,15 +536,12 @@ struct tsd_init_head_s {
 	O(thread_allocated,	uint64_t) \
 	O(thread_deallocated,	uint64_t) \
 	O(prof_tdata,		prof_tdata_t *) \
-	O(iarena,		arena_t *) \
 	O(arena,		arena_t *) \
-	O(arenas_tdata,		arena_tdata_t *) \
-	O(narenas_tdata,	unsigned) \
-	O(arenas_tdata_bypass,	bool) \
+	O(arenas_cache,		arena_t **) \
+	O(narenas_cache,	unsigned) \
+	O(arenas_cache_bypass,	bool) \
 	O(tcache_enabled,	tcache_enabled_t) \
 	O(quarantine,		quarantine_t *) \
-	O(witnesses,		witness_list_t) \
-	O(witness_fork,		bool) \
 #define TSD_INITIALIZER { \
 	tsd_state_uninitialized, \
@ -612,13 +551,10 @@ struct tsd_init_head_s {
 	NULL, \
 	NULL, \
 	NULL, \
-	NULL, \
 	0, \
 	false, \
 	tcache_enabled_default, \
-	NULL, \
-	ql_head_initializer(witnesses), \
-	false \
+	NULL \
 }
 struct tsd_s {
@ -629,15 +565,6 @@ MALLOC_TSD
 #undef O
 };
-/*
- * Wrapper around tsd_t that makes it possible to avoid implicit conversion
- * between tsd_t and tsdn_t, where tsdn_t is "nullable" and has to be
- * explicitly converted to tsd_t, which is non-nullable.
- */
-struct tsdn_s {
-	tsd_t	tsd;
-};
 static const tsd_t tsd_initializer = TSD_INITIALIZER;
 malloc_tsd_types(, tsd_t)
@ -650,7 +577,7 @@ void *malloc_tsd_malloc(size_t size);
 void	malloc_tsd_dalloc(void *wrapper);
 void	malloc_tsd_no_cleanup(void *arg);
 void	malloc_tsd_cleanup_register(bool (*f)(void));
-tsd_t	*malloc_tsd_boot0(void);
+bool	malloc_tsd_boot0(void);
 void	malloc_tsd_boot1(void);
 #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \
     !defined(_WIN32))
@ -667,9 +594,7 @@ void tsd_cleanup(void *arg);
 #ifndef JEMALLOC_ENABLE_INLINE
 malloc_tsd_protos(JEMALLOC_ATTR(unused), , tsd_t)
-tsd_t	*tsd_fetch_impl(bool init);
 tsd_t	*tsd_fetch(void);
-tsdn_t	*tsd_tsdn(tsd_t *tsd);
 bool	tsd_nominal(tsd_t *tsd);
 #define O(n, t) \
 t	*tsd_##n##p_get(tsd_t *tsd); \
@ -677,9 +602,6 @@ t tsd_##n##_get(tsd_t *tsd); \
 void	tsd_##n##_set(tsd_t *tsd, t n);
 MALLOC_TSD
 #undef O
-tsdn_t	*tsdn_fetch(void);
-bool	tsdn_null(const tsdn_t *tsdn);
-tsd_t	*tsdn_tsd(tsdn_t *tsdn);
 #endif
 #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TSD_C_))
@ -687,13 +609,9 @@ malloc_tsd_externs(, tsd_t)
 malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, , tsd_t, tsd_initializer, tsd_cleanup)
 JEMALLOC_ALWAYS_INLINE tsd_t *
-tsd_fetch_impl(bool init)
+tsd_fetch(void)
 {
-	tsd_t *tsd = tsd_get(init);
-	if (!init && tsd_get_allocates() && tsd == NULL)
-		return (NULL);
-	assert(tsd != NULL);
+	tsd_t *tsd = tsd_get();
 	if (unlikely(tsd->state != tsd_state_nominal)) {
 		if (tsd->state == tsd_state_uninitialized) {
@ -710,20 +628,6 @@ tsd_fetch_impl(bool init)
 	return (tsd);
 }
-JEMALLOC_ALWAYS_INLINE tsd_t *
-tsd_fetch(void)
-{
-	return (tsd_fetch_impl(true));
-}
-JEMALLOC_ALWAYS_INLINE tsdn_t *
-tsd_tsdn(tsd_t *tsd)
-{
-	return ((tsdn_t *)tsd);
-}
 JEMALLOC_INLINE bool
 tsd_nominal(tsd_t *tsd)
 {
@ -755,32 +659,6 @@ tsd_##n##_set(tsd_t *tsd, t n) \
 }
 MALLOC_TSD
 #undef O
-JEMALLOC_ALWAYS_INLINE tsdn_t *
-tsdn_fetch(void)
-{
-	if (!tsd_booted_get())
-		return (NULL);
-	return (tsd_tsdn(tsd_fetch_impl(false)));
-}
-JEMALLOC_ALWAYS_INLINE bool
-tsdn_null(const tsdn_t *tsdn)
-{
-	return (tsdn == NULL);
-}
-JEMALLOC_ALWAYS_INLINE tsd_t *
-tsdn_tsd(tsdn_t *tsdn)
-{
-	assert(!tsdn_null(tsdn));
-	return (&tsdn->tsd);
-}
 #endif
 #endif /* JEMALLOC_H_INLINES */
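As the example_tsd_* comment earlier in this file notes, malloc_tsd_funcs() expands to a per-name set of accessors. A hedged sketch of what using the no-argument form (the added lines) looks like for a hypothetical "example" TSD of type example_t:

    /* Hypothetical accessors generated by the malloc_tsd_* macros. */
    example_t *ex = example_tsd_get();  /* lazily creates the wrapper */
    ex->counter++;
    example_tsd_set(ex);                /* write back through the TSD */

The removed variant instead took a bool init, so callers that merely peek (init == false) could receive NULL rather than forcing allocation of the thread's wrapper.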

View File


@ -40,14 +40,6 @@
 */
 #define MALLOC_PRINTF_BUFSIZE 4096
-/* Junk fill patterns. */
-#ifndef JEMALLOC_ALLOC_JUNK
-#  define JEMALLOC_ALLOC_JUNK	((uint8_t)0xa5)
-#endif
-#ifndef JEMALLOC_FREE_JUNK
-#  define JEMALLOC_FREE_JUNK	((uint8_t)0x5a)
-#endif
 /*
 * Wrap a cpp argument that contains commas such that it isn't broken up into
 * multiple arguments.
@ -65,21 +57,73 @@
 #  define JEMALLOC_CC_SILENCE_INIT(v)
 #endif
+#define JEMALLOC_GNUC_PREREQ(major, minor) \
+    (!defined(__clang__) && \
+    (__GNUC__ > (major) || (__GNUC__ == (major) && __GNUC_MINOR__ >= (minor))))
+#ifndef __has_builtin
+#  define __has_builtin(builtin) (0)
+#endif
+#define JEMALLOC_CLANG_HAS_BUILTIN(builtin) \
+    (defined(__clang__) && __has_builtin(builtin))
 #ifdef __GNUC__
 #  define likely(x)   __builtin_expect(!!(x), 1)
 #  define unlikely(x) __builtin_expect(!!(x), 0)
+#  if JEMALLOC_GNUC_PREREQ(4, 6) || \
+      JEMALLOC_CLANG_HAS_BUILTIN(__builtin_unreachable)
+#    define unreachable() __builtin_unreachable()
+#  else
+#    define unreachable()
+#  endif
 #else
 #  define likely(x)   !!(x)
 #  define unlikely(x) !!(x)
+#  define unreachable()
 #endif
-#if !defined(JEMALLOC_INTERNAL_UNREACHABLE)
-#  error JEMALLOC_INTERNAL_UNREACHABLE should have been defined by configure
+/*
+ * Define a custom assert() in order to reduce the chances of deadlock during
+ * assertion failure.
+ */
+#ifndef assert
+#define assert(e) do { \
+	if (unlikely(config_debug && !(e))) { \
+		malloc_printf( \
+		    "<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
+		    __FILE__, __LINE__, #e); \
+		abort(); \
+	} \
+} while (0)
 #endif
-#define unreachable() JEMALLOC_INTERNAL_UNREACHABLE()
+#ifndef not_reached
+#define not_reached() do { \
+	if (config_debug) { \
+		malloc_printf( \
+		    "<jemalloc>: %s:%d: Unreachable code reached\n", \
+		    __FILE__, __LINE__); \
+		abort(); \
+	} \
+	unreachable(); \
+} while (0)
+#endif
-#include "jemalloc/internal/assert.h"
+#ifndef not_implemented
+#define not_implemented() do { \
+	if (config_debug) { \
+		malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
+		    __FILE__, __LINE__); \
+		abort(); \
+	} \
+} while (0)
+#endif
+#ifndef assert_not_implemented
+#define assert_not_implemented(e) do { \
+	if (unlikely(config_debug && !(e))) \
+		not_implemented(); \
+} while (0)
+#endif
 /* Use to assert a particular configuration, e.g., cassert(config_debug). */
 #define cassert(c) do { \
@ -104,9 +148,9 @@ void malloc_write(const char *s);
 * malloc_vsnprintf() supports a subset of snprintf(3) that avoids floating
 * point math.
 */
-size_t	malloc_vsnprintf(char *str, size_t size, const char *format,
+int	malloc_vsnprintf(char *str, size_t size, const char *format,
    va_list ap);
-size_t	malloc_snprintf(char *str, size_t size, const char *format, ...)
+int	malloc_snprintf(char *str, size_t size, const char *format, ...)
    JEMALLOC_FORMAT_PRINTF(3, 4);
 void	malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque,
    const char *format, va_list ap);
@ -119,16 +163,10 @@ void malloc_printf(const char *format, ...) JEMALLOC_FORMAT_PRINTF(1, 2);
 #ifdef JEMALLOC_H_INLINES
 #ifndef JEMALLOC_ENABLE_INLINE
-unsigned	ffs_llu(unsigned long long bitmap);
-unsigned	ffs_lu(unsigned long bitmap);
-unsigned	ffs_u(unsigned bitmap);
-unsigned	ffs_zu(size_t bitmap);
-unsigned	ffs_u64(uint64_t bitmap);
-unsigned	ffs_u32(uint32_t bitmap);
-uint64_t	pow2_ceil_u64(uint64_t x);
-uint32_t	pow2_ceil_u32(uint32_t x);
-size_t	pow2_ceil_zu(size_t x);
-unsigned	lg_floor(size_t x);
+int	jemalloc_ffsl(long bitmap);
+int	jemalloc_ffs(int bitmap);
+size_t	pow2_ceil(size_t x);
+size_t	lg_floor(size_t x);
 void	set_errno(int errnum);
 int	get_errno(void);
 #endif
@ -136,115 +174,44 @@ int get_errno(void);
 #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_UTIL_C_))
 /* Sanity check. */
-#if !defined(JEMALLOC_INTERNAL_FFSLL) || !defined(JEMALLOC_INTERNAL_FFSL) \
-    || !defined(JEMALLOC_INTERNAL_FFS)
-#  error JEMALLOC_INTERNAL_FFS{,L,LL} should have been defined by configure
+#if !defined(JEMALLOC_INTERNAL_FFSL) || !defined(JEMALLOC_INTERNAL_FFS)
+#  error Both JEMALLOC_INTERNAL_FFSL && JEMALLOC_INTERNAL_FFS should have been defined by configure
 #endif
-JEMALLOC_ALWAYS_INLINE unsigned
-ffs_llu(unsigned long long bitmap)
-{
-	return (JEMALLOC_INTERNAL_FFSLL(bitmap));
-}
-JEMALLOC_ALWAYS_INLINE unsigned
-ffs_lu(unsigned long bitmap)
+JEMALLOC_ALWAYS_INLINE int
+jemalloc_ffsl(long bitmap)
 {
 	return (JEMALLOC_INTERNAL_FFSL(bitmap));
 }
-JEMALLOC_ALWAYS_INLINE unsigned
-ffs_u(unsigned bitmap)
+JEMALLOC_ALWAYS_INLINE int
+jemalloc_ffs(int bitmap)
 {
 	return (JEMALLOC_INTERNAL_FFS(bitmap));
 }
-JEMALLOC_ALWAYS_INLINE unsigned
-ffs_zu(size_t bitmap)
-{
-#if LG_SIZEOF_PTR == LG_SIZEOF_INT
-	return (ffs_u(bitmap));
-#elif LG_SIZEOF_PTR == LG_SIZEOF_LONG
-	return (ffs_lu(bitmap));
-#elif LG_SIZEOF_PTR == LG_SIZEOF_LONG_LONG
-	return (ffs_llu(bitmap));
-#else
-#error No implementation for size_t ffs()
-#endif
-}
-JEMALLOC_ALWAYS_INLINE unsigned
-ffs_u64(uint64_t bitmap)
-{
-#if LG_SIZEOF_LONG == 3
-	return (ffs_lu(bitmap));
-#elif LG_SIZEOF_LONG_LONG == 3
-	return (ffs_llu(bitmap));
-#else
-#error No implementation for 64-bit ffs()
-#endif
-}
-JEMALLOC_ALWAYS_INLINE unsigned
-ffs_u32(uint32_t bitmap)
-{
-#if LG_SIZEOF_INT == 2
-	return (ffs_u(bitmap));
-#else
-#error No implementation for 32-bit ffs()
-#endif
-	return (ffs_u(bitmap));
-}
-JEMALLOC_INLINE uint64_t
-pow2_ceil_u64(uint64_t x)
-{
-	x--;
-	x |= x >> 1;
-	x |= x >> 2;
-	x |= x >> 4;
-	x |= x >> 8;
-	x |= x >> 16;
-	x |= x >> 32;
-	x++;
-	return (x);
-}
-JEMALLOC_INLINE uint32_t
-pow2_ceil_u32(uint32_t x)
-{
-	x--;
-	x |= x >> 1;
-	x |= x >> 2;
-	x |= x >> 4;
-	x |= x >> 8;
-	x |= x >> 16;
-	x++;
-	return (x);
-}
 /* Compute the smallest power of 2 that is >= x. */
 JEMALLOC_INLINE size_t
-pow2_ceil_zu(size_t x)
+pow2_ceil(size_t x)
 {
+	x--;
+	x |= x >> 1;
+	x |= x >> 2;
+	x |= x >> 4;
+	x |= x >> 8;
+	x |= x >> 16;
 #if (LG_SIZEOF_PTR == 3)
-	return (pow2_ceil_u64(x));
-#else
-	return (pow2_ceil_u32(x));
+	x |= x >> 32;
 #endif
+	x++;
+	return (x);
 }
 #if (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
-JEMALLOC_INLINE unsigned
+JEMALLOC_INLINE size_t
 lg_floor(size_t x)
 {
 	size_t ret;
@ -255,11 +222,10 @@ lg_floor(size_t x)
	    : "=r"(ret) // Outputs.
	    : "r"(x) // Inputs.
	    );
-	assert(ret < UINT_MAX);
-	return ((unsigned)ret);
+	return (ret);
 }
 #elif (defined(_MSC_VER))
-JEMALLOC_INLINE unsigned
+JEMALLOC_INLINE size_t
 lg_floor(size_t x)
 {
	unsigned long ret;
@ -271,13 +237,12 @@ lg_floor(size_t x)
 #elif (LG_SIZEOF_PTR == 2)
	_BitScanReverse(&ret, x);
 #else
-#  error "Unsupported type size for lg_floor()"
+#  error "Unsupported type sizes for lg_floor()"
 #endif
-	assert(ret < UINT_MAX);
-	return ((unsigned)ret);
+	return (ret);
 }
 #elif (defined(JEMALLOC_HAVE_BUILTIN_CLZ))
-JEMALLOC_INLINE unsigned
+JEMALLOC_INLINE size_t
 lg_floor(size_t x)
 {
@ -288,11 +253,11 @@ lg_floor(size_t x)
 #elif (LG_SIZEOF_PTR == LG_SIZEOF_LONG)
	return (((8 << LG_SIZEOF_PTR) - 1) - __builtin_clzl(x));
 #else
-#  error "Unsupported type size for lg_floor()"
+#  error "Unsupported type sizes for lg_floor()"
 #endif
 }
 #else
-JEMALLOC_INLINE unsigned
+JEMALLOC_INLINE size_t
 lg_floor(size_t x)
 {
@ -303,13 +268,20 @@ lg_floor(size_t x)
	x |= (x >> 4);
	x |= (x >> 8);
	x |= (x >> 16);
-#if (LG_SIZEOF_PTR == 3)
+#if (LG_SIZEOF_PTR == 3 && LG_SIZEOF_PTR == LG_SIZEOF_LONG)
	x |= (x >> 32);
-#endif
-	if (x == SIZE_T_MAX)
-		return ((8 << LG_SIZEOF_PTR) - 1);
+	if (x == KZU(0xffffffffffffffff))
+		return (63);
	x++;
-	return (ffs_zu(x) - 2);
+	return (jemalloc_ffsl(x) - 2);
+#elif (LG_SIZEOF_PTR == 2)
+	if (x == KZU(0xffffffff))
+		return (31);
+	x++;
+	return (jemalloc_ffs(x) - 2);
+#else
+#  error "Unsupported type sizes for lg_floor()"
+#endif
 }
 #endif
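pow2_ceil works by smearing the highest set bit of x - 1 into every lower position and then adding one; e.g. for x = 37, the ORs turn 36 (100100b) into 63 (111111b), and the increment yields 64. A self-contained check of the idea (a sketch for values below 2^32, not the build's own test code):

    #include <assert.h>
    #include <stddef.h>

    /* Same bit-smearing scheme as pow2_ceil() above. */
    static size_t
    pow2_ceil_demo(size_t x)
    {
        x--;
        x |= x >> 1; x |= x >> 2; x |= x >> 4;
        x |= x >> 8; x |= x >> 16;
        x++;
        return (x);
    }

    int
    main(void)
    {
        assert(pow2_ceil_demo(37) == 64);  /* 36 smears to 63; +1 = 64 */
        assert(pow2_ceil_demo(64) == 64);  /* powers of two are fixed points */
        return (0);
    }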

View File


@ -30,31 +30,17 @@
 * calls must be embedded in macros rather than in functions so that when
 * Valgrind reports errors, there are no extra stack frames in the backtraces.
 */
-#define JEMALLOC_VALGRIND_MALLOC(cond, tsdn, ptr, usize, zero) do { \
-	if (unlikely(in_valgrind && cond)) { \
-		VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(tsdn, ptr), \
-		    zero); \
-	} \
+#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do { \
+	if (unlikely(in_valgrind && cond)) \
+		VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(ptr), zero); \
 } while (0)
-#define JEMALLOC_VALGRIND_REALLOC_MOVED_no(ptr, old_ptr) \
-	(false)
-#define JEMALLOC_VALGRIND_REALLOC_MOVED_maybe(ptr, old_ptr) \
-	((ptr) != (old_ptr))
-#define JEMALLOC_VALGRIND_REALLOC_PTR_NULL_no(ptr) \
-	(false)
-#define JEMALLOC_VALGRIND_REALLOC_PTR_NULL_maybe(ptr) \
-	(ptr == NULL)
-#define JEMALLOC_VALGRIND_REALLOC_OLD_PTR_NULL_no(old_ptr) \
-	(false)
-#define JEMALLOC_VALGRIND_REALLOC_OLD_PTR_NULL_maybe(old_ptr) \
-	(old_ptr == NULL)
-#define JEMALLOC_VALGRIND_REALLOC(moved, tsdn, ptr, usize, ptr_null, \
-    old_ptr, old_usize, old_rzsize, old_ptr_null, zero) do { \
+#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \
+    ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \
+    zero) do { \
 	if (unlikely(in_valgrind)) { \
-		size_t rzsize = p2rz(tsdn, ptr); \
+		size_t rzsize = p2rz(ptr); \
 		\
-		if (!JEMALLOC_VALGRIND_REALLOC_MOVED_##moved(ptr, \
-		    old_ptr)) { \
+		if (!maybe_moved || ptr == old_ptr) { \
 			VALGRIND_RESIZEINPLACE_BLOCK(ptr, old_usize, \
 			    usize, rzsize); \
 			if (zero && old_usize < usize) { \
@ -63,13 +49,11 @@
 			    old_usize), usize - old_usize); \
 			} \
 		} else { \
-			if (!JEMALLOC_VALGRIND_REALLOC_OLD_PTR_NULL_## \
-			    old_ptr_null(old_ptr)) { \
+			if (!old_ptr_maybe_null || old_ptr != NULL) { \
 				valgrind_freelike_block(old_ptr, \
 				    old_rzsize); \
 			} \
-			if (!JEMALLOC_VALGRIND_REALLOC_PTR_NULL_## \
-			    ptr_null(ptr)) { \
+			if (!ptr_maybe_null || ptr != NULL) { \
 				size_t copy_size = (old_usize < usize) \
 				    ? old_usize : usize; \
 				size_t tail_size = usize - copy_size; \
@ -97,8 +81,8 @@
 #define JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(ptr, usize) do {} while (0)
 #define JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize) do {} while (0)
 #define JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ptr, usize) do {} while (0)
-#define JEMALLOC_VALGRIND_MALLOC(cond, tsdn, ptr, usize, zero) do {} while (0)
-#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, tsdn, ptr, usize, \
+#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do {} while (0)
+#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \
    ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \
    zero) do {} while (0)
 #define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do {} while (0)

View File


@ -1,266 +0,0 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct witness_s witness_t;
typedef unsigned witness_rank_t;
typedef ql_head(witness_t) witness_list_t;
typedef int witness_comp_t (const witness_t *, const witness_t *);
/*
* Lock ranks. Witnesses with rank WITNESS_RANK_OMIT are completely ignored by
* the witness machinery.
*/
#define WITNESS_RANK_OMIT 0U
#define WITNESS_RANK_INIT 1U
#define WITNESS_RANK_CTL 1U
#define WITNESS_RANK_ARENAS 2U
#define WITNESS_RANK_PROF_DUMP 3U
#define WITNESS_RANK_PROF_BT2GCTX 4U
#define WITNESS_RANK_PROF_TDATAS 5U
#define WITNESS_RANK_PROF_TDATA 6U
#define WITNESS_RANK_PROF_GCTX 7U
#define WITNESS_RANK_ARENA 8U
#define WITNESS_RANK_ARENA_CHUNKS 9U
#define WITNESS_RANK_ARENA_NODE_CACHE 10U
#define WITNESS_RANK_BASE 11U
#define WITNESS_RANK_LEAF 0xffffffffU
#define WITNESS_RANK_ARENA_BIN WITNESS_RANK_LEAF
#define WITNESS_RANK_ARENA_HUGE WITNESS_RANK_LEAF
#define WITNESS_RANK_DSS WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_ACTIVE WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_DUMP_SEQ WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_GDUMP WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_NEXT_THR_UID WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_THREAD_ACTIVE_INIT WITNESS_RANK_LEAF
#define WITNESS_INITIALIZER(rank) {"initializer", rank, NULL, {NULL, NULL}}
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct witness_s {
/* Name, used for printing lock order reversal messages. */
const char *name;
/*
* Witness rank, where 0 is lowest and UINT_MAX is highest. Witnesses
* must be acquired in order of increasing rank.
*/
witness_rank_t rank;
/*
* If two witnesses are of equal rank and they have the same comp
* function pointer, it is called as a last attempt to differentiate
* between witnesses of equal rank.
*/
witness_comp_t *comp;
/* Linkage for thread's currently owned locks. */
ql_elm(witness_t) link;
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void witness_init(witness_t *witness, const char *name, witness_rank_t rank,
witness_comp_t *comp);
#ifdef JEMALLOC_JET
typedef void (witness_lock_error_t)(const witness_list_t *, const witness_t *);
extern witness_lock_error_t *witness_lock_error;
#else
void witness_lock_error(const witness_list_t *witnesses,
const witness_t *witness);
#endif
#ifdef JEMALLOC_JET
typedef void (witness_owner_error_t)(const witness_t *);
extern witness_owner_error_t *witness_owner_error;
#else
void witness_owner_error(const witness_t *witness);
#endif
#ifdef JEMALLOC_JET
typedef void (witness_not_owner_error_t)(const witness_t *);
extern witness_not_owner_error_t *witness_not_owner_error;
#else
void witness_not_owner_error(const witness_t *witness);
#endif
#ifdef JEMALLOC_JET
typedef void (witness_lockless_error_t)(const witness_list_t *);
extern witness_lockless_error_t *witness_lockless_error;
#else
void witness_lockless_error(const witness_list_t *witnesses);
#endif
void witnesses_cleanup(tsd_t *tsd);
void witness_fork_cleanup(tsd_t *tsd);
void witness_prefork(tsd_t *tsd);
void witness_postfork_parent(tsd_t *tsd);
void witness_postfork_child(tsd_t *tsd);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
bool witness_owner(tsd_t *tsd, const witness_t *witness);
void witness_assert_owner(tsdn_t *tsdn, const witness_t *witness);
void witness_assert_not_owner(tsdn_t *tsdn, const witness_t *witness);
void witness_assert_lockless(tsdn_t *tsdn);
void witness_lock(tsdn_t *tsdn, witness_t *witness);
void witness_unlock(tsdn_t *tsdn, witness_t *witness);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_))
JEMALLOC_INLINE bool
witness_owner(tsd_t *tsd, const witness_t *witness)
{
witness_list_t *witnesses;
witness_t *w;
witnesses = tsd_witnessesp_get(tsd);
ql_foreach(w, witnesses, link) {
if (w == witness)
return (true);
}
return (false);
}
JEMALLOC_INLINE void
witness_assert_owner(tsdn_t *tsdn, const witness_t *witness)
{
tsd_t *tsd;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
if (witness_owner(tsd, witness))
return;
witness_owner_error(witness);
}
JEMALLOC_INLINE void
witness_assert_not_owner(tsdn_t *tsdn, const witness_t *witness)
{
tsd_t *tsd;
witness_list_t *witnesses;
witness_t *w;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
witnesses = tsd_witnessesp_get(tsd);
ql_foreach(w, witnesses, link) {
if (w == witness)
witness_not_owner_error(witness);
}
}
JEMALLOC_INLINE void
witness_assert_lockless(tsdn_t *tsdn)
{
tsd_t *tsd;
witness_list_t *witnesses;
witness_t *w;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
witnesses = tsd_witnessesp_get(tsd);
w = ql_last(witnesses, link);
if (w != NULL)
witness_lockless_error(witnesses);
}
JEMALLOC_INLINE void
witness_lock(tsdn_t *tsdn, witness_t *witness)
{
tsd_t *tsd;
witness_list_t *witnesses;
witness_t *w;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
witness_assert_not_owner(tsdn, witness);
witnesses = tsd_witnessesp_get(tsd);
w = ql_last(witnesses, link);
if (w == NULL) {
/* No other locks; do nothing. */
} else if (tsd_witness_fork_get(tsd) && w->rank <= witness->rank) {
/* Forking, and relaxed ranking satisfied. */
} else if (w->rank > witness->rank) {
/* Not forking, rank order reversal. */
witness_lock_error(witnesses, witness);
} else if (w->rank == witness->rank && (w->comp == NULL || w->comp !=
witness->comp || w->comp(w, witness) > 0)) {
/*
* Missing/incompatible comparison function, or comparison
* function indicates rank order reversal.
*/
witness_lock_error(witnesses, witness);
}
ql_elm_new(witness, link);
ql_tail_insert(witnesses, witness, link);
}
JEMALLOC_INLINE void
witness_unlock(tsdn_t *tsdn, witness_t *witness)
{
tsd_t *tsd;
witness_list_t *witnesses;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
/*
* Check whether owner before removal, rather than relying on
* witness_assert_owner() to abort, so that unit tests can test this
* function's failure mode without causing undefined behavior.
*/
if (witness_owner(tsd, witness)) {
witnesses = tsd_witnessesp_get(tsd);
ql_remove(witnesses, witness, link);
} else
witness_assert_owner(tsdn, witness);
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
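The witness machinery above enforces that locks are acquired in non-decreasing rank order. A hypothetical usage sketch (the two witnesses and the tsdn handle are illustrative; in jemalloc the witnesses are embedded in mutexes):

    witness_t w_arena, w_bin;
    witness_init(&w_arena, "arena", WITNESS_RANK_ARENA, NULL);
    witness_init(&w_bin, "arena_bin", WITNESS_RANK_ARENA_BIN, NULL);

    witness_lock(tsdn, &w_arena);  /* rank 8 first... */
    witness_lock(tsdn, &w_bin);    /* ...then a leaf rank: OK */
    witness_unlock(tsdn, &w_bin);
    witness_unlock(tsdn, &w_arena);
    /* Acquiring w_arena while holding w_bin would instead trigger
     * witness_lock_error(), reporting a lock order reversal. */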

View File


@ -33,13 +33,5 @@
 */
 #undef JEMALLOC_USE_CXX_THROW
-#ifdef _MSC_VER
-#  ifdef _WIN64
-#    define LG_SIZEOF_PTR_WIN 3
-#  else
-#    define LG_SIZEOF_PTR_WIN 2
-#  endif
-#endif
 /* sizeof(void *) == 2^LG_SIZEOF_PTR. */
 #undef LG_SIZEOF_PTR

View File


@ -11,13 +11,12 @@
 #define JEMALLOC_VERSION_NREV @jemalloc_version_nrev@
 #define JEMALLOC_VERSION_GID "@jemalloc_version_gid@"
-#  define MALLOCX_LG_ALIGN(la)	((int)(la))
+#  define MALLOCX_LG_ALIGN(la)	(la)
 #  if LG_SIZEOF_PTR == 2
-#    define MALLOCX_ALIGN(a)	((int)(ffs((int)(a))-1))
+#    define MALLOCX_ALIGN(a)	(ffs(a)-1)
 #  else
 #    define MALLOCX_ALIGN(a)	\
-	((int)(((size_t)(a) < (size_t)INT_MAX) ? ffs((int)(a))-1 : \
-	    ffs((int)(((size_t)(a))>>32))+31))
+	((a < (size_t)INT_MAX) ? ffs(a)-1 : ffs(a>>32)+31)
 #  endif
 #  define MALLOCX_ZERO	((int)0x40)
 /*
@ -29,7 +28,7 @@
 /*
 * Bias arena index bits so that 0 encodes "use an automatically chosen arena".
 */
-#  define MALLOCX_ARENA(a)	((((int)(a))+1) << 20)
+#  define MALLOCX_ARENA(a)	((int)(((a)+1) << 20))
 #if defined(__cplusplus) && defined(JEMALLOC_USE_CXX_THROW)
 #  define JEMALLOC_CXX_THROW throw()
@ -37,7 +36,32 @@
 #  define JEMALLOC_CXX_THROW
 #endif
-#if _MSC_VER
+#ifdef JEMALLOC_HAVE_ATTR
+#  define JEMALLOC_ATTR(s) __attribute__((s))
+#  define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s))
+#  ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE
+#    define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s))
+#    define JEMALLOC_ALLOC_SIZE2(s1, s2) JEMALLOC_ATTR(alloc_size(s1, s2))
+#  else
+#    define JEMALLOC_ALLOC_SIZE(s)
+#    define JEMALLOC_ALLOC_SIZE2(s1, s2)
+#  endif
+#  ifndef JEMALLOC_EXPORT
+#    define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default"))
+#  endif
+#  ifdef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF
+#    define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(gnu_printf, s, i))
+#  elif defined(JEMALLOC_HAVE_ATTR_FORMAT_PRINTF)
+#    define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(printf, s, i))
+#  else
+#    define JEMALLOC_FORMAT_PRINTF(s, i)
+#  endif
+#  define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline)
+#  define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow)
+#  define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s))
+#  define JEMALLOC_RESTRICT_RETURN
+#  define JEMALLOC_ALLOCATOR
+#elif _MSC_VER
 #  define JEMALLOC_ATTR(s)
 #  define JEMALLOC_ALIGNED(s) __declspec(align(s))
 #  define JEMALLOC_ALLOC_SIZE(s)
@ -63,31 +87,6 @@
 #  else
 #    define JEMALLOC_ALLOCATOR
 #  endif
-#elif defined(JEMALLOC_HAVE_ATTR)
-#  define JEMALLOC_ATTR(s) __attribute__((s))
-#  define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s))
-#  ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE
-#    define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s))
-#    define JEMALLOC_ALLOC_SIZE2(s1, s2) JEMALLOC_ATTR(alloc_size(s1, s2))
-#  else
-#    define JEMALLOC_ALLOC_SIZE(s)
-#    define JEMALLOC_ALLOC_SIZE2(s1, s2)
-#  endif
-#  ifndef JEMALLOC_EXPORT
-#    define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default"))
-#  endif
-#  ifdef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF
-#    define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(gnu_printf, s, i))
-#  elif defined(JEMALLOC_HAVE_ATTR_FORMAT_PRINTF)
-#    define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(printf, s, i))
-#  else
-#    define JEMALLOC_FORMAT_PRINTF(s, i)
-#  endif
-#  define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline)
-#  define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow)
-#  define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s))
-#  define JEMALLOC_RESTRICT_RETURN
-#  define JEMALLOC_ALLOCATOR
 #else
 #  define JEMALLOC_ATTR(s)
 #  define JEMALLOC_ALIGNED(s)
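For scale, the MALLOCX_* macros above pack everything into mallocx() flag bits: MALLOCX_ALIGN(a) stores lg(a) in the low bits (ffs(4096) - 1 == 12), MALLOCX_ZERO is bit 6, and MALLOCX_ARENA(a) biases the arena index into bits 20 and up. A hedged usage sketch against the public API (values assumed from the definitions above):

    #include <jemalloc/jemalloc.h>

    /* 4 KiB-aligned, zeroed allocation of at least 1000 bytes. */
    void *p = mallocx(1000, MALLOCX_ALIGN(4096) | MALLOCX_ZERO);
    if (p != NULL)
        dallocx(p, 0);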

View File


@ -21,37 +21,7 @@ static __forceinline int ffs(int x)
return (ffsl(x)); return (ffsl(x));
} }
# ifdef _M_X64
# pragma intrinsic(_BitScanForward64)
# endif
static __forceinline int ffsll(unsigned __int64 x)
{
unsigned long i;
#ifdef _M_X64
if (_BitScanForward64(&i, x))
return (i + 1);
return (0);
#else #else
// Fallback for 32-bit build where 64-bit version not available
// assuming little endian
union {
unsigned __int64 ll;
unsigned long l[2];
} s;
s.ll = x;
if (_BitScanForward(&i, s.l[0]))
return (i + 1);
else if(_BitScanForward(&i, s.l[1]))
return (i + 33);
return (0);
#endif
}
#else
# define ffsll(x) __builtin_ffsll(x)
# define ffsl(x) __builtin_ffsl(x)
# define ffs(x) __builtin_ffs(x)
#endif
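
The removed ffsll() fallback above is a split-scan: when no 64-bit _BitScanForward64 intrinsic is available, the value is scanned as two 32-bit halves, with hits in the high word offset by 32 (the +33 in the original accounts for the intrinsic's 0-based bit index). A standalone sketch of the same idea in portable C, with illustrative names:

/* Illustrative names, not jemalloc's: 1-based index of the lowest set bit
 * of a 64-bit value, computed from two 32-bit scans. */
#include <stdint.h>

static int demo_ffs32(uint32_t x)
{
	for (int i = 0; i < 32; i++)
		if (x & (1u << i))
			return (i + 1);
	return (0);
}

static int demo_ffs64(uint64_t x)
{
	uint32_t lo = (uint32_t)x;
	uint32_t hi = (uint32_t)(x >> 32);

	if (lo != 0)
		return (demo_ffs32(lo));        /* bit found in the low word */
	if (hi != 0)
		return (demo_ffs32(hi) + 32);   /* low word empty: offset by 32 */
	return (0);                             /* no bits set */
}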

View File

@ -1,6 +1,26 @@
#ifndef MSVC_COMPAT_WINDOWS_EXTRA_H
#define MSVC_COMPAT_WINDOWS_EXTRA_H
#include <errno.h>
#ifndef ENOENT
# define ENOENT ERROR_PATH_NOT_FOUND
#endif
#ifndef EINVAL
# define EINVAL ERROR_BAD_ARGUMENTS
#endif
#ifndef EAGAIN
# define EAGAIN ERROR_OUTOFMEMORY
#endif
#ifndef EPERM
# define EPERM ERROR_WRITE_FAULT
#endif
#ifndef EFAULT
# define EFAULT ERROR_INVALID_ADDRESS
#endif
#ifndef ENOMEM
# define ENOMEM ERROR_NOT_ENOUGH_MEMORY
#endif
#ifndef ERANGE
# define ERANGE ERROR_INVALID_DATA
#endif
#endif /* MSVC_COMPAT_WINDOWS_EXTRA_H */
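
These fallback definitions let portable code use errno-style constants even when the toolchain's <errno.h> predates them, by aliasing each missing constant to a roughly equivalent Win32 ERROR_* code. A small illustration (the demo_* names are hypothetical, not from the diff):

/* Illustrative only: report an errno-style failure code that resolves to a
 * Win32 ERROR_* value if the shim above supplied it. */
#include <errno.h>   /* may or may not define ENOMEM on old MSVC runtimes */
#include <stdio.h>

static int demo_alloc_pages(void **out, size_t len)
{
	(void)len;
	*out = NULL;        /* pretend the OS allocation failed */
	return (ENOMEM);    /* ERROR_NOT_ENOUGH_MEMORY when shimmed */
}

int main(void)
{
	void *p;
	int err = demo_alloc_pages(&p, 4096);
	if (err != 0)
		fprintf(stderr, "allocation failed: error code %d\n", err);
	return (0);
}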

View File

@ -6,7 +6,7 @@ install_suffix=@install_suffix@
Name: jemalloc
Description: A general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
URL: http://jemalloc.net/
URL: http://www.canonware.com/jemalloc
Version: @jemalloc_version@
Cflags: -I${includedir}
Libs: -L${libdir} -ljemalloc${install_suffix}

View File

@ -1,24 +0,0 @@
How to build jemalloc for Windows
=================================
1. Install Cygwin with at least the following packages:
* autoconf
* autogen
* gawk
* grep
* sed
2. Install Visual Studio 2015 with Visual C++
3. Add Cygwin\bin to the PATH environment variable
4. Open "VS2015 x86 Native Tools Command Prompt"
(note: x86/x64 doesn't matter at this point)
5. Generate header files:
sh -c "CC=cl ./autogen.sh"
6. Now the project can be opened and built in Visual Studio:
msvc\jemalloc_vc2015.sln

View File

@ -1,63 +0,0 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 14
VisualStudioVersion = 14.0.24720.0
MinimumVisualStudioVersion = 10.0.40219.1
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Solution Items", "Solution Items", "{70A99006-6DE9-472B-8F83-4CEE6C616DF3}"
ProjectSection(SolutionItems) = preProject
ReadMe.txt = ReadMe.txt
EndProjectSection
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "jemalloc", "projects\vc2015\jemalloc\jemalloc.vcxproj", "{8D6BB292-9E1C-413D-9F98-4864BDC1514A}"
EndProject
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "test_threads", "projects\vc2015\test_threads\test_threads.vcxproj", "{09028CFD-4EB7-491D-869C-0708DB97ED44}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|x64 = Debug|x64
Debug|x86 = Debug|x86
Debug-static|x64 = Debug-static|x64
Debug-static|x86 = Debug-static|x86
Release|x64 = Release|x64
Release|x86 = Release|x86
Release-static|x64 = Release-static|x64
Release-static|x86 = Release-static|x86
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.ActiveCfg = Debug|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.Build.0 = Debug|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x86.ActiveCfg = Debug|Win32
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x86.Build.0 = Debug|Win32
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x64.ActiveCfg = Debug-static|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x64.Build.0 = Debug-static|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x86.ActiveCfg = Debug-static|Win32
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x86.Build.0 = Debug-static|Win32
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.ActiveCfg = Release|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.Build.0 = Release|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x86.ActiveCfg = Release|Win32
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x86.Build.0 = Release|Win32
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x64.ActiveCfg = Release-static|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x64.Build.0 = Release-static|x64
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x86.ActiveCfg = Release-static|Win32
{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x86.Build.0 = Release-static|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x64.ActiveCfg = Debug|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x64.Build.0 = Debug|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x86.ActiveCfg = Debug|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x86.Build.0 = Debug|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x64.ActiveCfg = Debug-static|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x64.Build.0 = Debug-static|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x86.ActiveCfg = Debug-static|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x86.Build.0 = Debug-static|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x64.ActiveCfg = Release|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x64.Build.0 = Release|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x86.ActiveCfg = Release|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x86.Build.0 = Release|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x64.ActiveCfg = Release-static|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x64.Build.0 = Release-static|x64
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x86.ActiveCfg = Release-static|Win32
{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x86.Build.0 = Release-static|Win32
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
EndGlobal

View File

@ -1,402 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug-static|Win32">
<Configuration>Debug-static</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug-static|x64">
<Configuration>Debug-static</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release-static|Win32">
<Configuration>Release-static</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release-static|x64">
<Configuration>Release-static</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<ItemGroup>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\arena.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\assert.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\atomic.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\base.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\bitmap.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\chunk.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\chunk_dss.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\chunk_mmap.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ckh.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ctl.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\extent.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\hash.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\huge.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal_decls.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal_defs.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal_macros.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\mb.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\mutex.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\nstime.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\pages.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ph.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\private_namespace.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\private_unnamespace.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\prng.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\prof.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\public_namespace.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\public_unnamespace.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ql.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\qr.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\quarantine.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\rb.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\rtree.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\size_classes.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\smoothstep.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\spin.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\stats.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\tcache.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ticker.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\tsd.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\util.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\witness.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_defs.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_macros.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_mangle.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_protos.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_protos_jet.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_rename.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_typedefs.h" />
<ClInclude Include="..\..\..\..\include\msvc_compat\C99\stdbool.h" />
<ClInclude Include="..\..\..\..\include\msvc_compat\C99\stdint.h" />
<ClInclude Include="..\..\..\..\include\msvc_compat\strings.h" />
<ClInclude Include="..\..\..\..\include\msvc_compat\windows_extra.h" />
</ItemGroup>
<ItemGroup>
<ClCompile Include="..\..\..\..\src\arena.c" />
<ClCompile Include="..\..\..\..\src\atomic.c" />
<ClCompile Include="..\..\..\..\src\base.c" />
<ClCompile Include="..\..\..\..\src\bitmap.c" />
<ClCompile Include="..\..\..\..\src\chunk.c" />
<ClCompile Include="..\..\..\..\src\chunk_dss.c" />
<ClCompile Include="..\..\..\..\src\chunk_mmap.c" />
<ClCompile Include="..\..\..\..\src\ckh.c" />
<ClCompile Include="..\..\..\..\src\ctl.c" />
<ClCompile Include="..\..\..\..\src\extent.c" />
<ClCompile Include="..\..\..\..\src\hash.c" />
<ClCompile Include="..\..\..\..\src\huge.c" />
<ClCompile Include="..\..\..\..\src\jemalloc.c" />
<ClCompile Include="..\..\..\..\src\mb.c" />
<ClCompile Include="..\..\..\..\src\mutex.c" />
<ClCompile Include="..\..\..\..\src\nstime.c" />
<ClCompile Include="..\..\..\..\src\pages.c" />
<ClCompile Include="..\..\..\..\src\prng.c" />
<ClCompile Include="..\..\..\..\src\prof.c" />
<ClCompile Include="..\..\..\..\src\quarantine.c" />
<ClCompile Include="..\..\..\..\src\rtree.c" />
<ClCompile Include="..\..\..\..\src\spin.c" />
<ClCompile Include="..\..\..\..\src\stats.c" />
<ClCompile Include="..\..\..\..\src\tcache.c" />
<ClCompile Include="..\..\..\..\src\ticker.c" />
<ClCompile Include="..\..\..\..\src\tsd.c" />
<ClCompile Include="..\..\..\..\src\util.c" />
<ClCompile Include="..\..\..\..\src\witness.c" />
</ItemGroup>
<PropertyGroup Label="Globals">
<ProjectGuid>{8D6BB292-9E1C-413D-9F98-4864BDC1514A}</ProjectGuid>
<Keyword>Win32Proj</Keyword>
<RootNamespace>jemalloc</RootNamespace>
<WindowsTargetPlatformVersion>8.1</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>DynamicLibrary</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>DynamicLibrary</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>DynamicLibrary</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>DynamicLibrary</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<TargetName>$(ProjectName)d</TargetName>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<TargetName>$(ProjectName)-$(PlatformToolset)-$(Configuration)</TargetName>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<TargetName>$(ProjectName)-$(PlatformToolset)-$(Configuration)</TargetName>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<TargetName>$(ProjectName)d</TargetName>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<TargetName>$(ProjectName)-vc$(PlatformToolsetVersion)-$(Configuration)</TargetName>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<TargetName>$(ProjectName)-vc$(PlatformToolsetVersion)-$(Configuration)</TargetName>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<DebugInformationFormat>OldStyle</DebugInformationFormat>
<MinimalRebuild>false</MinimalRebuild>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<DebugInformationFormat>OldStyle</DebugInformationFormat>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
</Link>
</ItemDefinitionGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>

View File

@ -1,272 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;hm;inl;inc;xsd</Extensions>
</Filter>
<Filter Include="Header Files\internal">
<UniqueIdentifier>{5697dfa3-16cf-4932-b428-6e0ec6e9f98e}</UniqueIdentifier>
</Filter>
<Filter Include="Header Files\msvc_compat">
<UniqueIdentifier>{0cbd2ca6-42a7-4f82-8517-d7e7a14fd986}</UniqueIdentifier>
</Filter>
<Filter Include="Header Files\msvc_compat\C99">
<UniqueIdentifier>{0abe6f30-49b5-46dd-8aca-6e33363fa52c}</UniqueIdentifier>
</Filter>
</ItemGroup>
<ItemGroup>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_defs.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_macros.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_mangle.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_protos.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_protos_jet.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_rename.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\jemalloc_typedefs.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\arena.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\assert.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\atomic.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\base.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\bitmap.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\chunk.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\chunk_dss.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\chunk_mmap.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ckh.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ctl.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\extent.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\hash.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\huge.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal_decls.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal_defs.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\jemalloc_internal_macros.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\mb.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\mutex.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\nstime.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\pages.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ph.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\private_namespace.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\private_unnamespace.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\prng.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\prof.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\public_namespace.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\public_unnamespace.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ql.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\qr.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\quarantine.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\rb.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\rtree.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\size_classes.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\smoothstep.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\spin.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\stats.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\tcache.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ticker.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\tsd.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\util.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\witness.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\msvc_compat\strings.h">
<Filter>Header Files\msvc_compat</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\msvc_compat\windows_extra.h">
<Filter>Header Files\msvc_compat</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\msvc_compat\C99\stdbool.h">
<Filter>Header Files\msvc_compat\C99</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\msvc_compat\C99\stdint.h">
<Filter>Header Files\msvc_compat\C99</Filter>
</ClInclude>
</ItemGroup>
<ItemGroup>
<ClCompile Include="..\..\..\..\src\arena.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\atomic.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\base.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\bitmap.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\chunk.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\chunk_dss.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\chunk_mmap.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\ckh.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\ctl.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\extent.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\hash.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\huge.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\jemalloc.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\mb.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\mutex.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\nstime.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\pages.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\prng.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\prof.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\quarantine.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\rtree.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\spin.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\stats.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\tcache.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\ticker.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\tsd.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\util.c">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\..\..\src\witness.c">
<Filter>Source Files</Filter>
</ClCompile>
</ItemGroup>
</Project>

View File

@ -1,89 +0,0 @@
// jemalloc C++ threaded test
// Author: Rustam Abdullaev
// Public Domain
#include <atomic>
#include <functional>
#include <future>
#include <random>
#include <thread>
#include <vector>
#include <stdio.h>
#include <jemalloc/jemalloc.h>
using std::vector;
using std::thread;
using std::uniform_int_distribution;
using std::minstd_rand;
int test_threads()
{
je_malloc_conf = "narenas:3";
int narenas = 0;
size_t sz = sizeof(narenas);
je_mallctl("opt.narenas", (void *)&narenas, &sz, NULL, 0);
if (narenas != 3) {
printf("Error: unexpected number of arenas: %d\n", narenas);
return 1;
}
static const int sizes[] = { 7, 16, 32, 60, 91, 100, 120, 144, 169, 199, 255, 400, 670, 900, 917, 1025, 3333, 5190, 13131, 49192, 99999, 123123, 255265, 2333111 };
static const int numSizes = (int)(sizeof(sizes) / sizeof(sizes[0]));
vector<thread> workers;
static const int numThreads = narenas + 1, numAllocsMax = 25, numIter1 = 50, numIter2 = 50;
je_malloc_stats_print(NULL, NULL, NULL);
size_t allocated1;
size_t sz1 = sizeof(allocated1);
je_mallctl("stats.active", (void *)&allocated1, &sz1, NULL, 0);
printf("\nPress Enter to start threads...\n");
getchar();
printf("Starting %d threads x %d x %d iterations...\n", numThreads, numIter1, numIter2);
for (int i = 0; i < numThreads; i++) {
workers.emplace_back([tid=i]() {
uniform_int_distribution<int> sizeDist(0, numSizes - 1);
minstd_rand rnd(tid * 17);
uint8_t* ptrs[numAllocsMax];
int ptrsz[numAllocsMax];
for (int i = 0; i < numIter1; ++i) {
thread t([&]() {
for (int i = 0; i < numIter2; ++i) {
const int numAllocs = numAllocsMax - sizeDist(rnd);
for (int j = 0; j < numAllocs; j += 64) {
const int x = sizeDist(rnd);
const int sz = sizes[x];
ptrsz[j] = sz;
ptrs[j] = (uint8_t*)je_malloc(sz);
if (!ptrs[j]) {
printf("Unable to allocate %d bytes in thread %d, iter %d, alloc %d. %d\n", sz, tid, i, j, x);
exit(1);
}
for (int k = 0; k < sz; k++)
ptrs[j][k] = tid + k;
}
for (int j = 0; j < numAllocs; j += 64) {
for (int k = 0, sz = ptrsz[j]; k < sz; k++)
if (ptrs[j][k] != (uint8_t)(tid + k)) {
printf("Memory error in thread %d, iter %d, alloc %d @ %d : %02X!=%02X\n", tid, i, j, k, ptrs[j][k], (uint8_t)(tid + k));
exit(1);
}
je_free(ptrs[j]);
}
}
});
t.join();
}
});
}
for (thread& t : workers) {
t.join();
}
je_malloc_stats_print(NULL, NULL, NULL);
size_t allocated2;
je_mallctl("stats.active", (void *)&allocated2, &sz1, NULL, 0);
size_t leaked = allocated2 - allocated1;
printf("\nDone. Leaked: %zd bytes\n", leaked);
bool failed = leaked > 65536; // in case C++ runtime allocated something (e.g. iostream locale or facet)
printf("\nTest %s!\n", (failed ? "FAILED" : "successful"));
printf("\nPress Enter to continue...\n");
getchar();
return failed ? 1 : 0;
}
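
The deleted test above leans on the mallctl() read idiom: pass a pointer to the output value plus a pointer to its size. A minimal sketch of that idiom (assuming, as the test does, a build whose public symbols carry the je_ prefix):

/* Minimal sketch of the mallctl read pattern; assumes je_-prefixed symbols. */
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void)
{
	size_t active, sz = sizeof(active);

	/* Note: "stats.*" values are snapshots; advance the "epoch" mallctl
	 * first if up-to-date numbers matter. */
	if (je_mallctl("stats.active", (void *)&active, &sz, NULL, 0) == 0)
		printf("active: %zu bytes\n", active);
	return (0);
}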

View File

@ -1,3 +0,0 @@
#pragma once
int test_threads();

View File

@ -1,327 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug-static|Win32">
<Configuration>Debug-static</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug-static|x64">
<Configuration>Debug-static</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release-static|Win32">
<Configuration>Release-static</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release-static|x64">
<Configuration>Release-static</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<ProjectGuid>{09028CFD-4EB7-491D-869C-0708DB97ED44}</ProjectGuid>
<Keyword>Win32Proj</Keyword>
<RootNamespace>test_threads</RootNamespace>
<WindowsTargetPlatformVersion>8.1</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v140</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>true</LinkIncremental>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>true</LinkIncremental>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<LinkIncremental>true</LinkIncremental>
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'">
<LinkIncremental>true</LinkIncremental>
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>false</LinkIncremental>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>false</LinkIncremental>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>false</LinkIncremental>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>false</LinkIncremental>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
<AdditionalDependencies>jemallocd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|Win32'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
<AdditionalDependencies>jemalloc-$(PlatformToolset)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalDependencies>jemallocd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug-static|x64'">
<ClCompile>
<PrecompiledHeader>
</PrecompiledHeader>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalDependencies>jemalloc-vc$(PlatformToolsetVersion)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
<AdditionalDependencies>jemalloc.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
<AdditionalDependencies>jemalloc-$(PlatformToolset)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
<AdditionalDependencies>jemalloc.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release-static|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<PrecompiledHeader>
</PrecompiledHeader>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\test\include;..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\$(Configuration)</AdditionalLibraryDirectories>
<AdditionalDependencies>jemalloc-vc$(PlatformToolsetVersion)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="test_threads.cpp" />
<ClCompile Include="test_threads_main.cpp" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\jemalloc\jemalloc.vcxproj">
<Project>{8d6bb292-9e1c-413d-9f98-4864bdc1514a}</Project>
</ProjectReference>
</ItemGroup>
<ItemGroup>
<ClInclude Include="test_threads.h" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>


@ -1,26 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;hm;inl;inc;xsd</Extensions>
</Filter>
</ItemGroup>
<ItemGroup>
<ClCompile Include="test_threads.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="test_threads_main.cpp">
<Filter>Source Files</Filter>
</ClCompile>
</ItemGroup>
<ItemGroup>
<ClInclude Include="test_threads.h">
<Filter>Header Files</Filter>
</ClInclude>
</ItemGroup>
</Project>


@ -1,12 +0,0 @@
#include "test_threads.h"
#include <future>
#include <functional>
#include <chrono>
using namespace std::chrono_literals;
int main(int argc, char** argv)
{
int rc = test_threads();
return rc;
}

File diff suppressed because it is too large


@ -5,8 +5,7 @@
/* Data. */ /* Data. */
static malloc_mutex_t base_mtx; static malloc_mutex_t base_mtx;
static size_t base_extent_sn_next; static extent_tree_t base_avail_szad;
static extent_tree_t base_avail_szsnad;
static extent_node_t *base_nodes; static extent_node_t *base_nodes;
static size_t base_allocated; static size_t base_allocated;
static size_t base_resident; static size_t base_resident;
@ -14,13 +13,12 @@ static size_t base_mapped;
/******************************************************************************/ /******************************************************************************/
/* base_mtx must be held. */
static extent_node_t * static extent_node_t *
base_node_try_alloc(tsdn_t *tsdn) base_node_try_alloc(void)
{ {
extent_node_t *node; extent_node_t *node;
malloc_mutex_assert_owner(tsdn, &base_mtx);
if (base_nodes == NULL) if (base_nodes == NULL)
return (NULL); return (NULL);
node = base_nodes; node = base_nodes;
@ -29,42 +27,33 @@ base_node_try_alloc(tsdn_t *tsdn)
return (node); return (node);
} }
/* base_mtx must be held. */
static void static void
base_node_dalloc(tsdn_t *tsdn, extent_node_t *node) base_node_dalloc(extent_node_t *node)
{ {
malloc_mutex_assert_owner(tsdn, &base_mtx);
JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t)); JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t));
*(extent_node_t **)node = base_nodes; *(extent_node_t **)node = base_nodes;
base_nodes = node; base_nodes = node;
} }
static void /* base_mtx must be held. */
base_extent_node_init(extent_node_t *node, void *addr, size_t size)
{
size_t sn = atomic_add_z(&base_extent_sn_next, 1) - 1;
extent_node_init(node, NULL, addr, size, sn, true, true);
}
static extent_node_t * static extent_node_t *
base_chunk_alloc(tsdn_t *tsdn, size_t minsize) base_chunk_alloc(size_t minsize)
{ {
extent_node_t *node; extent_node_t *node;
size_t csize, nsize; size_t csize, nsize;
void *addr; void *addr;
malloc_mutex_assert_owner(tsdn, &base_mtx);
assert(minsize != 0); assert(minsize != 0);
node = base_node_try_alloc(tsdn); node = base_node_try_alloc();
/* Allocate enough space to also carve a node out if necessary. */ /* Allocate enough space to also carve a node out if necessary. */
nsize = (node == NULL) ? CACHELINE_CEILING(sizeof(extent_node_t)) : 0; nsize = (node == NULL) ? CACHELINE_CEILING(sizeof(extent_node_t)) : 0;
csize = CHUNK_CEILING(minsize + nsize); csize = CHUNK_CEILING(minsize + nsize);
addr = chunk_alloc_base(csize); addr = chunk_alloc_base(csize);
if (addr == NULL) { if (addr == NULL) {
if (node != NULL) if (node != NULL)
base_node_dalloc(tsdn, node); base_node_dalloc(node);
return (NULL); return (NULL);
} }
base_mapped += csize; base_mapped += csize;
@ -77,7 +66,7 @@ base_chunk_alloc(tsdn_t *tsdn, size_t minsize)
base_resident += PAGE_CEILING(nsize); base_resident += PAGE_CEILING(nsize);
} }
} }
base_extent_node_init(node, addr, csize); extent_node_init(node, NULL, addr, csize, true, true);
return (node); return (node);
} }
@ -87,7 +76,7 @@ base_chunk_alloc(tsdn_t *tsdn, size_t minsize)
* physical memory usage. * physical memory usage.
*/ */
void * void *
base_alloc(tsdn_t *tsdn, size_t size) base_alloc(size_t size)
{ {
void *ret; void *ret;
size_t csize, usize; size_t csize, usize;
@ -101,15 +90,15 @@ base_alloc(tsdn_t *tsdn, size_t size)
csize = CACHELINE_CEILING(size); csize = CACHELINE_CEILING(size);
usize = s2u(csize); usize = s2u(csize);
extent_node_init(&key, NULL, NULL, usize, 0, false, false); extent_node_init(&key, NULL, NULL, usize, false, false);
malloc_mutex_lock(tsdn, &base_mtx); malloc_mutex_lock(&base_mtx);
node = extent_tree_szsnad_nsearch(&base_avail_szsnad, &key); node = extent_tree_szad_nsearch(&base_avail_szad, &key);
if (node != NULL) { if (node != NULL) {
/* Use existing space. */ /* Use existing space. */
extent_tree_szsnad_remove(&base_avail_szsnad, node); extent_tree_szad_remove(&base_avail_szad, node);
} else { } else {
/* Try to allocate more space. */ /* Try to allocate more space. */
node = base_chunk_alloc(tsdn, csize); node = base_chunk_alloc(csize);
} }
if (node == NULL) { if (node == NULL) {
ret = NULL; ret = NULL;
@ -120,9 +109,9 @@ base_alloc(tsdn_t *tsdn, size_t size)
if (extent_node_size_get(node) > csize) { if (extent_node_size_get(node) > csize) {
extent_node_addr_set(node, (void *)((uintptr_t)ret + csize)); extent_node_addr_set(node, (void *)((uintptr_t)ret + csize));
extent_node_size_set(node, extent_node_size_get(node) - csize); extent_node_size_set(node, extent_node_size_get(node) - csize);
extent_tree_szsnad_insert(&base_avail_szsnad, node); extent_tree_szad_insert(&base_avail_szad, node);
} else } else
base_node_dalloc(tsdn, node); base_node_dalloc(node);
if (config_stats) { if (config_stats) {
base_allocated += csize; base_allocated += csize;
/* /*
@ -134,54 +123,52 @@ base_alloc(tsdn_t *tsdn, size_t size)
} }
JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, csize); JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, csize);
label_return: label_return:
malloc_mutex_unlock(tsdn, &base_mtx); malloc_mutex_unlock(&base_mtx);
return (ret); return (ret);
} }
void void
base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident, base_stats_get(size_t *allocated, size_t *resident, size_t *mapped)
size_t *mapped)
{ {
malloc_mutex_lock(tsdn, &base_mtx); malloc_mutex_lock(&base_mtx);
assert(base_allocated <= base_resident); assert(base_allocated <= base_resident);
assert(base_resident <= base_mapped); assert(base_resident <= base_mapped);
*allocated = base_allocated; *allocated = base_allocated;
*resident = base_resident; *resident = base_resident;
*mapped = base_mapped; *mapped = base_mapped;
malloc_mutex_unlock(tsdn, &base_mtx); malloc_mutex_unlock(&base_mtx);
} }
bool bool
base_boot(void) base_boot(void)
{ {
if (malloc_mutex_init(&base_mtx, "base", WITNESS_RANK_BASE)) if (malloc_mutex_init(&base_mtx))
return (true); return (true);
base_extent_sn_next = 0; extent_tree_szad_new(&base_avail_szad);
extent_tree_szsnad_new(&base_avail_szsnad);
base_nodes = NULL; base_nodes = NULL;
return (false); return (false);
} }
void void
base_prefork(tsdn_t *tsdn) base_prefork(void)
{ {
malloc_mutex_prefork(tsdn, &base_mtx); malloc_mutex_prefork(&base_mtx);
} }
void void
base_postfork_parent(tsdn_t *tsdn) base_postfork_parent(void)
{ {
malloc_mutex_postfork_parent(tsdn, &base_mtx); malloc_mutex_postfork_parent(&base_mtx);
} }
void void
base_postfork_child(tsdn_t *tsdn) base_postfork_child(void)
{ {
malloc_mutex_postfork_child(tsdn, &base_mtx); malloc_mutex_postfork_child(&base_mtx);
} }
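One side of this base.c hunk (the jemalloc 4.4.x code) threads a tsdn through the locking API and stamps each fresh node with a serial number drawn from an atomically incremented global counter, so that equally sized free extents can later be ordered oldest-first. A minimal sketch of that counter, using C11 stdatomic as a stand-in for jemalloc's atomic_add_z():

#include <stdatomic.h>
#include <stddef.h>

static atomic_size_t extent_sn_next;    /* next serial number to hand out */

static size_t
extent_sn_alloc(void)
{
    /* fetch_add returns the pre-increment value: this extent's sn. */
    return (atomic_fetch_add(&extent_sn_next, 1));
}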


@ -3,8 +3,6 @@
/******************************************************************************/ /******************************************************************************/
#ifdef USE_TREE
void void
bitmap_info_init(bitmap_info_t *binfo, size_t nbits) bitmap_info_init(bitmap_info_t *binfo, size_t nbits)
{ {
@ -34,11 +32,20 @@ bitmap_info_init(bitmap_info_t *binfo, size_t nbits)
binfo->nbits = nbits; binfo->nbits = nbits;
} }
static size_t size_t
bitmap_info_ngroups(const bitmap_info_t *binfo) bitmap_info_ngroups(const bitmap_info_t *binfo)
{ {
return (binfo->levels[binfo->nlevels].group_offset); return (binfo->levels[binfo->nlevels].group_offset << LG_SIZEOF_BITMAP);
}
size_t
bitmap_size(size_t nbits)
{
bitmap_info_t binfo;
bitmap_info_init(&binfo, nbits);
return (bitmap_info_ngroups(&binfo));
} }
void void
@ -54,7 +61,8 @@ bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo)
* correspond to the first logical bit in the group, so extra bits * correspond to the first logical bit in the group, so extra bits
* are the most significant bits of the last group. * are the most significant bits of the last group.
*/ */
memset(bitmap, 0xffU, bitmap_size(binfo)); memset(bitmap, 0xffU, binfo->levels[binfo->nlevels].group_offset <<
LG_SIZEOF_BITMAP);
extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK)) extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK))
& BITMAP_GROUP_NBITS_MASK; & BITMAP_GROUP_NBITS_MASK;
if (extra != 0) if (extra != 0)
@ -68,44 +76,3 @@ bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo)
bitmap[binfo->levels[i+1].group_offset - 1] >>= extra; bitmap[binfo->levels[i+1].group_offset - 1] >>= extra;
} }
} }
#else /* USE_TREE */
void
bitmap_info_init(bitmap_info_t *binfo, size_t nbits)
{
assert(nbits > 0);
assert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS));
binfo->ngroups = BITMAP_BITS2GROUPS(nbits);
binfo->nbits = nbits;
}
static size_t
bitmap_info_ngroups(const bitmap_info_t *binfo)
{
return (binfo->ngroups);
}
void
bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo)
{
size_t extra;
memset(bitmap, 0xffU, bitmap_size(binfo));
extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK))
& BITMAP_GROUP_NBITS_MASK;
if (extra != 0)
bitmap[binfo->ngroups - 1] >>= extra;
}
#endif /* USE_TREE */
size_t
bitmap_size(const bitmap_info_t *binfo)
{
return (bitmap_info_ngroups(binfo) << LG_SIZEOF_BITMAP);
}
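In the flat (non-USE_TREE) variant shown above, a bitmap's backing size is simply the number of machine-word groups needed to hold nbits, scaled to bytes. A standalone sketch of that arithmetic (names are illustrative, not jemalloc's):

#include <limits.h>
#include <stddef.h>

#define GROUP_NBITS    (sizeof(unsigned long) * CHAR_BIT)

static size_t
bits2groups(size_t nbits)
{
    /* Round up so trailing bits still get a (partially used) group. */
    return ((nbits + GROUP_NBITS - 1) / GROUP_NBITS);
}

static size_t
bitmap_bytes(size_t nbits)
{
    return (bits2groups(nbits) * sizeof(unsigned long));
}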

View File

@ -49,10 +49,9 @@ const chunk_hooks_t chunk_hooks_default = {
* definition. * definition.
*/ */
static void chunk_record(tsdn_t *tsdn, arena_t *arena, static void chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks,
chunk_hooks_t *chunk_hooks, extent_tree_t *chunks_szsnad, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache,
extent_tree_t *chunks_ad, bool cache, void *chunk, size_t size, size_t sn, void *chunk, size_t size, bool zeroed, bool committed);
bool zeroed, bool committed);
/******************************************************************************/ /******************************************************************************/
@ -64,23 +63,23 @@ chunk_hooks_get_locked(arena_t *arena)
} }
chunk_hooks_t chunk_hooks_t
chunk_hooks_get(tsdn_t *tsdn, arena_t *arena) chunk_hooks_get(arena_t *arena)
{ {
chunk_hooks_t chunk_hooks; chunk_hooks_t chunk_hooks;
malloc_mutex_lock(tsdn, &arena->chunks_mtx); malloc_mutex_lock(&arena->chunks_mtx);
chunk_hooks = chunk_hooks_get_locked(arena); chunk_hooks = chunk_hooks_get_locked(arena);
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
return (chunk_hooks); return (chunk_hooks);
} }
chunk_hooks_t chunk_hooks_t
chunk_hooks_set(tsdn_t *tsdn, arena_t *arena, const chunk_hooks_t *chunk_hooks) chunk_hooks_set(arena_t *arena, const chunk_hooks_t *chunk_hooks)
{ {
chunk_hooks_t old_chunk_hooks; chunk_hooks_t old_chunk_hooks;
malloc_mutex_lock(tsdn, &arena->chunks_mtx); malloc_mutex_lock(&arena->chunks_mtx);
old_chunk_hooks = arena->chunk_hooks; old_chunk_hooks = arena->chunk_hooks;
/* /*
* Copy each field atomically so that it is impossible for readers to * Copy each field atomically so that it is impossible for readers to
@ -105,14 +104,14 @@ chunk_hooks_set(tsdn_t *tsdn, arena_t *arena, const chunk_hooks_t *chunk_hooks)
ATOMIC_COPY_HOOK(split); ATOMIC_COPY_HOOK(split);
ATOMIC_COPY_HOOK(merge); ATOMIC_COPY_HOOK(merge);
#undef ATOMIC_COPY_HOOK #undef ATOMIC_COPY_HOOK
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
return (old_chunk_hooks); return (old_chunk_hooks);
} }
static void static void
chunk_hooks_assure_initialized_impl(tsdn_t *tsdn, arena_t *arena, chunk_hooks_assure_initialized_impl(arena_t *arena, chunk_hooks_t *chunk_hooks,
chunk_hooks_t *chunk_hooks, bool locked) bool locked)
{ {
static const chunk_hooks_t uninitialized_hooks = static const chunk_hooks_t uninitialized_hooks =
CHUNK_HOOKS_INITIALIZER; CHUNK_HOOKS_INITIALIZER;
@ -120,28 +119,27 @@ chunk_hooks_assure_initialized_impl(tsdn_t *tsdn, arena_t *arena,
if (memcmp(chunk_hooks, &uninitialized_hooks, sizeof(chunk_hooks_t)) == if (memcmp(chunk_hooks, &uninitialized_hooks, sizeof(chunk_hooks_t)) ==
0) { 0) {
*chunk_hooks = locked ? chunk_hooks_get_locked(arena) : *chunk_hooks = locked ? chunk_hooks_get_locked(arena) :
chunk_hooks_get(tsdn, arena); chunk_hooks_get(arena);
} }
} }
static void static void
chunk_hooks_assure_initialized_locked(tsdn_t *tsdn, arena_t *arena, chunk_hooks_assure_initialized_locked(arena_t *arena,
chunk_hooks_t *chunk_hooks) chunk_hooks_t *chunk_hooks)
{ {
chunk_hooks_assure_initialized_impl(tsdn, arena, chunk_hooks, true); chunk_hooks_assure_initialized_impl(arena, chunk_hooks, true);
} }
static void static void
chunk_hooks_assure_initialized(tsdn_t *tsdn, arena_t *arena, chunk_hooks_assure_initialized(arena_t *arena, chunk_hooks_t *chunk_hooks)
chunk_hooks_t *chunk_hooks)
{ {
chunk_hooks_assure_initialized_impl(tsdn, arena, chunk_hooks, false); chunk_hooks_assure_initialized_impl(arena, chunk_hooks, false);
} }
bool bool
chunk_register(tsdn_t *tsdn, const void *chunk, const extent_node_t *node) chunk_register(const void *chunk, const extent_node_t *node)
{ {
assert(extent_node_addr_get(node) == chunk); assert(extent_node_addr_get(node) == chunk);
@ -161,7 +159,7 @@ chunk_register(tsdn_t *tsdn, const void *chunk, const extent_node_t *node)
high = atomic_read_z(&highchunks); high = atomic_read_z(&highchunks);
} }
if (cur > high && prof_gdump_get_unlocked()) if (cur > high && prof_gdump_get_unlocked())
prof_gdump(tsdn); prof_gdump();
} }
return (false); return (false);
@ -183,35 +181,33 @@ chunk_deregister(const void *chunk, const extent_node_t *node)
} }
/* /*
* Do first-best-fit chunk selection, i.e. select the oldest/lowest chunk that * Do first-best-fit chunk selection, i.e. select the lowest chunk that best
* best fits. * fits.
*/ */
static extent_node_t * static extent_node_t *
chunk_first_best_fit(arena_t *arena, extent_tree_t *chunks_szsnad, size_t size) chunk_first_best_fit(arena_t *arena, extent_tree_t *chunks_szad,
extent_tree_t *chunks_ad, size_t size)
{ {
extent_node_t key; extent_node_t key;
assert(size == CHUNK_CEILING(size)); assert(size == CHUNK_CEILING(size));
extent_node_init(&key, arena, NULL, size, 0, false, false); extent_node_init(&key, arena, NULL, size, false, false);
return (extent_tree_szsnad_nsearch(chunks_szsnad, &key)); return (extent_tree_szad_nsearch(chunks_szad, &key));
} }
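chunk_first_best_fit() above relies entirely on the tree's ordering: on the 4.4.x side extents sort by size, then serial number, then address, so an nsearch() for a key of the requested size yields the best fit that is also the oldest and lowest-addressed. A hedged sketch of that three-way comparison, with illustrative field names:

#include <stddef.h>
#include <stdint.h>

struct extent {
    size_t    size;
    size_t    sn;    /* serial number; lower == older */
    void    *addr;
};

static int
extent_szsnad_cmp(const struct extent *a, const struct extent *b)
{
    int ret = (a->size > b->size) - (a->size < b->size);
    if (ret == 0)
        ret = (a->sn > b->sn) - (a->sn < b->sn);
    if (ret == 0) {
        ret = ((uintptr_t)a->addr > (uintptr_t)b->addr) -
            ((uintptr_t)a->addr < (uintptr_t)b->addr);
    }
    return (ret);
}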
static void * static void *
chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks,
extent_tree_t *chunks_szsnad, extent_tree_t *chunks_ad, bool cache, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache,
void *new_addr, size_t size, size_t alignment, size_t *sn, bool *zero, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit,
bool *commit, bool dalloc_node) bool dalloc_node)
{ {
void *ret; void *ret;
extent_node_t *node; extent_node_t *node;
size_t alloc_size, leadsize, trailsize; size_t alloc_size, leadsize, trailsize;
bool zeroed, committed; bool zeroed, committed;
assert(CHUNK_CEILING(size) == size);
assert(alignment > 0);
assert(new_addr == NULL || alignment == chunksize); assert(new_addr == NULL || alignment == chunksize);
assert(CHUNK_ADDR2BASE(new_addr) == new_addr);
/* /*
* Cached chunks use the node linkage embedded in their headers, in * Cached chunks use the node linkage embedded in their headers, in
* which case dalloc_node is true, and new_addr is non-NULL because * which case dalloc_node is true, and new_addr is non-NULL because
@ -219,23 +215,24 @@ chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
*/ */
assert(dalloc_node || new_addr != NULL); assert(dalloc_node || new_addr != NULL);
alloc_size = size + CHUNK_CEILING(alignment) - chunksize; alloc_size = CHUNK_CEILING(s2u(size + alignment - chunksize));
/* Beware size_t wrap-around. */ /* Beware size_t wrap-around. */
if (alloc_size < size) if (alloc_size < size)
return (NULL); return (NULL);
malloc_mutex_lock(tsdn, &arena->chunks_mtx); malloc_mutex_lock(&arena->chunks_mtx);
chunk_hooks_assure_initialized_locked(tsdn, arena, chunk_hooks); chunk_hooks_assure_initialized_locked(arena, chunk_hooks);
if (new_addr != NULL) { if (new_addr != NULL) {
extent_node_t key; extent_node_t key;
extent_node_init(&key, arena, new_addr, alloc_size, 0, false, extent_node_init(&key, arena, new_addr, alloc_size, false,
false); false);
node = extent_tree_ad_search(chunks_ad, &key); node = extent_tree_ad_search(chunks_ad, &key);
} else { } else {
node = chunk_first_best_fit(arena, chunks_szsnad, alloc_size); node = chunk_first_best_fit(arena, chunks_szad, chunks_ad,
alloc_size);
} }
if (node == NULL || (new_addr != NULL && extent_node_size_get(node) < if (node == NULL || (new_addr != NULL && extent_node_size_get(node) <
size)) { size)) {
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
return (NULL); return (NULL);
} }
leadsize = ALIGNMENT_CEILING((uintptr_t)extent_node_addr_get(node), leadsize = ALIGNMENT_CEILING((uintptr_t)extent_node_addr_get(node),
@ -244,7 +241,6 @@ chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
assert(extent_node_size_get(node) >= leadsize + size); assert(extent_node_size_get(node) >= leadsize + size);
trailsize = extent_node_size_get(node) - leadsize - size; trailsize = extent_node_size_get(node) - leadsize - size;
ret = (void *)((uintptr_t)extent_node_addr_get(node) + leadsize); ret = (void *)((uintptr_t)extent_node_addr_get(node) + leadsize);
*sn = extent_node_sn_get(node);
zeroed = extent_node_zeroed_get(node); zeroed = extent_node_zeroed_get(node);
if (zeroed) if (zeroed)
*zero = true; *zero = true;
@ -255,17 +251,17 @@ chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
if (leadsize != 0 && if (leadsize != 0 &&
chunk_hooks->split(extent_node_addr_get(node), chunk_hooks->split(extent_node_addr_get(node),
extent_node_size_get(node), leadsize, size, false, arena->ind)) { extent_node_size_get(node), leadsize, size, false, arena->ind)) {
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
return (NULL); return (NULL);
} }
/* Remove node from the tree. */ /* Remove node from the tree. */
extent_tree_szsnad_remove(chunks_szsnad, node); extent_tree_szad_remove(chunks_szad, node);
extent_tree_ad_remove(chunks_ad, node); extent_tree_ad_remove(chunks_ad, node);
arena_chunk_cache_maybe_remove(arena, node, cache); arena_chunk_cache_maybe_remove(arena, node, cache);
if (leadsize != 0) { if (leadsize != 0) {
/* Insert the leading space as a smaller chunk. */ /* Insert the leading space as a smaller chunk. */
extent_node_size_set(node, leadsize); extent_node_size_set(node, leadsize);
extent_tree_szsnad_insert(chunks_szsnad, node); extent_tree_szad_insert(chunks_szad, node);
extent_tree_ad_insert(chunks_ad, node); extent_tree_ad_insert(chunks_ad, node);
arena_chunk_cache_maybe_insert(arena, node, cache); arena_chunk_cache_maybe_insert(arena, node, cache);
node = NULL; node = NULL;
@ -275,42 +271,41 @@ chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
if (chunk_hooks->split(ret, size + trailsize, size, if (chunk_hooks->split(ret, size + trailsize, size,
trailsize, false, arena->ind)) { trailsize, false, arena->ind)) {
if (dalloc_node && node != NULL) if (dalloc_node && node != NULL)
arena_node_dalloc(tsdn, arena, node); arena_node_dalloc(arena, node);
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
chunk_record(tsdn, arena, chunk_hooks, chunks_szsnad, chunk_record(arena, chunk_hooks, chunks_szad, chunks_ad,
chunks_ad, cache, ret, size + trailsize, *sn, cache, ret, size + trailsize, zeroed, committed);
zeroed, committed);
return (NULL); return (NULL);
} }
/* Insert the trailing space as a smaller chunk. */ /* Insert the trailing space as a smaller chunk. */
if (node == NULL) { if (node == NULL) {
node = arena_node_alloc(tsdn, arena); node = arena_node_alloc(arena);
if (node == NULL) { if (node == NULL) {
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
chunk_record(tsdn, arena, chunk_hooks, chunk_record(arena, chunk_hooks, chunks_szad,
chunks_szsnad, chunks_ad, cache, ret, size chunks_ad, cache, ret, size + trailsize,
+ trailsize, *sn, zeroed, committed); zeroed, committed);
return (NULL); return (NULL);
} }
} }
extent_node_init(node, arena, (void *)((uintptr_t)(ret) + size), extent_node_init(node, arena, (void *)((uintptr_t)(ret) + size),
trailsize, *sn, zeroed, committed); trailsize, zeroed, committed);
extent_tree_szsnad_insert(chunks_szsnad, node); extent_tree_szad_insert(chunks_szad, node);
extent_tree_ad_insert(chunks_ad, node); extent_tree_ad_insert(chunks_ad, node);
arena_chunk_cache_maybe_insert(arena, node, cache); arena_chunk_cache_maybe_insert(arena, node, cache);
node = NULL; node = NULL;
} }
if (!committed && chunk_hooks->commit(ret, size, 0, size, arena->ind)) { if (!committed && chunk_hooks->commit(ret, size, 0, size, arena->ind)) {
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
chunk_record(tsdn, arena, chunk_hooks, chunks_szsnad, chunks_ad, chunk_record(arena, chunk_hooks, chunks_szad, chunks_ad, cache,
cache, ret, size, *sn, zeroed, committed); ret, size, zeroed, committed);
return (NULL); return (NULL);
} }
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
assert(dalloc_node || node != NULL); assert(dalloc_node || node != NULL);
if (dalloc_node && node != NULL) if (dalloc_node && node != NULL)
arena_node_dalloc(tsdn, arena, node); arena_node_dalloc(arena, node);
if (*zero) { if (*zero) {
if (!zeroed) if (!zeroed)
memset(ret, 0, size); memset(ret, 0, size);
@ -318,11 +313,10 @@ chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
size_t i; size_t i;
size_t *p = (size_t *)(uintptr_t)ret; size_t *p = (size_t *)(uintptr_t)ret;
JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, size);
for (i = 0; i < size / sizeof(size_t); i++) for (i = 0; i < size / sizeof(size_t); i++)
assert(p[i] == 0); assert(p[i] == 0);
} }
if (config_valgrind)
JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, size);
} }
return (ret); return (ret);
} }
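The lead/trail bookkeeping in chunk_recycle() above reduces to a little alignment arithmetic: carve the aligned request out of a larger free run, then reinsert whatever remains on either side. A sketch under the assumption of power-of-two alignment (names are not jemalloc's):

#include <stddef.h>
#include <stdint.h>

#define ALIGN_UP(p, a)    (((p) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

static void
carve(uintptr_t addr, size_t run_size, size_t size, size_t alignment,
    uintptr_t *ret, size_t *leadsize, size_t *trailsize)
{
    *ret = ALIGN_UP(addr, alignment);
    *leadsize = *ret - addr;                  /* reinserted if nonzero */
    *trailsize = run_size - *leadsize - size; /* reinserted if nonzero */
}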
@ -334,29 +328,39 @@ chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
* them if they are returned. * them if they are returned.
*/ */
static void * static void *
chunk_alloc_core(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size, chunk_alloc_core(arena_t *arena, void *new_addr, size_t size, size_t alignment,
size_t alignment, bool *zero, bool *commit, dss_prec_t dss_prec) bool *zero, bool *commit, dss_prec_t dss_prec)
{ {
void *ret; void *ret;
chunk_hooks_t chunk_hooks = CHUNK_HOOKS_INITIALIZER;
assert(size != 0); assert(size != 0);
assert((size & chunksize_mask) == 0); assert((size & chunksize_mask) == 0);
assert(alignment != 0); assert(alignment != 0);
assert((alignment & chunksize_mask) == 0); assert((alignment & chunksize_mask) == 0);
/* Retained. */
if ((ret = chunk_recycle(arena, &chunk_hooks,
&arena->chunks_szad_retained, &arena->chunks_ad_retained, false,
new_addr, size, alignment, zero, commit, true)) != NULL)
return (ret);
/* "primary" dss. */ /* "primary" dss. */
if (have_dss && dss_prec == dss_prec_primary && (ret = if (have_dss && dss_prec == dss_prec_primary && (ret =
chunk_alloc_dss(tsdn, arena, new_addr, size, alignment, zero, chunk_alloc_dss(arena, new_addr, size, alignment, zero, commit)) !=
commit)) != NULL)
return (ret);
/* mmap. */
if ((ret = chunk_alloc_mmap(new_addr, size, alignment, zero, commit)) !=
NULL) NULL)
return (ret); return (ret);
/*
* mmap. Requesting an address is not implemented for
* chunk_alloc_mmap(), so only call it if (new_addr == NULL).
*/
if (new_addr == NULL && (ret = chunk_alloc_mmap(size, alignment, zero,
commit)) != NULL)
return (ret);
/* "secondary" dss. */ /* "secondary" dss. */
if (have_dss && dss_prec == dss_prec_secondary && (ret = if (have_dss && dss_prec == dss_prec_secondary && (ret =
chunk_alloc_dss(tsdn, arena, new_addr, size, alignment, zero, chunk_alloc_dss(arena, new_addr, size, alignment, zero, commit)) !=
commit)) != NULL) NULL)
return (ret); return (ret);
/* All strategies for allocation failed. */ /* All strategies for allocation failed. */
@ -376,7 +380,7 @@ chunk_alloc_base(size_t size)
*/ */
zero = true; zero = true;
commit = true; commit = true;
ret = chunk_alloc_mmap(NULL, size, chunksize, &zero, &commit); ret = chunk_alloc_mmap(size, chunksize, &zero, &commit);
if (ret == NULL) if (ret == NULL)
return (NULL); return (NULL);
if (config_valgrind) if (config_valgrind)
@ -386,33 +390,37 @@ chunk_alloc_base(size_t size)
} }
void * void *
chunk_alloc_cache(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr,
void *new_addr, size_t size, size_t alignment, size_t *sn, bool *zero, size_t size, size_t alignment, bool *zero, bool dalloc_node)
bool *commit, bool dalloc_node)
{ {
void *ret; void *ret;
bool commit;
assert(size != 0); assert(size != 0);
assert((size & chunksize_mask) == 0); assert((size & chunksize_mask) == 0);
assert(alignment != 0); assert(alignment != 0);
assert((alignment & chunksize_mask) == 0); assert((alignment & chunksize_mask) == 0);
ret = chunk_recycle(tsdn, arena, chunk_hooks, commit = true;
&arena->chunks_szsnad_cached, &arena->chunks_ad_cached, true, ret = chunk_recycle(arena, chunk_hooks, &arena->chunks_szad_cached,
new_addr, size, alignment, sn, zero, commit, dalloc_node); &arena->chunks_ad_cached, true, new_addr, size, alignment, zero,
&commit, dalloc_node);
if (ret == NULL) if (ret == NULL)
return (NULL); return (NULL);
assert(commit);
if (config_valgrind) if (config_valgrind)
JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, size); JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, size);
return (ret); return (ret);
} }
static arena_t * static arena_t *
chunk_arena_get(tsdn_t *tsdn, unsigned arena_ind) chunk_arena_get(unsigned arena_ind)
{ {
arena_t *arena; arena_t *arena;
arena = arena_get(tsdn, arena_ind, false); /* Dodge tsd for a0 in order to avoid bootstrapping issues. */
arena = (arena_ind == 0) ? a0get() : arena_get(tsd_fetch(), arena_ind,
false, true);
/* /*
* The arena we're allocating on behalf of must have been initialized * The arena we're allocating on behalf of must have been initialized
* already. * already.
@ -422,12 +430,14 @@ chunk_arena_get(tsdn_t *tsdn, unsigned arena_ind)
} }
static void * static void *
chunk_alloc_default_impl(tsdn_t *tsdn, arena_t *arena, void *new_addr, chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero,
size_t size, size_t alignment, bool *zero, bool *commit) bool *commit, unsigned arena_ind)
{ {
void *ret; void *ret;
arena_t *arena;
ret = chunk_alloc_core(tsdn, arena, new_addr, size, alignment, zero, arena = chunk_arena_get(arena_ind);
ret = chunk_alloc_core(arena, new_addr, size, alignment, zero,
commit, arena->dss_prec); commit, arena->dss_prec);
if (ret == NULL) if (ret == NULL)
return (NULL); return (NULL);
@ -437,80 +447,26 @@ chunk_alloc_default_impl(tsdn_t *tsdn, arena_t *arena, void *new_addr,
return (ret); return (ret);
} }
static void *
chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero,
bool *commit, unsigned arena_ind)
{
tsdn_t *tsdn;
arena_t *arena;
tsdn = tsdn_fetch();
arena = chunk_arena_get(tsdn, arena_ind);
return (chunk_alloc_default_impl(tsdn, arena, new_addr, size, alignment,
zero, commit));
}
static void *
chunk_alloc_retained(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
void *new_addr, size_t size, size_t alignment, size_t *sn, bool *zero,
bool *commit)
{
void *ret;
assert(size != 0);
assert((size & chunksize_mask) == 0);
assert(alignment != 0);
assert((alignment & chunksize_mask) == 0);
ret = chunk_recycle(tsdn, arena, chunk_hooks,
&arena->chunks_szsnad_retained, &arena->chunks_ad_retained, false,
new_addr, size, alignment, sn, zero, commit, true);
if (config_stats && ret != NULL)
arena->stats.retained -= size;
return (ret);
}
void * void *
chunk_alloc_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr,
void *new_addr, size_t size, size_t alignment, size_t *sn, bool *zero, size_t size, size_t alignment, bool *zero, bool *commit)
bool *commit)
{ {
void *ret; void *ret;
chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks); chunk_hooks_assure_initialized(arena, chunk_hooks);
ret = chunk_hooks->alloc(new_addr, size, alignment, zero, commit,
ret = chunk_alloc_retained(tsdn, arena, chunk_hooks, new_addr, size, arena->ind);
alignment, sn, zero, commit); if (ret == NULL)
if (ret == NULL) { return (NULL);
if (chunk_hooks->alloc == chunk_alloc_default) { if (config_valgrind && chunk_hooks->alloc != chunk_alloc_default)
/* Call directly to propagate tsdn. */ JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, chunksize);
ret = chunk_alloc_default_impl(tsdn, arena, new_addr,
size, alignment, zero, commit);
} else {
ret = chunk_hooks->alloc(new_addr, size, alignment,
zero, commit, arena->ind);
}
if (ret == NULL)
return (NULL);
*sn = arena_extent_sn_next(arena);
if (config_valgrind && chunk_hooks->alloc !=
chunk_alloc_default)
JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, chunksize);
}
return (ret); return (ret);
} }
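Flattened side by side, the chunk_alloc_wrapper() hunk above is hard to follow. On the 4.4.x side the control flow is: recycle retained memory first, fall back to the installed alloc hook, and stamp hook-allocated chunks with a fresh serial number. A pseudocode-level sketch with stand-in helpers (alloc_retained(), hook_alloc(), and sn_next() are not jemalloc APIs):

#include <stddef.h>

void    *alloc_retained(size_t size, size_t alignment, size_t *sn);
void    *hook_alloc(size_t size, size_t alignment);
size_t     sn_next(void);

static void *
alloc_chunk(size_t size, size_t alignment, size_t *sn)
{
    /* Retained runs keep the serial number they were freed with. */
    void *ret = alloc_retained(size, alignment, sn);
    if (ret == NULL) {
        ret = hook_alloc(size, alignment);
        if (ret == NULL)
            return (NULL);
        *sn = sn_next();    /* brand-new mapping gets a new sn */
    }
    return (ret);
}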
static void static void
chunk_record(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks,
extent_tree_t *chunks_szsnad, extent_tree_t *chunks_ad, bool cache, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache,
void *chunk, size_t size, size_t sn, bool zeroed, bool committed) void *chunk, size_t size, bool zeroed, bool committed)
{ {
bool unzeroed; bool unzeroed;
extent_node_t *node, *prev; extent_node_t *node, *prev;
@ -520,9 +476,9 @@ chunk_record(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
unzeroed = cache || !zeroed; unzeroed = cache || !zeroed;
JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size); JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size);
malloc_mutex_lock(tsdn, &arena->chunks_mtx); malloc_mutex_lock(&arena->chunks_mtx);
chunk_hooks_assure_initialized_locked(tsdn, arena, chunk_hooks); chunk_hooks_assure_initialized_locked(arena, chunk_hooks);
extent_node_init(&key, arena, (void *)((uintptr_t)chunk + size), 0, 0, extent_node_init(&key, arena, (void *)((uintptr_t)chunk + size), 0,
false, false); false, false);
node = extent_tree_ad_nsearch(chunks_ad, &key); node = extent_tree_ad_nsearch(chunks_ad, &key);
/* Try to coalesce forward. */ /* Try to coalesce forward. */
@ -534,21 +490,19 @@ chunk_record(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
/* /*
* Coalesce chunk with the following address range. This does * Coalesce chunk with the following address range. This does
* not change the position within chunks_ad, so only * not change the position within chunks_ad, so only
* remove/insert from/into chunks_szsnad. * remove/insert from/into chunks_szad.
*/ */
extent_tree_szsnad_remove(chunks_szsnad, node); extent_tree_szad_remove(chunks_szad, node);
arena_chunk_cache_maybe_remove(arena, node, cache); arena_chunk_cache_maybe_remove(arena, node, cache);
extent_node_addr_set(node, chunk); extent_node_addr_set(node, chunk);
extent_node_size_set(node, size + extent_node_size_get(node)); extent_node_size_set(node, size + extent_node_size_get(node));
if (sn < extent_node_sn_get(node))
extent_node_sn_set(node, sn);
extent_node_zeroed_set(node, extent_node_zeroed_get(node) && extent_node_zeroed_set(node, extent_node_zeroed_get(node) &&
!unzeroed); !unzeroed);
extent_tree_szsnad_insert(chunks_szsnad, node); extent_tree_szad_insert(chunks_szad, node);
arena_chunk_cache_maybe_insert(arena, node, cache); arena_chunk_cache_maybe_insert(arena, node, cache);
} else { } else {
/* Coalescing forward failed, so insert a new node. */ /* Coalescing forward failed, so insert a new node. */
node = arena_node_alloc(tsdn, arena); node = arena_node_alloc(arena);
if (node == NULL) { if (node == NULL) {
/* /*
* Node allocation failed, which is an exceedingly * Node allocation failed, which is an exceedingly
@ -557,15 +511,15 @@ chunk_record(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
* a virtual memory leak. * a virtual memory leak.
*/ */
if (cache) { if (cache) {
chunk_purge_wrapper(tsdn, arena, chunk_hooks, chunk_purge_wrapper(arena, chunk_hooks, chunk,
chunk, size, 0, size); size, 0, size);
} }
goto label_return; goto label_return;
} }
extent_node_init(node, arena, chunk, size, sn, !unzeroed, extent_node_init(node, arena, chunk, size, !unzeroed,
committed); committed);
extent_tree_ad_insert(chunks_ad, node); extent_tree_ad_insert(chunks_ad, node);
extent_tree_szsnad_insert(chunks_szsnad, node); extent_tree_szad_insert(chunks_szad, node);
arena_chunk_cache_maybe_insert(arena, node, cache); arena_chunk_cache_maybe_insert(arena, node, cache);
} }
@ -579,33 +533,31 @@ chunk_record(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
/* /*
* Coalesce chunk with the previous address range. This does * Coalesce chunk with the previous address range. This does
* not change the position within chunks_ad, so only * not change the position within chunks_ad, so only
* remove/insert node from/into chunks_szsnad. * remove/insert node from/into chunks_szad.
*/ */
extent_tree_szsnad_remove(chunks_szsnad, prev); extent_tree_szad_remove(chunks_szad, prev);
extent_tree_ad_remove(chunks_ad, prev); extent_tree_ad_remove(chunks_ad, prev);
arena_chunk_cache_maybe_remove(arena, prev, cache); arena_chunk_cache_maybe_remove(arena, prev, cache);
extent_tree_szsnad_remove(chunks_szsnad, node); extent_tree_szad_remove(chunks_szad, node);
arena_chunk_cache_maybe_remove(arena, node, cache); arena_chunk_cache_maybe_remove(arena, node, cache);
extent_node_addr_set(node, extent_node_addr_get(prev)); extent_node_addr_set(node, extent_node_addr_get(prev));
extent_node_size_set(node, extent_node_size_get(prev) + extent_node_size_set(node, extent_node_size_get(prev) +
extent_node_size_get(node)); extent_node_size_get(node));
if (extent_node_sn_get(prev) < extent_node_sn_get(node))
extent_node_sn_set(node, extent_node_sn_get(prev));
extent_node_zeroed_set(node, extent_node_zeroed_get(prev) && extent_node_zeroed_set(node, extent_node_zeroed_get(prev) &&
extent_node_zeroed_get(node)); extent_node_zeroed_get(node));
extent_tree_szsnad_insert(chunks_szsnad, node); extent_tree_szad_insert(chunks_szad, node);
arena_chunk_cache_maybe_insert(arena, node, cache); arena_chunk_cache_maybe_insert(arena, node, cache);
arena_node_dalloc(tsdn, arena, prev); arena_node_dalloc(arena, prev);
} }
label_return: label_return:
malloc_mutex_unlock(tsdn, &arena->chunks_mtx); malloc_mutex_unlock(&arena->chunks_mtx);
} }
void void
chunk_dalloc_cache(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk,
void *chunk, size_t size, size_t sn, bool committed) size_t size, bool committed)
{ {
assert(chunk != NULL); assert(chunk != NULL);
@ -613,49 +565,24 @@ chunk_dalloc_cache(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
assert(size != 0); assert(size != 0);
assert((size & chunksize_mask) == 0); assert((size & chunksize_mask) == 0);
chunk_record(tsdn, arena, chunk_hooks, &arena->chunks_szsnad_cached, chunk_record(arena, chunk_hooks, &arena->chunks_szad_cached,
&arena->chunks_ad_cached, true, chunk, size, sn, false, &arena->chunks_ad_cached, true, chunk, size, false, committed);
committed); arena_maybe_purge(arena);
arena_maybe_purge(tsdn, arena);
}
static bool
chunk_dalloc_default_impl(void *chunk, size_t size)
{
if (!have_dss || !chunk_in_dss(chunk))
return (chunk_dalloc_mmap(chunk, size));
return (true);
}
static bool
chunk_dalloc_default(void *chunk, size_t size, bool committed,
unsigned arena_ind)
{
return (chunk_dalloc_default_impl(chunk, size));
} }
void void
chunk_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_dalloc_arena(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk,
void *chunk, size_t size, size_t sn, bool zeroed, bool committed) size_t size, bool zeroed, bool committed)
{ {
bool err;
assert(chunk != NULL); assert(chunk != NULL);
assert(CHUNK_ADDR2BASE(chunk) == chunk); assert(CHUNK_ADDR2BASE(chunk) == chunk);
assert(size != 0); assert(size != 0);
assert((size & chunksize_mask) == 0); assert((size & chunksize_mask) == 0);
chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks); chunk_hooks_assure_initialized(arena, chunk_hooks);
/* Try to deallocate. */ /* Try to deallocate. */
if (chunk_hooks->dalloc == chunk_dalloc_default) { if (!chunk_hooks->dalloc(chunk, size, committed, arena->ind))
/* Call directly to propagate tsdn. */
err = chunk_dalloc_default_impl(chunk, size);
} else
err = chunk_hooks->dalloc(chunk, size, committed, arena->ind);
if (!err)
return; return;
/* Try to decommit; purge if that fails. */ /* Try to decommit; purge if that fails. */
if (committed) { if (committed) {
@ -664,12 +591,29 @@ chunk_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
} }
zeroed = !committed || !chunk_hooks->purge(chunk, size, 0, size, zeroed = !committed || !chunk_hooks->purge(chunk, size, 0, size,
arena->ind); arena->ind);
chunk_record(tsdn, arena, chunk_hooks, &arena->chunks_szsnad_retained, chunk_record(arena, chunk_hooks, &arena->chunks_szad_retained,
&arena->chunks_ad_retained, false, chunk, size, sn, zeroed, &arena->chunks_ad_retained, false, chunk, size, zeroed, committed);
committed); }
if (config_stats) static bool
arena->stats.retained += size; chunk_dalloc_default(void *chunk, size_t size, bool committed,
unsigned arena_ind)
{
if (!have_dss || !chunk_in_dss(chunk))
return (chunk_dalloc_mmap(chunk, size));
return (true);
}
void
chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk,
size_t size, bool committed)
{
chunk_hooks_assure_initialized(arena, chunk_hooks);
chunk_hooks->dalloc(chunk, size, committed, arena->ind);
if (config_valgrind && chunk_hooks->dalloc != chunk_dalloc_default)
JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size);
} }
static bool static bool
@ -690,9 +634,8 @@ chunk_decommit_default(void *chunk, size_t size, size_t offset, size_t length,
length)); length));
} }
static bool bool
chunk_purge_default(void *chunk, size_t size, size_t offset, size_t length, chunk_purge_arena(arena_t *arena, void *chunk, size_t offset, size_t length)
unsigned arena_ind)
{ {
assert(chunk != NULL); assert(chunk != NULL);
@ -705,12 +648,21 @@ chunk_purge_default(void *chunk, size_t size, size_t offset, size_t length,
length)); length));
} }
bool static bool
chunk_purge_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_purge_default(void *chunk, size_t size, size_t offset, size_t length,
void *chunk, size_t size, size_t offset, size_t length) unsigned arena_ind)
{ {
chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks); return (chunk_purge_arena(chunk_arena_get(arena_ind), chunk, offset,
length));
}
bool
chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk,
size_t size, size_t offset, size_t length)
{
chunk_hooks_assure_initialized(arena, chunk_hooks);
return (chunk_hooks->purge(chunk, size, offset, length, arena->ind)); return (chunk_hooks->purge(chunk, size, offset, length, arena->ind));
} }
@ -724,31 +676,24 @@ chunk_split_default(void *chunk, size_t size, size_t size_a, size_t size_b,
return (false); return (false);
} }
static bool
chunk_merge_default_impl(void *chunk_a, void *chunk_b)
{
if (!maps_coalesce)
return (true);
if (have_dss && !chunk_dss_mergeable(chunk_a, chunk_b))
return (true);
return (false);
}
static bool static bool
chunk_merge_default(void *chunk_a, size_t size_a, void *chunk_b, size_t size_b, chunk_merge_default(void *chunk_a, size_t size_a, void *chunk_b, size_t size_b,
bool committed, unsigned arena_ind) bool committed, unsigned arena_ind)
{ {
return (chunk_merge_default_impl(chunk_a, chunk_b)); if (!maps_coalesce)
return (true);
if (have_dss && chunk_in_dss(chunk_a) != chunk_in_dss(chunk_b))
return (true);
return (false);
} }
static rtree_node_elm_t * static rtree_node_elm_t *
chunks_rtree_node_alloc(size_t nelms) chunks_rtree_node_alloc(size_t nelms)
{ {
return ((rtree_node_elm_t *)base_alloc(TSDN_NULL, nelms * return ((rtree_node_elm_t *)base_alloc(nelms *
sizeof(rtree_node_elm_t))); sizeof(rtree_node_elm_t)));
} }
@ -771,7 +716,7 @@ chunk_boot(void)
* so pages_map will always take fast path. * so pages_map will always take fast path.
*/ */
if (!opt_lg_chunk) { if (!opt_lg_chunk) {
opt_lg_chunk = ffs_u((unsigned)info.dwAllocationGranularity) opt_lg_chunk = jemalloc_ffs((int)info.dwAllocationGranularity)
- 1; - 1;
} }
#else #else
@ -785,11 +730,32 @@ chunk_boot(void)
chunksize_mask = chunksize - 1; chunksize_mask = chunksize - 1;
chunk_npages = (chunksize >> LG_PAGE); chunk_npages = (chunksize >> LG_PAGE);
if (have_dss) if (have_dss && chunk_dss_boot())
chunk_dss_boot(); return (true);
if (rtree_new(&chunks_rtree, (unsigned)((ZU(1) << (LG_SIZEOF_PTR+3)) - if (rtree_new(&chunks_rtree, (ZU(1) << (LG_SIZEOF_PTR+3)) -
opt_lg_chunk), chunks_rtree_node_alloc, NULL)) opt_lg_chunk, chunks_rtree_node_alloc, NULL))
return (true); return (true);
return (false); return (false);
} }
void
chunk_prefork(void)
{
chunk_dss_prefork();
}
void
chunk_postfork_parent(void)
{
chunk_dss_postfork_parent();
}
void
chunk_postfork_child(void)
{
chunk_dss_postfork_child();
}
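chunk_hooks_set() above copies the hook table one function pointer at a time under the chunks mutex, so a racing reader may observe a mix of old and new hooks but never a torn pointer. For reference, a struct mirroring the 4.x chunk_hooks_t signatures that appear throughout this diff:

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    void    *(*alloc)(void *, size_t, size_t, bool *, bool *, unsigned);
    bool    (*dalloc)(void *, size_t, bool, unsigned);
    bool    (*commit)(void *, size_t, size_t, size_t, unsigned);
    bool    (*decommit)(void *, size_t, size_t, size_t, unsigned);
    bool    (*purge)(void *, size_t, size_t, size_t, unsigned);
    bool    (*split)(void *, size_t, size_t, size_t, bool, unsigned);
    bool    (*merge)(void *, size_t, void *, size_t, bool, unsigned);
} chunk_hooks_sketch_t;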


@ -10,19 +10,20 @@ const char *dss_prec_names[] = {
"N/A" "N/A"
}; };
/* Current dss precedence default, used when creating new arenas. */
static dss_prec_t dss_prec_default = DSS_PREC_DEFAULT;
/* /*
* Current dss precedence default, used when creating new arenas. NB: This is * Protects sbrk() calls. This avoids malloc races among threads, though it
* stored as unsigned rather than dss_prec_t because in principle there's no * does not protect against races with threads that call sbrk() directly.
* guarantee that sizeof(dss_prec_t) is the same as sizeof(unsigned), and we use
* atomic operations to synchronize the setting.
*/ */
static unsigned dss_prec_default = (unsigned)DSS_PREC_DEFAULT; static malloc_mutex_t dss_mtx;
/* Base address of the DSS. */ /* Base address of the DSS. */
static void *dss_base; static void *dss_base;
/* Atomic boolean indicating whether the DSS is exhausted. */ /* Current end of the DSS, or ((void *)-1) if the DSS is exhausted. */
static unsigned dss_exhausted; static void *dss_prev;
/* Atomic current upper limit on DSS addresses. */ /* Current upper limit on DSS addresses. */
static void *dss_max; static void *dss_max;
/******************************************************************************/ /******************************************************************************/
@ -46,7 +47,9 @@ chunk_dss_prec_get(void)
if (!have_dss) if (!have_dss)
return (dss_prec_disabled); return (dss_prec_disabled);
ret = (dss_prec_t)atomic_read_u(&dss_prec_default); malloc_mutex_lock(&dss_mtx);
ret = dss_prec_default;
malloc_mutex_unlock(&dss_mtx);
return (ret); return (ret);
} }
@ -56,46 +59,15 @@ chunk_dss_prec_set(dss_prec_t dss_prec)
if (!have_dss) if (!have_dss)
return (dss_prec != dss_prec_disabled); return (dss_prec != dss_prec_disabled);
atomic_write_u(&dss_prec_default, (unsigned)dss_prec); malloc_mutex_lock(&dss_mtx);
dss_prec_default = dss_prec;
malloc_mutex_unlock(&dss_mtx);
return (false); return (false);
} }
static void *
chunk_dss_max_update(void *new_addr)
{
void *max_cur;
spin_t spinner;
/*
* Get the current end of the DSS as max_cur and assure that dss_max is
* up to date.
*/
spin_init(&spinner);
while (true) {
void *max_prev = atomic_read_p(&dss_max);
max_cur = chunk_dss_sbrk(0);
if ((uintptr_t)max_prev > (uintptr_t)max_cur) {
/*
* Another thread optimistically updated dss_max. Wait
* for it to finish.
*/
spin_adaptive(&spinner);
continue;
}
if (!atomic_cas_p(&dss_max, max_prev, max_cur))
break;
}
/* Fixed new_addr can only be supported if it is at the edge of DSS. */
if (new_addr != NULL && max_cur != new_addr)
return (NULL);
return (max_cur);
}
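chunk_dss_max_update() above (4.4.x side) keeps dss_max fresh without a mutex: re-read sbrk(0), spin while another thread's optimistic update is ahead of the kernel's answer, then CAS the new value in. A C11-atomics sketch of the same loop; jemalloc itself uses atomic_read_p(), atomic_cas_p(), and spin_adaptive():

#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>

static _Atomic(void *) dss_max;

static void *
dss_max_update(void)
{
    for (;;) {
        void *prev = atomic_load(&dss_max);
        void *cur = sbrk(0);
        if ((uintptr_t)prev > (uintptr_t)cur)
            continue;    /* another thread's optimistic update is in flight */
        if (atomic_compare_exchange_weak(&dss_max, &prev, cur))
            return (cur);
    }
}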
void * void *
chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size, chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,
size_t alignment, bool *zero, bool *commit) bool *zero, bool *commit)
{ {
cassert(have_dss); cassert(have_dss);
assert(size > 0 && (size & chunksize_mask) == 0); assert(size > 0 && (size & chunksize_mask) == 0);
@ -108,20 +80,28 @@ chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size,
if ((intptr_t)size < 0) if ((intptr_t)size < 0)
return (NULL); return (NULL);
if (!atomic_read_u(&dss_exhausted)) { malloc_mutex_lock(&dss_mtx);
if (dss_prev != (void *)-1) {
/* /*
* The loop is necessary to recover from races with other * The loop is necessary to recover from races with other
* threads that are using the DSS for something other than * threads that are using the DSS for something other than
* malloc. * malloc.
*/ */
while (true) { do {
void *ret, *cpad, *max_cur, *dss_next, *dss_prev; void *ret, *cpad, *dss_next;
size_t gap_size, cpad_size; size_t gap_size, cpad_size;
intptr_t incr; intptr_t incr;
/* Avoid an unnecessary system call. */
if (new_addr != NULL && dss_max != new_addr)
break;
max_cur = chunk_dss_max_update(new_addr); /* Get the current end of the DSS. */
if (max_cur == NULL) dss_max = chunk_dss_sbrk(0);
goto label_oom;
/* Make sure the earlier condition still holds. */
if (new_addr != NULL && dss_max != new_addr)
break;
/* /*
* Calculate how much padding is necessary to * Calculate how much padding is necessary to
@ -140,29 +120,22 @@ chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size,
cpad_size = (uintptr_t)ret - (uintptr_t)cpad; cpad_size = (uintptr_t)ret - (uintptr_t)cpad;
dss_next = (void *)((uintptr_t)ret + size); dss_next = (void *)((uintptr_t)ret + size);
if ((uintptr_t)ret < (uintptr_t)dss_max || if ((uintptr_t)ret < (uintptr_t)dss_max ||
(uintptr_t)dss_next < (uintptr_t)dss_max) (uintptr_t)dss_next < (uintptr_t)dss_max) {
goto label_oom; /* Wrap-around. */ /* Wrap-around. */
malloc_mutex_unlock(&dss_mtx);
return (NULL);
}
incr = gap_size + cpad_size + size; incr = gap_size + cpad_size + size;
/*
* Optimistically update dss_max, and roll back below if
* sbrk() fails. No other thread will try to extend the
* DSS while dss_max is greater than the current DSS
* max reported by sbrk(0).
*/
if (atomic_cas_p(&dss_max, max_cur, dss_next))
continue;
/* Try to allocate. */
dss_prev = chunk_dss_sbrk(incr); dss_prev = chunk_dss_sbrk(incr);
if (dss_prev == max_cur) { if (dss_prev == dss_max) {
/* Success. */ /* Success. */
dss_max = dss_next;
malloc_mutex_unlock(&dss_mtx);
if (cpad_size != 0) { if (cpad_size != 0) {
chunk_hooks_t chunk_hooks = chunk_hooks_t chunk_hooks =
CHUNK_HOOKS_INITIALIZER; CHUNK_HOOKS_INITIALIZER;
chunk_dalloc_wrapper(tsdn, arena, chunk_dalloc_wrapper(arena,
&chunk_hooks, cpad, cpad_size, &chunk_hooks, cpad, cpad_size,
arena_extent_sn_next(arena), false,
true); true);
} }
if (*zero) { if (*zero) {
@ -174,65 +147,68 @@ chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size,
*commit = pages_decommit(ret, size); *commit = pages_decommit(ret, size);
return (ret); return (ret);
} }
} while (dss_prev != (void *)-1);
/*
* Failure, whether due to OOM or a race with a raw
* sbrk() call from outside the allocator. Try to roll
* back optimistic dss_max update; if rollback fails,
* it's due to another caller of this function having
* succeeded since this invocation started, in which
* case rollback is not necessary.
*/
atomic_cas_p(&dss_max, dss_next, max_cur);
if (dss_prev == (void *)-1) {
/* OOM. */
atomic_write_u(&dss_exhausted, (unsigned)true);
goto label_oom;
}
}
} }
label_oom: malloc_mutex_unlock(&dss_mtx);
return (NULL); return (NULL);
} }
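The incr passed to sbrk() in chunk_alloc_dss() above is the sum of three pieces: the gap padding the current break up to a chunk boundary, the cpad padding that up to the caller's alignment, and the request itself, so one sbrk() call both aligns and allocates. A simplified arithmetic sketch, assuming power-of-two sizes:

#include <stddef.h>
#include <stdint.h>

#define ALIGN_UP(p, a)    (((p) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

static intptr_t
dss_incr(uintptr_t brk, size_t chunksize, size_t alignment, size_t size,
    uintptr_t *ret)
{
    uintptr_t gap_end = ALIGN_UP(brk, chunksize);    /* gap: chunk-align */
    *ret = ALIGN_UP(gap_end, alignment);             /* cpad: caller's alignment */
    /* gap + cpad + size: everything sbrk() must extend the DSS by. */
    return ((intptr_t)((*ret - brk) + size));
}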
static bool
chunk_in_dss_helper(void *chunk, void *max)
{
return ((uintptr_t)chunk >= (uintptr_t)dss_base && (uintptr_t)chunk <
(uintptr_t)max);
}
bool bool
chunk_in_dss(void *chunk) chunk_in_dss(void *chunk)
{ {
bool ret;
cassert(have_dss); cassert(have_dss);
return (chunk_in_dss_helper(chunk, atomic_read_p(&dss_max))); malloc_mutex_lock(&dss_mtx);
if ((uintptr_t)chunk >= (uintptr_t)dss_base
&& (uintptr_t)chunk < (uintptr_t)dss_max)
ret = true;
else
ret = false;
malloc_mutex_unlock(&dss_mtx);
return (ret);
} }
bool bool
chunk_dss_mergeable(void *chunk_a, void *chunk_b)
{
void *max;
cassert(have_dss);
max = atomic_read_p(&dss_max);
return (chunk_in_dss_helper(chunk_a, max) ==
chunk_in_dss_helper(chunk_b, max));
}
void
chunk_dss_boot(void) chunk_dss_boot(void)
{ {
cassert(have_dss); cassert(have_dss);
if (malloc_mutex_init(&dss_mtx))
return (true);
dss_base = chunk_dss_sbrk(0); dss_base = chunk_dss_sbrk(0);
dss_exhausted = (unsigned)(dss_base == (void *)-1); dss_prev = dss_base;
dss_max = dss_base; dss_max = dss_base;
return (false);
}
void
chunk_dss_prefork(void)
{
if (have_dss)
malloc_mutex_prefork(&dss_mtx);
}
void
chunk_dss_postfork_parent(void)
{
if (have_dss)
malloc_mutex_postfork_parent(&dss_mtx);
}
void
chunk_dss_postfork_child(void)
{
if (have_dss)
malloc_mutex_postfork_child(&dss_mtx);
} }
/******************************************************************************/ /******************************************************************************/


@ -16,22 +16,23 @@ chunk_alloc_mmap_slow(size_t size, size_t alignment, bool *zero, bool *commit)
do { do {
void *pages; void *pages;
size_t leadsize; size_t leadsize;
pages = pages_map(NULL, alloc_size, commit); pages = pages_map(NULL, alloc_size);
if (pages == NULL) if (pages == NULL)
return (NULL); return (NULL);
leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment) - leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment) -
(uintptr_t)pages; (uintptr_t)pages;
ret = pages_trim(pages, alloc_size, leadsize, size, commit); ret = pages_trim(pages, alloc_size, leadsize, size);
} while (ret == NULL); } while (ret == NULL);
assert(ret != NULL); assert(ret != NULL);
*zero = true; *zero = true;
if (!*commit)
*commit = pages_decommit(ret, size);
return (ret); return (ret);
} }
void * void *
chunk_alloc_mmap(void *new_addr, size_t size, size_t alignment, bool *zero, chunk_alloc_mmap(size_t size, size_t alignment, bool *zero, bool *commit)
bool *commit)
{ {
void *ret; void *ret;
size_t offset; size_t offset;
@ -52,10 +53,9 @@ chunk_alloc_mmap(void *new_addr, size_t size, size_t alignment, bool *zero,
assert(alignment != 0); assert(alignment != 0);
assert((alignment & chunksize_mask) == 0); assert((alignment & chunksize_mask) == 0);
ret = pages_map(new_addr, size, commit); ret = pages_map(NULL, size);
if (ret == NULL || ret == new_addr) if (ret == NULL)
return (ret); return (NULL);
assert(new_addr == NULL);
offset = ALIGNMENT_ADDR2OFFSET(ret, alignment); offset = ALIGNMENT_ADDR2OFFSET(ret, alignment);
if (offset != 0) { if (offset != 0) {
pages_unmap(ret, size); pages_unmap(ret, size);
@ -64,6 +64,8 @@ chunk_alloc_mmap(void *new_addr, size_t size, size_t alignment, bool *zero,
assert(ret != NULL); assert(ret != NULL);
*zero = true; *zero = true;
if (!*commit)
*commit = pages_decommit(ret, size);
return (ret); return (ret);
} }
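chunk_alloc_mmap_slow() above is the classic map-then-trim dance: over-map by (alignment - page) so an aligned start must fall somewhere inside the mapping, then unmap the misaligned head and tail. A POSIX sketch with error handling elided; jemalloc's pages_trim() does this more carefully:

#include <stdint.h>
#include <sys/mman.h>

static void *
map_aligned(size_t size, size_t alignment, size_t pagesize)
{
    size_t alloc_size = size + alignment - pagesize;
    char *pages = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pages == MAP_FAILED)
        return (NULL);
    uintptr_t lead = (-(uintptr_t)pages) & (alignment - 1);
    if (lead != 0)
        munmap(pages, lead);                      /* trim head */
    size_t trail = alloc_size - lead - size;
    if (trail != 0)
        munmap(pages + lead + size, trail);       /* trim tail */
    return (pages + lead);
}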


@ -99,8 +99,7 @@ ckh_try_bucket_insert(ckh_t *ckh, size_t bucket, const void *key,
* Cycle through the cells in the bucket, starting at a random position. * Cycle through the cells in the bucket, starting at a random position.
* The randomness avoids worst-case search overhead as buckets fill up. * The randomness avoids worst-case search overhead as buckets fill up.
*/ */
offset = (unsigned)prng_lg_range_u64(&ckh->prng_state, prng32(offset, LG_CKH_BUCKET_CELLS, ckh->prng_state, CKH_A, CKH_C);
LG_CKH_BUCKET_CELLS);
for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) { for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) {
cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) +
((i + offset) & ((ZU(1) << LG_CKH_BUCKET_CELLS) - 1))]; ((i + offset) & ((ZU(1) << LG_CKH_BUCKET_CELLS) - 1))];
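The randomized probe in ckh_try_bucket_insert() above starts each bucket scan at a PRNG-drawn offset, so full buckets do not always pay a worst-case linear scan from cell 0. A sketch of the index arithmetic for an illustrative 4-cell bucket, with the offset drawn once per scan:

#include <stddef.h>

#define LG_BUCKET_CELLS    2
#define BUCKET_CELLS    (1U << LG_BUCKET_CELLS)

static size_t
cell_index(size_t bucket, unsigned offset, size_t i)
{
    /* i = 0..BUCKET_CELLS-1; offset randomizes the starting cell. */
    return ((bucket << LG_BUCKET_CELLS) +
        ((i + offset) & (BUCKET_CELLS - 1)));
}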
@ -142,8 +141,7 @@ ckh_evict_reloc_insert(ckh_t *ckh, size_t argbucket, void const **argkey,
* were an item for which both hashes indicated the same * were an item for which both hashes indicated the same
* bucket. * bucket.
*/ */
i = (unsigned)prng_lg_range_u64(&ckh->prng_state, prng32(i, LG_CKH_BUCKET_CELLS, ckh->prng_state, CKH_A, CKH_C);
LG_CKH_BUCKET_CELLS);
cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i]; cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i];
assert(cell->key != NULL); assert(cell->key != NULL);
@ -249,7 +247,8 @@ ckh_grow(tsd_t *tsd, ckh_t *ckh)
{ {
bool ret; bool ret;
ckhc_t *tab, *ttab; ckhc_t *tab, *ttab;
unsigned lg_prevbuckets, lg_curcells; size_t lg_curcells;
unsigned lg_prevbuckets;
#ifdef CKH_COUNT #ifdef CKH_COUNT
ckh->ngrows++; ckh->ngrows++;
@ -267,12 +266,12 @@ ckh_grow(tsd_t *tsd, ckh_t *ckh)
lg_curcells++; lg_curcells++;
usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE); usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE);
if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) { if (usize == 0) {
ret = true; ret = true;
goto label_return; goto label_return;
} }
tab = (ckhc_t *)ipallocztm(tsd_tsdn(tsd), usize, CACHELINE, tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL,
true, NULL, true, arena_ichoose(tsd, NULL)); true, NULL);
if (tab == NULL) { if (tab == NULL) {
ret = true; ret = true;
goto label_return; goto label_return;
@ -284,12 +283,12 @@ ckh_grow(tsd_t *tsd, ckh_t *ckh)
ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;
if (!ckh_rebuild(ckh, tab)) { if (!ckh_rebuild(ckh, tab)) {
idalloctm(tsd_tsdn(tsd), tab, NULL, true, true); idalloctm(tsd, tab, tcache_get(tsd, false), true);
break; break;
} }
/* Rebuilding failed, so back out partially rebuilt table. */ /* Rebuilding failed, so back out partially rebuilt table. */
idalloctm(tsd_tsdn(tsd), ckh->tab, NULL, true, true); idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true);
ckh->tab = tab; ckh->tab = tab;
ckh->lg_curbuckets = lg_prevbuckets; ckh->lg_curbuckets = lg_prevbuckets;
} }
@ -303,8 +302,8 @@ static void
ckh_shrink(tsd_t *tsd, ckh_t *ckh) ckh_shrink(tsd_t *tsd, ckh_t *ckh)
{ {
ckhc_t *tab, *ttab; ckhc_t *tab, *ttab;
size_t usize; size_t lg_curcells, usize;
unsigned lg_prevbuckets, lg_curcells; unsigned lg_prevbuckets;
/* /*
* It is possible (though unlikely, given well behaved hashes) that the * It is possible (though unlikely, given well behaved hashes) that the
@ -313,10 +312,10 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
lg_prevbuckets = ckh->lg_curbuckets; lg_prevbuckets = ckh->lg_curbuckets;
lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 1; lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 1;
usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE); usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE);
if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) if (usize == 0)
return; return;
tab = (ckhc_t *)ipallocztm(tsd_tsdn(tsd), usize, CACHELINE, true, NULL, tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, true,
true, arena_ichoose(tsd, NULL)); NULL);
if (tab == NULL) { if (tab == NULL) {
/* /*
* An OOM error isn't worth propagating, since it doesn't * An OOM error isn't worth propagating, since it doesn't
@ -331,7 +330,7 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;
if (!ckh_rebuild(ckh, tab)) { if (!ckh_rebuild(ckh, tab)) {
idalloctm(tsd_tsdn(tsd), tab, NULL, true, true); idalloctm(tsd, tab, tcache_get(tsd, false), true);
#ifdef CKH_COUNT #ifdef CKH_COUNT
ckh->nshrinks++; ckh->nshrinks++;
#endif #endif
@ -339,7 +338,7 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
} }
/* Rebuilding failed, so back out partially rebuilt table. */ /* Rebuilding failed, so back out partially rebuilt table. */
idalloctm(tsd_tsdn(tsd), ckh->tab, NULL, true, true); idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true);
ckh->tab = tab; ckh->tab = tab;
ckh->lg_curbuckets = lg_prevbuckets; ckh->lg_curbuckets = lg_prevbuckets;
#ifdef CKH_COUNT #ifdef CKH_COUNT
@ -388,12 +387,12 @@ ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash,
ckh->keycomp = keycomp; ckh->keycomp = keycomp;
usize = sa2u(sizeof(ckhc_t) << lg_mincells, CACHELINE); usize = sa2u(sizeof(ckhc_t) << lg_mincells, CACHELINE);
if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) { if (usize == 0) {
ret = true; ret = true;
goto label_return; goto label_return;
} }
ckh->tab = (ckhc_t *)ipallocztm(tsd_tsdn(tsd), usize, CACHELINE, true, ckh->tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, true,
NULL, true, arena_ichoose(tsd, NULL)); NULL);
if (ckh->tab == NULL) { if (ckh->tab == NULL) {
ret = true; ret = true;
goto label_return; goto label_return;
@ -422,9 +421,9 @@ ckh_delete(tsd_t *tsd, ckh_t *ckh)
(unsigned long long)ckh->nrelocs); (unsigned long long)ckh->nrelocs);
#endif #endif
idalloctm(tsd_tsdn(tsd), ckh->tab, NULL, true, true); idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true);
if (config_debug) if (config_debug)
memset(ckh, JEMALLOC_FREE_JUNK, sizeof(ckh_t)); memset(ckh, 0x5a, sizeof(ckh_t));
} }
size_t size_t
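Note on the prng change above: jemalloc 4.4's prng_lg_range_u64() and 4.0's prng32() are both linear congruential generators, and the call sites only need a pseudo-random index in [0, 2^lg_range). A minimal standalone sketch of the technique follows; the multiplier and increment are Knuth's MMIX constants, used purely for illustration, not jemalloc's actual parameters.

#include <stdint.h>

/*
 * Advance an LCG and keep the high lg_range bits, which are the
 * best-distributed bits of an LCG state.
 */
static uint64_t
lcg_lg_range(uint64_t *state, unsigned lg_range)
{
    *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
    return (*state >> (64 - lg_range));    /* value in [0, 2^lg_range) */
}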

(diff suppressed because it is too large)


@@ -3,48 +3,45 @@
 /******************************************************************************/
 
-/*
- * Round down to the nearest chunk size that can actually be requested during
- * normal huge allocation.
- */
 JEMALLOC_INLINE_C size_t
 extent_quantize(size_t size)
 {
-    size_t ret;
-    szind_t ind;
 
-    assert(size > 0);
-
-    ind = size2index(size + 1);
-    if (ind == 0) {
-        /* Avoid underflow. */
-        return (index2size(0));
-    }
-    ret = index2size(ind - 1);
-    assert(ret <= size);
-    return (ret);
+    /*
+     * Round down to the nearest chunk size that can actually be requested
+     * during normal huge allocation.
+     */
+    return (index2size(size2index(size + 1) - 1));
 }
 
 JEMALLOC_INLINE_C int
-extent_sz_comp(const extent_node_t *a, const extent_node_t *b)
+extent_szad_comp(extent_node_t *a, extent_node_t *b)
 {
+    int ret;
     size_t a_qsize = extent_quantize(extent_node_size_get(a));
     size_t b_qsize = extent_quantize(extent_node_size_get(b));
 
-    return ((a_qsize > b_qsize) - (a_qsize < b_qsize));
-}
-
-JEMALLOC_INLINE_C int
-extent_sn_comp(const extent_node_t *a, const extent_node_t *b)
-{
-    size_t a_sn = extent_node_sn_get(a);
-    size_t b_sn = extent_node_sn_get(b);
-
-    return ((a_sn > b_sn) - (a_sn < b_sn));
-}
+    /*
+     * Compare based on quantized size rather than size, in order to sort
+     * equally useful extents only by address.
+     */
+    ret = (a_qsize > b_qsize) - (a_qsize < b_qsize);
+    if (ret == 0) {
+        uintptr_t a_addr = (uintptr_t)extent_node_addr_get(a);
+        uintptr_t b_addr = (uintptr_t)extent_node_addr_get(b);
+
+        ret = (a_addr > b_addr) - (a_addr < b_addr);
+    }
+
+    return (ret);
+}
+
+/* Generate red-black tree functions. */
+rb_gen(, extent_tree_szad_, extent_tree_t, extent_node_t, szad_link,
+    extent_szad_comp)
 
 JEMALLOC_INLINE_C int
-extent_ad_comp(const extent_node_t *a, const extent_node_t *b)
+extent_ad_comp(extent_node_t *a, extent_node_t *b)
 {
     uintptr_t a_addr = (uintptr_t)extent_node_addr_get(a);
     uintptr_t b_addr = (uintptr_t)extent_node_addr_get(b);
@@ -52,26 +49,5 @@ extent_ad_comp(const extent_node_t *a, const extent_node_t *b)
     return ((a_addr > b_addr) - (a_addr < b_addr));
 }
 
-JEMALLOC_INLINE_C int
-extent_szsnad_comp(const extent_node_t *a, const extent_node_t *b)
-{
-    int ret;
-
-    ret = extent_sz_comp(a, b);
-    if (ret != 0)
-        return (ret);
-
-    ret = extent_sn_comp(a, b);
-    if (ret != 0)
-        return (ret);
-
-    ret = extent_ad_comp(a, b);
-    return (ret);
-}
-
-/* Generate red-black tree functions. */
-rb_gen(, extent_tree_szsnad_, extent_tree_t, extent_node_t, szsnad_link,
-    extent_szsnad_comp)
-
 /* Generate red-black tree functions. */
 rb_gen(, extent_tree_ad_, extent_tree_t, extent_node_t, ad_link, extent_ad_comp)
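Note: the removed 4.4 code splits the single szad comparator into sz/sn/ad pieces so that ties on quantized size break first on extent serial number (allocation age, per the ChangeLog note on preferring older extents) and only then on address. The (a > b) - (a < b) idiom yields -1/0/1 without the overflow risk of subtraction. A self-contained sketch of the cascaded comparison, using a toy struct rather than jemalloc's extent_node_t:

#include <stddef.h>
#include <stdint.h>

typedef struct {
    size_t qsize;     /* quantized size */
    size_t sn;        /* serial number: lower = older */
    uintptr_t addr;
} ext_t;

static int
cmp_zu(size_t a, size_t b)
{
    return ((a > b) - (a < b));    /* -1, 0, or 1; never overflows */
}

/* Sort by size, then age (older first), then address. */
static int
ext_szsnad_comp(const ext_t *a, const ext_t *b)
{
    int ret = cmp_zu(a->qsize, b->qsize);
    if (ret == 0)
        ret = cmp_zu(a->sn, b->sn);
    if (ret == 0)
        ret = cmp_zu((size_t)a->addr, (size_t)b->addr);
    return (ret);
}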

View File

@@ -15,21 +15,12 @@ huge_node_get(const void *ptr)
 }
 
 static bool
-huge_node_set(tsdn_t *tsdn, const void *ptr, extent_node_t *node)
+huge_node_set(const void *ptr, extent_node_t *node)
 {
     assert(extent_node_addr_get(node) == ptr);
     assert(!extent_node_achunk_get(node));
-    return (chunk_register(tsdn, ptr, node));
-}
-
-static void
-huge_node_reset(tsdn_t *tsdn, const void *ptr, extent_node_t *node)
-{
-    bool err;
-
-    err = huge_node_set(tsdn, ptr, node);
-    assert(!err);
+    return (chunk_register(ptr, node));
 }
 
 static void
@@ -40,39 +31,39 @@ huge_node_unset(const void *ptr, const extent_node_t *node)
 }
 
 void *
-huge_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero)
+huge_malloc(tsd_t *tsd, arena_t *arena, size_t size, bool zero,
+    tcache_t *tcache)
 {
+    size_t usize;
 
-    assert(usize == s2u(usize));
+    usize = s2u(size);
+    if (usize == 0) {
+        /* size_t overflow. */
+        return (NULL);
+    }
 
-    return (huge_palloc(tsdn, arena, usize, chunksize, zero));
+    return (huge_palloc(tsd, arena, usize, chunksize, zero, tcache));
 }
 
 void *
-huge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,
-    bool zero)
+huge_palloc(tsd_t *tsd, arena_t *arena, size_t size, size_t alignment,
+    bool zero, tcache_t *tcache)
 {
     void *ret;
-    size_t ausize;
-    arena_t *iarena;
+    size_t usize;
     extent_node_t *node;
-    size_t sn;
     bool is_zeroed;
 
     /* Allocate one or more contiguous chunks for this request. */
 
-    assert(!tsdn_null(tsdn) || arena != NULL);
-
-    ausize = sa2u(usize, alignment);
-    if (unlikely(ausize == 0 || ausize > HUGE_MAXCLASS))
+    usize = sa2u(size, alignment);
+    if (unlikely(usize == 0))
         return (NULL);
-    assert(ausize >= chunksize);
+    assert(usize >= chunksize);
 
     /* Allocate an extent node with which to track the chunk. */
-    iarena = (!tsdn_null(tsdn)) ? arena_ichoose(tsdn_tsd(tsdn), NULL) :
-        a0get();
-    node = ipallocztm(tsdn, CACHELINE_CEILING(sizeof(extent_node_t)),
-        CACHELINE, false, NULL, true, iarena);
+    node = ipallocztm(tsd, CACHELINE_CEILING(sizeof(extent_node_t)),
+        CACHELINE, false, tcache, true, arena);
     if (node == NULL)
         return (NULL);
@@ -81,35 +72,33 @@ huge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,
      * it is possible to make correct junk/zero fill decisions below.
      */
     is_zeroed = zero;
-    if (likely(!tsdn_null(tsdn)))
-        arena = arena_choose(tsdn_tsd(tsdn), arena);
-    if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(tsdn,
-        arena, usize, alignment, &sn, &is_zeroed)) == NULL) {
-        idalloctm(tsdn, node, NULL, true, true);
+    arena = arena_choose(tsd, arena);
+    if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(arena,
+        size, alignment, &is_zeroed)) == NULL) {
+        idalloctm(tsd, node, tcache, true);
         return (NULL);
     }
 
-    extent_node_init(node, arena, ret, usize, sn, is_zeroed, true);
+    extent_node_init(node, arena, ret, size, is_zeroed, true);
 
-    if (huge_node_set(tsdn, ret, node)) {
-        arena_chunk_dalloc_huge(tsdn, arena, ret, usize, sn);
-        idalloctm(tsdn, node, NULL, true, true);
+    if (huge_node_set(ret, node)) {
+        arena_chunk_dalloc_huge(arena, ret, size);
+        idalloctm(tsd, node, tcache, true);
         return (NULL);
     }
 
     /* Insert node into huge. */
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     ql_elm_new(node, ql_link);
     ql_tail_insert(&arena->huge, node, ql_link);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
     if (zero || (config_fill && unlikely(opt_zero))) {
         if (!is_zeroed)
-            memset(ret, 0, usize);
+            memset(ret, 0, size);
     } else if (config_fill && unlikely(opt_junk_alloc))
-        memset(ret, JEMALLOC_ALLOC_JUNK, usize);
-
-    arena_decay_tick(tsdn, arena);
+        memset(ret, 0xa5, size);
 
     return (ret);
 }
@@ -127,7 +116,7 @@ huge_dalloc_junk(void *ptr, size_t usize)
          * unmapped.
          */
         if (!config_munmap || (have_dss && chunk_in_dss(ptr)))
-            memset(ptr, JEMALLOC_FREE_JUNK, usize);
+            memset(ptr, 0x5a, usize);
     }
 }
 #ifdef JEMALLOC_JET
@@ -137,8 +126,8 @@ huge_dalloc_junk_t *huge_dalloc_junk = JEMALLOC_N(huge_dalloc_junk_impl);
 #endif
 
 static void
-huge_ralloc_no_move_similar(tsdn_t *tsdn, void *ptr, size_t oldsize,
-    size_t usize_min, size_t usize_max, bool zero)
+huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize_min,
+    size_t usize_max, bool zero)
 {
     size_t usize, usize_next;
     extent_node_t *node;
@@ -162,28 +151,24 @@ huge_ralloc_no_move_similar(tsdn_t *tsdn, void *ptr, size_t oldsize,
     if (oldsize > usize) {
         size_t sdiff = oldsize - usize;
         if (config_fill && unlikely(opt_junk_free)) {
-            memset((void *)((uintptr_t)ptr + usize),
-                JEMALLOC_FREE_JUNK, sdiff);
+            memset((void *)((uintptr_t)ptr + usize), 0x5a, sdiff);
             post_zeroed = false;
         } else {
-            post_zeroed = !chunk_purge_wrapper(tsdn, arena,
-                &chunk_hooks, ptr, CHUNK_CEILING(oldsize), usize,
-                sdiff);
+            post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks,
+                ptr, CHUNK_CEILING(oldsize), usize, sdiff);
         }
     } else
         post_zeroed = pre_zeroed;
 
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     /* Update the size of the huge allocation. */
-    huge_node_unset(ptr, node);
     assert(extent_node_size_get(node) != usize);
     extent_node_size_set(node, usize);
-    huge_node_reset(tsdn, ptr, node);
     /* Update zeroed. */
     extent_node_zeroed_set(node, post_zeroed);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
-    arena_chunk_ralloc_huge_similar(tsdn, arena, ptr, oldsize, usize);
+    arena_chunk_ralloc_huge_similar(arena, ptr, oldsize, usize);
 
     /* Fill if necessary (growing). */
     if (oldsize < usize) {
@@ -193,15 +178,14 @@ huge_ralloc_no_move_similar(tsdn_t *tsdn, void *ptr, size_t oldsize,
                 usize - oldsize);
         }
     } else if (config_fill && unlikely(opt_junk_alloc)) {
-        memset((void *)((uintptr_t)ptr + oldsize),
-            JEMALLOC_ALLOC_JUNK, usize - oldsize);
+        memset((void *)((uintptr_t)ptr + oldsize), 0xa5, usize -
+            oldsize);
     }
 }
 
 static bool
-huge_ralloc_no_move_shrink(tsdn_t *tsdn, void *ptr, size_t oldsize,
-    size_t usize)
+huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize)
 {
     extent_node_t *node;
     arena_t *arena;
@@ -212,7 +196,7 @@ huge_ralloc_no_move_shrink(tsdn_t *tsdn, void *ptr, size_t oldsize,
     node = huge_node_get(ptr);
     arena = extent_node_arena_get(node);
     pre_zeroed = extent_node_zeroed_get(node);
-    chunk_hooks = chunk_hooks_get(tsdn, arena);
+    chunk_hooks = chunk_hooks_get(arena);
 
     assert(oldsize > usize);
@@ -229,59 +213,53 @@ huge_ralloc_no_move_shrink(tsdn_t *tsdn, void *ptr, size_t oldsize,
                 sdiff);
             post_zeroed = false;
         } else {
-            post_zeroed = !chunk_purge_wrapper(tsdn, arena,
-                &chunk_hooks, CHUNK_ADDR2BASE((uintptr_t)ptr +
-                usize), CHUNK_CEILING(oldsize),
+            post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks,
+                CHUNK_ADDR2BASE((uintptr_t)ptr + usize),
+                CHUNK_CEILING(oldsize),
                 CHUNK_ADDR2OFFSET((uintptr_t)ptr + usize), sdiff);
         }
     } else
         post_zeroed = pre_zeroed;
 
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     /* Update the size of the huge allocation. */
-    huge_node_unset(ptr, node);
     extent_node_size_set(node, usize);
-    huge_node_reset(tsdn, ptr, node);
     /* Update zeroed. */
     extent_node_zeroed_set(node, post_zeroed);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
     /* Zap the excess chunks. */
-    arena_chunk_ralloc_huge_shrink(tsdn, arena, ptr, oldsize, usize,
-        extent_node_sn_get(node));
+    arena_chunk_ralloc_huge_shrink(arena, ptr, oldsize, usize);
 
     return (false);
 }
 
 static bool
-huge_ralloc_no_move_expand(tsdn_t *tsdn, void *ptr, size_t oldsize,
-    size_t usize, bool zero) {
+huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t usize, bool zero) {
     extent_node_t *node;
     arena_t *arena;
     bool is_zeroed_subchunk, is_zeroed_chunk;
 
     node = huge_node_get(ptr);
     arena = extent_node_arena_get(node);
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     is_zeroed_subchunk = extent_node_zeroed_get(node);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
     /*
-     * Use is_zeroed_chunk to detect whether the trailing memory is zeroed,
-     * update extent's zeroed field, and zero as necessary.
+     * Copy zero into is_zeroed_chunk and pass the copy to chunk_alloc(), so
+     * that it is possible to make correct junk/zero fill decisions below.
     */
-    is_zeroed_chunk = false;
-
-    if (arena_chunk_ralloc_huge_expand(tsdn, arena, ptr, oldsize, usize,
+    is_zeroed_chunk = zero;
+
+    if (arena_chunk_ralloc_huge_expand(arena, ptr, oldsize, usize,
          &is_zeroed_chunk))
         return (true);
 
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
-    huge_node_unset(ptr, node);
+    malloc_mutex_lock(&arena->huge_mtx);
+    /* Update the size of the huge allocation. */
     extent_node_size_set(node, usize);
-    extent_node_zeroed_set(node, extent_node_zeroed_get(node) &&
-        is_zeroed_chunk);
-    huge_node_reset(tsdn, ptr, node);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
     if (zero || (config_fill && unlikely(opt_zero))) {
         if (!is_zeroed_subchunk) {
@@ -294,21 +272,19 @@ huge_ralloc_no_move_expand(tsdn_t *tsdn, void *ptr, size_t oldsize,
                 CHUNK_CEILING(oldsize));
         }
     } else if (config_fill && unlikely(opt_junk_alloc)) {
-        memset((void *)((uintptr_t)ptr + oldsize), JEMALLOC_ALLOC_JUNK,
-            usize - oldsize);
+        memset((void *)((uintptr_t)ptr + oldsize), 0xa5, usize -
+            oldsize);
     }
 
     return (false);
 }
 
 bool
-huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t usize_min,
+huge_ralloc_no_move(void *ptr, size_t oldsize, size_t usize_min,
     size_t usize_max, bool zero)
 {
 
     assert(s2u(oldsize) == oldsize);
-    /* The following should have been caught by callers. */
-    assert(usize_min > 0 && usize_max <= HUGE_MAXCLASS);
 
     /* Both allocations must be huge to avoid a move. */
     if (oldsize < chunksize || usize_max < chunksize)
@@ -316,18 +292,13 @@ huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t usize_min,
 
     if (CHUNK_CEILING(usize_max) > CHUNK_CEILING(oldsize)) {
         /* Attempt to expand the allocation in-place. */
-        if (!huge_ralloc_no_move_expand(tsdn, ptr, oldsize, usize_max,
-            zero)) {
-            arena_decay_tick(tsdn, huge_aalloc(ptr));
+        if (!huge_ralloc_no_move_expand(ptr, oldsize, usize_max, zero))
             return (false);
-        }
         /* Try again, this time with usize_min. */
         if (usize_min < usize_max && CHUNK_CEILING(usize_min) >
-            CHUNK_CEILING(oldsize) && huge_ralloc_no_move_expand(tsdn,
-            ptr, oldsize, usize_min, zero)) {
-            arena_decay_tick(tsdn, huge_aalloc(ptr));
+            CHUNK_CEILING(oldsize) && huge_ralloc_no_move_expand(ptr,
+            oldsize, usize_min, zero))
             return (false);
-        }
     }
 
     /*
@@ -336,46 +307,36 @@ huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t usize_min,
      */
     if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(usize_min)
         && CHUNK_CEILING(oldsize) <= CHUNK_CEILING(usize_max)) {
-        huge_ralloc_no_move_similar(tsdn, ptr, oldsize, usize_min,
-            usize_max, zero);
-        arena_decay_tick(tsdn, huge_aalloc(ptr));
+        huge_ralloc_no_move_similar(ptr, oldsize, usize_min, usize_max,
+            zero);
         return (false);
     }
 
     /* Attempt to shrink the allocation in-place. */
-    if (CHUNK_CEILING(oldsize) > CHUNK_CEILING(usize_max)) {
-        if (!huge_ralloc_no_move_shrink(tsdn, ptr, oldsize,
-            usize_max)) {
-            arena_decay_tick(tsdn, huge_aalloc(ptr));
-            return (false);
-        }
-    }
+    if (CHUNK_CEILING(oldsize) > CHUNK_CEILING(usize_max))
+        return (huge_ralloc_no_move_shrink(ptr, oldsize, usize_max));
     return (true);
 }
 
 static void *
-huge_ralloc_move_helper(tsdn_t *tsdn, arena_t *arena, size_t usize,
-    size_t alignment, bool zero)
+huge_ralloc_move_helper(tsd_t *tsd, arena_t *arena, size_t usize,
+    size_t alignment, bool zero, tcache_t *tcache)
 {
 
     if (alignment <= chunksize)
-        return (huge_malloc(tsdn, arena, usize, zero));
-    return (huge_palloc(tsdn, arena, usize, alignment, zero));
+        return (huge_malloc(tsd, arena, usize, zero, tcache));
+    return (huge_palloc(tsd, arena, usize, alignment, zero, tcache));
 }
 
 void *
-huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
-    size_t usize, size_t alignment, bool zero, tcache_t *tcache)
+huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t usize,
+    size_t alignment, bool zero, tcache_t *tcache)
 {
     void *ret;
     size_t copysize;
 
-    /* The following should have been caught by callers. */
-    assert(usize > 0 && usize <= HUGE_MAXCLASS);
-
     /* Try to avoid moving the allocation. */
-    if (!huge_ralloc_no_move(tsd_tsdn(tsd), ptr, oldsize, usize, usize,
-        zero))
+    if (!huge_ralloc_no_move(ptr, oldsize, usize, usize, zero))
         return (ptr);
 
     /*
@@ -383,19 +344,19 @@ huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
      * different size class.  In that case, fall back to allocating new
      * space and copying.
      */
-    ret = huge_ralloc_move_helper(tsd_tsdn(tsd), arena, usize, alignment,
-        zero);
+    ret = huge_ralloc_move_helper(tsd, arena, usize, alignment, zero,
+        tcache);
     if (ret == NULL)
         return (NULL);
 
     copysize = (usize < oldsize) ? usize : oldsize;
     memcpy(ret, ptr, copysize);
-    isqalloc(tsd, ptr, oldsize, tcache, true);
+    isqalloc(tsd, ptr, oldsize, tcache);
     return (ret);
 }
 
 void
-huge_dalloc(tsdn_t *tsdn, void *ptr)
+huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
 {
     extent_node_t *node;
     arena_t *arena;
@@ -403,18 +364,15 @@ huge_dalloc(tsdn_t *tsdn, void *ptr)
     node = huge_node_get(ptr);
     arena = extent_node_arena_get(node);
     huge_node_unset(ptr, node);
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     ql_remove(&arena->huge, node, ql_link);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
     huge_dalloc_junk(extent_node_addr_get(node),
         extent_node_size_get(node));
-    arena_chunk_dalloc_huge(tsdn, extent_node_arena_get(node),
-        extent_node_addr_get(node), extent_node_size_get(node),
-        extent_node_sn_get(node));
-    idalloctm(tsdn, node, NULL, true, true);
-
-    arena_decay_tick(tsdn, arena);
+    arena_chunk_dalloc_huge(extent_node_arena_get(node),
+        extent_node_addr_get(node), extent_node_size_get(node));
+    idalloctm(tsd, node, tcache, true);
 }
 
 arena_t *
@@ -425,7 +383,7 @@ huge_aalloc(const void *ptr)
 }
 
 size_t
-huge_salloc(tsdn_t *tsdn, const void *ptr)
+huge_salloc(const void *ptr)
 {
     size_t size;
     extent_node_t *node;
@@ -433,15 +391,15 @@ huge_salloc(tsdn_t *tsdn, const void *ptr)
     node = huge_node_get(ptr);
     arena = extent_node_arena_get(node);
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     size = extent_node_size_get(node);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
     return (size);
 }
 
 prof_tctx_t *
-huge_prof_tctx_get(tsdn_t *tsdn, const void *ptr)
+huge_prof_tctx_get(const void *ptr)
 {
     prof_tctx_t *tctx;
     extent_node_t *node;
@@ -449,29 +407,29 @@ huge_prof_tctx_get(tsdn_t *tsdn, const void *ptr)
     node = huge_node_get(ptr);
     arena = extent_node_arena_get(node);
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     tctx = extent_node_prof_tctx_get(node);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 
     return (tctx);
 }
 
 void
-huge_prof_tctx_set(tsdn_t *tsdn, const void *ptr, prof_tctx_t *tctx)
+huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx)
 {
     extent_node_t *node;
     arena_t *arena;
 
     node = huge_node_get(ptr);
     arena = extent_node_arena_get(node);
-    malloc_mutex_lock(tsdn, &arena->huge_mtx);
+    malloc_mutex_lock(&arena->huge_mtx);
     extent_node_prof_tctx_set(node, tctx);
-    malloc_mutex_unlock(tsdn, &arena->huge_mtx);
+    malloc_mutex_unlock(&arena->huge_mtx);
 }
 
 void
-huge_prof_tctx_reset(tsdn_t *tsdn, const void *ptr)
+huge_prof_tctx_reset(const void *ptr)
 {
 
-    huge_prof_tctx_set(tsdn, ptr, (prof_tctx_t *)(uintptr_t)1U);
+    huge_prof_tctx_set(ptr, (prof_tctx_t *)(uintptr_t)1U);
 }
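Note: every in-place path in huge_ralloc_no_move() above reduces to comparing chunk-rounded sizes. A worked sketch of that predicate, assuming a 2 MiB chunk size purely for illustration (the real chunk size is configurable):

#include <stdbool.h>
#include <stddef.h>

#define DEMO_CHUNK ((size_t)2 << 20)    /* illustrative 2 MiB chunk */
#define DEMO_CHUNK_CEILING(s) (((s) + DEMO_CHUNK - 1) & ~(DEMO_CHUNK - 1))

/*
 * Old and new sizes that round up to the same number of chunks can be
 * resized in place (the "similar" path above).
 */
static bool
same_chunk_footprint(size_t oldsize, size_t usize)
{
    return (DEMO_CHUNK_CEILING(oldsize) == DEMO_CHUNK_CEILING(usize));
}

/*
 * E.g. 3 MiB -> 3.5 MiB: both ceilings are 4 MiB, so no move is needed;
 * 3 MiB -> 4.5 MiB rounds to 6 MiB and must take the expand path.
 */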

(diff suppressed because it is too large)


@@ -69,7 +69,7 @@ JEMALLOC_EXPORT int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
 #endif
 
 bool
-malloc_mutex_init(malloc_mutex_t *mutex, const char *name, witness_rank_t rank)
+malloc_mutex_init(malloc_mutex_t *mutex)
 {
 
 #ifdef _WIN32
@@ -80,8 +80,6 @@ malloc_mutex_init(malloc_mutex_t *mutex, const char *name, witness_rank_t rank)
         _CRT_SPINCOUNT))
         return (true);
 #  endif
-#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))
-    mutex->lock = OS_UNFAIR_LOCK_INIT;
 #elif (defined(JEMALLOC_OSSPIN))
     mutex->lock = 0;
 #elif (defined(JEMALLOC_MUTEX_INIT_CB))
@@ -105,34 +103,31 @@ malloc_mutex_init(malloc_mutex_t *mutex, const char *name, witness_rank_t rank)
     }
     pthread_mutexattr_destroy(&attr);
 #endif
-    if (config_debug)
-        witness_init(&mutex->witness, name, rank, NULL);
     return (false);
 }
 
 void
-malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex)
+malloc_mutex_prefork(malloc_mutex_t *mutex)
 {
 
-    malloc_mutex_lock(tsdn, mutex);
+    malloc_mutex_lock(mutex);
 }
 
 void
-malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex)
+malloc_mutex_postfork_parent(malloc_mutex_t *mutex)
 {
 
-    malloc_mutex_unlock(tsdn, mutex);
+    malloc_mutex_unlock(mutex);
 }
 
 void
-malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex)
+malloc_mutex_postfork_child(malloc_mutex_t *mutex)
 {
 
 #ifdef JEMALLOC_MUTEX_INIT_CB
-    malloc_mutex_unlock(tsdn, mutex);
+    malloc_mutex_unlock(mutex);
 #else
-    if (malloc_mutex_init(mutex, mutex->witness.name,
-        mutex->witness.rank)) {
+    if (malloc_mutex_init(mutex)) {
         malloc_printf("<jemalloc>: Error re-initializing mutex in "
             "child\n");
         if (opt_abort)
@@ -142,7 +137,7 @@ malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex)
 }
 
 bool
-malloc_mutex_boot(void)
+mutex_boot(void)
 {
 
 #ifdef JEMALLOC_MUTEX_INIT_CB
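Note: the prefork/postfork trio exists because a fork(2) child inherits mutexes frozen in whatever state they had at fork time; the allocator therefore locks everything in a prefork handler and unlocks afterwards (or, on platforms where unlocking in the child is unsafe, re-initializes, as malloc_mutex_postfork_child() above does). A sketch of the registration pattern using pthread_atfork(3), independent of jemalloc's internals:

#include <pthread.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

static void demo_prefork(void)         { pthread_mutex_lock(&demo_lock); }
static void demo_postfork_parent(void) { pthread_mutex_unlock(&demo_lock); }
static void demo_postfork_child(void)  { pthread_mutex_unlock(&demo_lock); }

/* Register once at startup; the handlers then bracket every fork(). */
static void
demo_fork_safety_init(void)
{
    pthread_atfork(demo_prefork, demo_postfork_parent, demo_postfork_child);
}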


@ -1,194 +0,0 @@
#include "jemalloc/internal/jemalloc_internal.h"
#define BILLION UINT64_C(1000000000)
void
nstime_init(nstime_t *time, uint64_t ns)
{
time->ns = ns;
}
void
nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec)
{
time->ns = sec * BILLION + nsec;
}
uint64_t
nstime_ns(const nstime_t *time)
{
return (time->ns);
}
uint64_t
nstime_sec(const nstime_t *time)
{
return (time->ns / BILLION);
}
uint64_t
nstime_nsec(const nstime_t *time)
{
return (time->ns % BILLION);
}
void
nstime_copy(nstime_t *time, const nstime_t *source)
{
*time = *source;
}
int
nstime_compare(const nstime_t *a, const nstime_t *b)
{
return ((a->ns > b->ns) - (a->ns < b->ns));
}
void
nstime_add(nstime_t *time, const nstime_t *addend)
{
assert(UINT64_MAX - time->ns >= addend->ns);
time->ns += addend->ns;
}
void
nstime_subtract(nstime_t *time, const nstime_t *subtrahend)
{
assert(nstime_compare(time, subtrahend) >= 0);
time->ns -= subtrahend->ns;
}
void
nstime_imultiply(nstime_t *time, uint64_t multiplier)
{
assert((((time->ns | multiplier) & (UINT64_MAX << (sizeof(uint64_t) <<
2))) == 0) || ((time->ns * multiplier) / multiplier == time->ns));
time->ns *= multiplier;
}
void
nstime_idivide(nstime_t *time, uint64_t divisor)
{
assert(divisor != 0);
time->ns /= divisor;
}
uint64_t
nstime_divide(const nstime_t *time, const nstime_t *divisor)
{
assert(divisor->ns != 0);
return (time->ns / divisor->ns);
}
#ifdef _WIN32
# define NSTIME_MONOTONIC true
static void
nstime_get(nstime_t *time)
{
FILETIME ft;
uint64_t ticks_100ns;
GetSystemTimeAsFileTime(&ft);
ticks_100ns = (((uint64_t)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
nstime_init(time, ticks_100ns * 100);
}
#elif JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE
# define NSTIME_MONOTONIC true
static void
nstime_get(nstime_t *time)
{
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
nstime_init2(time, ts.tv_sec, ts.tv_nsec);
}
#elif JEMALLOC_HAVE_CLOCK_MONOTONIC
# define NSTIME_MONOTONIC true
static void
nstime_get(nstime_t *time)
{
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
nstime_init2(time, ts.tv_sec, ts.tv_nsec);
}
#elif JEMALLOC_HAVE_MACH_ABSOLUTE_TIME
# define NSTIME_MONOTONIC true
static void
nstime_get(nstime_t *time)
{
nstime_init(time, mach_absolute_time());
}
#else
# define NSTIME_MONOTONIC false
static void
nstime_get(nstime_t *time)
{
struct timeval tv;
gettimeofday(&tv, NULL);
nstime_init2(time, tv.tv_sec, tv.tv_usec * 1000);
}
#endif
#ifdef JEMALLOC_JET
#undef nstime_monotonic
#define nstime_monotonic JEMALLOC_N(n_nstime_monotonic)
#endif
bool
nstime_monotonic(void)
{
return (NSTIME_MONOTONIC);
#undef NSTIME_MONOTONIC
}
#ifdef JEMALLOC_JET
#undef nstime_monotonic
#define nstime_monotonic JEMALLOC_N(nstime_monotonic)
nstime_monotonic_t *nstime_monotonic = JEMALLOC_N(n_nstime_monotonic);
#endif
#ifdef JEMALLOC_JET
#undef nstime_update
#define nstime_update JEMALLOC_N(n_nstime_update)
#endif
bool
nstime_update(nstime_t *time)
{
nstime_t old_time;
nstime_copy(&old_time, time);
nstime_get(time);
/* Handle non-monotonic clocks. */
if (unlikely(nstime_compare(&old_time, time) > 0)) {
nstime_copy(time, &old_time);
return (true);
}
return (false);
}
#ifdef JEMALLOC_JET
#undef nstime_update
#define nstime_update JEMALLOC_N(nstime_update)
nstime_update_t *nstime_update = JEMALLOC_N(n_nstime_update);
#endif
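Note: the deleted nstime module wraps a single uint64_t nanosecond count. Against the API shown above, timing a region would look like the following sketch; do_work() and use_ns() are hypothetical stand-ins.

nstime_t t0, t1;

nstime_init(&t0, 0);
nstime_update(&t0);            /* t0 = now, from the best available clock */

do_work();

nstime_copy(&t1, &t0);
nstime_update(&t1);            /* t1 = now; reverts if the clock went backward */
nstime_subtract(&t1, &t0);     /* t1 = elapsed */
use_ns(nstime_ns(&t1));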


@@ -1,49 +1,29 @@
 #define JEMALLOC_PAGES_C_
 #include "jemalloc/internal/jemalloc_internal.h"
-#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT
-#include <sys/sysctl.h>
-#endif
-
-/******************************************************************************/
-/* Data. */
-
-#ifndef _WIN32
-#  define PAGES_PROT_COMMIT (PROT_READ | PROT_WRITE)
-#  define PAGES_PROT_DECOMMIT (PROT_NONE)
-static int mmap_flags;
-#endif
-static bool os_overcommits;
 
 /******************************************************************************/
 
 void *
-pages_map(void *addr, size_t size, bool *commit)
+pages_map(void *addr, size_t size)
 {
     void *ret;
 
     assert(size != 0);
 
-    if (os_overcommits)
-        *commit = true;
-
 #ifdef _WIN32
     /*
      * If VirtualAlloc can't allocate at the given address when one is
      * given, it fails and returns NULL.
      */
-    ret = VirtualAlloc(addr, size, MEM_RESERVE | (*commit ? MEM_COMMIT : 0),
-        PAGE_READWRITE);
+    ret = VirtualAlloc(addr, size, MEM_COMMIT | MEM_RESERVE,
+        PAGE_READWRITE);
 #else
     /*
      * We don't use MAP_FIXED here, because it can cause the *replacement*
      * of existing mappings, and we only want to create new mappings.
      */
-    {
-        int prot = *commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;
-
-        ret = mmap(addr, size, prot, mmap_flags, -1, 0);
-    }
+    ret = mmap(addr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON,
+        -1, 0);
     assert(ret != NULL);
 
     if (ret == MAP_FAILED)
@@ -87,8 +67,7 @@ pages_unmap(void *addr, size_t size)
 }
 
 void *
-pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size,
-    bool *commit)
+pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size)
 {
     void *ret = (void *)((uintptr_t)addr + leadsize);
 
@@ -98,7 +77,7 @@ pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size,
         void *new_addr;
 
         pages_unmap(addr, alloc_size);
-        new_addr = pages_map(ret, size, commit);
+        new_addr = pages_map(ret, size);
         if (new_addr == ret)
             return (ret);
         if (new_addr)
@@ -122,17 +101,17 @@ static bool
 pages_commit_impl(void *addr, size_t size, bool commit)
 {
 
-    if (os_overcommits)
-        return (true);
-
-#ifdef _WIN32
-    return (commit ? (addr != VirtualAlloc(addr, size, MEM_COMMIT,
-        PAGE_READWRITE)) : (!VirtualFree(addr, size, MEM_DECOMMIT)));
-#else
-    {
-        int prot = commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;
-        void *result = mmap(addr, size, prot, mmap_flags | MAP_FIXED,
-            -1, 0);
+#ifndef _WIN32
+    /*
+     * The following decommit/commit implementation is functional, but
+     * always disabled because it doesn't add value beyong improved
+     * debugging (at the cost of extra system calls) on systems that
+     * overcommit.
+     */
+    if (false) {
+        int prot = commit ? (PROT_READ | PROT_WRITE) : PROT_NONE;
+        void *result = mmap(addr, size, prot, MAP_PRIVATE | MAP_ANON |
+            MAP_FIXED, -1, 0);
         if (result == MAP_FAILED)
             return (true);
         if (result != addr) {
@@ -146,6 +125,7 @@ pages_commit_impl(void *addr, size_t size, bool commit)
         return (false);
     }
 #endif
+    return (true);
 }
 
 bool
@@ -170,16 +150,15 @@ pages_purge(void *addr, size_t size)
 #ifdef _WIN32
     VirtualAlloc(addr, size, MEM_RESET, PAGE_READWRITE);
     unzeroed = true;
-#elif (defined(JEMALLOC_PURGE_MADVISE_FREE) || \
-    defined(JEMALLOC_PURGE_MADVISE_DONTNEED))
-#  if defined(JEMALLOC_PURGE_MADVISE_FREE)
-#    define JEMALLOC_MADV_PURGE MADV_FREE
-#    define JEMALLOC_MADV_ZEROS false
-#  elif defined(JEMALLOC_PURGE_MADVISE_DONTNEED)
+#elif defined(JEMALLOC_HAVE_MADVISE)
+#  ifdef JEMALLOC_PURGE_MADVISE_DONTNEED
 #    define JEMALLOC_MADV_PURGE MADV_DONTNEED
 #    define JEMALLOC_MADV_ZEROS true
+#  elif defined(JEMALLOC_PURGE_MADVISE_FREE)
+#    define JEMALLOC_MADV_PURGE MADV_FREE
+#    define JEMALLOC_MADV_ZEROS false
 #  else
-#    error No madvise(2) flag defined for purging unused dirty pages
+#    error "No madvise(2) flag defined for purging unused dirty pages."
 #  endif
     int err = madvise(addr, size, JEMALLOC_MADV_PURGE);
     unzeroed = (!JEMALLOC_MADV_ZEROS || err != 0);
@@ -192,111 +171,3 @@ pages_purge(void *addr, size_t size)
     return (unzeroed);
 }
bool
pages_huge(void *addr, size_t size)
{
assert(PAGE_ADDR2BASE(addr) == addr);
assert(PAGE_CEILING(size) == size);
#ifdef JEMALLOC_THP
return (madvise(addr, size, MADV_HUGEPAGE) != 0);
#else
return (false);
#endif
}
bool
pages_nohuge(void *addr, size_t size)
{
assert(PAGE_ADDR2BASE(addr) == addr);
assert(PAGE_CEILING(size) == size);
#ifdef JEMALLOC_THP
return (madvise(addr, size, MADV_NOHUGEPAGE) != 0);
#else
return (false);
#endif
}
#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT
static bool
os_overcommits_sysctl(void)
{
int vm_overcommit;
size_t sz;
sz = sizeof(vm_overcommit);
if (sysctlbyname("vm.overcommit", &vm_overcommit, &sz, NULL, 0) != 0)
return (false); /* Error. */
return ((vm_overcommit & 0x3) == 0);
}
#endif
#ifdef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY
/*
* Use syscall(2) rather than {open,read,close}(2) when possible to avoid
* reentry during bootstrapping if another library has interposed system call
* wrappers.
*/
static bool
os_overcommits_proc(void)
{
int fd;
char buf[1];
ssize_t nread;
#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_open)
fd = (int)syscall(SYS_open, "/proc/sys/vm/overcommit_memory", O_RDONLY);
#else
fd = open("/proc/sys/vm/overcommit_memory", O_RDONLY);
#endif
if (fd == -1)
return (false); /* Error. */
#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_read)
nread = (ssize_t)syscall(SYS_read, fd, &buf, sizeof(buf));
#else
nread = read(fd, &buf, sizeof(buf));
#endif
#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_close)
syscall(SYS_close, fd);
#else
close(fd);
#endif
if (nread < 1)
return (false); /* Error. */
/*
* /proc/sys/vm/overcommit_memory meanings:
* 0: Heuristic overcommit.
* 1: Always overcommit.
* 2: Never overcommit.
*/
return (buf[0] == '0' || buf[0] == '1');
}
#endif
void
pages_boot(void)
{
#ifndef _WIN32
mmap_flags = MAP_PRIVATE | MAP_ANON;
#endif
#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT
os_overcommits = os_overcommits_sysctl();
#elif defined(JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY)
os_overcommits = os_overcommits_proc();
# ifdef MAP_NORESERVE
if (os_overcommits)
mmap_flags |= MAP_NORESERVE;
# endif
#else
os_overcommits = false;
#endif
}
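Note: the purge path above chooses between MADV_DONTNEED (eager on Linux: the next fault reads as zero) and MADV_FREE (lazy: the kernel reclaims pages only under memory pressure, so stale contents may survive). A standalone sketch of the same decision, mirroring the JEMALLOC_MADV_ZEROS logic:

#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

/* Returns true if the range may still hold nonzero data ("unzeroed"). */
static bool
demo_purge(void *addr, size_t size)
{
#ifdef MADV_FREE
    madvise(addr, size, MADV_FREE);
    return (true);                  /* lazy purge never guarantees zeros */
#else
    int err = madvise(addr, size, MADV_DONTNEED);
    return (err != 0);              /* zeroed only if the call succeeded */
#endif
}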


@ -1,2 +0,0 @@
#define JEMALLOC_PRNG_C_
#include "jemalloc/internal/jemalloc_internal.h"

(diff suppressed because it is too large)


@@ -13,22 +13,22 @@
 /* Function prototypes for non-inline static functions. */
 
 static quarantine_t *quarantine_grow(tsd_t *tsd, quarantine_t *quarantine);
-static void quarantine_drain_one(tsdn_t *tsdn, quarantine_t *quarantine);
-static void quarantine_drain(tsdn_t *tsdn, quarantine_t *quarantine,
+static void quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine);
+static void quarantine_drain(tsd_t *tsd, quarantine_t *quarantine,
     size_t upper_bound);
 
 /******************************************************************************/
 
 static quarantine_t *
-quarantine_init(tsdn_t *tsdn, size_t lg_maxobjs)
+quarantine_init(tsd_t *tsd, size_t lg_maxobjs)
 {
     quarantine_t *quarantine;
-    size_t size;
 
-    size = offsetof(quarantine_t, objs) + ((ZU(1) << lg_maxobjs) *
-        sizeof(quarantine_obj_t));
-    quarantine = (quarantine_t *)iallocztm(tsdn, size, size2index(size),
-        false, NULL, true, arena_get(TSDN_NULL, 0, true), true);
+    assert(tsd_nominal(tsd));
+
+    quarantine = (quarantine_t *)iallocztm(tsd, offsetof(quarantine_t, objs)
+        + ((ZU(1) << lg_maxobjs) * sizeof(quarantine_obj_t)), false,
+        tcache_get(tsd, true), true, NULL);
     if (quarantine == NULL)
         return (NULL);
     quarantine->curbytes = 0;
@@ -47,7 +47,7 @@ quarantine_alloc_hook_work(tsd_t *tsd)
     if (!tsd_nominal(tsd))
         return;
 
-    quarantine = quarantine_init(tsd_tsdn(tsd), LG_MAXOBJS_INIT);
+    quarantine = quarantine_init(tsd, LG_MAXOBJS_INIT);
     /*
      * Check again whether quarantine has been initialized, because
      * quarantine_init() may have triggered recursive initialization.
@@ -55,7 +55,7 @@ quarantine_alloc_hook_work(tsd_t *tsd)
     if (tsd_quarantine_get(tsd) == NULL)
         tsd_quarantine_set(tsd, quarantine);
     else
-        idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true);
+        idalloctm(tsd, quarantine, tcache_get(tsd, false), true);
 }
 
 static quarantine_t *
@@ -63,9 +63,9 @@ quarantine_grow(tsd_t *tsd, quarantine_t *quarantine)
 {
     quarantine_t *ret;
 
-    ret = quarantine_init(tsd_tsdn(tsd), quarantine->lg_maxobjs + 1);
+    ret = quarantine_init(tsd, quarantine->lg_maxobjs + 1);
     if (ret == NULL) {
-        quarantine_drain_one(tsd_tsdn(tsd), quarantine);
+        quarantine_drain_one(tsd, quarantine);
         return (quarantine);
     }
 
@@ -87,18 +87,18 @@ quarantine_grow(tsd_t *tsd, quarantine_t *quarantine)
         memcpy(&ret->objs[ncopy_a], quarantine->objs, ncopy_b *
             sizeof(quarantine_obj_t));
     }
-    idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true);
+    idalloctm(tsd, quarantine, tcache_get(tsd, false), true);
 
     tsd_quarantine_set(tsd, ret);
     return (ret);
 }
 
 static void
-quarantine_drain_one(tsdn_t *tsdn, quarantine_t *quarantine)
+quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine)
 {
     quarantine_obj_t *obj = &quarantine->objs[quarantine->first];
-    assert(obj->usize == isalloc(tsdn, obj->ptr, config_prof));
-    idalloctm(tsdn, obj->ptr, NULL, false, true);
+    assert(obj->usize == isalloc(obj->ptr, config_prof));
+    idalloctm(tsd, obj->ptr, NULL, false);
     quarantine->curbytes -= obj->usize;
     quarantine->curobjs--;
     quarantine->first = (quarantine->first + 1) & ((ZU(1) <<
@@ -106,24 +106,24 @@ quarantine_drain_one(tsdn_t *tsdn, quarantine_t *quarantine)
 }
 
 static void
-quarantine_drain(tsdn_t *tsdn, quarantine_t *quarantine, size_t upper_bound)
+quarantine_drain(tsd_t *tsd, quarantine_t *quarantine, size_t upper_bound)
 {
 
     while (quarantine->curbytes > upper_bound && quarantine->curobjs > 0)
-        quarantine_drain_one(tsdn, quarantine);
+        quarantine_drain_one(tsd, quarantine);
 }
 
 void
 quarantine(tsd_t *tsd, void *ptr)
 {
     quarantine_t *quarantine;
-    size_t usize = isalloc(tsd_tsdn(tsd), ptr, config_prof);
+    size_t usize = isalloc(ptr, config_prof);
 
     cassert(config_fill);
     assert(opt_quarantine);
 
     if ((quarantine = tsd_quarantine_get(tsd)) == NULL) {
-        idalloctm(tsd_tsdn(tsd), ptr, NULL, false, true);
+        idalloctm(tsd, ptr, NULL, false);
         return;
     }
     /*
@@ -133,7 +133,7 @@ quarantine(tsd_t *tsd, void *ptr)
     if (quarantine->curbytes + usize > opt_quarantine) {
         size_t upper_bound = (opt_quarantine >= usize) ? opt_quarantine
             - usize : 0;
-        quarantine_drain(tsd_tsdn(tsd), quarantine, upper_bound);
+        quarantine_drain(tsd, quarantine, upper_bound);
     }
     /* Grow the quarantine ring buffer if it's full. */
     if (quarantine->curobjs == (ZU(1) << quarantine->lg_maxobjs))
@@ -158,11 +158,11 @@ quarantine(tsd_t *tsd, void *ptr)
             && usize <= SMALL_MAXCLASS)
             arena_quarantine_junk_small(ptr, usize);
         else
-            memset(ptr, JEMALLOC_FREE_JUNK, usize);
+            memset(ptr, 0x5a, usize);
     } else {
         assert(quarantine->curbytes == 0);
-        idalloctm(tsd_tsdn(tsd), ptr, NULL, false, true);
+        idalloctm(tsd, ptr, NULL, false);
     }
 }
 
@@ -176,8 +176,8 @@ quarantine_cleanup(tsd_t *tsd)
     quarantine = tsd_quarantine_get(tsd);
     if (quarantine != NULL) {
-        quarantine_drain(tsd_tsdn(tsd), quarantine, 0);
-        idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true);
+        quarantine_drain(tsd, quarantine, 0);
+        idalloctm(tsd, quarantine, tcache_get(tsd, false), true);
         tsd_quarantine_set(tsd, NULL);
     }
 }
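Note: the quarantine is a power-of-two ring buffer; quarantine_drain_one() above advances first with a mask instead of a modulo. A minimal sketch of that indexing, with illustrative names and sizes:

#include <stddef.h>

#define DEMO_LG_SLOTS 4                  /* 16 slots */
#define DEMO_NSLOTS (1U << DEMO_LG_SLOTS)

typedef struct {
    void  *objs[DEMO_NSLOTS];
    size_t first;                        /* index of the oldest entry */
    size_t count;
} demo_ring_t;

/* Pop the oldest pointer; the AND wraps because DEMO_NSLOTS is a power of 2. */
static void *
demo_ring_pop(demo_ring_t *r)
{
    void *p = r->objs[r->first];
    r->first = (r->first + 1) & (DEMO_NSLOTS - 1);
    r->count--;
    return (p);
}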


@@ -15,8 +15,6 @@ rtree_new(rtree_t *rtree, unsigned bits, rtree_node_alloc_t *alloc,
 {
     unsigned bits_in_leaf, height, i;
 
-    assert(RTREE_HEIGHT_MAX == ((ZU(1) << (LG_SIZEOF_PTR+3)) /
-        RTREE_BITS_PER_LEVEL));
     assert(bits > 0 && bits <= (sizeof(uintptr_t) << 3));
 
     bits_in_leaf = (bits % RTREE_BITS_PER_LEVEL) == 0 ? RTREE_BITS_PER_LEVEL
@@ -96,15 +94,12 @@ rtree_node_init(rtree_t *rtree, unsigned level, rtree_node_elm_t **elmp)
     rtree_node_elm_t *node;
 
     if (atomic_cas_p((void **)elmp, NULL, RTREE_NODE_INITIALIZING)) {
-        spin_t spinner;
-
         /*
          * Another thread is already in the process of initializing.
          * Spin-wait until initialization is complete.
          */
-        spin_init(&spinner);
         do {
-            spin_adaptive(&spinner);
+            CPU_SPINWAIT;
             node = atomic_read_p((void **)elmp);
         } while (node == RTREE_NODE_INITIALIZING);
     } else {
@@ -128,5 +123,5 @@ rtree_node_elm_t *
 rtree_child_read_hard(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level)
 {
 
-    return (rtree_node_init(rtree, level+1, &elm->child));
+    return (rtree_node_init(rtree, level, &elm->child));
 }
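Note: rtree_node_init() uses a CAS guard: one thread installs a sentinel and allocates the node, while losers spin until the winner publishes the real pointer. A sketch with GCC/Clang atomic builtins standing in for jemalloc's atomic_cas_p()/atomic_read_p(); the calloc() call is a placeholder for the real node allocation:

#include <stdint.h>
#include <stdlib.h>

#define INITIALIZING ((void *)(uintptr_t)1)

/* Return the initialized node, allocating it exactly once across threads. */
static void *
demo_lazy_init(void **slot)
{
    void *node = NULL;

    if (__atomic_compare_exchange_n(slot, &node, INITIALIZING, false,
        __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) {
        node = calloc(1, 64);            /* stand-in for the real node */
        __atomic_store_n(slot, node, __ATOMIC_RELEASE);
    } else {
        /* Lost the race: wait until the winner replaces the sentinel. */
        do {
            node = __atomic_load_n(slot, __ATOMIC_ACQUIRE);
        } while (node == INITIALIZING);
    }
    return (node);
}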


@ -1,2 +0,0 @@
#define JEMALLOC_SPIN_C_
#include "jemalloc/internal/jemalloc_internal.h"

deps/jemalloc/src/stats.c (vendored; Executable file → Normal file; 1242 lines)
(diff suppressed because it is too large)

deps/jemalloc/src/tcache.c (vendored; Executable file → Normal file; 170 lines)

@ -10,7 +10,7 @@ ssize_t opt_lg_tcache_max = LG_TCACHE_MAXCLASS_DEFAULT;
tcache_bin_info_t *tcache_bin_info; tcache_bin_info_t *tcache_bin_info;
static unsigned stack_nelms; /* Total stack elms per tcache. */ static unsigned stack_nelms; /* Total stack elms per tcache. */
unsigned nhbins; size_t nhbins;
size_t tcache_maxclass; size_t tcache_maxclass;
tcaches_t *tcaches; tcaches_t *tcaches;
@ -23,11 +23,10 @@ static tcaches_t *tcaches_avail;
/******************************************************************************/ /******************************************************************************/
size_t size_t tcache_salloc(const void *ptr)
tcache_salloc(tsdn_t *tsdn, const void *ptr)
{ {
return (arena_salloc(tsdn, ptr, false)); return (arena_salloc(ptr, false));
} }
void void
@ -68,19 +67,20 @@ tcache_event_hard(tsd_t *tsd, tcache_t *tcache)
tcache->next_gc_bin++; tcache->next_gc_bin++;
if (tcache->next_gc_bin == nhbins) if (tcache->next_gc_bin == nhbins)
tcache->next_gc_bin = 0; tcache->next_gc_bin = 0;
tcache->ev_cnt = 0;
} }
void * void *
tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache, tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache,
tcache_bin_t *tbin, szind_t binind, bool *tcache_success) tcache_bin_t *tbin, szind_t binind)
{ {
void *ret; void *ret;
arena_tcache_fill_small(tsdn, arena, tbin, binind, config_prof ? arena_tcache_fill_small(arena, tbin, binind, config_prof ?
tcache->prof_accumbytes : 0); tcache->prof_accumbytes : 0);
if (config_prof) if (config_prof)
tcache->prof_accumbytes = 0; tcache->prof_accumbytes = 0;
ret = tcache_alloc_easy(tbin, tcache_success); ret = tcache_alloc_easy(tbin);
return (ret); return (ret);
} }
@ -102,18 +102,17 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) { for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) {
/* Lock the arena bin associated with the first object. */ /* Lock the arena bin associated with the first object. */
arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE( arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(
*(tbin->avail - 1)); tbin->avail[0]);
arena_t *bin_arena = extent_node_arena_get(&chunk->node); arena_t *bin_arena = extent_node_arena_get(&chunk->node);
arena_bin_t *bin = &bin_arena->bins[binind]; arena_bin_t *bin = &bin_arena->bins[binind];
if (config_prof && bin_arena == arena) { if (config_prof && bin_arena == arena) {
if (arena_prof_accum(tsd_tsdn(tsd), arena, if (arena_prof_accum(arena, tcache->prof_accumbytes))
tcache->prof_accumbytes)) prof_idump();
prof_idump(tsd_tsdn(tsd));
tcache->prof_accumbytes = 0; tcache->prof_accumbytes = 0;
} }
malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); malloc_mutex_lock(&bin->lock);
if (config_stats && bin_arena == arena) { if (config_stats && bin_arena == arena) {
assert(!merged_stats); assert(!merged_stats);
merged_stats = true; merged_stats = true;
@ -123,16 +122,16 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
} }
ndeferred = 0; ndeferred = 0;
for (i = 0; i < nflush; i++) { for (i = 0; i < nflush; i++) {
ptr = *(tbin->avail - 1 - i); ptr = tbin->avail[i];
assert(ptr != NULL); assert(ptr != NULL);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (extent_node_arena_get(&chunk->node) == bin_arena) { if (extent_node_arena_get(&chunk->node) == bin_arena) {
size_t pageind = ((uintptr_t)ptr - size_t pageind = ((uintptr_t)ptr -
(uintptr_t)chunk) >> LG_PAGE; (uintptr_t)chunk) >> LG_PAGE;
arena_chunk_map_bits_t *bitselm = arena_chunk_map_bits_t *bitselm =
arena_bitselm_get_mutable(chunk, pageind); arena_bitselm_get(chunk, pageind);
arena_dalloc_bin_junked_locked(tsd_tsdn(tsd), arena_dalloc_bin_junked_locked(bin_arena, chunk,
bin_arena, chunk, ptr, bitselm); ptr, bitselm);
} else { } else {
/* /*
* This object was allocated via a different * This object was allocated via a different
@ -140,12 +139,11 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
* locked. Stash the object, so that it can be * locked. Stash the object, so that it can be
* handled in a future pass. * handled in a future pass.
*/ */
*(tbin->avail - 1 - ndeferred) = ptr; tbin->avail[ndeferred] = ptr;
ndeferred++; ndeferred++;
} }
} }
malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); malloc_mutex_unlock(&bin->lock);
arena_decay_ticks(tsd_tsdn(tsd), bin_arena, nflush - ndeferred);
} }
if (config_stats && !merged_stats) { if (config_stats && !merged_stats) {
/* /*
@ -153,15 +151,15 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
* arena, so the stats didn't get merged. Manually do so now. * arena, so the stats didn't get merged. Manually do so now.
*/ */
arena_bin_t *bin = &arena->bins[binind]; arena_bin_t *bin = &arena->bins[binind];
malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock); malloc_mutex_lock(&bin->lock);
bin->stats.nflushes++; bin->stats.nflushes++;
bin->stats.nrequests += tbin->tstats.nrequests; bin->stats.nrequests += tbin->tstats.nrequests;
tbin->tstats.nrequests = 0; tbin->tstats.nrequests = 0;
malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock); malloc_mutex_unlock(&bin->lock);
} }
memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * memmove(tbin->avail, &tbin->avail[tbin->ncached - rem],
sizeof(void *)); rem * sizeof(void *));
tbin->ncached = rem; tbin->ncached = rem;
if ((int)tbin->ncached < tbin->low_water) if ((int)tbin->ncached < tbin->low_water)
tbin->low_water = tbin->ncached; tbin->low_water = tbin->ncached;
@ -184,13 +182,13 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) { for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) {
/* Lock the arena associated with the first object. */ /* Lock the arena associated with the first object. */
arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE( arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(
*(tbin->avail - 1)); tbin->avail[0]);
arena_t *locked_arena = extent_node_arena_get(&chunk->node); arena_t *locked_arena = extent_node_arena_get(&chunk->node);
UNUSED bool idump; UNUSED bool idump;
if (config_prof) if (config_prof)
idump = false; idump = false;
malloc_mutex_lock(tsd_tsdn(tsd), &locked_arena->lock); malloc_mutex_lock(&locked_arena->lock);
if ((config_prof || config_stats) && locked_arena == arena) { if ((config_prof || config_stats) && locked_arena == arena) {
if (config_prof) { if (config_prof) {
idump = arena_prof_accum_locked(arena, idump = arena_prof_accum_locked(arena,
@ -208,13 +206,13 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
} }
ndeferred = 0; ndeferred = 0;
for (i = 0; i < nflush; i++) { for (i = 0; i < nflush; i++) {
ptr = *(tbin->avail - 1 - i); ptr = tbin->avail[i];
assert(ptr != NULL); assert(ptr != NULL);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (extent_node_arena_get(&chunk->node) == if (extent_node_arena_get(&chunk->node) ==
locked_arena) { locked_arena) {
arena_dalloc_large_junked_locked(tsd_tsdn(tsd), arena_dalloc_large_junked_locked(locked_arena,
locked_arena, chunk, ptr); chunk, ptr);
} else { } else {
/* /*
* This object was allocated via a different * This object was allocated via a different
@ -222,56 +220,62 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
* Stash the object, so that it can be handled * Stash the object, so that it can be handled
* in a future pass. * in a future pass.
*/ */
*(tbin->avail - 1 - ndeferred) = ptr; tbin->avail[ndeferred] = ptr;
ndeferred++; ndeferred++;
} }
} }
malloc_mutex_unlock(tsd_tsdn(tsd), &locked_arena->lock); malloc_mutex_unlock(&locked_arena->lock);
if (config_prof && idump) if (config_prof && idump)
prof_idump(tsd_tsdn(tsd)); prof_idump();
arena_decay_ticks(tsd_tsdn(tsd), locked_arena, nflush -
ndeferred);
} }
if (config_stats && !merged_stats) { if (config_stats && !merged_stats) {
/* /*
* The flush loop didn't happen to flush to this thread's * The flush loop didn't happen to flush to this thread's
* arena, so the stats didn't get merged. Manually do so now. * arena, so the stats didn't get merged. Manually do so now.
*/ */
malloc_mutex_lock(tsd_tsdn(tsd), &arena->lock); malloc_mutex_lock(&arena->lock);
arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.nrequests_large += tbin->tstats.nrequests;
arena->stats.lstats[binind - NBINS].nrequests += arena->stats.lstats[binind - NBINS].nrequests +=
tbin->tstats.nrequests; tbin->tstats.nrequests;
tbin->tstats.nrequests = 0; tbin->tstats.nrequests = 0;
malloc_mutex_unlock(tsd_tsdn(tsd), &arena->lock); malloc_mutex_unlock(&arena->lock);
} }
memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * memmove(tbin->avail, &tbin->avail[tbin->ncached - rem],
sizeof(void *)); rem * sizeof(void *));
tbin->ncached = rem; tbin->ncached = rem;
if ((int)tbin->ncached < tbin->low_water) if ((int)tbin->ncached < tbin->low_water)
tbin->low_water = tbin->ncached; tbin->low_water = tbin->ncached;
} }
-static void
-tcache_arena_associate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
+void
+tcache_arena_associate(tcache_t *tcache, arena_t *arena)
 {
 	if (config_stats) {
 		/* Link into list of extant tcaches. */
-		malloc_mutex_lock(tsdn, &arena->lock);
+		malloc_mutex_lock(&arena->lock);
 		ql_elm_new(tcache, link);
 		ql_tail_insert(&arena->tcache_ql, tcache, link);
-		malloc_mutex_unlock(tsdn, &arena->lock);
+		malloc_mutex_unlock(&arena->lock);
 	}
 }

-static void
-tcache_arena_dissociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
+void
+tcache_arena_reassociate(tcache_t *tcache, arena_t *oldarena, arena_t *newarena)
+{
+	tcache_arena_dissociate(tcache, oldarena);
+	tcache_arena_associate(tcache, newarena);
+}
+
+void
+tcache_arena_dissociate(tcache_t *tcache, arena_t *arena)
 {
 	if (config_stats) {
 		/* Unlink from list of extant tcaches. */
-		malloc_mutex_lock(tsdn, &arena->lock);
+		malloc_mutex_lock(&arena->lock);
 		if (config_debug) {
 			bool in_ql = false;
 			tcache_t *iter;
@@ -284,20 +288,11 @@ tcache_arena_dissociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
 			assert(in_ql);
 		}
 		ql_remove(&arena->tcache_ql, tcache, link);
-		tcache_stats_merge(tsdn, tcache, arena);
-		malloc_mutex_unlock(tsdn, &arena->lock);
+		tcache_stats_merge(tcache, arena);
+		malloc_mutex_unlock(&arena->lock);
 	}
 }

-void
-tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *oldarena,
-    arena_t *newarena)
-{
-	tcache_arena_dissociate(tsdn, tcache, oldarena);
-	tcache_arena_associate(tsdn, tcache, newarena);
-}
-
 tcache_t *
 tcache_get_hard(tsd_t *tsd)
 {
@@ -311,11 +306,11 @@ tcache_get_hard(tsd_t *tsd)
 	arena = arena_choose(tsd, NULL);
 	if (unlikely(arena == NULL))
 		return (NULL);
-	return (tcache_create(tsd_tsdn(tsd), arena));
+	return (tcache_create(tsd, arena));
 }

 tcache_t *
-tcache_create(tsdn_t *tsdn, arena_t *arena)
+tcache_create(tsd_t *tsd, arena_t *arena)
 {
 	tcache_t *tcache;
 	size_t size, stack_offset;
@@ -329,26 +324,18 @@ tcache_create(tsdn_t *tsdn, arena_t *arena)
 	/* Avoid false cacheline sharing. */
 	size = sa2u(size, CACHELINE);

-	tcache = ipallocztm(tsdn, size, CACHELINE, true, NULL, true,
-	    arena_get(TSDN_NULL, 0, true));
+	tcache = ipallocztm(tsd, size, CACHELINE, true, false, true, a0get());
 	if (tcache == NULL)
 		return (NULL);

-	tcache_arena_associate(tsdn, tcache, arena);
+	tcache_arena_associate(tcache, arena);

-	ticker_init(&tcache->gc_ticker, TCACHE_GC_INCR);
-
 	assert((TCACHE_NSLOTS_SMALL_MAX & 1U) == 0);
 	for (i = 0; i < nhbins; i++) {
 		tcache->tbins[i].lg_fill_div = 1;
-		stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *);
-		/*
-		 * avail points past the available space.  Allocations will
-		 * access the slots toward higher addresses (for the benefit of
-		 * prefetch).
-		 */
 		tcache->tbins[i].avail = (void **)((uintptr_t)tcache +
 		    (uintptr_t)stack_offset);
+		stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *);
 	}

 	return (tcache);
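
A note on the layout change in tcache_create(): the removed 4.x code advances stack_offset before setting avail, so avail points one past the bin's slice and cached pointers are reached at negative offsets, while the restored code points avail at the base of the slice and fills upward. A minimal standalone sketch of the two addressing schemes (slice and NCACHED_MAX are illustrative names, not jemalloc's):

    #include <stdio.h>

    #define NCACHED_MAX 4

    int
    main(void)
    {
        void *slice[NCACHED_MAX];   /* one bin's slice of the per-tcache region */
        int obj = 0;

        /* Removed layout: avail points past the slice; slots sit below it. */
        void **avail_past = &slice[NCACHED_MAX];
        avail_past[-1] = &obj;      /* first cached pointer */

        /* Restored layout: avail points at the base; slots sit above it. */
        void **avail_base = &slice[0];
        avail_base[0] = &obj;       /* first cached pointer */

        printf("%p %p\n", avail_past[-1], avail_base[0]);
        return (0);
    }
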
@@ -361,7 +348,7 @@ tcache_destroy(tsd_t *tsd, tcache_t *tcache)
 	unsigned i;

 	arena = arena_choose(tsd, NULL);
-	tcache_arena_dissociate(tsd_tsdn(tsd), tcache, arena);
+	tcache_arena_dissociate(tcache, arena);

 	for (i = 0; i < NBINS; i++) {
 		tcache_bin_t *tbin = &tcache->tbins[i];
@@ -369,9 +356,9 @@ tcache_destroy(tsd_t *tsd, tcache_t *tcache)

 		if (config_stats && tbin->tstats.nrequests != 0) {
 			arena_bin_t *bin = &arena->bins[i];
-			malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);
+			malloc_mutex_lock(&bin->lock);
 			bin->stats.nrequests += tbin->tstats.nrequests;
-			malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);
+			malloc_mutex_unlock(&bin->lock);
 		}
 	}
@@ -380,19 +367,19 @@ tcache_destroy(tsd_t *tsd, tcache_t *tcache)
 		tcache_bin_flush_large(tsd, tbin, i, 0, tcache);

 		if (config_stats && tbin->tstats.nrequests != 0) {
-			malloc_mutex_lock(tsd_tsdn(tsd), &arena->lock);
+			malloc_mutex_lock(&arena->lock);
 			arena->stats.nrequests_large += tbin->tstats.nrequests;
 			arena->stats.lstats[i - NBINS].nrequests +=
 			    tbin->tstats.nrequests;
-			malloc_mutex_unlock(tsd_tsdn(tsd), &arena->lock);
+			malloc_mutex_unlock(&arena->lock);
 		}
 	}

 	if (config_prof && tcache->prof_accumbytes > 0 &&
-	    arena_prof_accum(tsd_tsdn(tsd), arena, tcache->prof_accumbytes))
-		prof_idump(tsd_tsdn(tsd));
+	    arena_prof_accum(arena, tcache->prof_accumbytes))
+		prof_idump();

-	idalloctm(tsd_tsdn(tsd), tcache, NULL, true, true);
+	idalloctm(tsd, tcache, false, true);
 }

 void
@@ -416,22 +403,21 @@ tcache_enabled_cleanup(tsd_t *tsd)
 	/* Do nothing. */
 }

+/* Caller must own arena->lock. */
 void
-tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
+tcache_stats_merge(tcache_t *tcache, arena_t *arena)
 {
 	unsigned i;

 	cassert(config_stats);

-	malloc_mutex_assert_owner(tsdn, &arena->lock);
-
 	/* Merge and reset tcache stats. */
 	for (i = 0; i < NBINS; i++) {
 		arena_bin_t *bin = &arena->bins[i];
 		tcache_bin_t *tbin = &tcache->tbins[i];
-		malloc_mutex_lock(tsdn, &bin->lock);
+		malloc_mutex_lock(&bin->lock);
 		bin->stats.nrequests += tbin->tstats.nrequests;
-		malloc_mutex_unlock(tsdn, &bin->lock);
+		malloc_mutex_unlock(&bin->lock);
 		tbin->tstats.nrequests = 0;
 	}
@@ -447,12 +433,11 @@ tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
 bool
 tcaches_create(tsd_t *tsd, unsigned *r_ind)
 {
-	arena_t *arena;
 	tcache_t *tcache;
 	tcaches_t *elm;

 	if (tcaches == NULL) {
-		tcaches = base_alloc(tsd_tsdn(tsd), sizeof(tcache_t *) *
+		tcaches = base_alloc(sizeof(tcache_t *) *
 		    (MALLOCX_TCACHE_MAX+1));
 		if (tcaches == NULL)
 			return (true);
@@ -460,10 +445,7 @@ tcaches_create(tsd_t *tsd, unsigned *r_ind)
 	if (tcaches_avail == NULL && tcaches_past > MALLOCX_TCACHE_MAX)
 		return (true);
-	arena = arena_ichoose(tsd, NULL);
-	if (unlikely(arena == NULL))
-		return (true);
-	tcache = tcache_create(tsd_tsdn(tsd), arena);
+	tcache = tcache_create(tsd, a0get());
 	if (tcache == NULL)
 		return (true);
@@ -471,7 +453,7 @@ tcaches_create(tsd_t *tsd, unsigned *r_ind)
 		elm = tcaches_avail;
 		tcaches_avail = tcaches_avail->next;
 		elm->tcache = tcache;
-		*r_ind = (unsigned)(elm - tcaches);
+		*r_ind = elm - tcaches;
 	} else {
 		elm = &tcaches[tcaches_past];
 		elm->tcache = tcache;
@@ -509,7 +491,7 @@ tcaches_destroy(tsd_t *tsd, unsigned ind)
 }

 bool
-tcache_boot(tsdn_t *tsdn)
+tcache_boot(void)
 {
 	unsigned i;
@@ -517,17 +499,17 @@ tcache_boot(tsdn_t *tsdn)
 	 * If necessary, clamp opt_lg_tcache_max, now that large_maxclass is
 	 * known.
 	 */
-	if (opt_lg_tcache_max < 0 || (ZU(1) << opt_lg_tcache_max) < SMALL_MAXCLASS)
+	if (opt_lg_tcache_max < 0 || (1U << opt_lg_tcache_max) < SMALL_MAXCLASS)
 		tcache_maxclass = SMALL_MAXCLASS;
-	else if ((ZU(1) << opt_lg_tcache_max) > large_maxclass)
+	else if ((1U << opt_lg_tcache_max) > large_maxclass)
 		tcache_maxclass = large_maxclass;
 	else
-		tcache_maxclass = (ZU(1) << opt_lg_tcache_max);
+		tcache_maxclass = (1U << opt_lg_tcache_max);

 	nhbins = size2index(tcache_maxclass) + 1;

 	/* Initialize tcache_bin_info. */
-	tcache_bin_info = (tcache_bin_info_t *)base_alloc(tsdn, nhbins *
+	tcache_bin_info = (tcache_bin_info_t *)base_alloc(nhbins *
 	    sizeof(tcache_bin_info_t));
 	if (tcache_bin_info == NULL)
 		return (true);
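
The clamp in tcache_boot() pins tcache_maxclass inside [SMALL_MAXCLASS, large_maxclass]. One subtlety of the revert is the shift operand: ZU(1) is a size_t while 1U is a 32-bit unsigned int on common ABIs, so the 1U form silently depends on opt_lg_tcache_max staying below 32. A small illustration with a hypothetical shift count:

    #include <stdio.h>
    #include <stddef.h>

    #define ZU(z)   ((size_t)z)     /* same spirit as jemalloc's ZU() macro */

    int
    main(void)
    {
        int lg = 33;    /* hypothetical; the real option is far smaller */

        /* Well-defined on LP64: size_t is 64 bits, result is 8 GiB. */
        size_t wide = ZU(1) << lg;

        printf("%zu\n", wide);
        /*
         * (1U << 33) would shift a 32-bit unsigned int by at least its full
         * width, which is undefined behavior; the 1U spelling is only safe
         * while opt_lg_tcache_max is validated to stay small.
         */
        return (0);
    }
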

deps/jemalloc/src/ticker.c (deleted)

@@ -1,2 +0,0 @@
#define JEMALLOC_TICKER_C_
#include "jemalloc/internal/jemalloc_internal.h"

deps/jemalloc/src/tsd.c

@@ -77,7 +77,7 @@ tsd_cleanup(void *arg)
 		/* Do nothing. */
 		break;
 	case tsd_state_nominal:
 #define O(n, t) \
 		n##_cleanup(tsd);
 MALLOC_TSD
 #undef O
@@ -106,17 +106,15 @@ MALLOC_TSD
 	}
 }

-tsd_t *
+bool
 malloc_tsd_boot0(void)
 {
-	tsd_t *tsd;

 	ncleanups = 0;
 	if (tsd_boot0())
-		return (NULL);
-	tsd = tsd_fetch();
-	*tsd_arenas_tdata_bypassp_get(tsd) = true;
-	return (tsd);
+		return (true);
+	*tsd_arenas_cache_bypassp_get(tsd_fetch()) = true;
+	return (false);
 }

 void
@@ -124,7 +122,7 @@ malloc_tsd_boot1(void)
 {
 	tsd_boot1();
-	*tsd_arenas_tdata_bypassp_get(tsd_fetch()) = false;
+	*tsd_arenas_cache_bypassp_get(tsd_fetch()) = false;
 }

 #ifdef _WIN32
@@ -150,15 +148,13 @@ _tls_callback(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
 #ifdef _MSC_VER
 #  ifdef _M_IX86
 #   pragma comment(linker, "/INCLUDE:__tls_used")
-#   pragma comment(linker, "/INCLUDE:_tls_callback")
 #  else
 #   pragma comment(linker, "/INCLUDE:_tls_used")
-#   pragma comment(linker, "/INCLUDE:tls_callback")
 #  endif
 #  pragma section(".CRT$XLY",long,read)
 #endif
 JEMALLOC_SECTION(".CRT$XLY") JEMALLOC_ATTR(used)
-BOOL	(WINAPI *const tls_callback)(HINSTANCE hinstDLL,
+static BOOL	(WINAPI *const tls_callback)(HINSTANCE hinstDLL,
     DWORD fdwReason, LPVOID lpvReserved) = _tls_callback;
 #endif
@@ -171,10 +167,10 @@ tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block)
 	tsd_init_block_t *iter;

 	/* Check whether this thread has already inserted into the list. */
-	malloc_mutex_lock(TSDN_NULL, &head->lock);
+	malloc_mutex_lock(&head->lock);
 	ql_foreach(iter, &head->blocks, link) {
 		if (iter->thread == self) {
-			malloc_mutex_unlock(TSDN_NULL, &head->lock);
+			malloc_mutex_unlock(&head->lock);
 			return (iter->data);
 		}
 	}
@@ -182,7 +178,7 @@ tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block)
 	ql_elm_new(block, link);
 	block->thread = self;
 	ql_tail_insert(&head->blocks, block, link);
-	malloc_mutex_unlock(TSDN_NULL, &head->lock);
+	malloc_mutex_unlock(&head->lock);
 	return (NULL);
 }
@@ -190,8 +186,8 @@ void
 tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block)
 {
-	malloc_mutex_lock(TSDN_NULL, &head->lock);
+	malloc_mutex_lock(&head->lock);
 	ql_remove(&head->blocks, block, link);
-	malloc_mutex_unlock(TSDN_NULL, &head->lock);
+	malloc_mutex_unlock(&head->lock);
 }
 #endif
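
tsd_init_check_recursion() and tsd_init_finish() implement a per-thread re-entrancy guard: while a thread initializes its TSD it parks a block on a shared list, so a recursive call from the same thread finds that block and returns the partially initialized data instead of recursing forever. A stripped-down sketch of the same pattern (simplified singly linked list; not jemalloc's actual types):

    #include <pthread.h>
    #include <stddef.h>

    typedef struct init_block {
        struct init_block  *next;
        pthread_t          thread;
        void               *data;
    } init_block_t;

    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
    static init_block_t *init_blocks = NULL;

    /* Returns the in-progress data if this thread is already initializing. */
    void *
    init_check_recursion(init_block_t *block)
    {
        init_block_t *iter;
        pthread_t self = pthread_self();

        pthread_mutex_lock(&init_lock);
        for (iter = init_blocks; iter != NULL; iter = iter->next) {
            if (pthread_equal(iter->thread, self)) {
                pthread_mutex_unlock(&init_lock);
                return (iter->data);    /* recursion detected */
            }
        }
        block->thread = self;
        block->next = init_blocks;
        init_blocks = block;            /* first entry for this thread */
        pthread_mutex_unlock(&init_lock);
        return (NULL);
    }
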

deps/jemalloc/src/util.c (mode changed: Executable file → Normal file)

@@ -1,7 +1,3 @@
-/*
- * Define simple versions of assertion macros that won't recurse in case
- * of assertion failures in malloc_*printf().
- */
 #define assert(e) do { \
 	if (config_debug && !(e)) { \
 		malloc_write("<jemalloc>: Failed assertion\n"); \
@@ -14,7 +10,6 @@
 		malloc_write("<jemalloc>: Unreachable code reached\n"); \
 		abort(); \
 	} \
-	unreachable(); \
 } while (0)

 #define not_implemented() do { \
@@ -49,19 +44,15 @@ static void
 wrtmessage(void *cbopaque, const char *s)
 {
-#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_write)
+#ifdef SYS_write
 	/*
 	 * Use syscall(2) rather than write(2) when possible in order to avoid
 	 * the possibility of memory allocation within libc.  This is necessary
 	 * on FreeBSD; most operating systems do not have this problem though.
-	 *
-	 * syscall() returns long or int, depending on platform, so capture the
-	 * unused result in the widest plausible type to avoid compiler
-	 * warnings.
 	 */
-	UNUSED long result = syscall(SYS_write, STDERR_FILENO, s, strlen(s));
+	UNUSED int result = syscall(SYS_write, STDERR_FILENO, s, strlen(s));
 #else
-	UNUSED ssize_t result = write(STDERR_FILENO, s, strlen(s));
+	UNUSED int result = write(STDERR_FILENO, s, strlen(s));
 #endif
 }
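
The comment explains the constraint on wrtmessage(): an allocator's error path must not allocate, so it bypasses stdio entirely and, where possible, even the libc write(2) wrapper via syscall(2). A minimal allocation-free stderr writer in the same spirit (POSIX; SYS_write availability varies by platform):

    #include <string.h>
    #include <unistd.h>
    #ifdef __linux__
    #include <sys/syscall.h>
    #endif

    /* Write directly to stderr without touching stdio or malloc. */
    static void
    write_stderr(const char *s)
    {
    #ifdef SYS_write
        (void)syscall(SYS_write, STDERR_FILENO, s, strlen(s));
    #else
        (void)write(STDERR_FILENO, s, strlen(s));
    #endif
    }

    int
    main(void)
    {
        write_stderr("allocator error path: no malloc here\n");
        return (0);
    }
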
@@ -91,7 +82,7 @@ buferror(int err, char *buf, size_t buflen)
 #ifdef _WIN32
 	FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, NULL, err, 0,
-	    (LPSTR)buf, (DWORD)buflen, NULL);
+	    (LPSTR)buf, buflen, NULL);
 	return (0);
 #elif defined(__GLIBC__) && defined(_GNU_SOURCE)
 	char *b = strerror_r(err, buf, buflen);
@@ -200,7 +191,7 @@ malloc_strtoumax(const char *restrict nptr, char **restrict endptr, int base)
 			p++;
 	}
 	if (neg)
-		ret = (uintmax_t)(-((intmax_t)ret));
+		ret = -ret;
 	if (p == ns) {
 		/* No conversion performed. */
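
Both spellings of the negation compute the same bits: ret is a uintmax_t, and unsigned negation is defined as wrapping modulo 2^N, which is exactly the value strtoumax-family parsers return for negated input. The removed cast chain only silences sign-conversion warnings. A quick check (value illustrative):

    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        uintmax_t ret = 42;
        uintmax_t a = -ret;                             /* modulo-2^N negation */
        uintmax_t b = (uintmax_t)(-((intmax_t)ret));    /* cast-heavy spelling */

        printf("%" PRIuMAX " %" PRIuMAX " %d\n", a, b, a == b);
        return (0);
    }
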
@@ -315,9 +306,10 @@ x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p)
 	return (s);
 }

-size_t
+int
 malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
 {
+	int ret;
 	size_t i;
 	const char *f;
@@ -408,8 +400,6 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
 		int prec = -1;
 		int width = -1;
 		unsigned char len = '?';
-		char *s;
-		size_t slen;

 		f++;
 		/* Flags. */
@@ -500,6 +490,8 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
 		}
 		/* Conversion specifier. */
 		switch (*f) {
+			char *s;
+			size_t slen;
 		case '%':
 			/* %% */
 			APPEND_C(*f);
@@ -585,19 +577,20 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
 		str[i] = '\0';
 	else
 		str[size - 1] = '\0';
+	ret = i;

 #undef APPEND_C
 #undef APPEND_S
 #undef APPEND_PADDED_S
 #undef GET_ARG_NUMERIC
-	return (i);
+	return (ret);
 }

 JEMALLOC_FORMAT_PRINTF(3, 4)
-size_t
+int
 malloc_snprintf(char *str, size_t size, const char *format, ...)
 {
-	size_t ret;
+	int ret;
 	va_list ap;

 	va_start(ap, format);
@@ -655,12 +648,3 @@ malloc_printf(const char *format, ...)
 	malloc_vcprintf(NULL, NULL, format, ap);
 	va_end(ap);
 }
-
-/*
- * Restore normal assertion macros, in order to make it possible to compile all
- * C files as a single concatenation.
- */
-#undef assert
-#undef not_reached
-#undef not_implemented
-#include "jemalloc/internal/assert.h"
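
With the revert, malloc_snprintf() returns int (like C99 snprintf) instead of size_t; either way the value is the length that would have been written, so callers detect truncation by comparing against the buffer size. A hypothetical caller (snprintf stands in for malloc_snprintf, which additionally avoids allocating):

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for a malloc_snprintf()-style formatter. */
    static int
    fmt_stat(char *buf, size_t size, const char *name, unsigned long val)
    {
        return (snprintf(buf, size, "%s: %lu\n", name, val));
    }

    int
    main(void)
    {
        char buf[16];
        int n = fmt_stat(buf, sizeof(buf), "nrequests", 123456789UL);

        if ((size_t)n >= sizeof(buf))
            fputs("output truncated\n", stderr);
        fputs(buf, stdout);
        return (0);
    }
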

deps/jemalloc/src/witness.c (deleted)

@@ -1,136 +0,0 @@
#define JEMALLOC_WITNESS_C_
#include "jemalloc/internal/jemalloc_internal.h"
void
witness_init(witness_t *witness, const char *name, witness_rank_t rank,
witness_comp_t *comp)
{
witness->name = name;
witness->rank = rank;
witness->comp = comp;
}
#ifdef JEMALLOC_JET
#undef witness_lock_error
#define witness_lock_error JEMALLOC_N(n_witness_lock_error)
#endif
void
witness_lock_error(const witness_list_t *witnesses, const witness_t *witness)
{
witness_t *w;
malloc_printf("<jemalloc>: Lock rank order reversal:");
ql_foreach(w, witnesses, link) {
malloc_printf(" %s(%u)", w->name, w->rank);
}
malloc_printf(" %s(%u)\n", witness->name, witness->rank);
abort();
}
#ifdef JEMALLOC_JET
#undef witness_lock_error
#define witness_lock_error JEMALLOC_N(witness_lock_error)
witness_lock_error_t *witness_lock_error = JEMALLOC_N(n_witness_lock_error);
#endif
#ifdef JEMALLOC_JET
#undef witness_owner_error
#define witness_owner_error JEMALLOC_N(n_witness_owner_error)
#endif
void
witness_owner_error(const witness_t *witness)
{
malloc_printf("<jemalloc>: Should own %s(%u)\n", witness->name,
witness->rank);
abort();
}
#ifdef JEMALLOC_JET
#undef witness_owner_error
#define witness_owner_error JEMALLOC_N(witness_owner_error)
witness_owner_error_t *witness_owner_error = JEMALLOC_N(n_witness_owner_error);
#endif
#ifdef JEMALLOC_JET
#undef witness_not_owner_error
#define witness_not_owner_error JEMALLOC_N(n_witness_not_owner_error)
#endif
void
witness_not_owner_error(const witness_t *witness)
{
malloc_printf("<jemalloc>: Should not own %s(%u)\n", witness->name,
witness->rank);
abort();
}
#ifdef JEMALLOC_JET
#undef witness_not_owner_error
#define witness_not_owner_error JEMALLOC_N(witness_not_owner_error)
witness_not_owner_error_t *witness_not_owner_error =
JEMALLOC_N(n_witness_not_owner_error);
#endif
#ifdef JEMALLOC_JET
#undef witness_lockless_error
#define witness_lockless_error JEMALLOC_N(n_witness_lockless_error)
#endif
void
witness_lockless_error(const witness_list_t *witnesses)
{
witness_t *w;
malloc_printf("<jemalloc>: Should not own any locks:");
ql_foreach(w, witnesses, link) {
malloc_printf(" %s(%u)", w->name, w->rank);
}
malloc_printf("\n");
abort();
}
#ifdef JEMALLOC_JET
#undef witness_lockless_error
#define witness_lockless_error JEMALLOC_N(witness_lockless_error)
witness_lockless_error_t *witness_lockless_error =
JEMALLOC_N(n_witness_lockless_error);
#endif
void
witnesses_cleanup(tsd_t *tsd)
{
witness_assert_lockless(tsd_tsdn(tsd));
/* Do nothing. */
}
void
witness_fork_cleanup(tsd_t *tsd)
{
/* Do nothing. */
}
void
witness_prefork(tsd_t *tsd)
{
tsd_witness_fork_set(tsd, true);
}
void
witness_postfork_parent(tsd_t *tsd)
{
tsd_witness_fork_set(tsd, false);
}
void
witness_postfork_child(tsd_t *tsd)
{
#ifndef JEMALLOC_MUTEX_INIT_CB
witness_list_t *witnesses;
witnesses = tsd_witnessesp_get(tsd);
ql_new(witnesses);
#endif
tsd_witness_fork_set(tsd, false);
}
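
The deleted witness machinery gives every mutex a rank and asserts that locks are always acquired in increasing rank order, which makes lock-order reversals (potential deadlocks) fail fast in debug builds. A toy version of the core idea (names and structure simplified; the real code keeps a per-thread list of held witnesses and supports unlocking):

    #include <stdio.h>
    #include <stdlib.h>

    /* Highest rank currently held on this thread. */
    static __thread unsigned witness_depth = 0;

    typedef struct {
        const char  *name;
        unsigned    rank;
    } witness_t;

    static void
    witness_lock(const witness_t *w)
    {
        if (w->rank <= witness_depth) {
            fprintf(stderr, "lock rank order reversal at %s(%u)\n",
                w->name, w->rank);
            abort();
        }
        witness_depth = w->rank;
    }

    int
    main(void)
    {
        witness_t arena = {"arena", 10}, bin = {"bin", 20};

        witness_lock(&arena);   /* ok: 10 > 0 */
        witness_lock(&bin);     /* ok: 20 > 10 */
        witness_lock(&arena);   /* reversal: 10 <= 20 -> abort() */
        return (0);
    }
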

deps/jemalloc/src/zone.c

@@ -4,7 +4,7 @@
 #endif

 /*
- * The malloc_default_purgeable_zone() function is only available on >= 10.6.
+ * The malloc_default_purgeable_zone function is only available on >= 10.6.
  * We need to check whether it is present at runtime, thus the weak_import.
  */
 extern malloc_zone_t *malloc_default_purgeable_zone(void)
@@ -13,9 +13,8 @@ JEMALLOC_ATTR(weak_import);
 /******************************************************************************/
 /* Data. */

-static malloc_zone_t *default_zone, *purgeable_zone;
-static malloc_zone_t jemalloc_zone;
-static struct malloc_introspection_t jemalloc_zone_introspect;
+static malloc_zone_t zone;
+static struct malloc_introspection_t zone_introspect;

 /******************************************************************************/
 /* Function prototypes for non-inline static functions. */
@@ -57,7 +56,7 @@ zone_size(malloc_zone_t *zone, void *ptr)
 	 * not work in practice, we must check all pointers to assure that they
 	 * reside within a mapped chunk before determining size.
 	 */
-	return (ivsalloc(tsdn_fetch(), ptr, config_prof));
+	return (ivsalloc(ptr, config_prof));
 }

 static void *
@@ -88,7 +87,7 @@ static void
 zone_free(malloc_zone_t *zone, void *ptr)
 {
-	if (ivsalloc(tsdn_fetch(), ptr, config_prof) != 0) {
+	if (ivsalloc(ptr, config_prof) != 0) {
 		je_free(ptr);
 		return;
 	}
@@ -100,7 +99,7 @@ static void *
 zone_realloc(malloc_zone_t *zone, void *ptr, size_t size)
 {
-	if (ivsalloc(tsdn_fetch(), ptr, config_prof) != 0)
+	if (ivsalloc(ptr, config_prof) != 0)
 		return (je_realloc(ptr, size));

 	return (realloc(ptr, size));
@@ -122,11 +121,9 @@ zone_memalign(malloc_zone_t *zone, size_t alignment, size_t size)
 static void
 zone_free_definite_size(malloc_zone_t *zone, void *ptr, size_t size)
 {
-	size_t alloc_size;
-
-	alloc_size = ivsalloc(tsdn_fetch(), ptr, config_prof);
-	if (alloc_size != 0) {
-		assert(alloc_size == size);
+	if (ivsalloc(ptr, config_prof) != 0) {
+		assert(ivsalloc(ptr, config_prof) == size);
 		je_free(ptr);
 		return;
 	}
@@ -165,103 +162,89 @@ static void
 zone_force_unlock(malloc_zone_t *zone)
 {
-	/*
-	 * Call jemalloc_postfork_child() rather than
-	 * jemalloc_postfork_parent(), because this function is executed by both
-	 * parent and child.  The parent can tolerate having state
-	 * reinitialized, but the child cannot unlock mutexes that were locked
-	 * by the parent.
-	 */
 	if (isthreaded)
-		jemalloc_postfork_child();
+		jemalloc_postfork_parent();
 }

-static void
-zone_init(void)
+JEMALLOC_ATTR(constructor)
+void
+register_zone(void)
 {
-	jemalloc_zone.size = (void *)zone_size;
-	jemalloc_zone.malloc = (void *)zone_malloc;
-	jemalloc_zone.calloc = (void *)zone_calloc;
-	jemalloc_zone.valloc = (void *)zone_valloc;
-	jemalloc_zone.free = (void *)zone_free;
-	jemalloc_zone.realloc = (void *)zone_realloc;
-	jemalloc_zone.destroy = (void *)zone_destroy;
-	jemalloc_zone.zone_name = "jemalloc_zone";
-	jemalloc_zone.batch_malloc = NULL;
-	jemalloc_zone.batch_free = NULL;
-	jemalloc_zone.introspect = &jemalloc_zone_introspect;
-	jemalloc_zone.version = JEMALLOC_ZONE_VERSION;
-#if (JEMALLOC_ZONE_VERSION >= 5)
-	jemalloc_zone.memalign = zone_memalign;
-#endif
-#if (JEMALLOC_ZONE_VERSION >= 6)
-	jemalloc_zone.free_definite_size = zone_free_definite_size;
-#endif
-#if (JEMALLOC_ZONE_VERSION >= 8)
-	jemalloc_zone.pressure_relief = NULL;
-#endif
-
-	jemalloc_zone_introspect.enumerator = NULL;
-	jemalloc_zone_introspect.good_size = (void *)zone_good_size;
-	jemalloc_zone_introspect.check = NULL;
-	jemalloc_zone_introspect.print = NULL;
-	jemalloc_zone_introspect.log = NULL;
-	jemalloc_zone_introspect.force_lock = (void *)zone_force_lock;
-	jemalloc_zone_introspect.force_unlock = (void *)zone_force_unlock;
-	jemalloc_zone_introspect.statistics = NULL;
-#if (JEMALLOC_ZONE_VERSION >= 6)
-	jemalloc_zone_introspect.zone_locked = NULL;
-#endif
-#if (JEMALLOC_ZONE_VERSION >= 7)
-	jemalloc_zone_introspect.enable_discharge_checking = NULL;
-	jemalloc_zone_introspect.disable_discharge_checking = NULL;
-	jemalloc_zone_introspect.discharge = NULL;
-#  ifdef __BLOCKS__
-	jemalloc_zone_introspect.enumerate_discharged_pointers = NULL;
-#  else
-	jemalloc_zone_introspect.enumerate_unavailable_without_blocks = NULL;
-#  endif
-#endif
-}
-
-static malloc_zone_t *
-zone_default_get(void)
-{
-	malloc_zone_t **zones = NULL;
-	unsigned int num_zones = 0;
-
-	/*
-	 * On OSX 10.12, malloc_default_zone returns a special zone that is not
-	 * present in the list of registered zones.  That zone uses a "lite zone"
-	 * if one is present (apparently enabled when malloc stack logging is
-	 * enabled), or the first registered zone otherwise.  In practice this
-	 * means unless malloc stack logging is enabled, the first registered
-	 * zone is the default.  So get the list of zones to get the first one,
-	 * instead of relying on malloc_default_zone.
-	 */
-	if (KERN_SUCCESS != malloc_get_all_zones(0, NULL,
-	    (vm_address_t**)&zones, &num_zones)) {
-		/*
-		 * Reset the value in case the failure happened after it was
-		 * set.
-		 */
-		num_zones = 0;
-	}
-	if (num_zones)
-		return (zones[0]);
-
-	return (malloc_default_zone());
-}
-
-/* As written, this function can only promote jemalloc_zone. */
-static void
-zone_promote(void)
-{
-	malloc_zone_t *zone;
-
+	/*
+	 * If something else replaced the system default zone allocator, don't
+	 * register jemalloc's.
+	 */
+	malloc_zone_t *default_zone = malloc_default_zone();
+	malloc_zone_t *purgeable_zone = NULL;
+	if (!default_zone->zone_name ||
+	    strcmp(default_zone->zone_name, "DefaultMallocZone") != 0) {
+		return;
+	}
+
+	zone.size = (void *)zone_size;
+	zone.malloc = (void *)zone_malloc;
+	zone.calloc = (void *)zone_calloc;
+	zone.valloc = (void *)zone_valloc;
+	zone.free = (void *)zone_free;
+	zone.realloc = (void *)zone_realloc;
+	zone.destroy = (void *)zone_destroy;
+	zone.zone_name = "jemalloc_zone";
+	zone.batch_malloc = NULL;
+	zone.batch_free = NULL;
+	zone.introspect = &zone_introspect;
+	zone.version = JEMALLOC_ZONE_VERSION;
+#if (JEMALLOC_ZONE_VERSION >= 5)
+	zone.memalign = zone_memalign;
+#endif
+#if (JEMALLOC_ZONE_VERSION >= 6)
+	zone.free_definite_size = zone_free_definite_size;
+#endif
+#if (JEMALLOC_ZONE_VERSION >= 8)
+	zone.pressure_relief = NULL;
+#endif
+
+	zone_introspect.enumerator = NULL;
+	zone_introspect.good_size = (void *)zone_good_size;
+	zone_introspect.check = NULL;
+	zone_introspect.print = NULL;
+	zone_introspect.log = NULL;
+	zone_introspect.force_lock = (void *)zone_force_lock;
+	zone_introspect.force_unlock = (void *)zone_force_unlock;
+	zone_introspect.statistics = NULL;
+#if (JEMALLOC_ZONE_VERSION >= 6)
+	zone_introspect.zone_locked = NULL;
+#endif
+#if (JEMALLOC_ZONE_VERSION >= 7)
+	zone_introspect.enable_discharge_checking = NULL;
+	zone_introspect.disable_discharge_checking = NULL;
+	zone_introspect.discharge = NULL;
+#ifdef __BLOCKS__
+	zone_introspect.enumerate_discharged_pointers = NULL;
+#else
+	zone_introspect.enumerate_unavailable_without_blocks = NULL;
+#endif
+#endif
+
+	/*
+	 * The default purgeable zone is created lazily by OSX's libc.  It uses
+	 * the default zone when it is created for "small" allocations
+	 * (< 15 KiB), but assumes the default zone is a scalable_zone.  This
+	 * obviously fails when the default zone is the jemalloc zone, so
+	 * malloc_default_purgeable_zone is called beforehand so that the
+	 * default purgeable zone is created when the default zone is still
+	 * a scalable_zone.  As purgeable zones only exist on >= 10.6, we need
+	 * to check for the existence of malloc_default_purgeable_zone() at
+	 * run time.
+	 */
+	if (malloc_default_purgeable_zone != NULL)
+		purgeable_zone = malloc_default_purgeable_zone();
+
+	/* Register the custom zone.  At this point it won't be the default. */
+	malloc_zone_register(&zone);
+
 	do {
-		default_zone = malloc_default_zone();
 		/*
 		 * Unregister and reregister the default zone.  On OSX >= 10.6,
 		 * unregistering takes the last registered zone and places it
@@ -272,7 +255,6 @@ zone_promote(void)
 		 */
 		malloc_zone_unregister(default_zone);
 		malloc_zone_register(default_zone);
-
 		/*
 		 * On OSX 10.6, having the default purgeable zone appear before
 		 * the default zone makes some things crash because it thinks it
@@ -284,47 +266,9 @@ zone_promote(void)
 		 * above, i.e. the default zone.  Registering it again then puts
 		 * it at the end, obviously after the default zone.
 		 */
-		if (purgeable_zone != NULL) {
+		if (purgeable_zone) {
 			malloc_zone_unregister(purgeable_zone);
 			malloc_zone_register(purgeable_zone);
 		}
+	} while (malloc_default_zone() != &zone);

-		zone = zone_default_get();
-	} while (zone != &jemalloc_zone);
-}
-
-JEMALLOC_ATTR(constructor)
-void
-zone_register(void)
-{
-	/*
-	 * If something else replaced the system default zone allocator, don't
-	 * register jemalloc's.
-	 */
-	default_zone = zone_default_get();
-	if (!default_zone->zone_name || strcmp(default_zone->zone_name,
-	    "DefaultMallocZone") != 0)
-		return;
-
-	/*
-	 * The default purgeable zone is created lazily by OSX's libc.  It uses
-	 * the default zone when it is created for "small" allocations
-	 * (< 15 KiB), but assumes the default zone is a scalable_zone.  This
-	 * obviously fails when the default zone is the jemalloc zone, so
-	 * malloc_default_purgeable_zone() is called beforehand so that the
-	 * default purgeable zone is created when the default zone is still
-	 * a scalable_zone.  As purgeable zones only exist on >= 10.6, we need
-	 * to check for the existence of malloc_default_purgeable_zone() at
-	 * run time.
-	 */
-	purgeable_zone = (malloc_default_purgeable_zone == NULL) ? NULL :
-	    malloc_default_purgeable_zone();
-
-	/* Register the custom zone.  At this point it won't be the default. */
-	zone_init();
-	malloc_zone_register(&jemalloc_zone);
-
-	/* Promote the custom zone to be default. */
-	zone_promote();
 }
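
The do/while loop relies on a documented quirk: unregistering a zone moves the last registered zone into the vacated slot, so repeatedly unregistering and reregistering the previous default sinks it behind jemalloc's zone until jemalloc's zone comes first. A small self-contained simulation of that mechanism (a mock registry, not the real malloc_zone API):

    #include <stdio.h>

    #define MAXZONES 8

    static const char *zones[MAXZONES] = {"DefaultMallocZone", "jemalloc_zone"};
    static int nzones = 2;

    /* Mimics malloc_zone_unregister(): the last zone fills the vacated slot. */
    static void
    mock_unregister(int i)
    {
        zones[i] = zones[--nzones];
    }

    static void
    mock_register(const char *z)
    {
        zones[nzones++] = z;
    }

    int
    main(void)
    {
        mock_unregister(0);                 /* "jemalloc_zone" moves to slot 0 */
        mock_register("DefaultMallocZone"); /* old default appended at the end */

        for (int i = 0; i < nzones; i++)
            printf("%d: %s\n", i, zones[i]);
        /* zones[0] is now "jemalloc_zone": it would be treated as default. */
        return (0);
    }
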

deps/jemalloc/test/include/test/jemalloc_test.h.in

@@ -11,6 +11,7 @@
 #ifdef _WIN32
 #  include "msvc_compat/strings.h"
 #endif
+#include <sys/time.h>

 #ifdef _WIN32
 #  include <windows.h>
@@ -19,6 +20,39 @@
 #  include <pthread.h>
 #endif

+/******************************************************************************/
+/*
+ * Define always-enabled assertion macros, so that test assertions execute even
+ * if assertions are disabled in the library code.  These definitions must
+ * exist prior to including "jemalloc/internal/util.h".
+ */
+#define assert(e) do { \
+	if (!(e)) { \
+		malloc_printf( \
+		    "<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
+		    __FILE__, __LINE__, #e); \
+		abort(); \
+	} \
+} while (0)
+
+#define not_reached() do { \
+	malloc_printf( \
+	    "<jemalloc>: %s:%d: Unreachable code reached\n", \
+	    __FILE__, __LINE__); \
+	abort(); \
+} while (0)
+
+#define not_implemented() do { \
+	malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
+	    __FILE__, __LINE__); \
+	abort(); \
+} while (0)
+
+#define assert_not_implemented(e) do { \
+	if (!(e)) \
+		not_implemented(); \
+} while (0)
+
 #include "test/jemalloc_test_defs.h"

 #ifdef JEMALLOC_OSSPIN
@@ -53,14 +87,6 @@
 #  include "jemalloc/internal/jemalloc_internal_defs.h"
 #  include "jemalloc/internal/jemalloc_internal_macros.h"

-static const bool config_debug =
-#ifdef JEMALLOC_DEBUG
-    true
-#else
-    false
-#endif
-    ;
-
 #  define JEMALLOC_N(n) @private_namespace@##n
 #  include "jemalloc/internal/private_namespace.h"
@@ -68,7 +94,6 @@ static const bool config_debug =
 #  define JEMALLOC_H_STRUCTS
 #  define JEMALLOC_H_EXTERNS
 #  define JEMALLOC_H_INLINES
-#  include "jemalloc/internal/nstime.h"
 #  include "jemalloc/internal/util.h"
 #  include "jemalloc/internal/qr.h"
 #  include "jemalloc/internal/ql.h"
@@ -124,40 +149,3 @@ static const bool config_debug =
 #include "test/thd.h"
 #define MEXP 19937
 #include "test/SFMT.h"
-
-/******************************************************************************/
-/*
- * Define always-enabled assertion macros, so that test assertions execute even
- * if assertions are disabled in the library code.
- */
-#undef assert
-#undef not_reached
-#undef not_implemented
-#undef assert_not_implemented
-
-#define assert(e) do { \
-	if (!(e)) { \
-		malloc_printf( \
-		    "<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
-		    __FILE__, __LINE__, #e); \
-		abort(); \
-	} \
-} while (0)
-
-#define not_reached() do { \
-	malloc_printf( \
-	    "<jemalloc>: %s:%d: Unreachable code reached\n", \
-	    __FILE__, __LINE__); \
-	abort(); \
-} while (0)
-
-#define not_implemented() do { \
-	malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
-	    __FILE__, __LINE__); \
-	abort(); \
-} while (0)
-
-#define assert_not_implemented(e) do { \
-	if (!(e)) \
-		not_implemented(); \
-} while (0)
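
Hoisting the always-enabled assertion macros above the internal includes means a test binary aborts with file and line information even when the library itself was built without --enable-debug. A self-contained illustration of the pattern (fprintf stands in for malloc_printf; the macro name is illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    /* Always-enabled variant, in the spirit of the test-harness macros. */
    #define test_assert(e) do { \
        if (!(e)) { \
            fprintf(stderr, "%s:%d: Failed assertion: \"%s\"\n", \
                __FILE__, __LINE__, #e); \
            abort(); \
        } \
    } while (0)

    int
    main(void)
    {
        int ncached = 0;

        test_assert(ncached == 0);  /* passes */
        test_assert(ncached == 1);  /* aborts with file and line */
        return (0);
    }
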

deps/jemalloc/test/include/test/mtx.h

@@ -8,8 +8,6 @@
 typedef struct {
 #ifdef _WIN32
 	CRITICAL_SECTION	lock;
-#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))
-	os_unfair_lock		lock;
 #elif (defined(JEMALLOC_OSSPIN))
 	OSSpinLock		lock;
 #else

deps/jemalloc/test/include/test/test.h

@@ -311,9 +311,6 @@ label_test_end: \
 #define test(...) \
 	p_test(__VA_ARGS__, NULL)

-#define test_no_malloc_init(...) \
-	p_test_no_malloc_init(__VA_ARGS__, NULL)
-
 #define test_skip_if(e) do { \
 	if (e) { \
 		test_skip("%s:%s:%d: Test skipped: (%s)", \
@@ -327,7 +324,6 @@ void test_fail(const char *format, ...) JEMALLOC_FORMAT_PRINTF(1, 2);

 /* For private use by macros. */
 test_status_t	p_test(test_t *t, ...);
-test_status_t	p_test_no_malloc_init(test_t *t, ...);
 void	p_test_init(const char *name);
 void	p_test_fini(void);
 void	p_test_fail(const char *prefix, const char *message);
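
p_test() takes a NULL-terminated vararg list of test functions, which is also what the removed test_no_malloc_init() wrapper expanded to. Under jemalloc's harness a suite is written roughly like the sketch below (TEST_BEGIN/TEST_END and test() are harness macros; the test body is hypothetical):

    #include "test/jemalloc_test.h"

    TEST_BEGIN(test_basic)
    {
        void *p = mallocx(42, 0);

        assert_ptr_not_null(p, "Unexpected mallocx() failure");
        dallocx(p, 0);
    }
    TEST_END

    int
    main(void)
    {
        /* test() expands to p_test(test_basic, NULL). */
        return (test(test_basic));
    }
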

(Some files were not shown because too many files have changed in this diff.)