malloc.c.h
1 /*
2  This is a version (aka dlmalloc) of malloc/free/realloc written by
3  Doug Lea and released to the public domain, as explained at
4  http://creativecommons.org/licenses/publicdomain. Send questions,
5  comments, complaints, performance data, etc to dl@cs.oswego.edu
6 
7 * Version 2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
8 
9  Note: There may be an updated version of this malloc obtainable at
10  ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11  Check before installing!
12 
13 * Quickstart
14 
15  This library is all in one file to simplify the most common usage:
16  ftp it, compile it (-O3), and link it into another program. All of
17  the compile-time options default to reasonable values for use on
18  most platforms. You might later want to step through various
19  compile-time and dynamic tuning options.
20 
21  For convenience, an include file for code using this malloc is at:
22  ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
23  You don't really need this .h file unless you call functions not
24  defined in your system include files. The .h file contains only the
25  excerpts from this file needed for using this malloc on ANSI C/C++
26  systems, so long as you haven't changed compile-time options about
27  naming and tuning parameters. If you do, then you can create your
28  own malloc.h that does include all settings by cutting at the point
29  indicated below. Note that you may already by default be using a C
30  library containing a malloc that is based on some version of this
31  malloc (for example in linux). You might still want to use the one
32  in this file to customize settings or to avoid overheads associated
33  with library versions.
34 
35 * Vital statistics:
36 
37  Supported pointer/size_t representation: 4 or 8 bytes
38  size_t MUST be an unsigned type of the same width as
39  pointers. (If you are using an ancient system that declares
40  size_t as a signed type, or need it to be a different width
41  than pointers, you can use a previous release of this malloc
42  (e.g. 2.7.2) supporting these.)
43 
44  Alignment: 8 bytes (default)
45  This suffices for nearly all current machines and C compilers.
46  However, you can define MALLOC_ALIGNMENT to be wider than this
47  if necessary (up to 128bytes), at the expense of using more space.
48 
49  Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
50  8 or 16 bytes (if 8byte sizes)
51  Each malloced chunk has a hidden word of overhead holding size
52  and status information, and additional cross-check word
53  if FOOTERS is defined.
54 
55  Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
56  8-byte ptrs: 32 bytes (including overhead)
57 
58  Even a request for zero bytes (i.e., malloc(0)) returns a
59  pointer to something of the minimum allocatable size.
60  The maximum overhead wastage (i.e., number of extra bytes
61  allocated beyond those requested in malloc) is less than or equal
62  to the minimum size, except for requests >= mmap_threshold that
63  are serviced via mmap(), where the worst case wastage is about
64  32 bytes plus the remainder from a system page (the minimal
65  mmap unit); typically 4096 or 8192 bytes.
66 
67  Security: static-safe; optionally more or less
68  The "security" of malloc refers to the ability of malicious
69  code to accentuate the effects of errors (for example, freeing
70  space that is not currently malloc'ed or overwriting past the
71  ends of chunks) in code that calls malloc. This malloc
72  guarantees not to modify any memory locations below the base of
73  heap, i.e., static variables, even in the presence of usage
74  errors. The routines additionally detect most improper frees
75  and reallocs. All this holds as long as the static bookkeeping
76  for malloc itself is not corrupted by some other means. This
77  is only one aspect of security -- these checks do not, and
78  cannot, detect all possible programming errors.
79 
80  If FOOTERS is defined nonzero, then each allocated chunk
81  carries an additional check word to verify that it was malloced
82  from its space. These check words are the same within each
83  execution of a program using malloc, but differ across
84  executions, so externally crafted fake chunks cannot be
85  freed. This improves security by rejecting frees/reallocs that
86  could corrupt heap memory, in addition to the checks preventing
87  writes to statics that are always on. This may further improve
88  security at the expense of time and space overhead. (Note that
89  FOOTERS may also be worth using with MSPACES.)
90 
91  By default detected errors cause the program to abort (calling
92  "abort()"). You can override this to instead proceed past
93  errors by defining PROCEED_ON_ERROR. In this case, a bad free
94  has no effect, and a malloc that encounters a bad address
95  caused by user overwrites will ignore the bad address by
96  dropping pointers and indices to all known memory. This may
97  be appropriate for programs that should continue if at all
98  possible in the face of programming errors, although they may
99  run out of memory because dropped memory is never reclaimed.
100 
101  If you don't like either of these options, you can define
102  CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103  else. And if you are sure that your program using malloc has
104  no errors or vulnerabilities, you can define INSECURE to 1,
105  which might (or might not) provide a small performance improvement.
106 
107  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108  When USE_LOCKS is defined, each public call to malloc, free,
109  etc is surrounded with either a pthread mutex or a win32
110  spinlock (depending on WIN32). This is not especially fast, and
111  can be a major bottleneck. It is designed only to provide
112  minimal protection in concurrent environments, and to provide a
113  basis for extensions. If you are using malloc in a concurrent
114  program, consider instead using nedmalloc
115  (http://www.nedprod.com/programs/portable/nedmalloc/) or
116  ptmalloc (See http://www.malloc.de), which are derived
117  from versions of this malloc.
118 
119  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
120  This malloc can use unix sbrk or any emulation (invoked using
121  the CALL_MORECORE macro) and/or mmap/munmap or any emulation
122  (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
123  memory. On most unix systems, it tends to work best if both
124  MORECORE and MMAP are enabled. On Win32, it uses emulations
125  based on VirtualAlloc. It also uses common C library functions
126  like memset.
127 
128  Compliance: I believe it is compliant with the Single Unix Specification
129  (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
130  others as well.
131 
132 * Overview of algorithms
133 
134  This is not the fastest, most space-conserving, most portable, or
135  most tunable malloc ever written. However it is among the fastest
136  while also being among the most space-conserving, portable and
137  tunable. Consistent balance across these factors results in a good
138  general-purpose allocator for malloc-intensive programs.
139 
140  In most ways, this malloc is a best-fit allocator. Generally, it
141  chooses the best-fitting existing chunk for a request, with ties
142  broken in approximately least-recently-used order. (This strategy
143  normally maintains low fragmentation.) However, for requests less
144  than 256bytes, it deviates from best-fit when there is not an
145  exactly fitting available chunk by preferring to use space adjacent
146  to that used for the previous small request, as well as by breaking
147  ties in approximately most-recently-used order. (These enhance
148  locality of series of small allocations.) And for very large requests
149  (>= 256Kb by default), it relies on system memory mapping
150  facilities, if supported. (This helps avoid carrying around and
151  possibly fragmenting memory used only for large chunks.)
152 
153  All operations (except malloc_stats and mallinfo) have execution
154  times that are bounded by a constant factor of the number of bits in
155  a size_t, not counting any clearing in calloc or copying in realloc,
156  or actions surrounding MORECORE and MMAP that have times
157  proportional to the number of non-contiguous regions returned by
158  system allocation routines, which is often just 1. In real-time
159  applications, you can optionally suppress segment traversals using
160  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
161  system allocators return non-contiguous spaces, at the typical
162  expense of carrying around more memory and increased fragmentation.
163 
164  The implementation is not very modular and seriously overuses
165  macros. Perhaps someday all C compilers will do as good a job
166  inlining modular code as can now be done by brute-force expansion,
167  but for now, enough of them seem not to.
168 
169  Some compilers issue a lot of warnings about code that is
170  dead/unreachable only on some platforms, and also about intentional
171  uses of negation on unsigned types. All known cases of each can be
172  ignored.
173 
174  For a longer but out of date high-level description, see
175  http://gee.cs.oswego.edu/dl/html/malloc.html
176 
177 * MSPACES
178  If MSPACES is defined, then in addition to malloc, free, etc.,
179  this file also defines mspace_malloc, mspace_free, etc. These
180  are versions of malloc routines that take an "mspace" argument
181  obtained using create_mspace, to control all internal bookkeeping.
182  If ONLY_MSPACES is defined, only these versions are compiled.
183  So if you would like to use this allocator for only some allocations,
184  and your system malloc for others, you can compile with
185  ONLY_MSPACES and then do something like...
186  static mspace mymspace = create_mspace(0,0); // for example
187  #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
188 
189  (Note: If you only need one instance of an mspace, you can instead
190  use "USE_DL_PREFIX" to relabel the global malloc.)
191 
192  You can similarly create thread-local allocators by storing
193  mspaces as thread-locals. For example:
194  static __thread mspace tlms = 0;
195  void* tlmalloc(size_t bytes) {
196  if (tlms == 0) tlms = create_mspace(0, 0);
197  return mspace_malloc(tlms, bytes);
198  }
199  void tlfree(void* mem) { mspace_free(tlms, mem); }
200 
201  Unless FOOTERS is defined, each mspace is completely independent.
202  You cannot allocate from one and free to another (although
203  conformance is only weakly checked, so usage errors are not always
204  caught). If FOOTERS is defined, then each chunk carries around a tag
205  indicating its originating mspace, and frees are directed to their
206  originating spaces.
207 
208  ------------------------- Compile-time options ---------------------------
209 
210 Be careful in setting #define values for numerical constants of type
211 size_t. On some systems, literal values are not automatically extended
212 to size_t precision unless they are explicitly cast. You can also
213 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
214 
215 WIN32 default: defined if _WIN32 defined
216  Defining WIN32 sets up defaults for MS environment and compilers.
217  Otherwise defaults are for unix. Beware that there seem to be some
218  cases where this malloc might not be a pure drop-in replacement for
219  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
220  SetDIBits()) may be due to bugs in some video driver implementations
221  when pixel buffers are malloc()ed, and the region spans more than
222  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
223  default granularity, pixel buffers may straddle virtual allocation
224  regions more often than when using the Microsoft allocator. You can
225  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
226  buffers rather than using malloc(). If this is not possible,
227  recompile this malloc with a larger DEFAULT_GRANULARITY.
228 
229 MALLOC_ALIGNMENT default: (size_t)8
230  Controls the minimum alignment for malloc'ed chunks. It must be a
231  power of two and at least 8, even on machines for which smaller
232  alignments would suffice. It may be defined as larger than this
233  though. Note however that code and data structures are optimized for
234  the case of 8-byte alignment.
235 
236 MSPACES default: 0 (false)
237  If true, compile in support for independent allocation spaces.
238  This is only supported if HAVE_MMAP is true.
239 
240 ONLY_MSPACES default: 0 (false)
241  If true, only compile in mspace versions, not regular versions.
242 
243 USE_LOCKS default: 0 (false)
244  Causes each call to each public routine to be surrounded with
245  pthread or WIN32 mutex lock/unlock. (If set true, this can be
246  overridden on a per-mspace basis for mspace versions.) If set to a
247  non-zero value other than 1, locks are used, but their
248  implementation is left out, so lock functions must be supplied manually,
249  as described below.
250 
251 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and on x86 using gcc or MSC
252  If true, uses custom spin locks for locking. This is currently
253  supported only for x86 platforms using gcc or recent MS compilers.
254  Otherwise, posix locks or win32 critical sections are used.
255 
256 FOOTERS default: 0
257  If true, provide extra checking and dispatching by placing
258  information in the footers of allocated chunks. This adds
259  space and time overhead.
260 
261 INSECURE default: 0
262  If true, omit checks for usage errors and heap space overwrites.
263 
264 USE_DL_PREFIX default: NOT defined
265  Causes compiler to prefix all public routines with the string 'dl'.
266  This can be useful when you only want to use this malloc in one part
267  of a program, using your regular system malloc elsewhere.
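 As an illustrative sketch (not part of the original text), after compiling
 with USE_DL_PREFIX defined, the prefixed entry points coexist with the
 system allocator:

   void* p = dlmalloc(100);   // from this allocator
   void* q = malloc(100);     // from the system C library
   dlfree(p);
   free(q);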
268 
269 ABORT default: defined as abort()
270  Defines how to abort on failed checks. On most systems, a failed
271  check cannot die with an "assert" or even print an informative
272  message, because the underlying print routines in turn call malloc,
273  which will fail again. Generally, the best policy is to simply call
274  abort(). It's not very useful to do more than this because many
275  errors due to overwriting will show up as address faults (null, odd
276  addresses etc) rather than malloc-triggered checks, so will also
277  abort. Also, most compilers know that abort() does not return, so
278  can better optimize code conditionally calling it.
279 
280 PROCEED_ON_ERROR default: defined as 0 (false)
281  Controls whether detected bad addresses cause them to be bypassed
282  rather than aborting. If set, detected bad arguments to free and
283  realloc are ignored. And all bookkeeping information is zeroed out
284  upon a detected overwrite of freed heap space, thus losing the
285  ability to ever return it from malloc again, but enabling the
286  application to proceed. If PROCEED_ON_ERROR is defined, the
287  static variable malloc_corruption_error_count is compiled in
288  and can be examined to see if errors have occurred. This option
289  generates slower code than the default abort policy.
290 
291 DEBUG default: NOT defined
292  The DEBUG setting is mainly intended for people trying to modify
293  this code or diagnose problems when porting to new platforms.
294  However, it may also be able to better isolate user errors than just
295  using runtime checks. The assertions in the check routines spell
296  out in more detail the assumptions and invariants underlying the
297  algorithms. The checking is fairly extensive, and will slow down
298  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
299  set will attempt to check every non-mmapped allocated and free chunk
300  in the course of computing the summaries.
301 
302 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
303  Debugging assertion failures can be nearly impossible if your
304  version of the assert macro causes malloc to be called, which will
305  lead to a cascade of further failures, blowing the runtime stack.
306  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
307  which will usually make debugging easier.
308 
309 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
310  The action to take before "return 0" when malloc fails because no
311  memory is available.
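 An illustrative override (a sketch; my_oom_hook is a hypothetical function,
 not part of this file):

   #define MALLOC_FAILURE_ACTION  my_oom_hook();   // called before malloc returns 0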
312 
313 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
314  True if this system supports sbrk or an emulation of it.
315 
316 MORECORE default: sbrk
317  The name of the sbrk-style system routine to call to obtain more
318  memory. See below for guidance on writing custom MORECORE
319  functions. The type of the argument to sbrk/MORECORE varies across
320  systems. It cannot be size_t, because it supports negative
321  arguments, so it is normally the signed type of the same width as
322  size_t (sometimes declared as "intptr_t"). It doesn't much matter
323  though. Internally, we only call it with arguments less than half
324  the max value of a size_t, which should work across all reasonable
325  possibilities, although sometimes generating compiler warnings.
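 As a hedged sketch of a custom MORECORE (not taken from this file), one
 could serve requests from a fixed static arena and signal failure with the
 sbrk convention of returning (void*)-1:

   static char my_arena[1 << 20];           // hypothetical fixed pool
   static char* my_brk = my_arena;          // current "break" within it
   void* my_morecore(intptr_t incr) {
     char* old = my_brk;
     if (incr < 0 ||                        // no trimming; also define MORECORE_CANNOT_TRIM
         my_brk + incr > my_arena + sizeof(my_arena))
       return (void*)(-1);                  // failure, as with sbrk
     my_brk += incr;
     return old;
   }
   // then build with MORECORE defined as my_morecore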
326 
327 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
328  If true, take advantage of fact that consecutive calls to MORECORE
329  with positive arguments always return contiguous increasing
330  addresses. This is true of unix sbrk. It does not hurt too much to
331  set it true anyway, since malloc copes with non-contiguities.
332  Setting it false when definitely non-contiguous saves time
333  and possibly wasted space it would take to discover this though.
334 
335 MORECORE_CANNOT_TRIM default: NOT defined
336  True if MORECORE cannot release space back to the system when given
337  negative arguments. This is generally necessary only if you are
338  using a hand-crafted MORECORE function that cannot handle negative
339  arguments.
340 
341 NO_SEGMENT_TRAVERSAL default: 0
342  If non-zero, suppresses traversals of memory segments
343  returned by either MORECORE or CALL_MMAP. This disables
344  merging of segments that are contiguous, and selectively
345  releasing them to the OS if unused, but bounds execution times.
346 
347 HAVE_MMAP default: 1 (true)
348  True if this system supports mmap or an emulation of it. If so, and
349  HAVE_MORECORE is not true, MMAP is used for all system
350  allocation. If set and HAVE_MORECORE is true as well, MMAP is
351  primarily used to directly allocate very large blocks. It is also
352  used as a backup strategy in cases where MORECORE fails to provide
353  space from system. Note: A single call to MUNMAP is assumed to be
354  able to unmap memory that may have been allocated using multiple calls
355  to MMAP, so long as they are adjacent.
356 
357 HAVE_MREMAP default: 1 on linux, else 0
358  If true realloc() uses mremap() to re-allocate large blocks and
359  extend or shrink allocation spaces.
360 
361 MMAP_CLEARS default: 1 except on WINCE.
362  True if mmap clears memory so calloc doesn't need to. This is true
363  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
364 
365 USE_BUILTIN_FFS default: 0 (i.e., not used)
366  Causes malloc to use the builtin ffs() function to compute indices.
367  Some compilers may recognize and intrinsify ffs to be faster than the
368  supplied C version. Also, the case of x86 using gcc is special-cased
369  to an asm instruction, so is already as fast as it can be, and so
370  this setting has no effect. Similarly for Win32 under recent MS compilers.
371  (On most x86s, the asm version is only slightly faster than the C version.)
372 
373 malloc_getpagesize default: derive from system includes, or 4096.
374  The system page size. To the extent possible, this malloc manages
375  memory from the system in page-size units. This may be (and
376  usually is) a function rather than a constant. This is ignored
377  if WIN32, where page size is determined using getSystemInfo during
378  initialization. This may be several megabytes if ENABLE_LARGE_PAGES
379  is enabled.
380 
381 ENABLE_LARGE_PAGES default: NOT defined
382  Causes the system page size to be the value of GetLargePageMinimum()
383  if that function is available (Windows Server 2003/Vista or later).
384  This allows the use of large page entries in the MMU which can
385  significantly improve performance in large working set applications
386  as TLB cache load is reduced by a factor of three. Note that enabling
387  this option is equivalent to locking the process' memory in current
388  implementations of Windows and requires the SE_LOCK_MEMORY_PRIVILEGE
389  to be held by the process in order to succeed.
390 
391 USE_DEV_RANDOM default: 0 (i.e., not used)
392  Causes malloc to use /dev/random to initialize secure magic seed for
393  stamping footers. Otherwise, the current time is used.
394 
395 NO_MALLINFO default: 0
396  If defined, don't compile "mallinfo". This can be a simple way
397  of dealing with mismatches between system declarations and
398  those in this file.
399 
400 MALLINFO_FIELD_TYPE default: size_t
401  The type of the fields in the mallinfo struct. This was originally
402  defined as "int" in SVID etc, but is more usefully defined as
403  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
404 
405 REALLOC_ZERO_BYTES_FREES default: not defined
406  This should be set if a call to realloc with zero bytes should
407  be the same as a call to free. Some people think it should. Otherwise,
408  since this malloc returns a unique pointer for malloc(0), so does
409  realloc(p, 0).
410 
411 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
412 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
413 LACKS_STDLIB_H default: NOT defined unless on WIN32
414  Define these if your system does not have these header files.
415  You might need to manually insert some of the declarations they provide.
416 
417 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
418  system_info.dwAllocationGranularity in WIN32,
419  GetLargePageMinimum() if ENABLE_LARGE_PAGES,
420  otherwise 64K.
421  Also settable using mallopt(M_GRANULARITY, x)
422  The unit for allocating and deallocating memory from the system. On
423  most systems with contiguous MORECORE, there is no reason to
424  make this more than a page. However, systems with MMAP tend to
425  either require or encourage larger granularities. You can increase
426  this value to prevent system allocation functions from being called
427  so often, especially if they are slow. The value must be at least one
428  page and must be a power of two. Setting to 0 causes initialization
429  to either page size or win32 region size. (Note: In previous
430  versions of malloc, the equivalent of this option was called
431  "TOP_PAD")
432 
433 DEFAULT_GRANULARITY_ALIGNED default: undefined (which means page size)
434  Whether to enforce alignment when allocating and deallocating memory
435  from the system, i.e., whether the base address of all allocations will
436  be aligned to DEFAULT_GRANULARITY when this is set. Note that enabling
437  this carries some overhead, as multiple calls must now be made when
438  probing for a valid aligned value; however, it greatly eases checking
439  whether a given memory pointer was allocated by this allocator rather
440  than by some other.
441 
442 DEFAULT_TRIM_THRESHOLD default: 2MB
443  Also settable using mallopt(M_TRIM_THRESHOLD, x)
444  The maximum amount of unused top-most memory to keep before
445  releasing via malloc_trim in free(). Automatic trimming is mainly
446  useful in long-lived programs using contiguous MORECORE. Because
447  trimming via sbrk can be slow on some systems, and can sometimes be
448  wasteful (in cases where programs immediately afterward allocate
449  more large chunks) the value should be high enough so that your
450  overall system performance would improve by releasing this much
451  memory. As a rough guide, you might set to a value close to the
452  average size of a process (program) running on your system.
453  Releasing this much memory would allow such a process to run in
454  memory. Generally, it is worth tuning trim thresholds when a
455  program undergoes phases where several large chunks are allocated
456  and released in ways that can reuse each other's storage, perhaps
457  mixed with phases where there are no such chunks at all. The trim
458  value must be greater than page size to have any useful effect. To
459  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
460  some people use of mallocing a huge space and then freeing it at
461  program startup, in an attempt to reserve system memory, doesn't
462  have the intended effect under automatic trimming, since that memory
463  will immediately be returned to the system.
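 For illustration (a tuning sketch, not part of the original text), the
 threshold can be raised either at compile time by defining
 DEFAULT_TRIM_THRESHOLD, or at run time:

   mallopt(M_TRIM_THRESHOLD, 8*1024*1024);   // keep up to 8MB of trimmable top space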
464 
465 DEFAULT_MMAP_THRESHOLD default: 256K
466  Also settable using mallopt(M_MMAP_THRESHOLD, x)
467  The request size threshold for using MMAP to directly service a
468  request. Requests of at least this size that cannot be allocated
469  using already-existing space will be serviced via mmap. (If enough
470  normal freed space already exists it is used instead.) Using mmap
471  segregates relatively large chunks of memory so that they can be
472  individually obtained and released from the host system. A request
473  serviced through mmap is never reused by any other request (at least
474  not directly; the system may just so happen to remap successive
475  requests to the same locations). Segregating space in this way has
476  the benefits that: Mmapped space can always be individually released
477  back to the system, which helps keep the system level memory demands
478  of a long-lived program low. Also, mapped memory doesn't become
479  `locked' between other chunks, as can happen with normally allocated
480  chunks, which means that even trimming via malloc_trim would not
481  release them. However, it has the disadvantage that the space
482  cannot be reclaimed, consolidated, and then used to service later
483  requests, as happens with normal chunks. The advantages of mmap
484  nearly always outweigh disadvantages for "large" chunks, but the
485  value of "large" may vary across systems. The default is an
486  empirically derived value that works well in most systems. You can
487  disable mmap by setting to MAX_SIZE_T.
488 
489 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
490  The number of consolidated frees between checks to release
491  unused segments when freeing. When using non-contiguous segments,
492  especially with multiple mspaces, checking only for topmost space
493  doesn't always suffice to trigger trimming. To compensate for this,
494  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
495  current number of segments, if greater) try to release unused
496  segments to the OS when freeing chunks that result in
497  consolidation. The best value for this parameter is a compromise
498  between slowing down frees with relatively costly checks that
499  rarely trigger versus holding on to unused memory. To effectively
500  disable, set to MAX_SIZE_T. This may lead to a very slight speed
501  improvement at the expense of carrying around more memory.
502 */
503 
504 /* Version identifier to allow people to support multiple versions */
505 #ifndef DLMALLOC_VERSION
506 #define DLMALLOC_VERSION 20804
507 #endif /* DLMALLOC_VERSION */
508 
509 #ifndef WIN32
510 #ifdef _WIN32
511 #define WIN32 1
512 #endif /* _WIN32 */
513 #ifdef _WIN32_WCE
514 #define LACKS_FCNTL_H
515 #define WIN32 1
516 #endif /* _WIN32_WCE */
517 #endif /* WIN32 */
518 #ifdef WIN32
519 #define WIN32_LEAN_AND_MEAN
520 #include <windows.h>
521 #include <tchar.h>
522 #define HAVE_MMAP 1
523 #define HAVE_MORECORE 0
524 #define LACKS_UNISTD_H
525 #define LACKS_SYS_PARAM_H
526 #define LACKS_SYS_MMAN_H
527 #define LACKS_STRING_H
528 #define LACKS_STRINGS_H
529 #define LACKS_SYS_TYPES_H
530 #define LACKS_ERRNO_H
531 #ifndef MALLOC_FAILURE_ACTION
532 #define MALLOC_FAILURE_ACTION
533 #endif /* MALLOC_FAILURE_ACTION */
534 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
535 #define MMAP_CLEARS 0
536 #else
537 #define MMAP_CLEARS 1
538 #endif /* _WIN32_WCE */
539 #endif /* WIN32 */
540 
541 #if defined(DARWIN) || defined(_DARWIN)
542 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
543 #ifndef HAVE_MORECORE
544 #define HAVE_MORECORE 0
545 #define HAVE_MMAP 1
546 /* OSX allocators provide 16 byte alignment */
547 #ifndef MALLOC_ALIGNMENT
548 #define MALLOC_ALIGNMENT ((size_t)16U)
549 #endif
550 #endif /* HAVE_MORECORE */
551 #endif /* DARWIN */
552 
553 #ifndef LACKS_SYS_TYPES_H
554 #include <sys/types.h> /* For size_t */
555 #endif /* LACKS_SYS_TYPES_H */
556 
557 #if (defined(__GNUC__) && ((defined(__i386__) || defined(__x86_64__)))) || (defined(_MSC_VER) && _MSC_VER>=1310)
558 #define SPIN_LOCKS_AVAILABLE 1
559 #else
560 #define SPIN_LOCKS_AVAILABLE 0
561 #endif
562 
563 /* The maximum possible size_t value has all bits set */
564 #define MAX_SIZE_T (~(size_t)0)
565 
566 #ifndef ONLY_MSPACES
567 #define ONLY_MSPACES 0 /* define to a value */
568 #else
569 #define ONLY_MSPACES 1
570 #endif /* ONLY_MSPACES */
571 #ifndef MSPACES
572 #if ONLY_MSPACES
573 #define MSPACES 1
574 #else /* ONLY_MSPACES */
575 #define MSPACES 0
576 #endif /* ONLY_MSPACES */
577 #endif /* MSPACES */
578 #ifndef MALLOC_ALIGNMENT
579 #define MALLOC_ALIGNMENT ((size_t)8U)
580 #endif /* MALLOC_ALIGNMENT */
581 #ifndef FOOTERS
582 #define FOOTERS 0
583 #endif /* FOOTERS */
584 #ifndef ABORT
585 #define ABORT abort()
586 #endif /* ABORT */
587 #ifndef ABORT_ON_ASSERT_FAILURE
588 #define ABORT_ON_ASSERT_FAILURE 1
589 #endif /* ABORT_ON_ASSERT_FAILURE */
590 #ifndef PROCEED_ON_ERROR
591 #define PROCEED_ON_ERROR 0
592 #endif /* PROCEED_ON_ERROR */
593 #ifndef USE_LOCKS
594 #define USE_LOCKS 0
595 #endif /* USE_LOCKS */
596 #ifndef USE_SPIN_LOCKS
597 #if USE_LOCKS && SPIN_LOCKS_AVAILABLE
598 #define USE_SPIN_LOCKS 1
599 #else
600 #define USE_SPIN_LOCKS 0
601 #endif /* USE_LOCKS && SPIN_LOCKS_AVAILABLE. */
602 #endif /* USE_SPIN_LOCKS */
603 #ifndef INSECURE
604 #define INSECURE 0
605 #endif /* INSECURE */
606 #ifndef HAVE_MMAP
607 #define HAVE_MMAP 1
608 #endif /* HAVE_MMAP */
609 #ifndef MMAP_CLEARS
610 #define MMAP_CLEARS 1
611 #endif /* MMAP_CLEARS */
612 #ifndef HAVE_MREMAP
613 #ifdef linux
614 #define HAVE_MREMAP 1
615 #else /* linux */
616 #define HAVE_MREMAP 0
617 #endif /* linux */
618 #endif /* HAVE_MREMAP */
619 #ifndef MALLOC_FAILURE_ACTION
620 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
621 #endif /* MALLOC_FAILURE_ACTION */
622 #ifndef HAVE_MORECORE
623 #if ONLY_MSPACES
624 #define HAVE_MORECORE 0
625 #else /* ONLY_MSPACES */
626 #define HAVE_MORECORE 1
627 #endif /* ONLY_MSPACES */
628 #endif /* HAVE_MORECORE */
629 #if !HAVE_MORECORE
630 #define MORECORE_CONTIGUOUS 0
631 #else /* !HAVE_MORECORE */
632 #define MORECORE_DEFAULT sbrk
633 #ifndef MORECORE_CONTIGUOUS
634 #define MORECORE_CONTIGUOUS 1
635 #endif /* MORECORE_CONTIGUOUS */
636 #endif /* HAVE_MORECORE */
637 #ifndef DEFAULT_GRANULARITY
638 #if (MORECORE_CONTIGUOUS || defined(WIN32))
639 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
640 #else /* MORECORE_CONTIGUOUS */
641 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
642 #endif /* MORECORE_CONTIGUOUS */
643 #endif /* DEFAULT_GRANULARITY */
644 #ifndef DEFAULT_TRIM_THRESHOLD
645 #ifndef MORECORE_CANNOT_TRIM
646 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
647 #else /* MORECORE_CANNOT_TRIM */
648 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
649 #endif /* MORECORE_CANNOT_TRIM */
650 #endif /* DEFAULT_TRIM_THRESHOLD */
651 #ifndef DEFAULT_MMAP_THRESHOLD
652 #if HAVE_MMAP
653 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
654 #else /* HAVE_MMAP */
655 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
656 #endif /* HAVE_MMAP */
657 #endif /* DEFAULT_MMAP_THRESHOLD */
658 #ifndef MAX_RELEASE_CHECK_RATE
659 #if HAVE_MMAP
660 #define MAX_RELEASE_CHECK_RATE 4095
661 #else
662 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
663 #endif /* HAVE_MMAP */
664 #endif /* MAX_RELEASE_CHECK_RATE */
665 #ifndef USE_BUILTIN_FFS
666 #define USE_BUILTIN_FFS 0
667 #endif /* USE_BUILTIN_FFS */
668 #ifndef USE_DEV_RANDOM
669 #define USE_DEV_RANDOM 0
670 #endif /* USE_DEV_RANDOM */
671 #ifndef NO_MALLINFO
672 #define NO_MALLINFO 0
673 #endif /* NO_MALLINFO */
674 #ifndef MALLINFO_FIELD_TYPE
675 #define MALLINFO_FIELD_TYPE size_t
676 #endif /* MALLINFO_FIELD_TYPE */
677 #ifndef NO_SEGMENT_TRAVERSAL
678 #define NO_SEGMENT_TRAVERSAL 0
679 #endif /* NO_SEGMENT_TRAVERSAL */
680 
681 /*
682  mallopt tuning options. SVID/XPG defines four standard parameter
683  numbers for mallopt, normally defined in malloc.h. None of these
684  are used in this malloc, so setting them has no effect. But this
685  malloc does support the following options.
686 */
687 
688 #define M_TRIM_THRESHOLD (-1)
689 #define M_GRANULARITY (-2)
690 #define M_MMAP_THRESHOLD (-3)
691 
692 /* ------------------------ Mallinfo declarations ------------------------ */
693 
694 #if !NO_MALLINFO
695 /*
696  This version of malloc supports the standard SVID/XPG mallinfo
697  routine that returns a struct containing usage properties and
698  statistics. It should work on any system that has a
699  /usr/include/malloc.h defining struct mallinfo. The main
700  declaration needed is the mallinfo struct that is returned (by-copy)
701  by mallinfo(). The mallinfo struct contains a bunch of fields that
702  are not even meaningful in this version of malloc. These fields
703  are instead filled by mallinfo() with other numbers that might be of
704  interest.
705 
706  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
707  /usr/include/malloc.h file that includes a declaration of struct
708  mallinfo. If so, it is included; else a compliant version is
709  declared below. These must be precisely the same for mallinfo() to
710  work. The original SVID version of this struct, defined on most
711  systems with mallinfo, declares all fields as ints. But some others
712  define as unsigned long. If your system defines the fields using a
713  type of different width than listed here, you MUST #include your
714  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
715 */
716 
717 /* #define HAVE_USR_INCLUDE_MALLOC_H */
718 
719 #ifdef HAVE_USR_INCLUDE_MALLOC_H
720 #include "/usr/include/malloc.h"
721 #else /* HAVE_USR_INCLUDE_MALLOC_H */
722 #ifndef STRUCT_MALLINFO_DECLARED
723 #define STRUCT_MALLINFO_DECLARED 1
724 struct mallinfo {
725  MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
726  MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
727  MALLINFO_FIELD_TYPE smblks; /* always 0 */
728  MALLINFO_FIELD_TYPE hblks; /* always 0 */
729  MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
730  MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
731  MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
732  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
733  MALLINFO_FIELD_TYPE fordblks; /* total free space */
734  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
735 };
736 #endif /* STRUCT_MALLINFO_DECLARED */
737 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
738 #endif /* NO_MALLINFO */
739 
740 /*
741  Try to persuade compilers to inline. The most critical functions for
742  inlining are defined as macros, so these aren't used for them.
743 */
744 
745 #ifndef FORCEINLINE
746  #if defined(__GNUC__)
747 #define FORCEINLINE __inline __attribute__ ((always_inline))
748  #elif defined(_MSC_VER)
749  #define FORCEINLINE __forceinline
750  #endif
751 #endif
752 #ifndef NOINLINE
753  #if defined(__GNUC__)
754  #define NOINLINE __attribute__ ((noinline))
755  #elif defined(_MSC_VER)
756  #define NOINLINE __declspec(noinline)
757  #else
758  #define NOINLINE
759  #endif
760 #endif
761 
762 #ifdef __cplusplus
763 extern "C" {
764 #ifndef FORCEINLINE
765  #define FORCEINLINE inline
766 #endif
767 #endif /* __cplusplus */
768 #ifndef FORCEINLINE
769  #define FORCEINLINE
770 #endif
771 
772 #if !ONLY_MSPACES
773 
774 /* ------------------- Declarations of public routines ------------------- */
775 
776 #ifndef USE_DL_PREFIX
777 #define dlcalloc calloc
778 #define dlfree free
779 #define dlmalloc malloc
780 #define dlmemalign memalign
781 #define dlrealloc realloc
782 #define dlvalloc valloc
783 #define dlpvalloc pvalloc
784 #define dlmallinfo mallinfo
785 #define dlmallopt mallopt
786 #define dlmalloc_trim malloc_trim
787 #define dlmalloc_stats malloc_stats
788 #define dlmalloc_usable_size malloc_usable_size
789 #define dlmalloc_footprint malloc_footprint
790 #define dlmalloc_max_footprint malloc_max_footprint
791 #define dlindependent_calloc independent_calloc
792 #define dlindependent_comalloc independent_comalloc
793 #endif /* USE_DL_PREFIX */
794 
795 
796 /*
797  malloc(size_t n)
798  Returns a pointer to a newly allocated chunk of at least n bytes, or
799  null if no space is available, in which case errno is set to ENOMEM
800  on ANSI C systems.
801 
802  If n is zero, malloc returns a minimum-sized chunk. (The minimum
803  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
804  systems.) Note that size_t is an unsigned type, so calls with
805  arguments that would be negative if signed are interpreted as
806  requests for huge amounts of space, which will often fail. The
807  maximum supported value of n differs across systems, but is in all
808  cases less than the maximum representable value of a size_t.
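 A minimal usage sketch (not part of the original comment):

   void* p = malloc(n);
   if (p == 0) {
     // allocation failed; on ANSI systems errno == ENOMEM
   }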
809 */
810 void* dlmalloc(size_t);
811 
812 /*
813  free(void* p)
814  Releases the chunk of memory pointed to by p, that had been previously
815  allocated using malloc or a related routine such as realloc.
816  It has no effect if p is null. If p was not malloced or already
817  freed, free(p) will by default cause the current program to abort.
818 */
819 void dlfree(void*);
820 
821 /*
822  calloc(size_t n_elements, size_t element_size);
823  Returns a pointer to n_elements * element_size bytes, with all locations
824  set to zero.
825 */
826 void* dlcalloc(size_t, size_t);
827 
828 /*
829  realloc(void* p, size_t n)
830  Returns a pointer to a chunk of size n that contains the same data
831  as does chunk p up to the minimum of (n, p's size) bytes, or null
832  if no space is available.
833 
834  The returned pointer may or may not be the same as p. The algorithm
835  prefers extending p in most cases when possible, otherwise it
836  employs the equivalent of a malloc-copy-free sequence.
837 
838  If p is null, realloc is equivalent to malloc.
839 
840  If space is not available, realloc returns null, errno is set (if on
841  ANSI) and p is NOT freed.
842 
843  If n is for fewer bytes than already held by p, the newly unused
844  space is lopped off and freed if possible. realloc with a size
845  argument of zero (re)allocates a minimum-sized chunk.
846 
847  The old unix realloc convention of allowing the last-free'd chunk
848  to be used as an argument to realloc is not supported.
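 Because p is not freed on failure, a safe resizing idiom (an illustrative
 sketch, not from the original text) keeps the old pointer until the new
 one is known to be valid:

   void* np = realloc(p, n);
   if (np == 0) {
     // p is still valid and still owned by the caller
   } else {
     p = np;                 // only now replace the old pointer
   }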
849 */
850 
851 void* dlrealloc(void*, size_t);
852 
853 /*
854  memalign(size_t alignment, size_t n);
855  Returns a pointer to a newly allocated chunk of n bytes, aligned
856  in accord with the alignment argument.
857 
858  The alignment argument should be a power of two. If the argument is
859  not a power of two, the nearest greater power is used.
860  8-byte alignment is guaranteed by normal malloc calls, so don't
861  bother calling memalign with an argument of 8 or less.
862 
863  Overreliance on memalign is a sure way to fragment space.
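 For illustration (a sketch, not part of the original comment):

   void* p = memalign(64, 1000);             // 64-byte aligned block
   assert(p == 0 || ((size_t)p & 63) == 0);  // alignment holds for any non-null result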
864 */
865 void* dlmemalign(size_t, size_t);
866 
867 /*
868  valloc(size_t n);
869  Equivalent to memalign(pagesize, n), where pagesize is the page
870  size of the system. If the pagesize is unknown, 4096 is used.
871 */
872 void* dlvalloc(size_t);
873 
874 /*
875  mallopt(int parameter_number, int parameter_value)
876  Sets tunable parameters. The format is to provide a
877  (parameter-number, parameter-value) pair. mallopt then sets the
878  corresponding parameter to the argument value if it can (i.e., so
879  long as the value is meaningful), and returns 1 if successful else
880  0. To work around the fact that mallopt is specified to use int,
881  not size_t parameters, the value -1 is specially treated as the
882  maximum unsigned size_t value.
883 
884  SVID/XPG/ANSI defines four standard param numbers for mallopt,
885  normally defined in malloc.h. None of these are used in this malloc,
886  so setting them has no effect. But this malloc also supports other
887  options in mallopt. See below for details. Briefly, supported
888  parameters are as follows (listed defaults are for "typical"
889  configurations).
890 
891  Symbol param # default allowed param values
892  M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
893  M_GRANULARITY -2 page size any power of 2 >= page size
894  M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
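 Illustrative calls (a sketch, not part of the original comment):

   mallopt(M_GRANULARITY, 128*1024);      // grow the heap in 128K units
   mallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap only requests of 1MB or more
   mallopt(M_TRIM_THRESHOLD, -1);         // disable automatic trimming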
895 */
896 int dlmallopt(int, int);
897 
898 /*
899  malloc_footprint();
900  Returns the number of bytes obtained from the system. The total
901  number of bytes allocated by malloc, realloc etc., is less than this
902  value. Unlike mallinfo, this function returns only a precomputed
903  result, so can be called frequently to monitor memory consumption.
904  Even if locks are otherwise defined, this function does not use them,
905  so results might not be up to date.
906 */
907 size_t dlmalloc_footprint(void);
908 
909 /*
910  malloc_max_footprint();
911  Returns the maximum number of bytes obtained from the system. This
912  value will be greater than current footprint if deallocated space
913  has been reclaimed by the system. The peak number of bytes allocated
914  by malloc, realloc etc., is less than this value. Unlike mallinfo,
915  this function returns only a precomputed result, so can be called
916  frequently to monitor memory consumption. Even if locks are
917  otherwise defined, this function does not use them, so results might
918  not be up to date.
919 */
920 size_t dlmalloc_max_footprint(void);
921 
922 #if !NO_MALLINFO
923 /*
924  mallinfo()
925  Returns (by copy) a struct containing various summary statistics:
926 
927  arena: current total non-mmapped bytes allocated from system
928  ordblks: the number of free chunks
929  smblks: always zero.
930  hblks: current number of mmapped regions
931  hblkhd: total bytes held in mmapped regions
932  usmblks: the maximum total allocated space. This will be greater
933  than current total if trimming has occurred.
934  fsmblks: always zero
935  uordblks: current total allocated space (normal or mmapped)
936  fordblks: total free space
937  keepcost: the maximum number of bytes that could ideally be released
938  back to system via malloc_trim. ("ideally" means that
939  it ignores page restrictions etc.)
940 
941  Because these fields are ints, but internal bookkeeping may
942  be kept as longs, the reported values may wrap around zero and
943  thus be inaccurate.
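 An illustrative sketch of reading a few fields (not from the original
 comment):

   struct mallinfo mi = mallinfo();
   size_t in_use      = mi.uordblks;            // current allocated space
   size_t free_space  = mi.fordblks;            // current free space
   size_t from_system = mi.arena + mi.hblkhd;   // obtained from the system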
944 */
945 struct mallinfo dlmallinfo(void);
946 #endif /* NO_MALLINFO */
947 
948 /*
949  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
950 
951  independent_calloc is similar to calloc, but instead of returning a
952  single cleared space, it returns an array of pointers to n_elements
953  independent elements that can hold contents of size elem_size, each
954  of which starts out cleared, and can be independently freed,
955  realloc'ed etc. The elements are guaranteed to be adjacently
956  allocated (this is not guaranteed to occur with multiple callocs or
957  mallocs), which may also improve cache locality in some
958  applications.
959 
960  The "chunks" argument is optional (i.e., may be null, which is
961  probably the most typical usage). If it is null, the returned array
962  is itself dynamically allocated and should also be freed when it is
963  no longer needed. Otherwise, the chunks array must be of at least
964  n_elements in length. It is filled in with the pointers to the
965  chunks.
966 
967  In either case, independent_calloc returns this pointer array, or
968  null if the allocation failed. If n_elements is zero and "chunks"
969  is null, it returns a chunk representing an array with zero elements
970  (which should be freed if not wanted).
971 
972  Each element must be individually freed when it is no longer
973  needed. If you'd like to instead be able to free all at once, you
974  should instead use regular calloc and assign pointers into this
975  space to represent elements. (In this case though, you cannot
976  independently free elements.)
977 
978  independent_calloc simplifies and speeds up implementations of many
979  kinds of pools. It may also be useful when constructing large data
980  structures that initially have a fixed number of fixed-sized nodes,
981  but the number is not known at compile time, and some of the nodes
982  may later need to be freed. For example:
983 
984  struct Node { int item; struct Node* next; };
985 
986  struct Node* build_list() {
987  struct Node** pool;
988  int n = read_number_of_nodes_needed();
989  if (n <= 0) return 0;
990  pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
991  if (pool == 0) die();
992  // organize into a linked list...
993  struct Node* first = pool[0];
994  for (int i = 0; i < n-1; ++i)
995  pool[i]->next = pool[i+1];
996  free(pool); // Can now free the array (or not, if it is needed later)
997  return first;
998  }
999 */
1000 void** dlindependent_calloc(size_t, size_t, void**);
1001 
1002 /*
1003  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1004 
1005  independent_comalloc allocates, all at once, a set of n_elements
1006  chunks with sizes indicated in the "sizes" array. It returns
1007  an array of pointers to these elements, each of which can be
1008  independently freed, realloc'ed etc. The elements are guaranteed to
1009  be adjacently allocated (this is not guaranteed to occur with
1010  multiple callocs or mallocs), which may also improve cache locality
1011  in some applications.
1012 
1013  The "chunks" argument is optional (i.e., may be null). If it is null
1014  the returned array is itself dynamically allocated and should also
1015  be freed when it is no longer needed. Otherwise, the chunks array
1016  must be of at least n_elements in length. It is filled in with the
1017  pointers to the chunks.
1018 
1019  In either case, independent_comalloc returns this pointer array, or
1020  null if the allocation failed. If n_elements is zero and chunks is
1021  null, it returns a chunk representing an array with zero elements
1022  (which should be freed if not wanted).
1023 
1024  Each element must be individually freed when it is no longer
1025  needed. If you'd like to instead be able to free all at once, you
1026  should instead use a single regular malloc, and assign pointers at
1027  particular offsets in the aggregate space. (In this case though, you
1028  cannot independently free elements.)
1029 
1030  independent_comalloc differs from independent_calloc in that each
1031  element may have a different size, and also that it does not
1032  automatically clear elements.
1033 
1034  independent_comalloc can be used to speed up allocation in cases
1035  where several structs or objects must always be allocated at the
1036  same time. For example:
1037 
1038  struct Head { ... };
1039  struct Foot { ... };
1040 
1041  void send_message(char* msg) {
1042  int msglen = strlen(msg);
1043  size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1044  void* chunks[3];
1045  if (independent_comalloc(3, sizes, chunks) == 0)
1046  die();
1047  struct Head* head = (struct Head*)(chunks[0]);
1048  char* body = (char*)(chunks[1]);
1049  struct Foot* foot = (struct Foot*)(chunks[2]);
1050  // ...
1051  }
1052 
1053  In general though, independent_comalloc is worth using only for
1054  larger values of n_elements. For small values, you probably won't
1055  detect enough difference from series of malloc calls to bother.
1056 
1057  Overuse of independent_comalloc can increase overall memory usage,
1058  since it cannot reuse existing noncontiguous small chunks that
1059  might be available for some of the elements.
1060 */
1061 void** dlindependent_comalloc(size_t, size_t*, void**);
1062 
1063 
1064 /*
1065  pvalloc(size_t n);
1066  Equivalent to valloc(minimum-page-that-holds(n)), that is,
1067  round up n to nearest pagesize.
1068  */
1069 void* dlpvalloc(size_t);
1070 
1071 /*
1072  malloc_trim(size_t pad);
1073 
1074  If possible, gives memory back to the system (via negative arguments
1075  to sbrk) if there is unused memory at the `high' end of the malloc
1076  pool or in unused MMAP segments. You can call this after freeing
1077  large blocks of memory to potentially reduce the system-level memory
1078  requirements of a program. However, it cannot guarantee to reduce
1079  memory. Under some allocation patterns, some large free blocks of
1080  memory will be locked between two used chunks, so they cannot be
1081  given back to the system.
1082 
1083  The `pad' argument to malloc_trim represents the amount of free
1084  trailing space to leave untrimmed. If this argument is zero, only
1085  the minimum amount of memory to maintain internal data structures
1086  will be left. Non-zero arguments can be supplied to maintain enough
1087  trailing space to service future expected allocations without having
1088  to re-obtain memory from the system.
1089 
1090  Malloc_trim returns 1 if it actually released any memory, else 0.
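 A usage sketch (not part of the original comment):

   free(big_buffer);                  // big_buffer is a hypothetical large allocation
   if (malloc_trim(64*1024) == 0) {   // keep 64K of slack; 0 means nothing was released
     // no memory could be returned to the system this time
   }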
1091 */
1092 int dlmalloc_trim(size_t);
1093 
1094 /*
1095  malloc_stats();
1096  Prints on stderr the amount of space obtained from the system (both
1097  via sbrk and mmap), the maximum amount (which may be more than
1098  current if malloc_trim and/or munmap got called), and the current
1099  number of bytes allocated via malloc (or realloc, etc) but not yet
1100  freed. Note that this is the number of bytes allocated, not the
1101  number requested. It will be larger than the number requested
1102  because of alignment and bookkeeping overhead. Because it includes
1103  alignment wastage as being in use, this figure may be greater than
1104  zero even when no user-level chunks are allocated.
1105 
1106  The reported current and maximum system memory can be inaccurate if
1107  a program makes other calls to system memory allocation functions
1108  (normally sbrk) outside of malloc.
1109 
1110  malloc_stats prints only the most commonly interesting statistics.
1111  More information can be obtained by calling mallinfo.
1112 */
1113 void dlmalloc_stats(void);
1114 
1115 #endif /* ONLY_MSPACES */
1116 
1117 /*
1118  malloc_usable_size(void* p);
1119 
1120  Returns the number of bytes you can actually use in
1121  an allocated chunk, which may be more than you requested (although
1122  often not) due to alignment and minimum size constraints.
1123  You can use this many bytes without worrying about
1124  overwriting other allocated objects. This is not a particularly great
1125  programming practice. malloc_usable_size can be more useful in
1126  debugging and assertions, for example:
1127 
1128  p = malloc(n);
1129  assert(malloc_usable_size(p) >= 256);
1130 */
1131 size_t dlmalloc_usable_size(void*);
1132 
1133 
1134 #if MSPACES
1135 
1136 /*
1137  mspace is an opaque type representing an independent
1138  region of space that supports mspace_malloc, etc.
1139 */
1140 typedef void* mspace;
1141 
1142 /*
1143  create_mspace creates and returns a new independent space with the
1144  given initial capacity, or, if 0, the default granularity size. It
1145  returns null if there is no system memory available to create the
1146  space. If argument locked is non-zero, the space uses a separate
1147  lock to control access. The capacity of the space will grow
1148  dynamically as needed to service mspace_malloc requests. You can
1149  control the sizes of incremental increases of this space by
1150  compiling with a different DEFAULT_GRANULARITY or dynamically
1151  setting with mallopt(M_GRANULARITY, value).
1152 */
1153 mspace create_mspace(size_t capacity, int locked);
1154 
1155 /*
1156  destroy_mspace destroys the given space, and attempts to return all
1157  of its memory back to the system, returning the total number of
1158  bytes freed. After destruction, the results of access to all memory
1159  used by the space become undefined.
1160 */
1161 size_t destroy_mspace(mspace msp);
1162 
1163 /*
1164  create_mspace_with_base uses the memory supplied as the initial base
1165  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1166  space is used for bookkeeping, so the capacity must be at least this
1167  large. (Otherwise 0 is returned.) When this initial space is
1168  exhausted, additional memory will be obtained from the system.
1169  Destroying this space will deallocate all additionally allocated
1170  space (if possible) but not the initial base.
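 An illustrative sketch (not in the original comment), assuming a
 caller-supplied buffer:

   static char buffer[256 * 1024];
   mspace ms = create_mspace_with_base(buffer, sizeof(buffer), 0);
   void* p = mspace_malloc(ms, 128);
   mspace_free(ms, p);
   destroy_mspace(ms);   // releases any extra system memory, but not buffer itself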
1171 */
1172 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1173 
1174 /*
1175  mspace_track_large_chunks controls whether requests for large chunks
1176  are allocated in their own untracked mmapped regions, separate from
1177  others in this mspace. By default large chunks are not tracked,
1178  which reduces fragmentation. However, such chunks are not
1179  necessarily released to the system upon destroy_mspace. Enabling
1180  tracking by setting to true may increase fragmentation, but avoids
1181  leakage when relying on destroy_mspace to release all memory
1182  allocated using this space. The function returns the previous
1183  setting.
1184 */
1185 int mspace_track_large_chunks(mspace msp, int enable);
1186 
1187 
1188 /*
1189  mspace_malloc behaves as malloc, but operates within
1190  the given space.
1191 */
1192 void* mspace_malloc(mspace msp, size_t bytes);
1193 
1194 /*
1195  mspace_free behaves as free, but operates within
1196  the given space.
1197 
1198  If compiled with FOOTERS==1, mspace_free is not actually needed.
1199  free may be called instead of mspace_free because freed chunks from
1200  any space are handled by their originating spaces.
1201 */
1202 void mspace_free(mspace msp, void* mem);
1203 
1204 /*
1205  mspace_realloc behaves as realloc, but operates within
1206  the given space.
1207 
1208  If compiled with FOOTERS==1, mspace_realloc is not actually
1209  needed. realloc may be called instead of mspace_realloc because
1210  realloced chunks from any space are handled by their originating
1211  spaces.
1212 */
1213 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1214 
1215 /*
1216  mspace_calloc behaves as calloc, but operates within
1217  the given space.
1218 */
1219 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1220 
1221 /*
1222  mspace_memalign behaves as memalign, but operates within
1223  the given space.
1224 */
1225 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1226 
1227 /*
1228  mspace_independent_calloc behaves as independent_calloc, but
1229  operates within the given space.
1230 */
1231 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1232  size_t elem_size, void* chunks[]);
1233 
1234 /*
1235  mspace_independent_comalloc behaves as independent_comalloc, but
1236  operates within the given space.
1237 */
1238 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1239  size_t sizes[], void* chunks[]);
1240 
1241 /*
1242  mspace_footprint() returns the number of bytes obtained from the
1243  system for this space.
1244 */
1245 size_t mspace_footprint(mspace msp);
1246 
1247 /*
1248  mspace_max_footprint() returns the peak number of bytes obtained from the
1249  system for this space.
1250 */
1251 size_t mspace_max_footprint(mspace msp);
1252 
1253 
1254 #if !NO_MALLINFO
1255 /*
1256  mspace_mallinfo behaves as mallinfo, but reports properties of
1257  the given space.
1258 */
1259 struct mallinfo mspace_mallinfo(mspace msp);
1260 #endif /* NO_MALLINFO */
1261 
1262 /*
1263  mspace_usable_size(void* mem) behaves the same as malloc_usable_size;
1264 */
1265  size_t mspace_usable_size(void* mem);
1266 
1267 /*
1268  mspace_malloc_stats behaves as malloc_stats, but reports
1269  properties of the given space.
1270 */
1271 void mspace_malloc_stats(mspace msp);
1272 
1273 /*
1274  mspace_trim behaves as malloc_trim, but
1275  operates within the given space.
1276 */
1277 int mspace_trim(mspace msp, size_t pad);
1278 
1279 /*
1280  An alias for mallopt.
1281 */
1282 int mspace_mallopt(int, int);
1283 
1284 #endif /* MSPACES */
1285 
1286 #ifdef __cplusplus
1287 } /* end of extern "C" */
1288 #endif /* __cplusplus */
1289 
1290 /*
1291  ========================================================================
1292  To make a fully customizable malloc.h header file, cut everything
1293  above this line, put into file malloc.h, edit to suit, and #include it
1294  on the next line, as well as in programs that use this malloc.
1295  ========================================================================
1296 */
1297 
1298 /* #include "malloc.h" */
1299 
1300 /*------------------------------ internal #includes ---------------------- */
1301 
1302 #ifdef _MSC_VER
1303 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1304 #endif /* _MSC_VER */
1305 
1306 #include <stdio.h> /* for printing in malloc_stats */
1307 
1308 #ifndef LACKS_ERRNO_H
1309 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1310 #endif /* LACKS_ERRNO_H */
1311 #if FOOTERS || DEBUG
1312 #include <time.h> /* for magic initialization */
1313 #endif /* FOOTERS || DEBUG */
1314 #ifndef LACKS_STDLIB_H
1315 #include <stdlib.h> /* for abort() */
1316 #endif /* LACKS_STDLIB_H */
1317 #ifdef DEBUG
1318 #if ABORT_ON_ASSERT_FAILURE
1319 #undef assert
1320 #define assert(x) if(!(x)) ABORT
1321 #else /* ABORT_ON_ASSERT_FAILURE */
1322 #include <assert.h>
1323 #endif /* ABORT_ON_ASSERT_FAILURE */
1324 #else /* DEBUG */
1325 #ifndef assert
1326 #define assert(x)
1327 #endif
1328 #define DEBUG 0
1329 #endif /* DEBUG */
1330 #ifndef LACKS_STRING_H
1331 #include <string.h> /* for memset etc */
1332 #endif /* LACKS_STRING_H */
1333 #if USE_BUILTIN_FFS
1334 #ifndef LACKS_STRINGS_H
1335 #include <strings.h> /* for ffs */
1336 #endif /* LACKS_STRINGS_H */
1337 #endif /* USE_BUILTIN_FFS */
1338 #if HAVE_MMAP
1339 #ifndef LACKS_SYS_MMAN_H
1340 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1341 #if (defined(linux) && !defined(__USE_GNU))
1342 #define __USE_GNU 1
1343 #include <sys/mman.h> /* for mmap */
1344 #undef __USE_GNU
1345 #else
1346 #include <sys/mman.h> /* for mmap */
1347 #endif /* linux */
1348 #endif /* LACKS_SYS_MMAN_H */
1349 #ifndef LACKS_FCNTL_H
1350 #include <fcntl.h>
1351 #endif /* LACKS_FCNTL_H */
1352 #endif /* HAVE_MMAP */
1353 #ifndef LACKS_UNISTD_H
1354 #include <unistd.h> /* for sbrk, sysconf */
1355 #else /* LACKS_UNISTD_H */
1356 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1357 extern void* sbrk(ptrdiff_t);
1358 #endif /* FreeBSD etc */
1359 #endif /* LACKS_UNISTD_H */
1360 
1361 /* Declarations for locking */
1362 #if USE_LOCKS
1363 #ifndef WIN32
1364 #include <pthread.h>
1365 #if defined (__SVR4) && defined (__sun) /* solaris */
1366 #include <thread.h>
1367 #endif /* solaris */
1368 #elif defined(_MSC_VER)
1369 #ifndef _M_AMD64
1370 /* These are already defined on AMD64 builds */
1371 #ifdef __cplusplus
1372 extern "C" {
1373 #endif /* __cplusplus */
1374 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1375 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1376 #ifdef __cplusplus
1377 }
1378 #endif /* __cplusplus */
1379 #endif /* _M_AMD64 */
1380 #pragma intrinsic (_InterlockedCompareExchange)
1381 #pragma intrinsic (_InterlockedExchange)
1382 #define interlockedcompareexchange _InterlockedCompareExchange
1383 #define interlockedexchange _InterlockedExchange
1384 #elif defined(WIN32) && defined(__GNUC__)
1385 /* MinGW or something like it */
1386 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1387 #define interlockedexchange __sync_lock_test_and_set
1388 #endif /* Win32 */
1389 #endif /* USE_LOCKS */
1390 
1391 /* Declarations for bit scanning on win32 */
1392 #if defined(_MSC_VER) && _MSC_VER>=1300
1393 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1394 #ifdef __cplusplus
1395 extern "C" {
1396 #endif /* __cplusplus */
1397 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1398 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1399 #ifdef __cplusplus
1400 }
1401 #endif /* __cplusplus */
1402 
1403 #define BitScanForward _BitScanForward
1404 #define BitScanReverse _BitScanReverse
1405 #pragma intrinsic(_BitScanForward)
1406 #pragma intrinsic(_BitScanReverse)
1407 #endif /* BitScanForward */
1408 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1409 
1410 #ifndef WIN32
1411 #ifndef malloc_getpagesize
1412 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1413 # ifndef _SC_PAGE_SIZE
1414 # define _SC_PAGE_SIZE _SC_PAGESIZE
1415 # endif
1416 # endif
1417 # ifdef _SC_PAGE_SIZE
1418 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1419 # else
1420 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1421  extern size_t getpagesize();
1422 # define malloc_getpagesize getpagesize()
1423 # else
1424 # ifdef WIN32 /* use supplied emulation of getpagesize */
1425 # define malloc_getpagesize getpagesize()
1426 # else
1427 # ifndef LACKS_SYS_PARAM_H
1428 # include <sys/param.h>
1429 # endif
1430 # ifdef EXEC_PAGESIZE
1431 # define malloc_getpagesize EXEC_PAGESIZE
1432 # else
1433 # ifdef NBPG
1434 # ifndef CLSIZE
1435 # define malloc_getpagesize NBPG
1436 # else
1437 # define malloc_getpagesize (NBPG * CLSIZE)
1438 # endif
1439 # else
1440 # ifdef NBPC
1441 # define malloc_getpagesize NBPC
1442 # else
1443 # ifdef PAGESIZE
1444 # define malloc_getpagesize PAGESIZE
1445 # else /* just guess */
1446 # define malloc_getpagesize ((size_t)4096U)
1447 # endif
1448 # endif
1449 # endif
1450 # endif
1451 # endif
1452 # endif
1453 # endif
1454 #endif
1455 #endif
1456 
1457 
1458 
1459 /* ------------------- size_t and alignment properties -------------------- */
1460 
1461 /* The byte and bit size of a size_t */
1462 #define SIZE_T_SIZE (sizeof(size_t))
1463 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1464 
1465 /* Some constants coerced to size_t */
1466 /* Annoying but necessary to avoid errors on some platforms */
1467 #define SIZE_T_ZERO ((size_t)0)
1468 #define SIZE_T_ONE ((size_t)1)
1469 #define SIZE_T_TWO ((size_t)2)
1470 #define SIZE_T_FOUR ((size_t)4)
1471 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1472 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1473 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1474 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1475 
1476 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1477 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1478 
1479 /* True if address a has acceptable alignment */
1480 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1481 
1482 /* the number of bytes to offset an address to align it */
1483 #define align_offset(A)\
1484  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1485  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
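/*
  Worked example, assuming the default MALLOC_ALIGNMENT of 8 (so
  CHUNK_ALIGN_MASK == 7): for an address ending in 0x0c, align_offset
  gives (8 - 4) & 7 == 4, the padding needed to reach the next 8-byte
  boundary; for an already aligned address it gives 0.
*/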
1486 
1487 /*
1488  malloc_params holds global properties, including those that can be
1489  dynamically set using mallopt. There is a single instance, mparams,
1490  initialized in init_mparams. Note that the non-zeroness of "magic"
1491  also serves as an initialization flag.
1492 */
1493 typedef unsigned int flag_t;
1494 struct malloc_params {
1495  volatile size_t magic;
1496  size_t page_size;
1497  size_t granularity;
1498  size_t mmap_threshold;
1499  size_t trim_threshold;
1500  flag_t default_mflags;
1501 };
1502 
1503 static struct malloc_params mparams;
1504 
1505 /* Ensure mparams initialized */
1506 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
1507 
1508 /* -------------------------- MMAP preliminaries ------------------------- */
1509 
1510 /*
1511  If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1512  checks to fail so compiler optimizer can delete code rather than
1513  using so many "#if"s.
1514 */
1515 
1516 
1517 /* MORECORE and MMAP must return MFAIL on failure */
1518 #define MFAIL ((void*)(MAX_SIZE_T))
1519 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1520 
1521 #if HAVE_MMAP
1522 
1523 #ifndef WIN32
1524 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1525 #define MAP_ANONYMOUS MAP_ANON
1526 #endif /* MAP_ANON */
1527 #ifdef DEFAULT_GRANULARITY_ALIGNED
1528 #define MMAP_IMPL mmap_aligned
1529 static void* lastAlignedmmap; /* Used as a hint */
1530 static void* mmap_aligned(void *start, size_t length, int prot, int flags, int fd, off_t offset) {
1531  void* baseaddress = 0;
1532  void* ptr = 0;
1533  if(!start) {
1534  baseaddress = lastAlignedmmap;
1535  for(;;) {
1536  if(baseaddress) flags|=MAP_FIXED;
1537  ptr = mmap(baseaddress, length, prot, flags, fd, offset);
1538  if(!ptr)
1539  baseaddress = (void*)((size_t)baseaddress + mparams.granularity);
1540  else if((size_t)ptr & (mparams.granularity - SIZE_T_ONE)) {
1541  munmap(ptr, length);
1542  baseaddress = (void*)(((size_t)ptr + mparams.granularity) & ~(mparams.granularity - SIZE_T_ONE));
1543  }
1544  else break;
1545  }
1546  }
1547  else ptr = mmap(start, length, prot, flags, fd, offset);
1548  if(ptr) lastAlignedmmap = (void*)((size_t) ptr + mparams.granularity);
1549  return ptr;
1550 }
1551 #else
1552 #define MMAP_IMPL mmap
1553 #endif /* DEFAULT_GRANULARITY_ALIGNED */
1554 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1555 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1556 #ifdef MAP_ANONYMOUS
1557 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1558 #define MMAP_DEFAULT(s) MMAP_IMPL(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1559 #else /* MAP_ANONYMOUS */
1560 /*
1561  Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1562  is unlikely to be needed, but is supplied just in case.
1563 */
1564 #define MMAP_FLAGS (MAP_PRIVATE)
1565 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1566 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1567  (dev_zero_fd = open("/dev/zero", O_RDWR), \
1568  MMAP_IMPL(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1569  MMAP_IMPL(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1570 #endif /* MAP_ANONYMOUS */
1571 
1572 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1573 
1574 #else /* WIN32 */
1575 
1576 /* Win32 MMAP via VirtualAlloc */
1577 #ifdef DEFAULT_GRANULARITY_ALIGNED
1578 static void* lastWin32mmap; /* Used as a hint */
1579 #endif /* DEFAULT_GRANULARITY_ALIGNED */
1580 #ifdef ENABLE_LARGE_PAGES
1581 static int largepagesavailable = 1;
1582 #endif /* ENABLE_LARGE_PAGES */
1583 static FORCEINLINE void* win32mmap(size_t size) {
1584  void* baseaddress = 0;
1585  void* ptr = 0;
1586 #ifdef ENABLE_LARGE_PAGES
1587  /* Note that large pages are *always* allocated on a large page boundary.
1588  If however the granularity is small, don't waste a kernel call unless
1589  the size is at least around the size of a large page */
1590  if(largepagesavailable && size >= 1*1024*1024) {
1591  ptr = VirtualAlloc(baseaddress, size, MEM_RESERVE|MEM_COMMIT|MEM_LARGE_PAGES, PAGE_READWRITE);
1592  if(!ptr) {
1593  if(ERROR_PRIVILEGE_NOT_HELD==GetLastError()) largepagesavailable=0;
1594  }
1595  }
1596 #endif
1597  if(!ptr) {
1598 #ifdef DEFAULT_GRANULARITY_ALIGNED
1599  /* We try to avoid overhead by speculatively reserving at aligned
1600  addresses until we succeed */
1601  baseaddress = lastWin32mmap;
1602  for(;;) {
1603  void* reserveaddr = VirtualAlloc(baseaddress, size, MEM_RESERVE, PAGE_READWRITE);
1604  if(!reserveaddr)
1605  baseaddress = (void*)((size_t)baseaddress + mparams.granularity);
1606  else if((size_t)reserveaddr & (mparams.granularity - SIZE_T_ONE)) {
1607  VirtualFree(reserveaddr, 0, MEM_RELEASE);
1608  baseaddress = (void*)(((size_t)reserveaddr + mparams.granularity) & ~(mparams.granularity - SIZE_T_ONE));
1609  }
1610  else break;
1611  }
1612 #endif
1613  if(!ptr) ptr = VirtualAlloc(baseaddress, size, baseaddress ? MEM_COMMIT : MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1614 #if DEBUG
1615  if(lastWin32mmap && ptr!=lastWin32mmap) printf("Non-contiguous VirtualAlloc between %p and %p\n", ptr, lastWin32mmap);
1616 #endif
1617 #ifdef DEFAULT_GRANULARITY_ALIGNED
1618  if(ptr) lastWin32mmap = (void*)((size_t) ptr + mparams.granularity);
1619 #endif
1620  }
1621 #if DEBUG
1622 #ifdef ENABLE_LARGE_PAGES
1623  printf("VirtualAlloc returns %p size %u. LargePagesAvailable=%d\n", ptr, size, largepagesavailable);
1624 #else
1625  printf("VirtualAlloc returns %p size %u\n", ptr, size);
1626 #endif
1627 #endif
1628  return (ptr != 0)? ptr: MFAIL;
1629 }
1630 
1631 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1632 static FORCEINLINE void* win32direct_mmap(size_t size) {
1633  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1634  PAGE_READWRITE);
1635  return (ptr != 0)? ptr: MFAIL;
1636 }
1637 
1638 /* This function supports releasing coalesced segments */
1639 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1640  MEMORY_BASIC_INFORMATION minfo;
1641  char* cptr = (char*)ptr;
1642  while (size) {
1643  if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1644  return -1;
1645  if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1646  minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1647  return -1;
1648  if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1649  return -1;
1650  cptr += minfo.RegionSize;
1651  size -= minfo.RegionSize;
1652  }
1653  return 0;
1654 }
1655 
1656 #define MMAP_DEFAULT(s) win32mmap(s)
1657 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1658 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1659 #endif /* WIN32 */
1660 #endif /* HAVE_MMAP */
1661 
1662 #if HAVE_MREMAP
1663 #ifndef WIN32
1664 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1665 #endif /* WIN32 */
1666 #endif /* HAVE_MREMAP */
1667 
1668 
1672 #if HAVE_MORECORE
1673  #ifdef MORECORE
1674  #define CALL_MORECORE(S) MORECORE(S)
1675  #else /* MORECORE */
1676  #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1677  #endif /* MORECORE */
1678 #else /* HAVE_MORECORE */
1679  #define CALL_MORECORE(S) MFAIL
1680 #endif /* HAVE_MORECORE */
1681 
1685 #if HAVE_MMAP
1686  #define USE_MMAP_BIT (SIZE_T_ONE)
1687 
1688  #ifdef MMAP
1689  #define CALL_MMAP(s) MMAP(s)
1690  #else /* MMAP */
1691  #define CALL_MMAP(s) MMAP_DEFAULT(s)
1692  #endif /* MMAP */
1693  #ifdef MUNMAP
1694  #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1695  #else /* MUNMAP */
1696  #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1697  #endif /* MUNMAP */
1698  #ifdef DIRECT_MMAP
1699  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1700  #else /* DIRECT_MMAP */
1701  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1702  #endif /* DIRECT_MMAP */
1703 #else /* HAVE_MMAP */
1704  #define USE_MMAP_BIT (SIZE_T_ZERO)
1705 
1706  #define MMAP(s) MFAIL
1707  #define MUNMAP(a, s) (-1)
1708  #define DIRECT_MMAP(s) MFAIL
1709  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1710  #define CALL_MMAP(s) MMAP(s)
1711  #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1712 #endif /* HAVE_MMAP */
1713 
1717 #if HAVE_MMAP && HAVE_MREMAP
1718  #ifdef MREMAP
1719  #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1720  #else /* MREMAP */
1721  #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1722  #endif /* MREMAP */
1723 #else /* HAVE_MMAP && HAVE_MREMAP */
1724  #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1725 #endif /* HAVE_MMAP && HAVE_MREMAP */
1726 
1727 /* mstate bit set if contiguous morecore disabled or failed */
1728 #define USE_NONCONTIGUOUS_BIT (4U)
1729 
1730 /* segment bit set in create_mspace_with_base */
1731 #define EXTERN_BIT (8U)
1732 
1733 
1734 /* --------------------------- Lock preliminaries ------------------------ */
1735 
1736 /*
1737  When locks are defined, there is one global lock, plus
1738  one per-mspace lock.
1739 
1740  The global lock ensures that mparams.magic and other unique
1741  mparams values are initialized only once. It also protects
1742  sequences of calls to MORECORE. In many cases sys_alloc requires
1743  two calls that should not be interleaved with calls by other
1744  threads. This does not protect against direct calls to MORECORE
1745  by other threads not using this lock, so there is still code to
1746  cope as best we can with interference.
1747 
1748  Per-mspace locks surround calls to malloc, free, etc. To enable use
1749  in layered extensions, per-mspace locks are reentrant.
1750 
1751  Because lock-protected regions generally have bounded times, it is
1752  OK to use the supplied simple spinlocks in the custom versions for
1753  x86. Spinlocks are likely to improve performance for lightly
1754  contended applications, but worsen performance under heavy
1755  contention.
1756 
1757  If USE_LOCKS is > 1, the definitions of lock routines here are
1758  bypassed, in which case you will need to define the type MLOCK_T,
1759  and at least INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly
1760  TRY_LOCK (which is not used in this malloc, but commonly needed in
1761  extensions.) You must also declare a
1762  static MLOCK_T malloc_global_mutex = { initialization values };.
1763 
1764 */
1765 
1766 #if USE_LOCKS == 1
1767 
1768 #if USE_SPIN_LOCKS && SPIN_LOCKS_AVAILABLE
1769 #ifndef WIN32
1770 
1771 /* Custom pthread-style spin locks on x86 and x64 for gcc */
1772 struct pthread_mlock_t {
1773  volatile unsigned int l;
1774  char cachelinepadding[64];
1775  unsigned int c;
1776  pthread_t threadid;
1777 };
1778 #define MLOCK_T struct pthread_mlock_t
1779 #define CURRENT_THREAD pthread_self()
1780 #define INITIAL_LOCK(sl) ((sl)->threadid = 0, (sl)->l = (sl)->c = 0, 0)
1781 #define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl)
1782 #define RELEASE_LOCK(sl) pthread_release_lock(sl)
1783 #define TRY_LOCK(sl) pthread_try_lock(sl)
1784 #define SPINS_PER_YIELD 63
1785 
1786 static MLOCK_T malloc_global_mutex = { 0, "", 0, 0};
1787 
1788 static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
1789  int spins = 0;
1790  volatile unsigned int* lp = &sl->l;
1791  for (;;) {
1792  if (*lp != 0) {
1793  if (sl->threadid == CURRENT_THREAD) {
1794  ++sl->c;
1795  return 0;
1796  }
1797  }
1798  else {
1799  /* place args to cmpxchgl in locals to evade oddities in some gccs */
1800  int cmp = 0;
1801  int val = 1;
1802  int ret;
1803  __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1804  : "=a" (ret)
1805  : "r" (val), "m" (*(lp)), "0"(cmp)
1806  : "memory", "cc");
1807  if (!ret) {
1808  assert(!sl->threadid);
1809  sl->threadid = CURRENT_THREAD;
1810  sl->c = 1;
1811  return 0;
1812  }
1813  }
1814  if ((++spins & SPINS_PER_YIELD) == 0) {
1815 #if defined (__SVR4) && defined (__sun) /* solaris */
1816  thr_yield();
1817 #else
1818 #if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
1819  sched_yield();
1820 #else /* no-op yield on unknown systems */
1821  ;
1822 #endif /* __linux__ || __FreeBSD__ || __APPLE__ */
1823 #endif /* solaris */
1824  }
1825  }
1826 }
1827 
1828 static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
1829  volatile unsigned int* lp = &sl->l;
1830  assert(*lp != 0);
1831  assert(sl->threadid == CURRENT_THREAD);
1832  if (--sl->c == 0) {
1833  sl->threadid = 0;
1834  int prev = 0;
1835  int ret;
1836  __asm__ __volatile__ ("lock; xchgl %0, %1"
1837  : "=r" (ret)
1838  : "m" (*(lp)), "0"(prev)
1839  : "memory");
1840  }
1841 }
1842 
1843 static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
1844  volatile unsigned int* lp = &sl->l;
1845  if (*lp != 0) {
1846  if (sl->threadid == CURRENT_THREAD) {
1847  ++sl->c;
1848  return 1;
1849  }
1850  }
1851  else {
1852  int cmp = 0;
1853  int val = 1;
1854  int ret;
1855  __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1856  : "=a" (ret)
1857  : "r" (val), "m" (*(lp)), "0"(cmp)
1858  : "memory", "cc");
1859  if (!ret) {
1860  assert(!sl->threadid);
1861  sl->threadid = CURRENT_THREAD;
1862  sl->c = 1;
1863  return 1;
1864  }
1865  }
1866  return 0;
1867 }
1868 
1869 
1870 #else /* WIN32 */
1871 /* Custom win32-style spin locks on x86 and x64 for MSC */
1872 struct win32_mlock_t {
1873  volatile long l;
1874  char cachelinepadding[64];
1875  unsigned int c;
1876  long threadid;
1877 };
1878 
1879 #define MLOCK_T struct win32_mlock_t
1880 #define CURRENT_THREAD ((long)GetCurrentThreadId())
1881 #define INITIAL_LOCK(sl) ((sl)->threadid = 0, (sl)->l = (sl)->c = 0, 0)
1882 #define ACQUIRE_LOCK(sl) win32_acquire_lock(sl)
1883 #define RELEASE_LOCK(sl) win32_release_lock(sl)
1884 #define TRY_LOCK(sl) win32_try_lock(sl)
1885 #define SPINS_PER_YIELD 63
1886 
1887 static MLOCK_T malloc_global_mutex = {0, "", 0, 0};
1888 
1889 static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
1890  int spins = 0;
1891  for (;;) {
1892  if (sl->l != 0) {
1893  if (sl->threadid == CURRENT_THREAD) {
1894  ++sl->c;
1895  return 0;
1896  }
1897  }
1898  else {
1899  if (!interlockedexchange(&sl->l, 1)) {
1900  assert(!sl->threadid);
1901  sl->threadid = CURRENT_THREAD;
1902  sl->c = 1;
1903  return 0;
1904  }
1905  }
1906  if ((++spins & SPINS_PER_YIELD) == 0)
1907  SleepEx(0, FALSE);
1908  }
1909 }
1910 
1911 static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
1912  assert(sl->threadid == CURRENT_THREAD);
1913  assert(sl->l != 0);
1914  if (--sl->c == 0) {
1915  sl->threadid = 0;
1916  interlockedexchange (&sl->l, 0);
1917  }
1918 }
1919 
1920 static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
1921  if (sl->l != 0) {
1922  if (sl->threadid == CURRENT_THREAD) {
1923  ++sl->c;
1924  return 1;
1925  }
1926  }
1927  else {
1928  if (!interlockedexchange(&sl->l, 1)){
1929  assert(!sl->threadid);
1930  sl->threadid = CURRENT_THREAD;
1931  sl->c = 1;
1932  return 1;
1933  }
1934  }
1935  return 0;
1936 }
1937 
1938 #endif /* WIN32 */
1939 #else /* USE_SPIN_LOCKS */
1940 
1941 #ifndef WIN32
1942 /* pthreads-based locks */
1943 
1944 #define MLOCK_T pthread_mutex_t
1945 #define CURRENT_THREAD pthread_self()
1946 #define INITIAL_LOCK(sl) pthread_init_lock(sl)
1947 #define ACQUIRE_LOCK(sl) pthread_mutex_lock(sl)
1948 #define RELEASE_LOCK(sl) pthread_mutex_unlock(sl)
1949 #define TRY_LOCK(sl) (!pthread_mutex_trylock(sl))
1950 
1951 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
1952 
1953 /* Cope with old-style linux recursive lock initialization by adding */
1954 /* skipped internal declaration from pthread.h */
1955 #ifdef linux
1956 #ifndef PTHREAD_MUTEX_RECURSIVE
1957 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
1958  int __kind));
1959 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
1960 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
1961 #endif
1962 #endif
1963 
1964 static int pthread_init_lock (MLOCK_T *sl) {
1965  pthread_mutexattr_t attr;
1966  if (pthread_mutexattr_init(&attr)) return 1;
1967  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
1968  if (pthread_mutex_init(sl, &attr)) return 1;
1969  if (pthread_mutexattr_destroy(&attr)) return 1;
1970  return 0;
1971 }
1972 
1973 #else /* WIN32 */
1974 /* Win32 critical sections */
1975 #define MLOCK_T CRITICAL_SECTION
1976 #define CURRENT_THREAD GetCurrentThreadId()
1977 #define INITIAL_LOCK(sl) (!InitializeCriticalSectionAndSpinCount((sl), 0x80000000|4000))
1978 #define ACQUIRE_LOCK(sl) (EnterCriticalSection(sl), 0)
1979 #define RELEASE_LOCK(sl) LeaveCriticalSection(sl)
1980 #define TRY_LOCK(sl) TryEnterCriticalSection(sl)
1981 #define NEED_GLOBAL_LOCK_INIT
1982 
1983 static MLOCK_T malloc_global_mutex;
1984 static volatile long malloc_global_mutex_status;
1985 
1986 /* Use spin loop to initialize global lock */
1987 static void init_malloc_global_mutex() {
1988  for (;;) {
1989  long stat = malloc_global_mutex_status;
1990  if (stat > 0)
1991  return;
1992  /* transition to < 0 while initializing, then to > 0 */
1993  if (stat == 0 &&
1994  interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1995  InitializeCriticalSection(&malloc_global_mutex);
1996  interlockedexchange(&malloc_global_mutex_status,1);
1997  return;
1998  }
1999  SleepEx(0, FALSE);
2000  }
2001 }
2002 
2003 #endif /* WIN32 */
2004 #endif /* USE_SPIN_LOCKS */
2005 #endif /* USE_LOCKS == 1 */
2006 
2007 /* ----------------------- User-defined locks ------------------------ */
2008 
2009 #if USE_LOCKS > 1
2010 /* Define your own lock implementation here */
2011 /* #define INITIAL_LOCK(sl) ... */
2012 /* #define ACQUIRE_LOCK(sl) ... */
2013 /* #define RELEASE_LOCK(sl) ... */
2014 /* #define TRY_LOCK(sl) ... */
2015 /* static MLOCK_T malloc_global_mutex = ... */
2016 #endif /* USE_LOCKS > 1 */
2017 
2018 /* ----------------------- Lock-based state ------------------------ */
2019 
2020 #if USE_LOCKS
2021 #define USE_LOCK_BIT (2U)
2022 #else /* USE_LOCKS */
2023 #define USE_LOCK_BIT (0U)
2024 #define INITIAL_LOCK(l)
2025 #endif /* USE_LOCKS */
2026 
2027 #if USE_LOCKS
2028 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2029 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2030 #endif
2031 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2032 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2033 #endif
2034 #else /* USE_LOCKS */
2035 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
2036 #define RELEASE_MALLOC_GLOBAL_LOCK()
2037 #endif /* USE_LOCKS */
2038 
2039 
2040 /* ----------------------- Chunk representations ------------------------ */
2041 
2042 /*
2043  (The following includes lightly edited explanations by Colin Plumb.)
2044 
2045  The malloc_chunk declaration below is misleading (but accurate and
2046  necessary). It declares a "view" into memory allowing access to
2047  necessary fields at known offsets from a given base.
2048 
2049  Chunks of memory are maintained using a `boundary tag' method as
2050  originally described by Knuth. (See the paper by Paul Wilson
2051  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2052  techniques.) Sizes of free chunks are stored both in the front of
2053  each chunk and at the end. This makes consolidating fragmented
2054  chunks into bigger chunks fast. The head fields also hold bits
2055  representing whether chunks are free or in use.
2056 
2057  Here are some pictures to make it clearer. They are "exploded" to
2058  show that the state of a chunk can be thought of as extending from
2059  the high 31 bits of the head field of its header through the
2060  prev_foot and PINUSE_BIT bit of the following chunk header.
2061 
2062  A chunk that's in use looks like:
2063 
2064  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2065  | Size of previous chunk (if P = 0) |
2066  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2067  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2068  | Size of this chunk 1| +-+
2069  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2070  | |
2071  +- -+
2072  | |
2073  +- -+
2074  | :
2075  +- size - sizeof(size_t) available payload bytes -+
2076  : |
2077  chunk-> +- -+
2078  | |
2079  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2080  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2081  | Size of next chunk (may or may not be in use) | +-+
2082  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2083 
2084  And if it's free, it looks like this:
2085 
2086  chunk-> +- -+
2087  | User payload (must be in use, or we would have merged!) |
2088  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2089  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2090  | Size of this chunk 0| +-+
2091  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2092  | Next pointer |
2093  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2094  | Prev pointer |
2095  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2096  | :
2097  +- size - sizeof(struct chunk) unused bytes -+
2098  : |
2099  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2100  | Size of this chunk |
2101  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2102  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2103  | Size of next chunk (must be in use, or we would have merged)| +-+
2104  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2105  | :
2106  +- User payload -+
2107  : |
2108  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2109  |0|
2110  +-+
2111  Note that since we always merge adjacent free chunks, the chunks
2112  adjacent to a free chunk must be in use.
2113 
2114  Given a pointer to a chunk (which can be derived trivially from the
2115  payload pointer) we can, in O(1) time, find out whether the adjacent
2116  chunks are free, and if so, unlink them from the lists that they
2117  are on and merge them with the current chunk.
2118 
2119  Chunks always begin on even word boundaries, so the mem portion
2120  (which is returned to the user) is also on an even word boundary, and
2121  thus at least double-word aligned.
2122 
2123  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2124  chunk size (which is always a multiple of two words), is an in-use
2125  bit for the *previous* chunk. If that bit is *clear*, then the
2126  word before the current chunk size contains the previous chunk
2127  size, and can be used to find the front of the previous chunk.
2128  The very first chunk allocated always has this bit set, preventing
2129  access to non-existent (or non-owned) memory. If pinuse is set for
2130  any given chunk, then you CANNOT determine the size of the
2131  previous chunk, and might even get a memory addressing fault when
2132  trying to do so.
2133 
2134  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2135  the chunk size redundantly records whether the current chunk is
2136  inuse (unless the chunk is mmapped). This redundancy enables usage
2137  checks within free and realloc, and reduces indirection when freeing
2138  and consolidating chunks.
2139 
2140  Each freshly allocated chunk must have both cinuse and pinuse set.
2141  That is, each allocated chunk borders either a previously allocated
2142  and still in-use chunk, or the base of its memory arena. This is
2143  ensured by making all allocations from the `lowest' part of any
2144  found chunk. Further, no free chunk physically borders another one,
2145  so each free chunk is known to be preceded and followed by either
2146  inuse chunks or the ends of memory.
2147 
2148  Note that the `foot' of the current chunk is actually represented
2149  as the prev_foot of the NEXT chunk. This makes it easier to
2150  deal with alignments etc but can be very confusing when trying
2151  to extend or adapt this code.
2152 
2153  The exceptions to all this are
2154 
2155  1. The special chunk `top' is the top-most available chunk (i.e.,
2156  the one bordering the end of available memory). It is treated
2157  specially. Top is never included in any bin, is used only if
2158  no other chunk is available, and is released back to the
2159  system if it is very large (see M_TRIM_THRESHOLD). In effect,
2160  the top chunk is treated as larger (and thus less well
2161  fitting) than any other available chunk. The top chunk
2162  doesn't update its trailing size field since there is no next
2163  contiguous chunk that would have to index off it. However,
2164  space is still allocated for it (TOP_FOOT_SIZE) to enable
2165  separation or merging when space is extended.
2166 
2167  2. Chunks allocated via mmap have both cinuse and pinuse bits
2168  cleared in their head fields. Because they are allocated
2169  one-by-one, each must carry its own prev_foot field, which is
2170  also used to hold the offset this chunk has within its mmapped
2171  region, which is needed to preserve alignment. Each mmapped
2172  chunk is trailed by the first two fields of a fake next-chunk
2173  for sake of usage checks.
2174 
2175 */
2176 
2177 struct malloc_chunk {
2178  size_t prev_foot; /* Size of previous chunk (if free). */
2179  size_t head; /* Size and inuse bits. */
2180  struct malloc_chunk* fd; /* double links -- used only if free. */
2181  struct malloc_chunk* bk;
2182 };
2183 
2184 typedef struct malloc_chunk mchunk;
2185 typedef struct malloc_chunk* mchunkptr;
2186 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2187 typedef unsigned int bindex_t; /* Described below */
2188 typedef unsigned int binmap_t; /* Described below */
2189 
2190 /* ------------------- Chunks sizes and alignments ----------------------- */
2191 
2192 #define MCHUNK_SIZE (sizeof(mchunk))
2193 
2194 #if FOOTERS
2195 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2196 #else /* FOOTERS */
2197 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2198 #endif /* FOOTERS */
2199 
2200 /* MMapped chunks need a second word of overhead ... */
2201 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2202 /* ... and additional padding for fake next-chunk at foot */
2203 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2204 
2205 /* The smallest size we can malloc is an aligned minimal chunk */
2206 #define MIN_CHUNK_SIZE\
2207  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2208 
2209 /* conversion from malloc headers to user pointers, and back */
2210 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2211 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2212 /* chunk associated with aligned address A */
2213 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
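/*
  Sketch of the header/payload relationship encoded by the
  chunk2mem/mem2chunk macros above: the user pointer sits two size_t
  words past the chunk header, so the conversions are exact inverses.

    mchunkptr p   = ...;              // some chunk
    void*     mem = chunk2mem(p);     // payload handed to the user
    assert(mem2chunk(mem) == p);      // recovers the chunk
*/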
2214 
2215 /* Bounds on request (not chunk) sizes. */
2216 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2217 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2218 
2219 /* pad request bytes into a usable size */
2220 #define pad_request(req) \
2221  (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2222 
2223 /* pad request, checking for minimum (but not maximum) */
2224 #define request2size(req) \
2225  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
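/*
  Worked example with 4-byte size_t and FOOTERS == 0 (so
  CHUNK_OVERHEAD == 4, CHUNK_ALIGN_MASK == 7, MIN_CHUNK_SIZE == 16):
  pad_request(12) == (12 + 4 + 7) & ~7 == 16, while a tiny request
  such as request2size(1) is bumped up to MIN_CHUNK_SIZE == 16.
*/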
2226 
2227 
2228 /* ------------------ Operations on head and foot fields ----------------- */
2229 
2230 /*
2231  The head field of a chunk is or'ed with PINUSE_BIT when the previous
2232  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
2233  use, unless mmapped, in which case both bits are cleared.
2234 
2235  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2236 */
2237 
2238 #define PINUSE_BIT (SIZE_T_ONE)
2239 #define CINUSE_BIT (SIZE_T_TWO)
2240 #define FLAG4_BIT (SIZE_T_FOUR)
2241 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2242 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2243 
2244 /* Head value for fenceposts */
2245 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2246 
2247 /* extraction of fields from head words */
2248 #define cinuse(p) ((p)->head & CINUSE_BIT)
2249 #define pinuse(p) ((p)->head & PINUSE_BIT)
2250 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2251 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2252 
2253 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2254 
2255 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2256 
2257 /* Treat space at ptr +/- offset as a chunk */
2258 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2259 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2260 
2261 /* Ptr to next or previous physical malloc_chunk. */
2262 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2263 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2264 
2265 /* extract next chunk's pinuse bit */
2266 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2267 
2268 /* Get/set size at footer */
2269 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2270 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2271 
2272 /* Set size, pinuse bit, and foot */
2273 #define set_size_and_pinuse_of_free_chunk(p, s)\
2274  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2275 
2276 /* Set size, pinuse bit, foot, and clear next pinuse */
2277 #define set_free_with_pinuse(p, s, n)\
2278  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2279 
2280 /* Get the internal overhead associated with chunk p */
2281 #define overhead_for(p)\
2282  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2283 
2284 /* Return true if malloced space is not necessarily cleared */
2285 #if MMAP_CLEARS
2286 #define calloc_must_clear(p) (!is_mmapped(p))
2287 #else /* MMAP_CLEARS */
2288 #define calloc_must_clear(p) (1)
2289 #endif /* MMAP_CLEARS */
2290 
2291 /* ---------------------- Overlaid data structures ----------------------- */
2292 
2293 /*
2294  When chunks are not in use, they are treated as nodes of either
2295  lists or trees.
2296 
2297  "Small" chunks are stored in circular doubly-linked lists, and look
2298  like this:
2299 
2300  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2301  | Size of previous chunk |
2302  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2303  `head:' | Size of chunk, in bytes |P|
2304  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2305  | Forward pointer to next chunk in list |
2306  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2307  | Back pointer to previous chunk in list |
2308  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2309  | Unused space (may be 0 bytes long) .
2310  . .
2311  . |
2312 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2313  `foot:' | Size of chunk, in bytes |
2314  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2315 
2316  Larger chunks are kept in a form of bitwise digital trees (aka
2317  tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2318  free chunks greater than 256 bytes, their size doesn't impose any
2319  constraints on user chunk sizes. Each node looks like:
2320 
2321  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2322  | Size of previous chunk |
2323  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2324  `head:' | Size of chunk, in bytes |P|
2325  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2326  | Forward pointer to next chunk of same size |
2327  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2328  | Back pointer to previous chunk of same size |
2329  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2330  | Pointer to left child (child[0]) |
2331  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2332  | Pointer to right child (child[1]) |
2333  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2334  | Pointer to parent |
2335  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2336  | bin index of this chunk |
2337  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2338  | Unused space .
2339  . |
2340 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2341  `foot:' | Size of chunk, in bytes |
2342  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2343 
2344  Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2345  of the same size are arranged in a circularly-linked list, with only
2346  the oldest chunk (the next to be used, in our FIFO ordering)
2347  actually in the tree. (Tree members are distinguished by a non-null
2348  parent pointer.) If a chunk with the same size as an existing node
2349  is inserted, it is linked off the existing node using pointers that
2350  work in the same way as fd/bk pointers of small chunks.
2351 
2352  Each tree contains a power of 2 sized range of chunk sizes (the
2353  smallest is 0x100 <= x < 0x180), which is divided in half at each
2354  tree level, with the chunks in the smaller half of the range (0x100
2355  <= x < 0x140 for the top node) in the left subtree and the larger
2356  half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2357  done by inspecting individual bits.
2358 
2359  Using these rules, each node's left subtree contains all smaller
2360  sizes than its right subtree. However, the node at the root of each
2361  subtree has no particular ordering relationship to either. (The
2362  dividing line between the subtree sizes is based on trie relation.)
2363  If we remove the last chunk of a given size from the interior of the
2364  tree, we need to replace it with a leaf node. The tree ordering
2365  rules permit a node to be replaced by any leaf below it.
2366 
2367  The smallest chunk in a tree (a common operation in a best-fit
2368  allocator) can be found by walking a path to the leftmost leaf in
2369  the tree. Unlike a usual binary tree, where we follow left child
2370  pointers until we reach a null, here we follow the right child
2371  pointer any time the left one is null, until we reach a leaf with
2372  both child pointers null. The smallest chunk in the tree will be
2373  somewhere along that path.
2374 
2375  The worst case number of steps to add, find, or remove a node is
2376  bounded by the number of bits differentiating chunks within
2377  bins. Under current bin calculations, this ranges from 6 up to 21
2378  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2379  is of course much better.
2380 */
2381 
2382 struct malloc_tree_chunk {
2383  /* The first four fields must be compatible with malloc_chunk */
2384  size_t prev_foot;
2385  size_t head;
2386  struct malloc_tree_chunk* fd;
2387  struct malloc_tree_chunk* bk;
2388 
2389  struct malloc_tree_chunk* child[2];
2390  struct malloc_tree_chunk* parent;
2391  bindex_t index;
2392 };
2393 
2394 typedef struct malloc_tree_chunk tchunk;
2395 typedef struct malloc_tree_chunk* tchunkptr;
2396 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2397 
2398 /* A little helper macro for trees */
2399 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
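/*
  Sketch of the "leftmost path" walk described in the comment above
  (the helper name is ours, not a routine defined in this file):
  follow child[0] when present, else child[1], keeping the smallest
  size seen along the way.

    static tchunkptr smallest_in_tree(tchunkptr t) {
      tchunkptr best = t;
      while ((t = leftmost_child(t)) != 0) {
        if (chunksize(t) < chunksize(best))
          best = t;
      }
      return best;
    }
*/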
2400 
2401 /* ----------------------------- Segments -------------------------------- */
2402 
2403 /*
2404  Each malloc space may include non-contiguous segments, held in a
2405  list headed by an embedded malloc_segment record representing the
2406  top-most space. Segments also include flags holding properties of
2407  the space. Large chunks that are directly allocated by mmap are not
2408  included in this list. They are instead independently created and
2409  destroyed without otherwise keeping track of them.
2410 
2411  Segment management mainly comes into play for spaces allocated by
2412  MMAP. Any call to MMAP might or might not return memory that is
2413  adjacent to an existing segment. MORECORE normally contiguously
2414  extends the current space, so this space is almost always adjacent,
2415  which is simpler and faster to deal with. (This is why MORECORE is
2416  used preferentially to MMAP when both are available -- see
2417  sys_alloc.) When allocating using MMAP, we don't use any of the
2418  hinting mechanisms (inconsistently) supported in various
2419  implementations of unix mmap, or distinguish reserving from
2420  committing memory. Instead, we just ask for space, and exploit
2421  contiguity when we get it. It is probably possible to do
2422  better than this on some systems, but no general scheme seems
2423  to be significantly better.
2424 
2425  Management entails a simpler variant of the consolidation scheme
2426  used for chunks to reduce fragmentation -- new adjacent memory is
2427  normally prepended or appended to an existing segment. However,
2428  there are limitations compared to chunk consolidation that mostly
2429  reflect the fact that segment processing is relatively infrequent
2430  (occurring only when getting memory from the system) and that we
2431  don't expect to have huge numbers of segments:
2432 
2433  * Segments are not indexed, so traversal requires linear scans. (It
2434  would be possible to index these, but is not worth the extra
2435  overhead and complexity for most programs on most platforms.)
2436  * New segments are only appended to old ones when holding top-most
2437  memory; if they cannot be prepended to others, they are held in
2438  different segments.
2439 
2440  Except for the top-most segment of an mstate, each segment record
2441  is kept at the tail of its segment. Segments are added by pushing
2442  segment records onto the list headed by &mstate.seg for the
2443  containing mstate.
2444 
2445  Segment flags control allocation/merge/deallocation policies:
2446  * If EXTERN_BIT set, then we did not allocate this segment,
2447  and so should not try to deallocate or merge with others.
2448  (This currently holds only for the initial segment passed
2449  into create_mspace_with_base.)
2450  * If USE_MMAP_BIT set, the segment may be merged with
2451  other surrounding mmapped segments and trimmed/de-allocated
2452  using munmap.
2453  * If neither bit is set, then the segment was obtained using
2454  MORECORE so can be merged with surrounding MORECORE'd segments
2455  and deallocated/trimmed using MORECORE with negative arguments.
2456 */
2457 
2458 struct malloc_segment {
2459  char* base; /* base address */
2460  size_t size; /* allocated size */
2461  struct malloc_segment* next; /* ptr to next segment */
2462  flag_t sflags; /* mmap and extern flag */
2463 };
2464 
2465 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2466 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2467 
2468 typedef struct malloc_segment msegment;
2469 typedef struct malloc_segment* msegmentptr;
2470 
2471 /* ---------------------------- malloc_state ----------------------------- */
2472 
2473 /*
2474  A malloc_state holds all of the bookkeeping for a space.
2475  The main fields are:
2476 
2477  Top
2478  The topmost chunk of the currently active segment. Its size is
2479  cached in topsize. The actual size of topmost space is
2480  topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2481  fenceposts and segment records if necessary when getting more
2482  space from the system. The size at which to autotrim top is
2483  cached from mparams in trim_check, except that it is disabled if
2484  an autotrim fails.
2485 
2486  Designated victim (dv)
2487  This is the preferred chunk for servicing small requests that
2488  don't have exact fits. It is normally the chunk split off most
2489  recently to service another small request. Its size is cached in
2490  dvsize. The link fields of this chunk are not maintained since it
2491  is not kept in a bin.
2492 
2493  SmallBins
2494  An array of bin headers for free chunks. These bins hold chunks
2495  with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2496  chunks of all the same size, spaced 8 bytes apart. To simplify
2497  use in double-linked lists, each bin header acts as a malloc_chunk
2498  pointing to the real first node, if it exists (else pointing to
2499  itself). This avoids special-casing for headers. But to avoid
2500  waste, we allocate only the fd/bk pointers of bins, and then use
2501  repositioning tricks to treat these as the fields of a chunk.
2502 
2503  TreeBins
2504  Treebins are pointers to the roots of trees holding a range of
2505  sizes. There are 2 equally spaced treebins for each power of two
2506  from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2507  larger.
2508 
2509  Bin maps
2510  There is one bit map for small bins ("smallmap") and one for
2511  treebins ("treemap). Each bin sets its bit when non-empty, and
2512  clears the bit when empty. Bit operations are then used to avoid
2513  bin-by-bin searching -- nearly all "search" is done without ever
2514  looking at bins that won't be selected. The bit maps
2515  conservatively use 32 bits per map word, even on a 64-bit system.
2516  For a good description of some of the bit-based techniques used
2517  here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2518  supplement at http://hackersdelight.org/). Many of these are
2519  intended to reduce the branchiness of paths through malloc etc, as
2520  well as to reduce the number of memory locations read or written.
2521 
2522  Segments
2523  A list of segments headed by an embedded malloc_segment record
2524  representing the initial space.
2525 
2526  Address check support
2527  The least_addr field is the least address ever obtained from
2528  MORECORE or MMAP. Attempted frees and reallocs of any address less
2529  than this are trapped (unless INSECURE is defined).
2530 
2531  Magic tag
2532  A cross-check field that should always hold the same value as mparams.magic.
2533 
2534  Flags
2535  Bits recording whether to use MMAP, locks, or contiguous MORECORE
2536 
2537  Statistics
2538  Each space keeps track of current and maximum system memory
2539  obtained via MORECORE or MMAP.
2540 
2541  Trim support
2542  Fields holding the amount of unused topmost memory that should trigger
2543  trimming, and a counter to force periodic scanning to release unused
2544  non-topmost segments.
2545 
2546  Locking
2547  If USE_LOCKS is defined, the "mutex" lock is acquired and released
2548  around every public call using this mspace.
2549 
2550  Extension support
2551  A void* pointer and a size_t field that can be used to help implement
2552  extensions to this malloc.
2553 */
2554 
2555 /* Bin types, widths and sizes */
2556 #define NSMALLBINS (32U)
2557 #define NTREEBINS (32U)
2558 #define SMALLBIN_SHIFT (3U)
2559 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2560 #define TREEBIN_SHIFT (8U)
2561 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2562 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2563 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2564 
2565 struct malloc_state {
2566  binmap_t smallmap;
2567  binmap_t treemap;
2568  size_t dvsize;
2569  size_t topsize;
2570  char* least_addr;
2571  mchunkptr dv;
2572  mchunkptr top;
2573  size_t trim_check;
2574  size_t release_checks;
2575  size_t magic;
2576  mchunkptr smallbins[(NSMALLBINS+1)*2];
2577  tbinptr treebins[NTREEBINS];
2578  size_t footprint;
2579  size_t max_footprint;
2580  flag_t mflags;
2581  msegment seg;
2582 #if USE_LOCKS
2583  MLOCK_T mutex; /* locate lock among fields that rarely change */
2584 #endif /* USE_LOCKS */
2585  void* extp; /* Unused but available for extensions */
2586  size_t exts;
2587 };
2588 
2589 typedef struct malloc_state* mstate;
2590 
2591 /* ------------- Global malloc_state and malloc_params ------------------- */
2592 
2593 #if !ONLY_MSPACES
2594 
2595 /* The global malloc_state used for all non-"mspace" calls */
2596 static struct malloc_state _gm_;
2597 #define gm (&_gm_)
2598 #define is_global(M) ((M) == &_gm_)
2599 
2600 #endif /* !ONLY_MSPACES */
2601 
2602 #define is_initialized(M) ((M)->top != 0)
2603 
2604 /* -------------------------- system alloc setup ------------------------- */
2605 
2606 /* Operations on mflags */
2607 
2608 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2609 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2610 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2611 
2612 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2613 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2614 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2615 
2616 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2617 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2618 
2619 #define set_lock(M,L)\
2620  ((M)->mflags = (L)?\
2621  ((M)->mflags | USE_LOCK_BIT) :\
2622  ((M)->mflags & ~USE_LOCK_BIT))
2623 
2624 /* page-align a size */
2625 #define page_align(S)\
2626  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2627 
2628 /* granularity-align a size */
2629 #define granularity_align(S)\
2630  (((S) + (mparams.granularity - SIZE_T_ONE))\
2631  & ~(mparams.granularity - SIZE_T_ONE))
2632 
2633 
2634 /* For mmap, use granularity alignment on windows, else page-align */
2635 #ifdef WIN32
2636 #define mmap_align(S) granularity_align(S)
2637 #else
2638 #define mmap_align(S) page_align(S)
2639 #endif
2640 
2641 /* For sys_alloc, enough padding to ensure can malloc request on success */
2642 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2643 
2644 #define is_page_aligned(S)\
2645  (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2646 #define is_granularity_aligned(S)\
2647  (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2648 
2649 /* True if segment S holds address A */
2650 #define segment_holds(S, A)\
2651  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2652 
2653 /* Return segment holding given address */
2654 static msegmentptr segment_holding(mstate m, char* addr) {
2655  msegmentptr sp = &m->seg;
2656  for (;;) {
2657  if (addr >= sp->base && addr < sp->base + sp->size)
2658  return sp;
2659  if ((sp = sp->next) == 0)
2660  return 0;
2661  }
2662 }
2663 
2664 /* Return true if segment contains a segment link */
2665 static int has_segment_link(mstate m, msegmentptr ss) {
2666  msegmentptr sp = &m->seg;
2667  for (;;) {
2668  if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2669  return 1;
2670  if ((sp = sp->next) == 0)
2671  return 0;
2672  }
2673 }
2674 
2675 #ifndef MORECORE_CANNOT_TRIM
2676 #define should_trim(M,s) ((s) > (M)->trim_check)
2677 #else /* MORECORE_CANNOT_TRIM */
2678 #define should_trim(M,s) (0)
2679 #endif /* MORECORE_CANNOT_TRIM */
2680 
2681 /*
2682  TOP_FOOT_SIZE is padding at the end of a segment, including space
2683  that may be needed to place segment records and fenceposts when new
2684  noncontiguous segments are added.
2685 */
2686 #define TOP_FOOT_SIZE\
2687  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2688 
2689 
2690 /* ------------------------------- Hooks -------------------------------- */
2691 
2692 /*
2693  PREACTION should be defined to return 0 on success, and nonzero on
2694  failure. If you are not using locking, you can redefine these to do
2695  anything you like.
2696 */
2697 
2698 #if USE_LOCKS
2699 
2700 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2701 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2702 #else /* USE_LOCKS */
2703 
2704 #ifndef PREACTION
2705 #define PREACTION(M) (0)
2706 #endif /* PREACTION */
2707 
2708 #ifndef POSTACTION
2709 #define POSTACTION(M)
2710 #endif /* POSTACTION */
2711 
2712 #endif /* USE_LOCKS */
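/*
  Typical call-site pattern for these hooks, as used by the allocation
  routines later in this file: acquire before touching the state and
  release on every exit path.

    if (!PREACTION(m)) {
      // ... operate on the malloc_state m ...
      POSTACTION(m);
    }
*/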
2713 
2714 /*
2715  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2716  USAGE_ERROR_ACTION is triggered on detected bad frees and
2717  reallocs. The argument p is an address that might have triggered the
2718  fault. It is ignored by the two predefined actions, but might be
2719  useful in custom actions that try to help diagnose errors.
2720 */
2721 
2722 #if PROCEED_ON_ERROR
2723 
2724 /* A count of the number of corruption errors causing resets */
2725 int malloc_corruption_error_count;
2726 
2727 /* default corruption action */
2728 static void reset_on_error(mstate m);
2729 
2730 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2731 #define USAGE_ERROR_ACTION(m, p)
2732 
2733 #else /* PROCEED_ON_ERROR */
2734 
2735 #ifndef CORRUPTION_ERROR_ACTION
2736 #define CORRUPTION_ERROR_ACTION(m) ABORT
2737 #endif /* CORRUPTION_ERROR_ACTION */
2738 
2739 #ifndef USAGE_ERROR_ACTION
2740 #define USAGE_ERROR_ACTION(m,p) ABORT
2741 #endif /* USAGE_ERROR_ACTION */
2742 
2743 #endif /* PROCEED_ON_ERROR */
2744 
2745 /* -------------------------- Debugging setup ---------------------------- */
2746 
2747 #if ! DEBUG
2748 
2749 #define check_free_chunk(M,P)
2750 #define check_inuse_chunk(M,P)
2751 #define check_malloced_chunk(M,P,N)
2752 #define check_mmapped_chunk(M,P)
2753 #define check_malloc_state(M)
2754 #define check_top_chunk(M,P)
2755 
2756 #else /* DEBUG */
2757 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2758 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2759 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2760 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2761 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2762 #define check_malloc_state(M) do_check_malloc_state(M)
2763 
2764 static void do_check_any_chunk(mstate m, mchunkptr p);
2765 static void do_check_top_chunk(mstate m, mchunkptr p);
2766 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2767 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2768 static void do_check_free_chunk(mstate m, mchunkptr p);
2769 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2770 static void do_check_tree(mstate m, tchunkptr t);
2771 static void do_check_treebin(mstate m, bindex_t i);
2772 static void do_check_smallbin(mstate m, bindex_t i);
2773 static void do_check_malloc_state(mstate m);
2774 static int bin_find(mstate m, mchunkptr x);
2775 static size_t traverse_and_check(mstate m);
2776 #endif /* DEBUG */
2777 
2778 /* ---------------------------- Indexing Bins ---------------------------- */
2779 
2780 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2781 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2782 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2783 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2784 
2785 /* addressing by index. See above about smallbin repositioning */
2786 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2787 #define treebin_at(M,i) (&((M)->treebins[i]))
2788 
2789 /* assign tree index for size S to variable I. Use x86 asm if possible */
2790 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2791 #define compute_tree_index(S, I)\
2792 {\
2793  unsigned int X = S >> TREEBIN_SHIFT;\
2794  if (X == 0)\
2795  I = 0;\
2796  else if (X > 0xFFFF)\
2797  I = NTREEBINS-1;\
2798  else {\
2799  unsigned int K;\
2800  __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "g" (X));\
2801  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2802  }\
2803 }
2804 
2805 #elif defined (__INTEL_COMPILER)
2806 #define compute_tree_index(S, I)\
2807 {\
2808  size_t X = S >> TREEBIN_SHIFT;\
2809  if (X == 0)\
2810  I = 0;\
2811  else if (X > 0xFFFF)\
2812  I = NTREEBINS-1;\
2813  else {\
2814  unsigned int K = _bit_scan_reverse (X); \
2815  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2816  }\
2817 }
2818 
2819 #elif defined(_MSC_VER) && _MSC_VER>=1300
2820 #define compute_tree_index(S, I)\
2821 {\
2822  size_t X = S >> TREEBIN_SHIFT;\
2823  if (X == 0)\
2824  I = 0;\
2825  else if (X > 0xFFFF)\
2826  I = NTREEBINS-1;\
2827  else {\
2828  unsigned int K;\
2829  _BitScanReverse((DWORD *) &K, (DWORD) X);\
2830  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2831  }\
2832 }
2833 
2834 #else /* GNUC */
2835 #define compute_tree_index(S, I)\
2836 {\
2837  size_t X = S >> TREEBIN_SHIFT;\
2838  if (X == 0)\
2839  I = 0;\
2840  else if (X > 0xFFFF)\
2841  I = NTREEBINS-1;\
2842  else {\
2843  unsigned int Y = (unsigned int)X;\
2844  unsigned int N = ((Y - 0x100) >> 16) & 8;\
2845  unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2846  N += K;\
2847  N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2848  K = 14 - N + ((Y <<= K) >> 15);\
2849  I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2850  }\
2851 }
2852 #endif /* GNUC */
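/*
  Worked example (assuming the default TREEBIN_SHIFT of 8): for a size
  S = 0x5000 (20480), X = S >> 8 = 0x50, whose highest set bit is
  K = 6, so I = (6 << 1) + ((S >> 13) & 1) = 12.  Each power of two is
  thus split across two adjacent tree bins: sizes in [16384, 24576)
  land in bin 12, and sizes in [24576, 32768) land in bin 13.
*/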
2853 
2854 /* Bit representing maximum resolved size in a treebin at i */
2855 #define bit_for_tree_index(i) \
2856  (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2857 
2858 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2859 #define leftshift_for_tree_index(i) \
2860  ((i == NTREEBINS-1)? 0 : \
2861  ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2862 
2863 /* The size of the smallest chunk held in bin with index i */
2864 #define minsize_for_tree_index(i) \
2865  ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2866  (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
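/*
  Illustrative values (again assuming the default TREEBIN_SHIFT of 8):
  the smallest sizes held by the first few tree bins are 256, 384, 512,
  768, 1024, 1536, ... -- each power-of-two range is split into a lower
  and an upper half-range, matching the index computation above.
*/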
2867 
2868 
2869 /* ------------------------ Operations on bin maps ----------------------- */
2870 
2871 /* bit corresponding to given index */
2872 #define idx2bit(i) ((binmap_t)(1) << (i))
2873 
2874 /* Mark/Clear bits with given index */
2875 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2876 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2877 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2878 
2879 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2880 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2881 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2882 
2883 /* isolate the least set bit of a bitmap */
2884 #define least_bit(x) ((x) & -(x))
2885 
2886 /* mask with all bits to left of least bit of x on */
2887 #define left_bits(x) ((x<<1) | -(x<<1))
2888 
2889 /* mask with all bits to left of or equal to least bit of x on */
2890 #define same_or_left_bits(x) ((x) | -(x))
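/*
  Small worked example (assuming a 32-bit binmap_t): for x = 0x28
  (bits 3 and 5 set), least_bit(x) == 0x08, left_bits(0x08) ==
  0xFFFFFFF0 (all bits strictly above bit 3), and
  same_or_left_bits(0x08) == 0xFFFFFFF8.  tmalloc_large below combines
  left_bits and least_bit in exactly this way to locate the next
  nonempty treebin at or above a computed index.
*/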
2891 
2892 /* index corresponding to given bit. Use x86 asm if possible */
2893 
2894 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2895 #define compute_bit2idx(X, I)\
2896 {\
2897  unsigned int J;\
2898  __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "g" (X));\
2899  I = (bindex_t)J;\
2900 }
2901 
2902 #elif defined (__INTEL_COMPILER)
2903 #define compute_bit2idx(X, I)\
2904 {\
2905  unsigned int J;\
2906  J = _bit_scan_forward (X); \
2907  I = (bindex_t)J;\
2908 }
2909 
2910 #elif defined(_MSC_VER) && _MSC_VER>=1300
2911 #define compute_bit2idx(X, I)\
2912 {\
2913  unsigned int J;\
2914  _BitScanForward((DWORD *) &J, X);\
2915  I = (bindex_t)J;\
2916 }
2917 
2918 #elif USE_BUILTIN_FFS
2919 #define compute_bit2idx(X, I) I = ffs(X)-1
2920 
2921 #else
2922 #define compute_bit2idx(X, I)\
2923 {\
2924  unsigned int Y = X - 1;\
2925  unsigned int K = Y >> (16-4) & 16;\
2926  unsigned int N = K; Y >>= K;\
2927  N += K = Y >> (8-3) & 8; Y >>= K;\
2928  N += K = Y >> (4-2) & 4; Y >>= K;\
2929  N += K = Y >> (2-1) & 2; Y >>= K;\
2930  N += K = Y >> (1-0) & 1; Y >>= K;\
2931  I = (bindex_t)(N + Y);\
2932 }
2933 #endif /* GNUC */
2934 
2935 
2936 /* ----------------------- Runtime Check Support ------------------------- */
2937 
2938 /*
2939  For security, the main invariant is that malloc/free/etc never
2940  writes to a static address other than malloc_state, unless static
2941  malloc_state itself has been corrupted, which cannot occur via
2942  malloc (because of these checks). In essence this means that we
2943  believe all pointers, sizes, maps etc held in malloc_state, but
2944  check all of those linked or offset from other embedded data
2945  structures. These checks are interspersed with main code in a way
2946  that tends to minimize their run-time cost.
2947 
2948  When FOOTERS is defined, in addition to range checking, we also
2949  verify footer fields of inuse chunks, which can be used to guarantee
2950  that the mstate controlling malloc/free is intact. This is a
2951  streamlined version of the approach described by William Robertson
2952  et al in "Run-time Detection of Heap-based Overflows" LISA'03
2953  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2954  of an inuse chunk holds the xor of its mstate and a random seed,
2955  that is checked upon calls to free() and realloc(). This is
2956  (probabilistically) unguessable from outside the program, but can be
2957  computed by any code successfully malloc'ing any chunk, so does not
2958  itself provide protection against code that has already broken
2959  security through some other means. Unlike Robertson et al, we
2960  always dynamically check addresses of all offset chunks (previous,
2961  next, etc). This turns out to be cheaper than relying on hashes.
2962 */
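/*
  Sketch of the footer round trip described above (see mark_inuse_foot
  and get_mstate_for below): when FOOTERS is on, the footer of an inuse
  chunk p -- the prev_foot field of the following chunk -- stores
  ((size_t)m ^ mparams.magic).  free() and realloc() recover the owning
  mstate with the same xor, since (m ^ magic) ^ magic == m, and then
  ok_magic() checks that the recovered pointer really looks like an
  mstate before it is trusted.
*/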
2963 
2964 #if !INSECURE
2965 /* Check if address a is at least as high as any from MORECORE or MMAP */
2966 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2967 /* Check if address of next chunk n is higher than base chunk p */
2968 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2969 /* Check if p has inuse status */
2970 #define ok_inuse(p) is_inuse(p)
2971 /* Check if p has its pinuse bit on */
2972 #define ok_pinuse(p) pinuse(p)
2973 
2974 #else /* !INSECURE */
2975 #define ok_address(M, a) (1)
2976 #define ok_next(b, n) (1)
2977 #define ok_inuse(p) (1)
2978 #define ok_pinuse(p) (1)
2979 #endif /* !INSECURE */
2980 
2981 #if (FOOTERS && !INSECURE)
2982 /* Check if (alleged) mstate m has expected magic field */
2983 #define ok_magic(M) ((M)->magic == mparams.magic)
2984 #else /* (FOOTERS && !INSECURE) */
2985 #define ok_magic(M) (1)
2986 #endif /* (FOOTERS && !INSECURE) */
2987 
2988 
2989 /* In gcc, use __builtin_expect to minimize impact of checks */
2990 #if !INSECURE
2991 #if defined(__GNUC__) && __GNUC__ >= 3
2992 #define RTCHECK(e) __builtin_expect(e, 1)
2993 #else /* GNUC */
2994 #define RTCHECK(e) (e)
2995 #endif /* GNUC */
2996 #else /* !INSECURE */
2997 #define RTCHECK(e) (1)
2998 #endif /* !INSECURE */
2999 
3000 /* macros to set up inuse chunks with or without footers */
3001 
3002 #if !FOOTERS
3003 
3004 #define mark_inuse_foot(M,p,s)
3005 
3006 /* Macros for setting head/foot of non-mmapped chunks */
3007 
3008 /* Set cinuse bit and pinuse bit of next chunk */
3009 #define set_inuse(M,p,s)\
3010  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3011  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3012 
3013 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3014 #define set_inuse_and_pinuse(M,p,s)\
3015  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3016  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3017 
3018 /* Set size, cinuse and pinuse bit of this chunk */
3019 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3020  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3021 
3022 #else /* FOOTERS */
3023 
3024 /* Set foot of inuse chunk to be xor of mstate and seed */
3025 #define mark_inuse_foot(M,p,s)\
3026  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3027 
3028 #define get_mstate_for(p)\
3029  ((mstate)(((mchunkptr)((char*)(p) +\
3030  (chunksize(p))))->prev_foot ^ mparams.magic))
3031 
3032 #define set_inuse(M,p,s)\
3033  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3034  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3035  mark_inuse_foot(M,p,s))
3036 
3037 #define set_inuse_and_pinuse(M,p,s)\
3038  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3039  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3040  mark_inuse_foot(M,p,s))
3041 
3042 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3043  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3044  mark_inuse_foot(M, p, s))
3045 
3046 #endif /* !FOOTERS */
3047 
3048 /* ---------------------------- setting mparams -------------------------- */
3049 
3050 #ifdef ENABLE_LARGE_PAGES
3051 typedef size_t (WINAPI *GetLargePageMinimum_t)(void);
3052 #endif
3053 
3054 /* Initialize mparams */
3055 static int init_mparams(void) {
3056 #ifdef NEED_GLOBAL_LOCK_INIT
3057  if (malloc_global_mutex_status <= 0)
3058  init_malloc_global_mutex();
3059 #endif
3060 
3061  ACQUIRE_MALLOC_GLOBAL_LOCK();
3062  if (mparams.magic == 0) {
3063  size_t magic;
3064  size_t psize;
3065  size_t gsize;
3066 
3067 #ifndef WIN32
3068  psize = malloc_getpagesize;
3069  gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3070 #else /* WIN32 */
3071  {
3072  SYSTEM_INFO system_info;
3073  GetSystemInfo(&system_info);
3074  psize = system_info.dwPageSize;
3075  gsize = ((DEFAULT_GRANULARITY != 0)?
3076  DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3077 #ifdef ENABLE_LARGE_PAGES
3078  {
3079  GetLargePageMinimum_t GetLargePageMinimum_ = (GetLargePageMinimum_t) GetProcAddress(GetModuleHandle(__T("kernel32.dll")), "GetLargePageMinimum");
3080  if(GetLargePageMinimum_) {
3081  size_t largepagesize = GetLargePageMinimum_();
3082  if(largepagesize) {
3083  psize = largepagesize;
3084  gsize = ((DEFAULT_GRANULARITY != 0)?
3085  DEFAULT_GRANULARITY : largepagesize);
3086  if(gsize < largepagesize) gsize = largepagesize;
3087  }
3088  }
3089  }
3090 #endif
3091  }
3092 #endif /* WIN32 */
3093 
3094  /* Sanity-check configuration:
3095  size_t must be unsigned and as wide as pointer type.
3096  ints must be at least 4 bytes.
3097  alignment must be at least 8.
3098  Alignment, min chunk size, and page size must all be powers of 2.
3099  */
3100  if ((sizeof(size_t) != sizeof(char*)) ||
3101  (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3102  (sizeof(int) < 4) ||
3103  (MALLOC_ALIGNMENT < (size_t)8U) ||
3104  ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3105  ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3106  ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3107  ((psize & (psize-SIZE_T_ONE)) != 0))
3108  ABORT;
3109 
3110  mparams.granularity = gsize;
3111  mparams.page_size = psize;
3112  mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3113  mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3114 #if MORECORE_CONTIGUOUS
3115  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3116 #else /* MORECORE_CONTIGUOUS */
3117  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3118 #endif /* MORECORE_CONTIGUOUS */
3119 
3120 #if !ONLY_MSPACES
3121  /* Set up lock for main malloc area */
3122  gm->mflags = mparams.default_mflags;
3123  INITIAL_LOCK(&gm->mutex);
3124 #endif
3125 
3126  {
3127 #if USE_DEV_RANDOM
3128  int fd;
3129  unsigned char buf[sizeof(size_t)];
3130  /* Try to use /dev/urandom, else fall back on using time */
3131  if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3132  read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3133  magic = *((size_t *) buf);
3134  close(fd);
3135  }
3136  else
3137 #endif /* USE_DEV_RANDOM */
3138 #ifdef WIN32
3139  magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3140 #else
3141  magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3142 #endif
3143  magic |= (size_t)8U; /* ensure nonzero */
3144  magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3145  mparams.magic = magic;
3146  }
3147  }
3148 
3149  RELEASE_MALLOC_GLOBAL_LOCK();
3150  return 1;
3151 }
3152 
3153 /* support for mallopt */
3154 static int change_mparam(int param_number, int value) {
3155  size_t val;
3156  ensure_initialization();
3157  val = (value == -1)? MAX_SIZE_T : (size_t)value;
3158  switch(param_number) {
3159  case M_TRIM_THRESHOLD:
3160  mparams.trim_threshold = val;
3161  return 1;
3162  case M_GRANULARITY:
3163  if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3164  mparams.granularity = val;
3165  return 1;
3166  }
3167  else
3168  return 0;
3169  case M_MMAP_THRESHOLD:
3170  mparams.mmap_threshold = val;
3171  return 1;
3172  default:
3173  return 0;
3174  }
3175 }
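/*
  Brief usage note: change_mparam backs the public mallopt entry point
  defined later in this file (dlmallopt, unless renamed).  For example,
  mallopt(M_GRANULARITY, 131072) succeeds only if 128K is at least the
  page size and a power of two, while mallopt(M_TRIM_THRESHOLD, -1)
  maps -1 to MAX_SIZE_T and so effectively disables automatic trimming.
*/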
3176 
3177 #if DEBUG
3178 /* ------------------------- Debugging Support --------------------------- */
3179 
3180 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3181 static void do_check_any_chunk(mstate m, mchunkptr p) {
3182  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3183  assert(ok_address(m, p));
3184 }
3185 
3186 /* Check properties of top chunk */
3187 static void do_check_top_chunk(mstate m, mchunkptr p) {
3188  msegmentptr sp = segment_holding(m, (char*)p);
3189  size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3190  assert(sp != 0);
3191  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3192  assert(ok_address(m, p));
3193  assert(sz == m->topsize);
3194  assert(sz > 0);
3195  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3196  assert(pinuse(p));
3197  assert(!pinuse(chunk_plus_offset(p, sz)));
3198 }
3199 
3200 /* Check properties of (inuse) mmapped chunks */
3201 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3202  size_t sz = chunksize(p);
3203  size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3204  assert(is_mmapped(p));
3205  assert(use_mmap(m));
3206  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3207  assert(ok_address(m, p));
3208  assert(!is_small(sz));
3209  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3210  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3211  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3212 }
3213 
3214 /* Check properties of inuse chunks */
3215 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3216  do_check_any_chunk(m, p);
3217  assert(is_inuse(p));
3218  assert(next_pinuse(p));
3219  /* If not pinuse and not mmapped, previous chunk has OK offset */
3220  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3221  if (is_mmapped(p))
3222  do_check_mmapped_chunk(m, p);
3223 }
3224 
3225 /* Check properties of free chunks */
3226 static void do_check_free_chunk(mstate m, mchunkptr p) {
3227  size_t sz = chunksize(p);
3228  mchunkptr next = chunk_plus_offset(p, sz);
3229  do_check_any_chunk(m, p);
3230  assert(!is_inuse(p));
3231  assert(!next_pinuse(p));
3232  assert (!is_mmapped(p));
3233  if (p != m->dv && p != m->top) {
3234  if (sz >= MIN_CHUNK_SIZE) {
3235  assert((sz & CHUNK_ALIGN_MASK) == 0);
3236  assert(is_aligned(chunk2mem(p)));
3237  assert(next->prev_foot == sz);
3238  assert(pinuse(p));
3239  assert (next == m->top || is_inuse(next));
3240  assert(p->fd->bk == p);
3241  assert(p->bk->fd == p);
3242  }
3243  else /* markers are always of size SIZE_T_SIZE */
3244  assert(sz == SIZE_T_SIZE);
3245  }
3246 }
3247 
3248 /* Check properties of malloced chunks at the point they are malloced */
3249 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3250  if (mem != 0) {
3251  mchunkptr p = mem2chunk(mem);
3252  size_t sz = p->head & ~INUSE_BITS;
3253  do_check_inuse_chunk(m, p);
3254  assert((sz & CHUNK_ALIGN_MASK) == 0);
3255  assert(sz >= MIN_CHUNK_SIZE);
3256  assert(sz >= s);
3257  /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3258  assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3259  }
3260 }
3261 
3262 /* Check a tree and its subtrees. */
3263 static void do_check_tree(mstate m, tchunkptr t) {
3264  tchunkptr head = 0;
3265  tchunkptr u = t;
3266  bindex_t tindex = t->index;
3267  size_t tsize = chunksize(t);
3268  bindex_t idx;
3269  compute_tree_index(tsize, idx);
3270  assert(tindex == idx);
3271  assert(tsize >= MIN_LARGE_SIZE);
3272  assert(tsize >= minsize_for_tree_index(idx));
3273  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3274 
3275  do { /* traverse through chain of same-sized nodes */
3276  do_check_any_chunk(m, ((mchunkptr)u));
3277  assert(u->index == tindex);
3278  assert(chunksize(u) == tsize);
3279  assert(!is_inuse(u));
3280  assert(!next_pinuse(u));
3281  assert(u->fd->bk == u);
3282  assert(u->bk->fd == u);
3283  if (u->parent == 0) {
3284  assert(u->child[0] == 0);
3285  assert(u->child[1] == 0);
3286  }
3287  else {
3288  assert(head == 0); /* only one node on chain has parent */
3289  head = u;
3290  assert(u->parent != u);
3291  assert (u->parent->child[0] == u ||
3292  u->parent->child[1] == u ||
3293  *((tbinptr*)(u->parent)) == u);
3294  if (u->child[0] != 0) {
3295  assert(u->child[0]->parent == u);
3296  assert(u->child[0] != u);
3297  do_check_tree(m, u->child[0]);
3298  }
3299  if (u->child[1] != 0) {
3300  assert(u->child[1]->parent == u);
3301  assert(u->child[1] != u);
3302  do_check_tree(m, u->child[1]);
3303  }
3304  if (u->child[0] != 0 && u->child[1] != 0) {
3305  assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3306  }
3307  }
3308  u = u->fd;
3309  } while (u != t);
3310  assert(head != 0);
3311 }
3312 
3313 /* Check all the chunks in a treebin. */
3314 static void do_check_treebin(mstate m, bindex_t i) {
3315  tbinptr* tb = treebin_at(m, i);
3316  tchunkptr t = *tb;
3317  int empty = (m->treemap & (1U << i)) == 0;
3318  if (t == 0)
3319  assert(empty);
3320  if (!empty)
3321  do_check_tree(m, t);
3322 }
3323 
3324 /* Check all the chunks in a smallbin. */
3325 static void do_check_smallbin(mstate m, bindex_t i) {
3326  sbinptr b = smallbin_at(m, i);
3327  mchunkptr p = b->bk;
3328  unsigned int empty = (m->smallmap & (1U << i)) == 0;
3329  if (p == b)
3330  assert(empty);
3331  if (!empty) {
3332  for (; p != b; p = p->bk) {
3333  size_t size = chunksize(p);
3334  mchunkptr q;
3335  /* each chunk claims to be free */
3336  do_check_free_chunk(m, p);
3337  /* chunk belongs in bin */
3338  assert(small_index(size) == i);
3339  assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3340  /* chunk is followed by an inuse chunk */
3341  q = next_chunk(p);
3342  if (q->head != FENCEPOST_HEAD)
3343  do_check_inuse_chunk(m, q);
3344  }
3345  }
3346 }
3347 
3348 /* Find x in a bin. Used in other check functions. */
3349 static int bin_find(mstate m, mchunkptr x) {
3350  size_t size = chunksize(x);
3351  if (is_small(size)) {
3352  bindex_t sidx = small_index(size);
3353  sbinptr b = smallbin_at(m, sidx);
3354  if (smallmap_is_marked(m, sidx)) {
3355  mchunkptr p = b;
3356  do {
3357  if (p == x)
3358  return 1;
3359  } while ((p = p->fd) != b);
3360  }
3361  }
3362  else {
3363  bindex_t tidx;
3364  compute_tree_index(size, tidx);
3365  if (treemap_is_marked(m, tidx)) {
3366  tchunkptr t = *treebin_at(m, tidx);
3367  size_t sizebits = size << leftshift_for_tree_index(tidx);
3368  while (t != 0 && chunksize(t) != size) {
3369  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3370  sizebits <<= 1;
3371  }
3372  if (t != 0) {
3373  tchunkptr u = t;
3374  do {
3375  if (u == (tchunkptr)x)
3376  return 1;
3377  } while ((u = u->fd) != t);
3378  }
3379  }
3380  }
3381  return 0;
3382 }
3383 
3384 /* Traverse each chunk and check it; return total */
3385 static size_t traverse_and_check(mstate m) {
3386  size_t sum = 0;
3387  if (is_initialized(m)) {
3388  msegmentptr s = &m->seg;
3389  sum += m->topsize + TOP_FOOT_SIZE;
3390  while (s != 0) {
3391  mchunkptr q = align_as_chunk(s->base);
3392  mchunkptr lastq = 0;
3393  assert(pinuse(q));
3394  while (segment_holds(s, q) &&
3395  q != m->top && q->head != FENCEPOST_HEAD) {
3396  sum += chunksize(q);
3397  if (is_inuse(q)) {
3398  assert(!bin_find(m, q));
3399  do_check_inuse_chunk(m, q);
3400  }
3401  else {
3402  assert(q == m->dv || bin_find(m, q));
3403  assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3404  do_check_free_chunk(m, q);
3405  }
3406  lastq = q;
3407  q = next_chunk(q);
3408  }
3409  s = s->next;
3410  }
3411  }
3412  return sum;
3413 }
3414 
3415 /* Check all properties of malloc_state. */
3416 static void do_check_malloc_state(mstate m) {
3417  bindex_t i;
3418  size_t total;
3419  /* check bins */
3420  for (i = 0; i < NSMALLBINS; ++i)
3421  do_check_smallbin(m, i);
3422  for (i = 0; i < NTREEBINS; ++i)
3423  do_check_treebin(m, i);
3424 
3425  if (m->dvsize != 0) { /* check dv chunk */
3426  do_check_any_chunk(m, m->dv);
3427  assert(m->dvsize == chunksize(m->dv));
3428  assert(m->dvsize >= MIN_CHUNK_SIZE);
3429  assert(bin_find(m, m->dv) == 0);
3430  }
3431 
3432  if (m->top != 0) { /* check top chunk */
3433  do_check_top_chunk(m, m->top);
3434  /*assert(m->topsize == chunksize(m->top)); redundant */
3435  assert(m->topsize > 0);
3436  assert(bin_find(m, m->top) == 0);
3437  }
3438 
3439  total = traverse_and_check(m);
3440  assert(total <= m->footprint);
3441  assert(m->footprint <= m->max_footprint);
3442 }
3443 #endif /* DEBUG */
3444 
3445 /* ----------------------------- statistics ------------------------------ */
3446 
3447 #if !NO_MALLINFO
3448 static struct mallinfo internal_mallinfo(mstate m) {
3449  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3450  ensure_initialization();
3451  if (!PREACTION(m)) {
3452  check_malloc_state(m);
3453  if (is_initialized(m)) {
3454  size_t nfree = SIZE_T_ONE; /* top always free */
3455  size_t mfree = m->topsize + TOP_FOOT_SIZE;
3456  size_t sum = mfree;
3457  msegmentptr s = &m->seg;
3458  while (s != 0) {
3459  mchunkptr q = align_as_chunk(s->base);
3460  while (segment_holds(s, q) &&
3461  q != m->top && q->head != FENCEPOST_HEAD) {
3462  size_t sz = chunksize(q);
3463  sum += sz;
3464  if (!is_inuse(q)) {
3465  mfree += sz;
3466  ++nfree;
3467  }
3468  q = next_chunk(q);
3469  }
3470  s = s->next;
3471  }
3472 
3473  nm.arena = sum;
3474  nm.ordblks = nfree;
3475  nm.hblkhd = m->footprint - sum;
3476  nm.usmblks = m->max_footprint;
3477  nm.uordblks = m->footprint - mfree;
3478  nm.fordblks = mfree;
3479  nm.keepcost = m->topsize;
3480  }
3481 
3482  POSTACTION(m);
3483  }
3484  return nm;
3485 }
3486 #endif /* !NO_MALLINFO */
3487 
3488 static void internal_malloc_stats(mstate m) {
3489  ensure_initialization();
3490  if (!PREACTION(m)) {
3491  size_t maxfp = 0;
3492  size_t fp = 0;
3493  size_t used = 0;
3494  check_malloc_state(m);
3495  if (is_initialized(m)) {
3496  msegmentptr s = &m->seg;
3497  maxfp = m->max_footprint;
3498  fp = m->footprint;
3499  used = fp - (m->topsize + TOP_FOOT_SIZE);
3500 
3501  while (s != 0) {
3502  mchunkptr q = align_as_chunk(s->base);
3503  while (segment_holds(s, q) &&
3504  q != m->top && q->head != FENCEPOST_HEAD) {
3505  if (!is_inuse(q))
3506  used -= chunksize(q);
3507  q = next_chunk(q);
3508  }
3509  s = s->next;
3510  }
3511  }
3512 
3513  fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3514  fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3515  fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3516 
3517  POSTACTION(m);
3518  }
3519 }
3520 
3521 /* ----------------------- Operations on smallbins ----------------------- */
3522 
3523 /*
3524  Various forms of linking and unlinking are defined as macros, even
3525  the ones for trees, which are very long but have very short typical
3526  paths. This is ugly but reduces reliance on the inlining support of
3527  compilers.
3528 */
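/*
  Note on the smallbin macros below: each smallbin is a circular
  doubly-linked list whose header (smallbin_at(M,I)) acts as a
  sentinel.  insert_small_chunk links a freed chunk in directly after
  the header, and the corresponding smallmap bit is set so that lookups
  can skip empty bins without touching the lists themselves.
*/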
3529 
3530 /* Link a free chunk into a smallbin */
3531 #define insert_small_chunk(M, P, S) {\
3532  bindex_t I = small_index(S);\
3533  mchunkptr B = smallbin_at(M, I);\
3534  mchunkptr F = B;\
3535  assert(S >= MIN_CHUNK_SIZE);\
3536  if (!smallmap_is_marked(M, I))\
3537  mark_smallmap(M, I);\
3538  else if (RTCHECK(ok_address(M, B->fd)))\
3539  F = B->fd;\
3540  else {\
3541  CORRUPTION_ERROR_ACTION(M);\
3542  }\
3543  B->fd = P;\
3544  F->bk = P;\
3545  P->fd = F;\
3546  P->bk = B;\
3547 }
3548 
3549 /* Unlink a chunk from a smallbin */
3550 #define unlink_small_chunk(M, P, S) {\
3551  mchunkptr F = P->fd;\
3552  mchunkptr B = P->bk;\
3553  bindex_t I = small_index(S);\
3554  assert(P != B);\
3555  assert(P != F);\
3556  assert(chunksize(P) == small_index2size(I));\
3557  if (F == B)\
3558  clear_smallmap(M, I);\
3559  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3560  (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3561  F->bk = B;\
3562  B->fd = F;\
3563  }\
3564  else {\
3565  CORRUPTION_ERROR_ACTION(M);\
3566  }\
3567 }
3568 
3569 /* Unlink the first chunk from a smallbin */
3570 #define unlink_first_small_chunk(M, B, P, I) {\
3571  mchunkptr F = P->fd;\
3572  assert(P != B);\
3573  assert(P != F);\
3574  assert(chunksize(P) == small_index2size(I));\
3575  if (B == F)\
3576  clear_smallmap(M, I);\
3577  else if (RTCHECK(ok_address(M, F))) {\
3578  B->fd = F;\
3579  F->bk = B;\
3580  }\
3581  else {\
3582  CORRUPTION_ERROR_ACTION(M);\
3583  }\
3584 }
3585 
3586 
3587 
3588 /* Replace dv node, binning the old one */
3589 /* Used only when dvsize known to be small */
3590 #define replace_dv(M, P, S) {\
3591  size_t DVS = M->dvsize;\
3592  if (DVS != 0) {\
3593  mchunkptr DV = M->dv;\
3594  assert(is_small(DVS));\
3595  insert_small_chunk(M, DV, DVS);\
3596  }\
3597  M->dvsize = S;\
3598  M->dv = P;\
3599 }
3600 
3601 /* ------------------------- Operations on trees ------------------------- */
3602 
3603 /* Insert chunk into tree */
3604 #define insert_large_chunk(M, X, S) {\
3605  tbinptr* H;\
3606  bindex_t I;\
3607  compute_tree_index(S, I);\
3608  H = treebin_at(M, I);\
3609  X->index = I;\
3610  X->child[0] = X->child[1] = 0;\
3611  if (!treemap_is_marked(M, I)) {\
3612  mark_treemap(M, I);\
3613  *H = X;\
3614  X->parent = (tchunkptr)H;\
3615  X->fd = X->bk = X;\
3616  }\
3617  else {\
3618  tchunkptr T = *H;\
3619  size_t K = S << leftshift_for_tree_index(I);\
3620  for (;;) {\
3621  if (chunksize(T) != S) {\
3622  tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3623  K <<= 1;\
3624  if (*C != 0)\
3625  T = *C;\
3626  else if (RTCHECK(ok_address(M, C))) {\
3627  *C = X;\
3628  X->parent = T;\
3629  X->fd = X->bk = X;\
3630  break;\
3631  }\
3632  else {\
3633  CORRUPTION_ERROR_ACTION(M);\
3634  break;\
3635  }\
3636  }\
3637  else {\
3638  tchunkptr F = T->fd;\
3639  if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3640  T->fd = F->bk = X;\
3641  X->fd = F;\
3642  X->bk = T;\
3643  X->parent = 0;\
3644  break;\
3645  }\
3646  else {\
3647  CORRUPTION_ERROR_ACTION(M);\
3648  break;\
3649  }\
3650  }\
3651  }\
3652  }\
3653 }
3654 
3655 /*
3656  Unlink steps:
3657 
3658  1. If x is a chained node, unlink it from its same-sized fd/bk links
3659  and choose its bk node as its replacement.
3660  2. If x was the last node of its size, but not a leaf node, it must
3661  be replaced with a leaf node (not merely one with an open left or
3662  right), to make sure that lefts and rights of descendants
3663  correspond properly to bit masks. We use the rightmost descendant
3664  of x. We could use any other leaf, but this is easy to locate and
3665  tends to counteract removal of leftmosts elsewhere, and so keeps
3666  paths shorter than minimally guaranteed. This doesn't loop much
3667  because on average a node in a tree is near the bottom.
3668  3. If x is the base of a chain (i.e., has parent links) relink
3669  x's parent and children to x's replacement (or null if none).
3670 */
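/*
  In the macro below, step 2's "rightmost descendant" is found by the
  inner while loop that repeatedly follows child[1] (or child[0] when
  there is no right child) until it reaches a leaf.  Since a leaf has
  no descendants of its own, substituting it for x cannot violate the
  left/right bit-mask invariants described above.
*/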
3671 
3672 #define unlink_large_chunk(M, X) {\
3673  tchunkptr XP = X->parent;\
3674  tchunkptr R;\
3675  if (X->bk != X) {\
3676  tchunkptr F = X->fd;\
3677  R = X->bk;\
3678  if (RTCHECK(ok_address(M, F))) {\
3679  F->bk = R;\
3680  R->fd = F;\
3681  }\
3682  else {\
3683  CORRUPTION_ERROR_ACTION(M);\
3684  }\
3685  }\
3686  else {\
3687  tchunkptr* RP;\
3688  if (((R = *(RP = &(X->child[1]))) != 0) ||\
3689  ((R = *(RP = &(X->child[0]))) != 0)) {\
3690  tchunkptr* CP;\
3691  while ((*(CP = &(R->child[1])) != 0) ||\
3692  (*(CP = &(R->child[0])) != 0)) {\
3693  R = *(RP = CP);\
3694  }\
3695  if (RTCHECK(ok_address(M, RP)))\
3696  *RP = 0;\
3697  else {\
3698  CORRUPTION_ERROR_ACTION(M);\
3699  }\
3700  }\
3701  }\
3702  if (XP != 0) {\
3703  tbinptr* H = treebin_at(M, X->index);\
3704  if (X == *H) {\
3705  if ((*H = R) == 0) \
3706  clear_treemap(M, X->index);\
3707  }\
3708  else if (RTCHECK(ok_address(M, XP))) {\
3709  if (XP->child[0] == X) \
3710  XP->child[0] = R;\
3711  else \
3712  XP->child[1] = R;\
3713  }\
3714  else\
3715  CORRUPTION_ERROR_ACTION(M);\
3716  if (R != 0) {\
3717  if (RTCHECK(ok_address(M, R))) {\
3718  tchunkptr C0, C1;\
3719  R->parent = XP;\
3720  if ((C0 = X->child[0]) != 0) {\
3721  if (RTCHECK(ok_address(M, C0))) {\
3722  R->child[0] = C0;\
3723  C0->parent = R;\
3724  }\
3725  else\
3726  CORRUPTION_ERROR_ACTION(M);\
3727  }\
3728  if ((C1 = X->child[1]) != 0) {\
3729  if (RTCHECK(ok_address(M, C1))) {\
3730  R->child[1] = C1;\
3731  C1->parent = R;\
3732  }\
3733  else\
3734  CORRUPTION_ERROR_ACTION(M);\
3735  }\
3736  }\
3737  else\
3738  CORRUPTION_ERROR_ACTION(M);\
3739  }\
3740  }\
3741 }
3742 
3743 /* Relays to large vs small bin operations */
3744 
3745 #define insert_chunk(M, P, S)\
3746  if (is_small(S)) insert_small_chunk(M, P, S)\
3747  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3748 
3749 #define unlink_chunk(M, P, S)\
3750  if (is_small(S)) unlink_small_chunk(M, P, S)\
3751  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3752 
3753 
3754 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3755 
3756 #if ONLY_MSPACES
3757 #define internal_malloc(m, b) mspace_malloc(m, b)
3758 #define internal_free(m, mem) mspace_free(m,mem);
3759 #else /* ONLY_MSPACES */
3760 #if MSPACES
3761 #define internal_malloc(m, b)\
3762  (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3763 #define internal_free(m, mem)\
3764  if (m == gm) dlfree(mem); else mspace_free(m,mem);
3765 #else /* MSPACES */
3766 #define internal_malloc(m, b) dlmalloc(b)
3767 #define internal_free(m, mem) dlfree(mem)
3768 #endif /* MSPACES */
3769 #endif /* ONLY_MSPACES */
3770 
3771 /* ----------------------- Direct-mmapping chunks ----------------------- */
3772 
3773 /*
3774  Directly mmapped chunks are set up with an offset to the start of
3775  the mmapped region stored in the prev_foot field of the chunk. This
3776  allows reconstruction of the required argument to MUNMAP when freed,
3777  and also allows adjustment of the returned chunk to meet alignment
3778  requirements (especially in memalign).
3779 */
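/*
  Layout sketch for a directly mmapped chunk (matching mmap_alloc and
  mmap_resize below): if mm is the address returned by mmap and
  offset = align_offset(chunk2mem(mm)), the chunk lives at
  p = mm + offset with p->prev_foot = offset.  On free, the original
  mapping can be reconstructed as base = (char*)p - p->prev_foot and
  length = chunksize(p) + p->prev_foot + MMAP_FOOT_PAD, which is what
  do_check_mmapped_chunk verifies above.
*/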
3780 
3781 /* Malloc using mmap */
3782 static void* mmap_alloc(mstate m, size_t nb) {
3783  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3784  if (mmsize > nb) { /* Check for wrap around 0 */
3785  char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3786  if (mm != CMFAIL) {
3787  size_t offset = align_offset(chunk2mem(mm));
3788  size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3789  mchunkptr p = (mchunkptr)(mm + offset);
3790  p->prev_foot = offset;
3791  p->head = psize;
3792  mark_inuse_foot(m, p, psize);
3793  chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3794  chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3795 
3796  if (m->least_addr == 0 || mm < m->least_addr)
3797  m->least_addr = mm;
3798  if ((m->footprint += mmsize) > m->max_footprint)
3799  m->max_footprint = m->footprint;
3800  assert(is_aligned(chunk2mem(p)));
3801  check_mmapped_chunk(m, p);
3802  return chunk2mem(p);
3803  }
3804  }
3805  return 0;
3806 }
3807 
3808 /* Realloc using mmap */
3809 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3810  size_t oldsize = chunksize(oldp);
3811  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3812  return 0;
3813  /* Keep old chunk if big enough but not too big */
3814  if (oldsize >= nb + SIZE_T_SIZE &&
3815  (oldsize - nb) <= (mparams.granularity << 1))
3816  return oldp;
3817  else {
3818  size_t offset = oldp->prev_foot;
3819  size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3820  size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3821  char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3822  oldmmsize, newmmsize, 1);
3823  if (cp != CMFAIL) {
3824  mchunkptr newp = (mchunkptr)(cp + offset);
3825  size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3826  newp->head = psize;
3827  mark_inuse_foot(m, newp, psize);
3828  chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3829  chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3830 
3831  if (cp < m->least_addr)
3832  m->least_addr = cp;
3833  if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3834  m->max_footprint = m->footprint;
3835  check_mmapped_chunk(m, newp);
3836  return newp;
3837  }
3838  }
3839  return 0;
3840 }
3841 
3842 /* -------------------------- mspace management -------------------------- */
3843 
3844 /* Initialize top chunk and its size */
3845 static void init_top(mstate m, mchunkptr p, size_t psize) {
3846  /* Ensure alignment */
3847  size_t offset = align_offset(chunk2mem(p));
3848  p = (mchunkptr)((char*)p + offset);
3849  psize -= offset;
3850 
3851  m->top = p;
3852  m->topsize = psize;
3853  p->head = psize | PINUSE_BIT;
3854  /* set size of fake trailing chunk holding overhead space only once */
3855  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3856  m->trim_check = mparams.trim_threshold; /* reset on each update */
3857 }
3858 
3859 /* Initialize bins for a new mstate that is otherwise zeroed out */
3860 static void init_bins(mstate m) {
3861  /* Establish circular links for smallbins */
3862  bindex_t i;
3863  for (i = 0; i < NSMALLBINS; ++i) {
3864  sbinptr bin = smallbin_at(m,i);
3865  bin->fd = bin->bk = bin;
3866  }
3867 }
3868 
3869 #if PROCEED_ON_ERROR
3870 
3871 /* default corruption action */
3872 static void reset_on_error(mstate m) {
3873  int i;
3874  ++malloc_corruption_error_count;
3875  /* Reinitialize fields to forget about all memory */
3876  m->smallbins = m->treebins = 0;
3877  m->dvsize = m->topsize = 0;
3878  m->seg.base = 0;
3879  m->seg.size = 0;
3880  m->seg.next = 0;
3881  m->top = m->dv = 0;
3882  for (i = 0; i < NTREEBINS; ++i)
3883  *treebin_at(m, i) = 0;
3884  init_bins(m);
3885 }
3886 #endif /* PROCEED_ON_ERROR */
3887 
3888 /* Allocate a chunk at newbase and prepend the remainder to the first chunk at the successor (old) base. */
3889 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3890  size_t nb) {
3891  mchunkptr p = align_as_chunk(newbase);
3892  mchunkptr oldfirst = align_as_chunk(oldbase);
3893  size_t psize = (char*)oldfirst - (char*)p;
3894  mchunkptr q = chunk_plus_offset(p, nb);
3895  size_t qsize = psize - nb;
3896  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3897 
3898  assert((char*)oldfirst > (char*)q);
3899  assert(pinuse(oldfirst));
3900  assert(qsize >= MIN_CHUNK_SIZE);
3901 
3902  /* consolidate remainder with first chunk of old base */
3903  if (oldfirst == m->top) {
3904  size_t tsize = m->topsize += qsize;
3905  m->top = q;
3906  q->head = tsize | PINUSE_BIT;
3907  check_top_chunk(m, q);
3908  }
3909  else if (oldfirst == m->dv) {
3910  size_t dsize = m->dvsize += qsize;
3911  m->dv = q;
3912  set_size_and_pinuse_of_free_chunk(q, dsize);
3913  }
3914  else {
3915  if (!is_inuse(oldfirst)) {
3916  size_t nsize = chunksize(oldfirst);
3917  unlink_chunk(m, oldfirst, nsize);
3918  oldfirst = chunk_plus_offset(oldfirst, nsize);
3919  qsize += nsize;
3920  }
3921  set_free_with_pinuse(q, qsize, oldfirst);
3922  insert_chunk(m, q, qsize);
3923  check_free_chunk(m, q);
3924  }
3925 
3926  check_malloced_chunk(m, chunk2mem(p), nb);
3927  return chunk2mem(p);
3928 }
3929 
3930 /* Add a segment to hold a new noncontiguous region */
3931 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3932  /* Determine locations and sizes of segment, fenceposts, old top */
3933  char* old_top = (char*)m->top;
3934  msegmentptr oldsp = segment_holding(m, old_top);
3935  char* old_end = oldsp->base + oldsp->size;
3936  size_t ssize = pad_request(sizeof(struct malloc_segment));
3937  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3938  size_t offset = align_offset(chunk2mem(rawsp));
3939  char* asp = rawsp + offset;
3940  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3941  mchunkptr sp = (mchunkptr)csp;
3942  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3943  mchunkptr tnext = chunk_plus_offset(sp, ssize);
3944  mchunkptr p = tnext;
3945  int nfences = 0;
3946 
3947  /* reset top to new space */
3948  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3949 
3950  /* Set up segment record */
3951  assert(is_aligned(ss));
3952  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3953  *ss = m->seg; /* Push current record */
3954  m->seg.base = tbase;
3955  m->seg.size = tsize;
3956  m->seg.sflags = mmapped;
3957  m->seg.next = ss;
3958 
3959  /* Insert trailing fenceposts */
3960  for (;;) {
3961  mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3962  p->head = FENCEPOST_HEAD;
3963  ++nfences;
3964  if ((char*)(&(nextp->head)) < old_end)
3965  p = nextp;
3966  else
3967  break;
3968  }
3969  assert(nfences >= 2);
3970 
3971  /* Insert the rest of old top into a bin as an ordinary free chunk */
3972  if (csp != old_top) {
3973  mchunkptr q = (mchunkptr)old_top;
3974  size_t psize = csp - old_top;
3975  mchunkptr tn = chunk_plus_offset(q, psize);
3976  set_free_with_pinuse(q, psize, tn);
3977  insert_chunk(m, q, psize);
3978  }
3979 
3980  check_top_chunk(m, m->top);
3981 }
3982 
3983 /* -------------------------- System allocation -------------------------- */
3984 
3985 /* Get memory from system using MORECORE or MMAP */
3986 static void* sys_alloc(mstate m, size_t nb) {
3987  char* tbase = CMFAIL;
3988  size_t tsize = 0;
3989  flag_t mmap_flag = 0;
3990 
3991  ensure_initialization();
3992 
3993  /* Directly map large chunks, but only if already initialized */
3994  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
3995  void* mem = mmap_alloc(m, nb);
3996  if (mem != 0)
3997  return mem;
3998  }
3999 
4000  /*
4001  Try getting memory in any of three ways (in most-preferred to
4002  least-preferred order):
4003  1. A call to MORECORE that can normally contiguously extend memory.
4004  (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4005  main space is mmapped or a previous contiguous call failed)
4006  2. A call to MMAP new space (disabled if not HAVE_MMAP).
4007  Note that under the default settings, if MORECORE is unable to
4008  fulfill a request, and HAVE_MMAP is true, then mmap is
4009  used as a noncontiguous system allocator. This is a useful backup
4010  strategy for systems with holes in address spaces -- in this case
4011  sbrk cannot contiguously expand the heap, but mmap may be able to
4012  find space.
4013  3. A call to MORECORE that cannot usually contiguously extend memory.
4014  (disabled if not HAVE_MORECORE)
4015 
4016  In all cases, we need to request enough bytes from system to ensure
4017  we can malloc nb bytes upon success, so pad with enough space for
4018  top_foot, plus alignment-pad to make sure we don't lose bytes if
4019  not on boundary, and round this up to a granularity unit.
4020  */
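/*
  Rough illustration of the padding arithmetic used in the paths below
  (under the same 64-bit assumptions as the TOP_FOOT_SIZE note above,
  and a 4096-byte granularity): a request with nb = 10000 bytes becomes
  a system request on the order of 12288 bytes once TOP_FOOT_SIZE and
  alignment padding are added and the sum is rounded up to a multiple
  of mparams.granularity as computed in init_mparams.
*/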
4021 
4022  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4023  char* br = CMFAIL;
4024  msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4025  size_t asize = 0;
4026  ACQUIRE_MALLOC_GLOBAL_LOCK();
4027 
4028  if (ss == 0) { /* First time through or recovery */
4029  char* base = (char*)CALL_MORECORE(0);
4030  if (base != CMFAIL) {
4031  asize = granularity_align(nb + SYS_ALLOC_PADDING);
4032  /* Adjust to end on a page boundary */
4033  if (!is_page_aligned(base))
4034  asize += (page_align((size_t)base) - (size_t)base);
4035  /* Can't call MORECORE if size is negative when treated as signed */
4036  if (asize < HALF_MAX_SIZE_T &&
4037  (br = (char*)(CALL_MORECORE(asize))) == base) {
4038  tbase = base;
4039  tsize = asize;
4040  }
4041  }
4042  }
4043  else {
4044  /* Subtract out existing available top space from MORECORE request. */
4045  asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4046  /* Use mem here only if it did contiguously extend old space */
4047  if (asize < HALF_MAX_SIZE_T &&
4048  (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
4049  tbase = br;
4050  tsize = asize;
4051  }
4052  }
4053 
4054  if (tbase == CMFAIL) { /* Cope with partial failure */
4055  if (br != CMFAIL) { /* Try to use/extend the space we did get */
4056  if (asize < HALF_MAX_SIZE_T &&
4057  asize < nb + SYS_ALLOC_PADDING) {
4058  size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
4059  if (esize < HALF_MAX_SIZE_T) {
4060  char* end = (char*)CALL_MORECORE(esize);
4061  if (end != CMFAIL)
4062  asize += esize;
4063  else { /* Can't use; try to release */
4064  (void) CALL_MORECORE(-asize);
4065  br = CMFAIL;
4066  }
4067  }
4068  }
4069  }
4070  if (br != CMFAIL) { /* Use the space we did get */
4071  tbase = br;
4072  tsize = asize;
4073  }
4074  else
4075  disable_contiguous(m); /* Don't try contiguous path in the future */
4076  }
4077 
4078  RELEASE_MALLOC_GLOBAL_LOCK();
4079  }
4080 
4081  if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4082  size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
4083  if (rsize > nb) { /* Fail if wraps around zero */
4084  char* mp = (char*)(CALL_MMAP(rsize));
4085  if (mp != CMFAIL) {
4086  tbase = mp;
4087  tsize = rsize;
4088  mmap_flag = USE_MMAP_BIT;
4089  }
4090  }
4091  }
4092 
4093  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4094  size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);
4095  if (asize < HALF_MAX_SIZE_T) {
4096  char* br = CMFAIL;
4097  char* end = CMFAIL;
4098  ACQUIRE_MALLOC_GLOBAL_LOCK();
4099  br = (char*)(CALL_MORECORE(asize));
4100  end = (char*)(CALL_MORECORE(0));
4101  RELEASE_MALLOC_GLOBAL_LOCK();
4102  if (br != CMFAIL && end != CMFAIL && br < end) {
4103  size_t ssize = end - br;
4104  if (ssize > nb + TOP_FOOT_SIZE) {
4105  tbase = br;
4106  tsize = ssize;
4107  }
4108  }
4109  }
4110  }
4111 
4112  if (tbase != CMFAIL) {
4113 
4114  if ((m->footprint += tsize) > m->max_footprint)
4115  m->max_footprint = m->footprint;
4116 
4117  if (!is_initialized(m)) { /* first-time initialization */
4118  if (m->least_addr == 0 || tbase < m->least_addr)
4119  m->least_addr = tbase;
4120  m->seg.base = tbase;
4121  m->seg.size = tsize;
4122  m->seg.sflags = mmap_flag;
4123  m->magic = mparams.magic;
4124  m->release_checks = MAX_RELEASE_CHECK_RATE;
4125  init_bins(m);
4126 #if !ONLY_MSPACES
4127  if (is_global(m))
4128  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4129  else
4130 #endif
4131  {
4132  /* Offset top by embedded malloc_state */
4133  mchunkptr mn = next_chunk(mem2chunk(m));
4134  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4135  }
4136  }
4137 
4138  else {
4139  /* Try to merge with an existing segment */
4140  msegmentptr sp = &m->seg;
4141  /* Only consider most recent segment if traversal suppressed */
4142  while (sp != 0 && tbase != sp->base + sp->size)
4143  sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4144  if (sp != 0 &&
4145  !is_extern_segment(sp) &&
4146  (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4147  segment_holds(sp, m->top)) { /* append */
4148  sp->size += tsize;
4149  init_top(m, m->top, m->topsize + tsize);
4150  }
4151  else {
4152  if (tbase < m->least_addr)
4153  m->least_addr = tbase;
4154  sp = &m->seg;
4155  while (sp != 0 && sp->base != tbase + tsize)
4156  sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4157  if (sp != 0 &&
4158  !is_extern_segment(sp) &&
4159  (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4160  char* oldbase = sp->base;
4161  sp->base = tbase;
4162  sp->size += tsize;
4163  return prepend_alloc(m, tbase, oldbase, nb);
4164  }
4165  else
4166  add_segment(m, tbase, tsize, mmap_flag);
4167  }
4168  }
4169 
4170  if (nb < m->topsize) { /* Allocate from new or extended top space */
4171  size_t rsize = m->topsize -= nb;
4172  mchunkptr p = m->top;
4173  mchunkptr r = m->top = chunk_plus_offset(p, nb);
4174  r->head = rsize | PINUSE_BIT;
4175  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4176  check_top_chunk(m, m->top);
4177  check_malloced_chunk(m, chunk2mem(p), nb);
4178  return chunk2mem(p);
4179  }
4180  }
4181 
4182  MALLOC_FAILURE_ACTION;
4183  return 0;
4184 }
4185 
4186 /* ----------------------- system deallocation -------------------------- */
4187 
4188 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4189 static size_t release_unused_segments(mstate m) {
4190  size_t released = 0;
4191  int nsegs = 0;
4192  msegmentptr pred = &m->seg;
4193  msegmentptr sp = pred->next;
4194  while (sp != 0) {
4195  char* base = sp->base;
4196  size_t size = sp->size;
4197  msegmentptr next = sp->next;
4198  ++nsegs;
4199  if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4200  mchunkptr p = align_as_chunk(base);
4201  size_t psize = chunksize(p);
4202  /* Can unmap if first chunk holds entire segment and not pinned */
4203  if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4204  tchunkptr tp = (tchunkptr)p;
4205  assert(segment_holds(sp, (char*)sp));
4206  if (p == m->dv) {
4207  m->dv = 0;
4208  m->dvsize = 0;
4209  }
4210  else {
4211  unlink_large_chunk(m, tp);
4212  }
4213  if (CALL_MUNMAP(base, size) == 0) {
4214  released += size;
4215  m->footprint -= size;
4216  /* unlink obsoleted record */
4217  sp = pred;
4218  sp->next = next;
4219  }
4220  else { /* back out if cannot unmap */
4221  insert_large_chunk(m, tp, psize);
4222  }
4223  }
4224  }
4225  if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4226  break;
4227  pred = sp;
4228  sp = next;
4229  }
4230  /* Reset check counter */
4231  m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
4232  nsegs : MAX_RELEASE_CHECK_RATE);
4233  return released;
4234 }
4235 
4236 static int sys_trim(mstate m, size_t pad) {
4237  size_t released = 0;
4238  ensure_initialization();
4239  if (pad < MAX_REQUEST && is_initialized(m)) {
4240  pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4241 
4242  if (m->topsize > pad) {
4243  /* Shrink top space in granularity-size units, keeping at least one */
4244  size_t unit = mparams.granularity;
4245  size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4246  SIZE_T_ONE) * unit;
4247  msegmentptr sp = segment_holding(m, (char*)m->top);
4248 
4249  if (!is_extern_segment(sp)) {
4250  if (is_mmapped_segment(sp)) {
4251  if (HAVE_MMAP &&
4252  sp->size >= extra &&
4253  !has_segment_link(m, sp)) { /* can't shrink if pinned */
4254  size_t newsize = sp->size - extra;
4255  /* Prefer mremap, fall back to munmap */
4256  if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4257  (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4258  released = extra;
4259  }
4260  }
4261  }
4262  else if (HAVE_MORECORE) {
4263  if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4264  extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4265  ACQUIRE_MALLOC_GLOBAL_LOCK();
4266  {
4267  /* Make sure end of memory is where we last set it. */
4268  char* old_br = (char*)(CALL_MORECORE(0));
4269  if (old_br == sp->base + sp->size) {
4270  char* rel_br = (char*)(CALL_MORECORE(-extra));
4271  char* new_br = (char*)(CALL_MORECORE(0));
4272  if (rel_br != CMFAIL && new_br < old_br)
4273  released = old_br - new_br;
4274  }
4275  }
4276  RELEASE_MALLOC_GLOBAL_LOCK();
4277  }
4278  }
4279 
4280  if (released != 0) {
4281  sp->size -= released;
4282  m->footprint -= released;
4283  init_top(m, m->top, m->topsize - released);
4284  check_top_chunk(m, m->top);
4285  }
4286  }
4287 
4288  /* Unmap any unused mmapped segments */
4289  if (HAVE_MMAP)
4290  released += release_unused_segments(m);
4291 
4292  /* On failure, disable autotrim to avoid repeated failed future calls */
4293  if (released == 0 && m->topsize > m->trim_check)
4294  m->trim_check = MAX_SIZE_T;
4295  }
4296 
4297  return (released != 0)? 1 : 0;
4298 }
4299 
4300 
4301 /* ---------------------------- malloc support --------------------------- */
4302 
4303 /* allocate a large request from the best fitting chunk in a treebin */
4304 static void* tmalloc_large(mstate m, size_t nb) {
4305  tchunkptr v = 0;
4306  size_t rsize = -nb; /* Unsigned negation */
4307  tchunkptr t;
4308  bindex_t idx;
4309  compute_tree_index(nb, idx);
4310  if ((t = *treebin_at(m, idx)) != 0) {
4311  /* Traverse tree for this bin looking for node with size == nb */
4312  size_t sizebits = nb << leftshift_for_tree_index(idx);
4313  tchunkptr rst = 0; /* The deepest untaken right subtree */
4314  for (;;) {
4315  tchunkptr rt;
4316  size_t trem = chunksize(t) - nb;
4317  if (trem < rsize) {
4318  v = t;
4319  if ((rsize = trem) == 0)
4320  break;
4321  }
4322  rt = t->child[1];
4323  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4324  if (rt != 0 && rt != t)
4325  rst = rt;
4326  if (t == 0) {
4327  t = rst; /* set t to least subtree holding sizes > nb */
4328  break;
4329  }
4330  sizebits <<= 1;
4331  }
4332  }
4333  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4334  binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4335  if (leftbits != 0) {
4336  bindex_t i;
4337  binmap_t leastbit = least_bit(leftbits);
4338  compute_bit2idx(leastbit, i);
4339  t = *treebin_at(m, i);
4340  }
4341  }
4342 
4343  while (t != 0) { /* find smallest of tree or subtree */
4344  size_t trem = chunksize(t) - nb;
4345  if (trem < rsize) {
4346  rsize = trem;
4347  v = t;
4348  }
4349  t = leftmost_child(t);
4350  }
4351 
4352  /* If dv is a better fit, return 0 so malloc will use it */
4353  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4354  if (RTCHECK(ok_address(m, v))) { /* split */
4355  mchunkptr r = chunk_plus_offset(v, nb);
4356  assert(chunksize(v) == rsize + nb);
4357  if (RTCHECK(ok_next(v, r))) {
4358  unlink_large_chunk(m, v);
4359  if (rsize < MIN_CHUNK_SIZE)
4360  set_inuse_and_pinuse(m, v, (rsize + nb));
4361  else {
4362  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4363  set_size_and_pinuse_of_free_chunk(r, rsize);
4364  insert_chunk(m, r, rsize);
4365  }
4366  return chunk2mem(v);
4367  }
4368  }
4369  CORRUPTION_ERROR_ACTION(m);
4370  }
4371  return 0;
4372 }
4373 
4374 /* allocate a small request from the best fitting chunk in a treebin */
4375 static void* tmalloc_small(mstate m, size_t nb) {
4376  tchunkptr t, v;
4377  size_t rsize;
4378  bindex_t i;
4379  binmap_t leastbit = least_bit(m->treemap);
4380  compute_bit2idx(leastbit, i);
4381  v = t = *treebin_at(m, i);
4382  rsize = chunksize(t) - nb;
4383 
4384  while ((t = leftmost_child(t)) != 0) {
4385  size_t trem = chunksize(t) - nb;
4386  if (trem < rsize) {
4387  rsize = trem;
4388  v = t;
4389  }
4390  }
4391 
4392  if (RTCHECK(ok_address(m, v))) {
4393  mchunkptr r = chunk_plus_offset(v, nb);
4394  assert(chunksize(v) == rsize + nb);
4395  if (RTCHECK(ok_next(v, r))) {
4396  unlink_large_chunk(m, v);
4397  if (rsize < MIN_CHUNK_SIZE)
4398  set_inuse_and_pinuse(m, v, (rsize + nb));
4399  else {
4400  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4401  set_size_and_pinuse_of_free_chunk(r, rsize);
4402  replace_dv(m, r, rsize);
4403  }
4404  return chunk2mem(v);
4405  }
4406  }
4407 
4408  CORRUPTION_ERROR_ACTION(m);
4409  return 0;
4410 }
4411 
4412 /* --------------------------- realloc support --------------------------- */
4413 
4414 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
4415  if (bytes >= MAX_REQUEST) {
4416  MALLOC_FAILURE_ACTION;
4417  return 0;
4418  }
4419  if (!PREACTION(m)) {
4420  mchunkptr oldp = mem2chunk(oldmem);
4421  size_t oldsize = chunksize(oldp);
4422  mchunkptr next = chunk_plus_offset(oldp, oldsize);
4423  mchunkptr newp = 0;
4424  void* extra = 0;
4425 
4426  /* Try to either shrink or extend into top. Else malloc-copy-free */
4427 
4428  if (RTCHECK(ok_address(m, oldp) && ok_inuse(oldp) &&
4429  ok_next(oldp, next) && ok_pinuse(next))) {
4430  size_t nb = request2size(bytes);
4431  if (is_mmapped(oldp))
4432  newp = mmap_resize(m, oldp, nb);
4433  else if (oldsize >= nb) { /* already big enough */
4434  size_t rsize = oldsize - nb;
4435  newp = oldp;
4436  if (rsize >= MIN_CHUNK_SIZE) {
4437  mchunkptr remainder = chunk_plus_offset(newp, nb);
4438  set_inuse(m, newp, nb);
4439  set_inuse_and_pinuse(m, remainder, rsize);
4440  extra = chunk2mem(remainder);
4441  }
4442  }
4443  else if (next == m->top && oldsize + m->topsize > nb) {
4444  /* Expand into top */
4445  size_t newsize = oldsize + m->topsize;
4446  size_t newtopsize = newsize - nb;
4447  mchunkptr newtop = chunk_plus_offset(oldp, nb);
4448  set_inuse(m, oldp, nb);
4449  newtop->head = newtopsize | PINUSE_BIT;
4450  m->top = newtop;
4451  m->topsize = newtopsize;
4452  newp = oldp;
4453  }
4454  }
4455  else {
4456  USAGE_ERROR_ACTION(m, oldmem);
4457  POSTACTION(m);
4458  return 0;
4459  }
4460 #if DEBUG
4461  if (newp != 0) {
4462  check_inuse_chunk(m, newp); /* Check requires lock */
4463  }
4464 #endif
4465 
4466  POSTACTION(m);
4467 
4468  if (newp != 0) {
4469  if (extra != 0) {
4470  internal_free(m, extra);
4471  }
4472  return chunk2mem(newp);
4473  }
4474  else {
4475  void* newmem = internal_malloc(m, bytes);
4476  if (newmem != 0) {
4477  size_t oc = oldsize - overhead_for(oldp);
4478  memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
4479  internal_free(m, oldmem);
4480  }
4481  return newmem;
4482  }
4483  }
4484  return 0;
4485 }
4486 
4487 /* --------------------------- memalign support -------------------------- */
4488 
4489 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4490  if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
4491  return internal_malloc(m, bytes);
4492  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4493  alignment = MIN_CHUNK_SIZE;
4494  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4495  size_t a = MALLOC_ALIGNMENT << 1;
4496  while (a < alignment) a <<= 1;
4497  alignment = a;
4498  }
4499 
4500  if (bytes >= MAX_REQUEST - alignment) {
4501  if (m != 0) { /* Test isn't needed but avoids compiler warning */
4502  MALLOC_FAILURE_ACTION;
4503  }
4504  }
4505  else {
4506  size_t nb = request2size(bytes);
4507  size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4508  char* mem = (char*)internal_malloc(m, req);
4509  if (mem != 0) {
4510  void* leader = 0;
4511  void* trailer = 0;
4512  mchunkptr p = mem2chunk(mem);
4513 
4514  if (PREACTION(m)) return 0;
4515  if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4516  /*
4517  Find an aligned spot inside chunk. Since we need to give
4518  back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4519  the first calculation places us at a spot with less than
4520  MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4521  We've allocated enough total room so that this is always
4522  possible.
4523  */
4524  char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4525  alignment -
4526  SIZE_T_ONE)) &
4527  -alignment));
4528  char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4529  br : br+alignment;
4530  mchunkptr newp = (mchunkptr)pos;
4531  size_t leadsize = pos - (char*)(p);
4532  size_t newsize = chunksize(p) - leadsize;
4533 
4534  if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4535  newp->prev_foot = p->prev_foot + leadsize;
4536  newp->head = newsize;
4537  }
4538  else { /* Otherwise, give back leader, use the rest */
4539  set_inuse(m, newp, newsize);
4540  set_inuse(m, p, leadsize);
4541  leader = chunk2mem(p);
4542  }
4543  p = newp;
4544  }
4545 
4546  /* Give back spare room at the end */
4547  if (!is_mmapped(p)) {
4548  size_t size = chunksize(p);
4549  if (size > nb + MIN_CHUNK_SIZE) {
4550  size_t remainder_size = size - nb;
4551  mchunkptr remainder = chunk_plus_offset(p, nb);
4552  set_inuse(m, p, nb);
4553  set_inuse(m, remainder, remainder_size);
4554  trailer = chunk2mem(remainder);
4555  }
4556  }
4557 
4558  assert (chunksize(p) >= nb);
4559  assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4560  check_inuse_chunk(m, p);
4561  POSTACTION(m);
4562  if (leader != 0) {
4563  internal_free(m, leader);
4564  }
4565  if (trailer != 0) {
4566  internal_free(m, trailer);
4567  }
4568  return chunk2mem(p);
4569  }
4570  }
4571  return 0;
4572 }
4573 
4574 /* ------------------------ comalloc/coalloc support --------------------- */
4575 
4576 static void** ialloc(mstate m,
4577  size_t n_elements,
4578  size_t* sizes,
4579  int opts,
4580  void* chunks[]) {
4581  /*
4582  This provides common support for independent_X routines, handling
4583  all of the combinations that can result.
4584 
4585  The opts arg has:
4586  bit 0 set if all elements are same size (using sizes[0])
4587  bit 1 set if elements should be zeroed
4588  */
4589 
4590  size_t element_size; /* chunksize of each element, if all same */
4591  size_t contents_size; /* total size of elements */
4592  size_t array_size; /* request size of pointer array */
4593  void* mem; /* malloced aggregate space */
4594  mchunkptr p; /* corresponding chunk */
4595  size_t remainder_size; /* remaining bytes while splitting */
4596  void** marray; /* either "chunks" or malloced ptr array */
4597  mchunkptr array_chunk; /* chunk for malloced ptr array */
4598  flag_t was_enabled; /* to disable mmap */
4599  size_t size;
4600  size_t i;
4601 
4602  ensure_initialization();
4603  /* compute array length, if needed */
4604  if (chunks != 0) {
4605  if (n_elements == 0)
4606  return chunks; /* nothing to do */
4607  marray = chunks;
4608  array_size = 0;
4609  }
4610  else {
4611  /* if empty req, must still return chunk representing empty array */
4612  if (n_elements == 0)
4613  return (void**)internal_malloc(m, 0);
4614  marray = 0;
4615  array_size = request2size(n_elements * (sizeof(void*)));
4616  }
4617 
4618  /* compute total element size */
4619  if (opts & 0x1) { /* all-same-size */
4620  element_size = request2size(*sizes);
4621  contents_size = n_elements * element_size;
4622  }
4623  else { /* add up all the sizes */
4624  element_size = 0;
4625  contents_size = 0;
4626  for (i = 0; i != n_elements; ++i)
4627  contents_size += request2size(sizes[i]);
4628  }
4629 
4630  size = contents_size + array_size;
4631 
4632  /*
4633  Allocate the aggregate chunk. First disable direct-mmapping so
4634  malloc won't use it, since we would not be able to later
4635  free/realloc space internal to a segregated mmap region.
4636  */
4637  was_enabled = use_mmap(m);
4638  disable_mmap(m);
4639  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4640  if (was_enabled)
4641  enable_mmap(m);
4642  if (mem == 0)
4643  return 0;
4644 
4645  if (PREACTION(m)) return 0;
4646  p = mem2chunk(mem);
4647  remainder_size = chunksize(p);
4648 
4649  assert(!is_mmapped(p));
4650 
4651  if (opts & 0x2) { /* optionally clear the elements */
4652  memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4653  }
4654 
4655  /* If not provided, allocate the pointer array as final part of chunk */
4656  if (marray == 0) {
4657  size_t array_chunk_size;
4658  array_chunk = chunk_plus_offset(p, contents_size);
4659  array_chunk_size = remainder_size - contents_size;
4660  marray = (void**) (chunk2mem(array_chunk));
4661  set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4662  remainder_size = contents_size;
4663  }
4664 
4665  /* split out elements */
4666  for (i = 0; ; ++i) {
4667  marray[i] = chunk2mem(p);
4668  if (i != n_elements-1) {
4669  if (element_size != 0)
4670  size = element_size;
4671  else
4672  size = request2size(sizes[i]);
4673  remainder_size -= size;
4674  set_size_and_pinuse_of_inuse_chunk(m, p, size);
4675  p = chunk_plus_offset(p, size);
4676  }
4677  else { /* the final element absorbs any overallocation slop */
4678  set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4679  break;
4680  }
4681  }
4682 
4683 #if DEBUG
4684  if (marray != chunks) {
4685  /* final element must have exactly exhausted chunk */
4686  if (element_size != 0) {
4687  assert(remainder_size == element_size);
4688  }
4689  else {
4690  assert(remainder_size == request2size(sizes[i]));
4691  }
4692  check_inuse_chunk(m, mem2chunk(marray));
4693  }
4694  for (i = 0; i != n_elements; ++i)
4695  check_inuse_chunk(m, mem2chunk(marray[i]));
4696 
4697 #endif /* DEBUG */
4698 
4699  POSTACTION(m);
4700  return marray;
4701 }
4702 
4703 
4704 /* -------------------------- public routines ---------------------------- */
4705 
4706 #if !ONLY_MSPACES
4707 
4708 void* dlmalloc(size_t bytes) {
4709  /*
4710  Basic algorithm:
4711  If a small request (< 256 bytes minus per-chunk overhead):
4712  1. If one exists, use a remainderless chunk in associated smallbin.
4713  (Remainderless means that there are too few excess bytes to
4714  represent as a chunk.)
4715  2. If it is big enough, use the dv chunk, which is normally the
4716  chunk adjacent to the one used for the most recent small request.
4717  3. If one exists, split the smallest available chunk in a bin,
4718  saving remainder in dv.
4719  4. If it is big enough, use the top chunk.
4720  5. If available, get memory from system and use it
4721  Otherwise, for a large request:
4722  1. Find the smallest available binned chunk that fits, and use it
4723  if it is better fitting than dv chunk, splitting if necessary.
4724  2. If better fitting than any binned chunk, use the dv chunk.
4725  3. If it is big enough, use the top chunk.
4726  4. If request size >= mmap threshold, try to directly mmap this chunk.
4727  5. If available, get memory from system and use it
4728 
4729  The ugly goto's here ensure that postaction occurs along all paths.
4730  */
4731 
4732 #if USE_LOCKS
4733  ensure_initialization(); /* initialize in sys_alloc if not using locks */
4734 #endif
4735 
4736  if (!PREACTION(gm)) {
4737  void* mem;
4738  size_t nb;
4739  if (bytes <= MAX_SMALL_REQUEST) {
4740  bindex_t idx;
4741  binmap_t smallbits;
4742  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4743  idx = small_index(nb);
4744  smallbits = gm->smallmap >> idx;
4745 
4746  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4747  mchunkptr b, p;
4748  idx += ~smallbits & 1; /* Uses next bin if idx empty */
4749  b = smallbin_at(gm, idx);
4750  p = b->fd;
4751  assert(chunksize(p) == small_index2size(idx));
4752  unlink_first_small_chunk(gm, b, p, idx);
4753  set_inuse_and_pinuse(gm, p, small_index2size(idx));
4754  mem = chunk2mem(p);
4755  check_malloced_chunk(gm, mem, nb);
4756  goto postaction;
4757  }
4758 
4759  else if (nb > gm->dvsize) {
4760  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4761  mchunkptr b, p, r;
4762  size_t rsize;
4763  bindex_t i;
4764  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4765  binmap_t leastbit = least_bit(leftbits);
4766  compute_bit2idx(leastbit, i);
4767  b = smallbin_at(gm, i);
4768  p = b->fd;
4769  assert(chunksize(p) == small_index2size(i));
4770  unlink_first_small_chunk(gm, b, p, i);
4771  rsize = small_index2size(i) - nb;
4772  /* Fit here cannot be remainderless if 4byte sizes */
4773  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4774  set_inuse_and_pinuse(gm, p, small_index2size(i));
4775  else {
4776  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4777  r = chunk_plus_offset(p, nb);
4778  set_size_and_pinuse_of_free_chunk(r, rsize);
4779  replace_dv(gm, r, rsize);
4780  }
4781  mem = chunk2mem(p);
4782  check_malloced_chunk(gm, mem, nb);
4783  goto postaction;
4784  }
4785 
4786  else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4787  check_malloced_chunk(gm, mem, nb);
4788  goto postaction;
4789  }
4790  }
4791  }
4792  else if (bytes >= MAX_REQUEST)
4793  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4794  else {
4795  nb = pad_request(bytes);
4796  if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4797  check_malloced_chunk(gm, mem, nb);
4798  goto postaction;
4799  }
4800  }
4801 
4802  if (nb <= gm->dvsize) {
4803  size_t rsize = gm->dvsize - nb;
4804  mchunkptr p = gm->dv;
4805  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4806  mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4807  gm->dvsize = rsize;
4808  set_size_and_pinuse_of_free_chunk(r, rsize);
4809  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4810  }
4811  else { /* exhaust dv */
4812  size_t dvs = gm->dvsize;
4813  gm->dvsize = 0;
4814  gm->dv = 0;
4815  set_inuse_and_pinuse(gm, p, dvs);
4816  }
4817  mem = chunk2mem(p);
4818  check_malloced_chunk(gm, mem, nb);
4819  goto postaction;
4820  }
4821 
4822  else if (nb < gm->topsize) { /* Split top */
4823  size_t rsize = gm->topsize -= nb;
4824  mchunkptr p = gm->top;
4825  mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4826  r->head = rsize | PINUSE_BIT;
4827  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4828  mem = chunk2mem(p);
4829  check_top_chunk(gm, gm->top);
4830  check_malloced_chunk(gm, mem, nb);
4831  goto postaction;
4832  }
4833 
4834  mem = sys_alloc(gm, nb);
4835 
4836  postaction:
4837  POSTACTION(gm);
4838  return mem;
4839  }
4840 
4841  return 0;
4842 }
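/*
  Illustrative sketch of the dispatch above (assumes the default
  compile-time configuration, where small requests are those below
  256 bytes and the mmap threshold defaults to 256K):

    dlmalloc(20);        // small: smallbin, dv, or a split of top
    dlmalloc(4000);      // large: tree-bin best fit, then dv, then top
    dlmalloc(1 << 20);   // >= mmap threshold: usually mapped directly,
                         //   when direct mmap is available and enabled
*/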
4843 
4844 void dlfree(void* mem) {
4845  /*
4846  Consolidate freed chunks with preceding or succeeding bordering
4847  free chunks, if they exist, and then place in a bin. Intermixed
4848  with special cases for top, dv, mmapped chunks, and usage errors.
4849  */
4850 
4851  if (mem != 0) {
4852  mchunkptr p = mem2chunk(mem);
4853 #if FOOTERS
4854  mstate fm = get_mstate_for(p);
4855  if (!ok_magic(fm)) {
4856  USAGE_ERROR_ACTION(fm, p);
4857  return;
4858  }
4859 #else /* FOOTERS */
4860 #define fm gm
4861 #endif /* FOOTERS */
4862  if (!PREACTION(fm)) {
4863  check_inuse_chunk(fm, p);
4864  if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4865  size_t psize = chunksize(p);
4866  mchunkptr next = chunk_plus_offset(p, psize);
4867  if (!pinuse(p)) {
4868  size_t prevsize = p->prev_foot;
4869  if (is_mmapped(p)) {
4870  psize += prevsize + MMAP_FOOT_PAD;
4871  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4872  fm->footprint -= psize;
4873  goto postaction;
4874  }
4875  else {
4876  mchunkptr prev = chunk_minus_offset(p, prevsize);
4877  psize += prevsize;
4878  p = prev;
4879  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4880  if (p != fm->dv) {
4881  unlink_chunk(fm, p, prevsize);
4882  }
4883  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4884  fm->dvsize = psize;
4885  set_free_with_pinuse(p, psize, next);
4886  goto postaction;
4887  }
4888  }
4889  else
4890  goto erroraction;
4891  }
4892  }
4893 
4894  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4895  if (!cinuse(next)) { /* consolidate forward */
4896  if (next == fm->top) {
4897  size_t tsize = fm->topsize += psize;
4898  fm->top = p;
4899  p->head = tsize | PINUSE_BIT;
4900  if (p == fm->dv) {
4901  fm->dv = 0;
4902  fm->dvsize = 0;
4903  }
4904  if (should_trim(fm, tsize))
4905  sys_trim(fm, 0);
4906  goto postaction;
4907  }
4908  else if (next == fm->dv) {
4909  size_t dsize = fm->dvsize += psize;
4910  fm->dv = p;
4911  set_size_and_pinuse_of_free_chunk(p, dsize);
4912  goto postaction;
4913  }
4914  else {
4915  size_t nsize = chunksize(next);
4916  psize += nsize;
4917  unlink_chunk(fm, next, nsize);
4918  set_size_and_pinuse_of_free_chunk(p, psize);
4919  if (p == fm->dv) {
4920  fm->dvsize = psize;
4921  goto postaction;
4922  }
4923  }
4924  }
4925  else
4926  set_free_with_pinuse(p, psize, next);
4927 
4928  if (is_small(psize)) {
4929  insert_small_chunk(fm, p, psize);
4930  check_free_chunk(fm, p);
4931  }
4932  else {
4933  tchunkptr tp = (tchunkptr)p;
4934  insert_large_chunk(fm, tp, psize);
4935  check_free_chunk(fm, p);
4936  if (--fm->release_checks == 0)
4937  release_unused_segments(fm);
4938  }
4939  goto postaction;
4940  }
4941  }
4942  erroraction:
4943  USAGE_ERROR_ACTION(fm, p);
4944  postaction:
4945  POSTACTION(fm);
4946  }
4947  }
4948 #if !FOOTERS
4949 #undef fm
4950 #endif /* FOOTERS */
4951 }
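/*
  Sketch of the consolidation above. If three chunks happen to sit next
  to each other (as consecutive splits of top typically do), freeing the
  middle one last merges all three into a single free chunk:

    void* a = dlmalloc(64);
    void* b = dlmalloc(64);
    void* c = dlmalloc(64);
    dlfree(a);
    dlfree(c);
    dlfree(b);   // consolidates backward with a and forward with c
                 //   (or into dv/top when b borders them)
*/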
4952 
4953 void* dlcalloc(size_t n_elements, size_t elem_size) {
4954  void* mem;
4955  size_t req = 0;
4956  if (n_elements != 0) {
4957  req = n_elements * elem_size;
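 /* Overflow check: when either factor needs more than 16 bits, confirm
    the product by division; if both fit in 16 bits the product cannot
    exceed 32 bits, so the (relatively costly) division is skipped. */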
4958  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4959  (req / n_elements != elem_size))
4960  req = MAX_SIZE_T; /* force downstream failure on overflow */
4961  }
4962  mem = dlmalloc(req);
4963  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4964  memset(mem, 0, req);
4965  return mem;
4966 }
4967 
4968 void* dlrealloc(void* oldmem, size_t bytes) {
4969  if (oldmem == 0)
4970  return dlmalloc(bytes);
4971 #ifdef REALLOC_ZERO_BYTES_FREES
4972  if (bytes == 0) {
4973  dlfree(oldmem);
4974  return 0;
4975  }
4976 #endif /* REALLOC_ZERO_BYTES_FREES */
4977  else {
4978 #if ! FOOTERS
4979  mstate m = gm;
4980 #else /* FOOTERS */
4981  mstate m = get_mstate_for(mem2chunk(oldmem));
4982  if (!ok_magic(m)) {
4983  USAGE_ERROR_ACTION(m, oldmem);
4984  return 0;
4985  }
4986 #endif /* FOOTERS */
4987  return internal_realloc(m, oldmem, bytes);
4988  }
4989 }
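/*
  Usage sketch: internal_realloc above either shrinks in place, grows
  into top, or falls back to malloc-copy-free, so the returned pointer
  may differ from the old one and must not be lost on failure:

    char* buf = (char*)dlmalloc(64);
    if (buf != 0) {
      char* grown = (char*)dlrealloc(buf, 4096);
      if (grown != 0)
        buf = grown;     // old pointer is invalid if the block moved
      dlfree(buf);       // on failure the original block is still valid
    }
*/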
4990 
4991 void* dlmemalign(size_t alignment, size_t bytes) {
4992  return internal_memalign(gm, alignment, bytes);
4993 }
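/*
  Usage sketch: per internal_memalign above, an alignment no larger than
  MALLOC_ALIGNMENT degenerates to plain malloc, and a non-power-of-two
  alignment is rounded up to the next power of two:

    void* p = dlmemalign(64, 1000);   // 64-byte aligned, >= 1000 bytes
    if (p != 0) {
      // ((size_t)p % 64) == 0 holds here
      dlfree(p);
    }
*/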
4994 
4995 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4996  void* chunks[]) {
4997  size_t sz = elem_size; /* serves as 1-element array */
4998  return ialloc(gm, n_elements, &sz, 3, chunks);
4999 }
5000 
5001 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5002  void* chunks[]) {
5003  return ialloc(gm, n_elements, sizes, 0, chunks);
5004 }
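/*
  Usage sketch for the two ialloc-based entry points above. When the
  chunks argument is null, the pointer array itself is allocated as part
  of the aggregate and can be freed like any other block; each element
  is an independently freeable chunk:

    size_t sizes[3] = { 16, 100, 24 };
    void** nodes = dlindependent_calloc(3, 32, 0);   // 3 zeroed 32-byte blocks
    void** bufs  = dlindependent_comalloc(3, sizes, 0);
    if (nodes != 0) {
      dlfree(nodes[0]); dlfree(nodes[1]); dlfree(nodes[2]);
      dlfree(nodes);                                 // the array itself
    }
    if (bufs != 0) {
      dlfree(bufs[0]); dlfree(bufs[1]); dlfree(bufs[2]);
      dlfree(bufs);
    }
*/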
5005 
5006 void* dlvalloc(size_t bytes) {
5007  size_t pagesz;
5008  ensure_initialization();
5009  pagesz = mparams.page_size;
5010  return dlmemalign(pagesz, bytes);
5011 }
5012 
5013 void* dlpvalloc(size_t bytes) {
5014  size_t pagesz;
5015  ensure_initialization();
5016  pagesz = mparams.page_size;
5017  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5018 }
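/*
  Usage sketch: both wrappers return page-aligned memory; dlpvalloc also
  rounds the request itself up to a whole number of pages, as the
  expression above shows:

    void* a = dlvalloc(100);    // page-aligned, at least 100 usable bytes
    void* b = dlpvalloc(100);   // page-aligned, at least one full page
    dlfree(a);
    dlfree(b);
*/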
5019 
5020 int dlmalloc_trim(size_t pad) {
5021  int result = 0;
5022  ensure_initialization();
5023  if (!PREACTION(gm)) {
5024  result = sys_trim(gm, pad);
5025  POSTACTION(gm);
5026  }
5027  return result;
5028 }
5029 
5030 size_t dlmalloc_footprint(void) {
5031  return gm->footprint;
5032 }
5033 
5034 size_t dlmalloc_max_footprint(void) {
5035  return gm->max_footprint;
5036 }
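/*
  Usage sketch: monitoring and trimming the global heap.

    size_t before = dlmalloc_footprint();     // bytes currently obtained
                                              //   from the system
    dlmalloc_trim(0);                         // pad 0: release as much
                                              //   unused top space as possible
    size_t after  = dlmalloc_footprint();     // <= before when trimming worked
    size_t peak   = dlmalloc_max_footprint(); // high-water mark
*/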
5037 
5038 #if !NO_MALLINFO
5039 struct mallinfo dlmallinfo(void) {
5040  return internal_mallinfo(gm);
5041 }
5042 #endif /* NO_MALLINFO */
5043 
5044 void dlmalloc_stats() {
5045  internal_malloc_stats(gm);
5046 }
5047 
5048 int dlmallopt(int param_number, int value) {
5049  return change_mparam(param_number, value);
5050 }
5051 
5052 #endif /* !ONLY_MSPACES */
5053 
5054 size_t dlmalloc_usable_size(void* mem) {
5055  if (mem != 0) {
5056  mchunkptr p = mem2chunk(mem);
5057  if (is_inuse(p))
5058  return chunksize(p) - overhead_for(p);
5059  }
5060  return 0;
5061 }
5062 
5063 /* ----------------------------- user mspaces ---------------------------- */
5064 
5065 #if MSPACES
5066 
5067 static mstate init_user_mstate(char* tbase, size_t tsize) {
5068  size_t msize = pad_request(sizeof(struct malloc_state));
5069  mchunkptr mn;
5070  mchunkptr msp = align_as_chunk(tbase);
5071  mstate m = (mstate)(chunk2mem(msp));
5072  memset(m, 0, msize);
5073  INITIAL_LOCK(&m->mutex);
5074  msp->head = (msize|INUSE_BITS);
5075  m->seg.base = m->least_addr = tbase;
5076  m->seg.size = m->footprint = m->max_footprint = tsize;
5077  m->magic = mparams.magic;
5078  m->release_checks = MAX_RELEASE_CHECK_RATE;
5079  m->mflags = mparams.default_mflags;
5080  m->extp = 0;
5081  m->exts = 0;
5082  disable_contiguous(m);
5083  init_bins(m);
5084  mn = next_chunk(mem2chunk(m));
5085  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5086  check_top_chunk(m, m->top);
5087  return m;
5088 }
5089 
5090 mspace create_mspace(size_t capacity, int locked) {
5091  mstate m = 0;
5092  size_t msize;
5093  ensure_initialization();
5094  msize = pad_request(sizeof(struct malloc_state));
5095  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5096  size_t rs = ((capacity == 0)? mparams.granularity :
5097  (capacity + TOP_FOOT_SIZE + msize));
5098  size_t tsize = granularity_align(rs);
5099  char* tbase = (char*)(CALL_MMAP(tsize));
5100  if (tbase != CMFAIL) {
5101  m = init_user_mstate(tbase, tsize);
5102  m->seg.sflags = USE_MMAP_BIT;
5103  set_lock(m, locked);
5104  }
5105  }
5106  return (mspace)m;
5107 }
5108 
5109 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5110  mstate m = 0;
5111  size_t msize;
5112  ensure_initialization();
5113  msize = pad_request(sizeof(struct malloc_state));
5114  if (capacity > msize + TOP_FOOT_SIZE &&
5115  capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5116  m = init_user_mstate((char*)base, capacity);
5117  m->seg.sflags = EXTERN_BIT;
5118  set_lock(m, locked);
5119  }
5120  return (mspace)m;
5121 }
5122 
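/*
  Note on the routine below: enabling tracking disables direct mmap, so
  subsequent large requests are carved from mspace segments (and are
  therefore visible when the mspace is traversed or destroyed); passing
  enable == 0 turns direct mmap back on. The return value reports the
  previous state: nonzero means large chunks were already being tracked
  (direct mmap already disabled).
*/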
5123 int mspace_track_large_chunks(mspace msp, int enable) {
5124  int ret = 0;
5125  mstate ms = (mstate)msp;
5126  if (!PREACTION(ms)) {
5127  if (!use_mmap(ms))
5128  ret = 1;
5129  if (!enable)
5130  enable_mmap(ms);
5131  else
5132  disable_mmap(ms);
5133  POSTACTION(ms);
5134  }
5135  return ret;
5136 }
5137 
5138 size_t destroy_mspace(mspace msp) {
5139  size_t freed = 0;
5140  mstate ms = (mstate)msp;
5141  if (ok_magic(ms)) {
5142  msegmentptr sp = &ms->seg;
5143  while (sp != 0) {
5144  char* base = sp->base;
5145  size_t size = sp->size;
5146  flag_t flag = sp->sflags;
5147  sp = sp->next;
5148  if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5149  CALL_MUNMAP(base, size) == 0)
5150  freed += size;
5151  }
5152  }
5153  else {
5154  USAGE_ERROR_ACTION(ms,ms);
5155  }
5156  return freed;
5157 }
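/*
  Usage sketch for a private arena built from the routines above.
  create_mspace obtains its own (typically mmapped) segment, while
  create_mspace_with_base uses caller-owned storage that destroy_mspace
  deliberately does not unmap (EXTERN_BIT):

    mspace arena = create_mspace(0, 0);      // default capacity, no locking
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);
      if (p != 0)
        mspace_free(arena, p);
      destroy_mspace(arena);                 // releases the arena's mmapped
                                             //   segments in one call
    }
*/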
5158 
5159 /*
5160  mspace versions of routines are near-clones of the global
5161  versions. This is not so nice but better than the alternatives.
5162 */
5163 
5164 
5165 void* mspace_malloc(mspace msp, size_t bytes) {
5166  mstate ms = (mstate)msp;
5167  if (!ok_magic(ms)) {
5168  USAGE_ERROR_ACTION(ms,ms);
5169  return 0;
5170  }
5171  if (!PREACTION(ms)) {
5172  void* mem;
5173  size_t nb;
5174  if (bytes <= MAX_SMALL_REQUEST) {
5175  bindex_t idx;
5176  binmap_t smallbits;
5177  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5178  idx = small_index(nb);
5179  smallbits = ms->smallmap >> idx;
5180 
5181  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5182  mchunkptr b, p;
5183  idx += ~smallbits & 1; /* Uses next bin if idx empty */
5184  b = smallbin_at(ms, idx);
5185  p = b->fd;
5186  assert(chunksize(p) == small_index2size(idx));
5187  unlink_first_small_chunk(ms, b, p, idx);
5188  set_inuse_and_pinuse(ms, p, small_index2size(idx));
5189  mem = chunk2mem(p);
5190  check_malloced_chunk(ms, mem, nb);
5191  goto postaction;
5192  }
5193 
5194  else if (nb > ms->dvsize) {
5195  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5196  mchunkptr b, p, r;
5197  size_t rsize;
5198  bindex_t i;
5199  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5200  binmap_t leastbit = least_bit(leftbits);
5201  compute_bit2idx(leastbit, i);
5202  b = smallbin_at(ms, i);
5203  p = b->fd;
5204  assert(chunksize(p) == small_index2size(i));
5205  unlink_first_small_chunk(ms, b, p, i);
5206  rsize = small_index2size(i) - nb;
5207  /* Fit here cannot be remainderless if 4byte sizes */
5208  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5209  set_inuse_and_pinuse(ms, p, small_index2size(i));
5210  else {
5211  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5212  r = chunk_plus_offset(p, nb);
5213  set_size_and_pinuse_of_free_chunk(r, rsize);
5214  replace_dv(ms, r, rsize);
5215  }
5216  mem = chunk2mem(p);
5217  check_malloced_chunk(ms, mem, nb);
5218  goto postaction;
5219  }
5220 
5221  else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5222  check_malloced_chunk(ms, mem, nb);
5223  goto postaction;
5224  }
5225  }
5226  }
5227  else if (bytes >= MAX_REQUEST)
5228  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5229  else {
5230  nb = pad_request(bytes);
5231  if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5232  check_malloced_chunk(ms, mem, nb);
5233  goto postaction;
5234  }
5235  }
5236 
5237  if (nb <= ms->dvsize) {
5238  size_t rsize = ms->dvsize - nb;
5239  mchunkptr p = ms->dv;
5240  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5241  mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5242  ms->dvsize = rsize;
5243  set_size_and_pinuse_of_free_chunk(r, rsize);
5244  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5245  }
5246  else { /* exhaust dv */
5247  size_t dvs = ms->dvsize;
5248  ms->dvsize = 0;
5249  ms->dv = 0;
5250  set_inuse_and_pinuse(ms, p, dvs);
5251  }
5252  mem = chunk2mem(p);
5253  check_malloced_chunk(ms, mem, nb);
5254  goto postaction;
5255  }
5256 
5257  else if (nb < ms->topsize) { /* Split top */
5258  size_t rsize = ms->topsize -= nb;
5259  mchunkptr p = ms->top;
5260  mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5261  r->head = rsize | PINUSE_BIT;
5262  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5263  mem = chunk2mem(p);
5264  check_top_chunk(ms, ms->top);
5265  check_malloced_chunk(ms, mem, nb);
5266  goto postaction;
5267  }
5268 
5269  mem = sys_alloc(ms, nb);
5270 
5271  postaction:
5272  POSTACTION(ms);
5273  return mem;
5274  }
5275 
5276  return 0;
5277 }
5278 
5279 void mspace_free(mspace msp, void* mem) {
5280  if (mem != 0) {
5281  mchunkptr p = mem2chunk(mem);
5282 #if FOOTERS
5283  mstate fm = get_mstate_for(p);
5284  msp = msp; /* placate people compiling -Wunused */
5285 #else /* FOOTERS */
5286  mstate fm = (mstate)msp;
5287 #endif /* FOOTERS */
5288  if (!ok_magic(fm)) {
5289  USAGE_ERROR_ACTION(fm, p);
5290  return;
5291  }
5292  if (!PREACTION(fm)) {
5293  check_inuse_chunk(fm, p);
5294  if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5295  size_t psize = chunksize(p);
5296  mchunkptr next = chunk_plus_offset(p, psize);
5297  if (!pinuse(p)) {
5298  size_t prevsize = p->prev_foot;
5299  if (is_mmapped(p)) {
5300  psize += prevsize + MMAP_FOOT_PAD;
5301  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5302  fm->footprint -= psize;
5303  goto postaction;
5304  }
5305  else {
5306  mchunkptr prev = chunk_minus_offset(p, prevsize);
5307  psize += prevsize;
5308  p = prev;
5309  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5310  if (p != fm->dv) {
5311  unlink_chunk(fm, p, prevsize);
5312  }
5313  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5314  fm->dvsize = psize;
5315  set_free_with_pinuse(p, psize, next);
5316  goto postaction;
5317  }
5318  }
5319  else
5320  goto erroraction;
5321  }
5322  }
5323 
5324  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5325  if (!cinuse(next)) { /* consolidate forward */
5326  if (next == fm->top) {
5327  size_t tsize = fm->topsize += psize;
5328  fm->top = p;
5329  p->head = tsize | PINUSE_BIT;
5330  if (p == fm->dv) {
5331  fm->dv = 0;
5332  fm->dvsize = 0;
5333  }
5334  if (should_trim(fm, tsize))
5335  sys_trim(fm, 0);
5336  goto postaction;
5337  }
5338  else if (next == fm->dv) {
5339  size_t dsize = fm->dvsize += psize;
5340  fm->dv = p;
5341  set_size_and_pinuse_of_free_chunk(p, dsize);
5342  goto postaction;
5343  }
5344  else {
5345  size_t nsize = chunksize(next);
5346  psize += nsize;
5347  unlink_chunk(fm, next, nsize);
5348  set_size_and_pinuse_of_free_chunk(p, psize);
5349  if (p == fm->dv) {
5350  fm->dvsize = psize;
5351  goto postaction;
5352  }
5353  }
5354  }
5355  else
5356  set_free_with_pinuse(p, psize, next);
5357 
5358  if (is_small(psize)) {
5359  insert_small_chunk(fm, p, psize);
5360  check_free_chunk(fm, p);
5361  }
5362  else {
5363  tchunkptr tp = (tchunkptr)p;
5364  insert_large_chunk(fm, tp, psize);
5365  check_free_chunk(fm, p);
5366  if (--fm->release_checks == 0)
5367  release_unused_segments(fm);
5368  }
5369  goto postaction;
5370  }
5371  }
5372  erroraction:
5373  USAGE_ERROR_ACTION(fm, p);
5374  postaction:
5375  POSTACTION(fm);
5376  }
5377  }
5378 }
5379 
5380 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5381  void* mem;
5382  size_t req = 0;
5383  mstate ms = (mstate)msp;
5384  if (!ok_magic(ms)) {
5385  USAGE_ERROR_ACTION(ms,ms);
5386  return 0;
5387  }
5388  if (n_elements != 0) {
5389  req = n_elements * elem_size;
5390  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5391  (req / n_elements != elem_size))
5392  req = MAX_SIZE_T; /* force downstream failure on overflow */
5393  }
5394  mem = internal_malloc(ms, req);
5395  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5396  memset(mem, 0, req);
5397  return mem;
5398 }
5399 
5400 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5401  if (oldmem == 0)
5402  return mspace_malloc(msp, bytes);
5403 #ifdef REALLOC_ZERO_BYTES_FREES
5404  if (bytes == 0) {
5405  mspace_free(msp, oldmem);
5406  return 0;
5407  }
5408 #endif /* REALLOC_ZERO_BYTES_FREES */
5409  else {
5410 #if FOOTERS
5411  mchunkptr p = mem2chunk(oldmem);
5412  mstate ms = get_mstate_for(p);
5413 #else /* FOOTERS */
5414  mstate ms = (mstate)msp;
5415 #endif /* FOOTERS */
5416  if (!ok_magic(ms)) {
5417  USAGE_ERROR_ACTION(ms,ms);
5418  return 0;
5419  }
5420  return internal_realloc(ms, oldmem, bytes);
5421  }
5422 }
5423 
5424 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5425  mstate ms = (mstate)msp;
5426  if (!ok_magic(ms)) {
5427  USAGE_ERROR_ACTION(ms,ms);
5428  return 0;
5429  }
5430  return internal_memalign(ms, alignment, bytes);
5431 }
5432 
5433 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5434  size_t elem_size, void* chunks[]) {
5435  size_t sz = elem_size; /* serves as 1-element array */
5436  mstate ms = (mstate)msp;
5437  if (!ok_magic(ms)) {
5438  USAGE_ERROR_ACTION(ms,ms);
5439  return 0;
5440  }
5441  return ialloc(ms, n_elements, &sz, 3, chunks);
5442 }
5443 
5444 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5445  size_t sizes[], void* chunks[]) {
5446  mstate ms = (mstate)msp;
5447  if (!ok_magic(ms)) {
5448  USAGE_ERROR_ACTION(ms,ms);
5449  return 0;
5450  }
5451  return ialloc(ms, n_elements, sizes, 0, chunks);
5452 }
5453 
5454 int mspace_trim(mspace msp, size_t pad) {
5455  int result = 0;
5456  mstate ms = (mstate)msp;
5457  if (ok_magic(ms)) {
5458  if (!PREACTION(ms)) {
5459  result = sys_trim(ms, pad);
5460  POSTACTION(ms);
5461  }
5462  }
5463  else {
5464  USAGE_ERROR_ACTION(ms,ms);
5465  }
5466  return result;
5467 }
5468 
5469 void mspace_malloc_stats(mspace msp) {
5470  mstate ms = (mstate)msp;
5471  if (ok_magic(ms)) {
5472  internal_malloc_stats(ms);
5473  }
5474  else {
5475  USAGE_ERROR_ACTION(ms,ms);
5476  }
5477 }
5478 
5479 size_t mspace_footprint(mspace msp) {
5480  size_t result = 0;
5481  mstate ms = (mstate)msp;
5482  if (ok_magic(ms)) {
5483  result = ms->footprint;
5484  }
5485  else {
5486  USAGE_ERROR_ACTION(ms,ms);
5487  }
5488  return result;
5489 }
5490 
5491 
5492 size_t mspace_max_footprint(mspace msp) {
5493  size_t result = 0;
5494  mstate ms = (mstate)msp;
5495  if (ok_magic(ms)) {
5496  result = ms->max_footprint;
5497  }
5498  else {
5499  USAGE_ERROR_ACTION(ms,ms);
5500  }
5501  return result;
5502 }
5503 
5504 
5505 #if !NO_MALLINFO
5506 struct mallinfo mspace_mallinfo(mspace msp) {
5507  mstate ms = (mstate)msp;
5508  if (!ok_magic(ms)) {
5509  USAGE_ERROR_ACTION(ms,ms);
5510  }
5511  return internal_mallinfo(ms);
5512 }
5513 #endif /* NO_MALLINFO */
5514 
5515 size_t mspace_usable_size(void* mem) {
5516  if (mem != 0) {
5517  mchunkptr p = mem2chunk(mem);
5518  if (is_inuse(p))
5519  return chunksize(p) - overhead_for(p);
5520  }
5521  return 0;
5522 }
5523 
5524 int mspace_mallopt(int param_number, int value) {
5525  return change_mparam(param_number, value);
5526 }
5527 
5528 #endif /* MSPACES */
5529 
5530 
5531 /* -------------------- Alternative MORECORE functions ------------------- */
5532 
5533 /*
5534  Guidelines for creating a custom version of MORECORE:
5535 
5536  * For best performance, MORECORE should allocate in multiples of pagesize.
5537  * MORECORE may allocate more memory than requested. (Or even less,
5538  but this will usually result in a malloc failure.)
5539  * MORECORE must not allocate memory when given argument zero, but
5540  instead return one past the end address of memory from previous
5541  nonzero call.
5542  * For best performance, consecutive calls to MORECORE with positive
5543  arguments should return increasing addresses, indicating that
5544  space has been contiguously extended.
5545  * Even though consecutive calls to MORECORE need not return contiguous
5546  addresses, it must be OK for malloc'ed chunks to span multiple
5547  regions in those cases where they do happen to be contiguous.
5548  * MORECORE need not handle negative arguments -- it may instead
5549  just return MFAIL when given negative arguments.
5550  Negative arguments are always multiples of pagesize. MORECORE
5551  must not misinterpret negative args as large positive unsigned
5552  args. You can suppress all such calls from even occurring by defining
5553  MORECORE_CANNOT_TRIM.
5554 
5555  As an example alternative MORECORE, here is a custom allocator
5556  kindly contributed for pre-OSX macOS. It uses virtually but not
5557  necessarily physically contiguous non-paged memory (locked in,
5558  present and won't get swapped out). You can use it by uncommenting
5559  this section, adding some #includes, and setting up the appropriate
5560  defines above:
5561 
5562  #define MORECORE osMoreCore
5563 
5564  There is also a shutdown routine that should somehow be called for
5565  cleanup upon program exit.
5566 
5567  #define MAX_POOL_ENTRIES 100
5568  #define MINIMUM_MORECORE_SIZE (64 * 1024U)
5569  static int next_os_pool;
5570  void *our_os_pools[MAX_POOL_ENTRIES];
5571 
5572  void *osMoreCore(int size)
5573  {
5574  void *ptr = 0;
5575  static void *sbrk_top = 0;
5576 
5577  if (size > 0)
5578  {
5579  if (size < MINIMUM_MORECORE_SIZE)
5580  size = MINIMUM_MORECORE_SIZE;
5581  if (CurrentExecutionLevel() == kTaskLevel)
5582  ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5583  if (ptr == 0)
5584  {
5585  return (void *) MFAIL;
5586  }
5587  // save ptrs so they can be freed during cleanup
5588  our_os_pools[next_os_pool] = ptr;
5589  next_os_pool++;
5590  ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5591  sbrk_top = (char *) ptr + size;
5592  return ptr;
5593  }
5594  else if (size < 0)
5595  {
5596  // we don't currently support shrink behavior
5597  return (void *) MFAIL;
5598  }
5599  else
5600  {
5601  return sbrk_top;
5602  }
5603  }
5604 
5605  // cleanup any allocated memory pools
5606  // called as last thing before shutting down driver
5607 
5608  void osCleanupMem(void)
5609  {
5610  void **ptr;
5611 
5612  for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5613  if (*ptr)
5614  {
5615  PoolDeallocate(*ptr);
5616  *ptr = 0;
5617  }
5618  }
5619 
5620 */
5621 
5622 
5623 /* -----------------------------------------------------------------------
5624 History:
5625  V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
5626  * Use zeros instead of prev foot for is_mmapped
5627  * Add mspace_track_large_chunks; thanks to Jean Brouwers
5628  * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
5629  * Fix insufficient sys_alloc padding when using 16byte alignment
5630  * Fix bad error check in mspace_footprint
5631  * Adaptations for ptmalloc; thanks to Wolfram Gloger.
5632  * Reentrant spin locks; thanks to Earl Chew and others
5633  * Win32 improvements; thanks to Niall Douglas and Earl Chew
5634  * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
5635  * Extension hook in malloc_state
5636  * Various small adjustments to reduce warnings on some compilers
5637  * Various configuration extensions/changes for more platforms. Thanks
5638  to all who contributed these.
5639 
5640  V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
5641  * Add max_footprint functions
5642  * Ensure all appropriate literals are size_t
5643  * Fix conditional compilation problem for some #define settings
5644  * Avoid concatenating segments with the one provided
5645  in create_mspace_with_base
5646  * Rename some variables to avoid compiler shadowing warnings
5647  * Use explicit lock initialization.
5648  * Better handling of sbrk interference.
5649  * Simplify and fix segment insertion, trimming and mspace_destroy
5650  * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5651  * Thanks especially to Dennis Flanagan for help on these.
5652 
5653  V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
5654  * Fix memalign brace error.
5655 
5656  V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
5657  * Fix improper #endif nesting in C++
5658  * Add explicit casts needed for C++
5659 
5660  V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
5661  * Use trees for large bins
5662  * Support mspaces
5663  * Use segments to unify sbrk-based and mmap-based system allocation,
5664  removing need for emulation on most platforms without sbrk.
5665  * Default safety checks
5666  * Optional footer checks. Thanks to William Robertson for the idea.
5667  * Internal code refactoring
5668  * Incorporate suggestions and platform-specific changes.
5669  Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5670  Aaron Bachmann, Emery Berger, and others.
5671  * Speed up non-fastbin processing enough to remove fastbins.
5672  * Remove useless cfree() to avoid conflicts with other apps.
5673  * Remove internal memcpy, memset. Compilers handle builtins better.
5674  * Remove some options that no one ever used and rename others.
5675 
5676  V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5677  * Fix malloc_state bitmap array misdeclaration
5678 
5679  V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5680  * Allow tuning of FIRST_SORTED_BIN_SIZE
5681  * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5682  * Better detection and support for non-contiguousness of MORECORE.
5683  Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5684  * Bypass most of malloc if no frees. Thanks To Emery Berger.
5685  * Fix freeing of old top non-contiguous chunk in sysmalloc.
5686  * Raised default trim and map thresholds to 256K.
5687  * Fix mmap-related #defines. Thanks to Lubos Lunak.
5688  * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5689  * Branch-free bin calculation
5690  * Default trim and mmap thresholds now 256K.
5691 
5692  V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5693  * Introduce independent_comalloc and independent_calloc.
5694  Thanks to Michael Pachos for motivation and help.
5695  * Make optional .h file available
5696  * Allow > 2GB requests on 32bit systems.
5697  * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5698  Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5699  and Anonymous.
5700  * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5701  helping test this.)
5702  * memalign: check alignment arg
5703  * realloc: don't try to shift chunks backwards, since this
5704  leads to more fragmentation in some programs and doesn't
5705  seem to help in any others.
5706  * Collect all cases in malloc requiring system memory into sysmalloc
5707  * Use mmap as backup to sbrk
5708  * Place all internal state in malloc_state
5709  * Introduce fastbins (although similar to 2.5.1)
5710  * Many minor tunings and cosmetic improvements
5711  * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5712  * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5713  Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5714  * Include errno.h to support default failure action.
5715 
5716  V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5717  * return null for negative arguments
5718  * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5719  * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5720  (e.g. WIN32 platforms)
5721  * Cleanup header file inclusion for WIN32 platforms
5722  * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5723  * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5724  memory allocation routines
5725  * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5726  * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5727  usage of 'assert' in non-WIN32 code
5728  * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5729  avoid infinite loop
5730  * Always call 'fREe()' rather than 'free()'
5731 
5732  V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5733  * Fixed ordering problem with boundary-stamping
5734 
5735  V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5736  * Added pvalloc, as recommended by H.J. Liu
5737  * Added 64bit pointer support mainly from Wolfram Gloger
5738  * Added anonymously donated WIN32 sbrk emulation
5739  * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5740  * malloc_extend_top: fix mask error that caused wastage after
5741  foreign sbrks
5742  * Add linux mremap support code from HJ Liu
5743 
5744  V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5745  * Integrated most documentation with the code.
5746  * Add support for mmap, with help from
5747  Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5748  * Use last_remainder in more cases.
5749  * Pack bins using idea from colin@nyx10.cs.du.edu
5750  * Use ordered bins instead of best-fit threshold
5751  * Eliminate block-local decls to simplify tracing and debugging.
5752  * Support another case of realloc via move into top
5753  * Fix error occurring when initial sbrk_base not word-aligned.
5754  * Rely on page size for units instead of SBRK_UNIT to
5755  avoid surprises about sbrk alignment conventions.
5756  * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5757  (raymond@es.ele.tue.nl) for the suggestion.
5758  * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5759  * More precautions for cases where other routines call sbrk,
5760  courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5761  * Added macros etc., allowing use in linux libc from
5762  H.J. Lu (hjl@gnu.ai.mit.edu)
5763  * Inverted this history list
5764 
5765  V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5766  * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5767  * Removed all preallocation code since under current scheme
5768  the work required to undo bad preallocations exceeds
5769  the work saved in good cases for most test programs.
5770  * No longer use return list or unconsolidated bins since
5771  no scheme using them consistently outperforms those that don't
5772  given above changes.
5773  * Use best fit for very large chunks to prevent some worst-cases.
5774  * Added some support for debugging
5775 
5776  V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5777  * Removed footers when chunks are in use. Thanks to
5778  Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5779 
5780  V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5781  * Added malloc_trim, with help from Wolfram Gloger
5782  (wmglo@Dent.MED.Uni-Muenchen.DE).
5783 
5784  V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5785 
5786  V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5787  * realloc: try to expand in both directions
5788  * malloc: swap order of clean-bin strategy;
5789  * realloc: only conditionally expand backwards
5790  * Try not to scavenge used bins
5791  * Use bin counts as a guide to preallocation
5792  * Occasionally bin return list chunks in first scan
5793  * Add a few optimizations from colin@nyx10.cs.du.edu
5794 
5795  V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5796  * faster bin computation & slightly different binning
5797  * merged all consolidations to one part of malloc proper
5798  (eliminating old malloc_find_space & malloc_clean_bin)
5799  * Scan 2 returns chunks (not just 1)
5800  * Propagate failure in realloc if malloc returns 0
5801  * Add stuff to allow compilation on non-ANSI compilers
5802  from kpv@research.att.com
5803 
5804  V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5805  * removed potential for odd address access in prev_chunk
5806  * removed dependency on getpagesize.h
5807  * misc cosmetics and a bit more internal documentation
5808  * anticosmetics: mangled names in macros to evade debugger strangeness
5809  * tested on sparc, hp-700, dec-mips, rs6000
5810  with gcc & native cc (hp, dec only) allowing
5811  Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5812 
5813  Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5814  * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5815  structure of old version, but most details differ.)
5816 
5817 */
5818 