Update PCRE2 to 10.45

New upstream version.

Important note: 10.45 is missing a licence file for the sljit
dependency. This is tracked upstream at

https://github.com/PCRE2Project/pcre2/issues/686

so it may get fixed in 10.46 (in which case the import script /
qt_attribution.json may need to be amended).

[ChangeLog][Third-Party Code] PCRE2 was updated to version 10.45.

Pick-to: 6.8 6.5
Change-Id: Ifa0430782bed8ffb1c26f44ca6eb06cd26aaa1f9
Reviewed-by: Mårten Nordheim <marten.nordheim@qt.io>
(cherry picked from commit 3cb58b053c26603ba1d541b3c9c51ec25212ee80)
Reviewed-by: Qt Cherry-pick Bot <cherrypick_bot@qt-project.org>
Author: Giuseppe D'Angelo, 2025-02-05 15:51:53 +01:00 (committed by Qt Cherry-pick Bot)
parent a639174daa
commit 0730c7ea55
74 changed files with 19584 additions and 10669 deletions


@ -1,36 +0,0 @@
THE MAIN PCRE2 LIBRARY CODE
---------------------------
Written by: Philip Hazel
Email local part: Philip.Hazel
Email domain: gmail.com
Retired from University of Cambridge Computing Service,
Cambridge, England.
Copyright (c) 1997-2024 University of Cambridge
All rights reserved
PCRE2 JUST-IN-TIME COMPILATION SUPPORT
--------------------------------------
Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright(c) 2010-2024 Zoltan Herczeg
All rights reserved.
STACK-LESS JUST-IN-TIME COMPILER
--------------------------------
Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright(c) 2009-2024 Zoltan Herczeg
All rights reserved.
####

src/3rdparty/pcre2/AUTHORS.md (vendored, new file, 200 lines)

@ -0,0 +1,200 @@
PCRE2 Authorship and Contributors
=================================
COPYRIGHT
---------
Please see the file [LICENCE](./LICENCE.md) in the PCRE2 distribution for
copyright details.
MAINTAINERS
-----------
The PCRE and PCRE2 libraries were authored and maintained by Philip Hazel.
Since 2024, the contributors with administrator access to the project are now
Nicholas Wilson and Zoltán Herczeg. See the file [SECURITY](./SECURITY.md) for
GPG keys.
Both administrators are volunteers acting in a personal capacity.
<table>
<thead>
<tr>
<th>Name</th>
<th>Role</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Nicholas Wilson<br/>
`nicholas@nicholaswilson.me.uk`<br/>
Currently of Microsoft Research Cambridge, UK
</td>
<td>
* General project administration & maintenance
* Release management
* Code maintenance
</td>
</tr>
<tr>
<td>
Zoltán Herczeg<br/>
`hzmester@freemail.hu`<br/>
Currently of the University of Szeged, Hungary
</td>
<td>
* Code maintenance
* Ownership of `sljit` and PCRE2's JIT
</td>
</tr>
</tbody>
</table>
CONTRIBUTORS
------------
Many others have participated and contributed to PCRE2 over its history.
The maintainers are grateful for all contributions and participation over the
years. We apologise for any names we have forgotten.
We are especially grateful to Philip Hazel, creator of PCRE and PCRE2, and
maintainer from 1997 to 2024.
All names listed alphabetically.
### Contributors to PCRE2
This list includes names up until the PCRE2 10.44 release. New names will be
added from the Git history on each release.
Scott Bell
Carlo Marcelo Arenas Belón
Edward Betts
Jan-Willem Blokland
Ross Burton
Dmitry Cherniachenko
Alexey Chupahin
Jessica Clarke
Alejandro Colomar
Jeremie Courreges-Anglas
Addison Crump
Alex Dowad
Daniel Engberg
Daniel Richard G
David Gaussmann
Andrey Gorbachev
Jordan Griege
Jason Hood
Bumsu Hyeon
Roy Ivy
Martin Joerg
Guillem Jover
Ralf Junker
Ayesh Karunaratne
Michael Kaufmann
Yunho Kim
Joshua Kinard
David Korczynski
Uwe Korn
Jonas Kvinge
Kristian Larsson
Kai Lu
Behzod Mansurov
B. Scott Michel
Nathan Moinvaziri
Mike Munday
Marc Mutz
Fabio Pagani
Christian Persch
Tristan Ross
William A Rowe Jr
David Seifert
Yaakov Selkowitz
Rich Siegel
Karl Skomski
Maciej Sroczyński
Wolfgang Stöggl
Thomas Tempelmann
Greg Thain
Lucas Trzesniewski
Theodore Tsirpanis
Matthew Vernon
Rémi Verschelde
Thomas Voss
Ezekiel Warren
Carl Weaver
Chris Wilson
Amin Yahyaabadi
Joe Zhang
### Contributors to PCRE1
These people contributed either by sending patches or reporting serious issues.
Irfan Adilovic
Alexander Barkov
Daniel Bergström
David Burgess
Ross Burton
David Byron
Fred Cox
Christian Ehrlicher
Tom Fortmann
Lionel Fourquaux
Mike Frysinger
Daniel Richard G
Dair Gran
"Graycode" (Red Hat Product Security)
Viktor Griph
Wen Guanxing
Robin Houston
Martin Jerabek
Peter Kankowski
Stephen Kelly
Yunho Kim
Joshua Kinard
Carsten Klein
Evgeny Kotkov
Ronald Landheer-Cieslak
Alan Lehotsky
Dmitry V. Levin
Nuno Lopes
Kai Lu
Giuseppe Maxia
Dan Mooney
Marc Mutz
Markus Oberhumer
Sheri Pierce
Petr Pisar
Ari Pollak
Bob Rossi
Ruiger Rill
Michael Shigorin
Rich Siegel
Craig Silverstein (C++ wrapper)
Karl Skomski
Paul Sokolovsky
Stan Switzer
Ian Taylor
Mark Tetrode
Jeff Trawick
Steven Van Ingelgem
Lawrence Velazquez
Jiong Wang
Stefan Weber
Chris Wilson
Thanks go to Jeffrey Friedl for testing and debugging assistance.


@ -15,6 +15,7 @@ qt_internal_add_3rdparty_library(BundledPcre2
src/pcre2_chartables.c
src/pcre2_chkdint.c
src/pcre2_compile.c
src/pcre2_compile_class.c
src/pcre2_config.c
src/pcre2_context.c
src/pcre2_dfa_match.c


@ -1,22 +1,25 @@
Copyright 2013-2013 Tilera Corporation(jiwang@tilera.com). All rights reserved.
Copyright Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other materials
provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
* Stack-less Just-In-Time compiler
*
* Copyright Zoltan Herczeg (hzmester@freemail.hu). All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list
* of conditions and the following disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) AND CONTRIBUTORS ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL THE COPYRIGHT HOLDER(S) OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/


@ -1,5 +1,8 @@
PCRE2 LICENCE
-------------
PCRE2 License
=============
| SPDX-License-Identifier: | BSD-3-Clause WITH PCRE2-exception |
|---------|-------|
PCRE2 is a library of functions to support regular expressions whose syntax
and semantics are as close as possible to those of the Perl 5 language.
@ -16,40 +19,46 @@ optimize pattern matching. This is an optional feature that can be omitted when
the library is built.
THE BASIC LIBRARY FUNCTIONS
---------------------------
COPYRIGHT
---------
Written by: Philip Hazel
Email local part: Philip.Hazel
Email domain: gmail.com
### The basic library functions
Retired from University of Cambridge Computing Service,
Cambridge, England.
Written by: Philip Hazel
Email local part: Philip.Hazel
Email domain: gmail.com
Copyright (c) 1997-2024 University of Cambridge
All rights reserved.
Retired from University of Cambridge Computing Service,
Cambridge, England.
Copyright (c) 1997-2007 University of Cambridge
Copyright (c) 2007-2024 Philip Hazel
All rights reserved.
PCRE2 JUST-IN-TIME COMPILATION SUPPORT
--------------------------------------
### PCRE2 Just-In-Time compilation support
Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright(c) 2010-2024 Zoltan Herczeg
All rights reserved.
Copyright (c) 2010-2024 Zoltan Herczeg
All rights reserved.
### Stack-less Just-In-Time compiler
STACK-LESS JUST-IN-TIME COMPILER
--------------------------------
Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Written by: Zoltan Herczeg
Email local part: hzmester
Email domain: freemail.hu
Copyright (c) 2009-2024 Zoltan Herczeg
All rights reserved.
Copyright(c) 2009-2024 Zoltan Herczeg
All rights reserved.
### All other contributions
Many other contributors have participated in the authorship of PCRE2. As PCRE2
has never required a Contributor Licensing Agreement, or other copyright
assignment agreement, all contributions have copyright retained by each
original contributor or their employer.
THE "BSD" LICENCE
@ -58,14 +67,14 @@ THE "BSD" LICENCE
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notices,
* Redistributions of source code must retain the above copyright notices,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
* Redistributions in binary form must reproduce the above copyright
notices, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the University of Cambridge nor the names of any
* Neither the name of the University of Cambridge nor the names of any
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.


@ -29,7 +29,7 @@
#ifdef __cplusplus
extern "C" {
#endif
#endif /* __cplusplus */
/*
This file contains the basic configuration options for the SLJIT compiler
@ -47,19 +47,19 @@ extern "C" {
#ifndef SLJIT_UTIL_STACK
/* Enabled by default */
#define SLJIT_UTIL_STACK 1
#endif
#endif /* SLJIT_UTIL_STACK */
/* Uses user provided allocator to allocate the stack (see SLJIT_UTIL_STACK) */
#ifndef SLJIT_UTIL_SIMPLE_STACK_ALLOCATION
/* Disabled by default */
#define SLJIT_UTIL_SIMPLE_STACK_ALLOCATION 0
#endif
#endif /* SLJIT_UTIL_SIMPLE_STACK_ALLOCATION */
/* Single threaded application. Does not require any locks. */
#ifndef SLJIT_SINGLE_THREADED
/* Disabled by default. */
#define SLJIT_SINGLE_THREADED 0
#endif
#endif /* SLJIT_SINGLE_THREADED */
/* --------------------------------------------------------------------- */
/* Configuration */
@ -70,7 +70,7 @@ extern "C" {
#ifndef SLJIT_STD_MACROS_DEFINED
/* Disabled by default. */
#define SLJIT_STD_MACROS_DEFINED 0
#endif
#endif /* SLJIT_STD_MACROS_DEFINED */
/* Executable code allocation:
If SLJIT_EXECUTABLE_ALLOCATOR is not defined, the application should
@ -93,7 +93,7 @@ extern "C" {
#ifndef SLJIT_PROT_EXECUTABLE_ALLOCATOR
/* Disabled by default. */
#define SLJIT_PROT_EXECUTABLE_ALLOCATOR 0
#endif
#endif /* SLJIT_PROT_EXECUTABLE_ALLOCATOR */
/* When SLJIT_WX_EXECUTABLE_ALLOCATOR is enabled SLJIT uses an
allocator which does not set writable and executable permission
@ -104,7 +104,7 @@ extern "C" {
#ifndef SLJIT_WX_EXECUTABLE_ALLOCATOR
/* Disabled by default. */
#define SLJIT_WX_EXECUTABLE_ALLOCATOR 0
#endif
#endif /* SLJIT_WX_EXECUTABLE_ALLOCATOR */
#endif /* !SLJIT_EXECUTABLE_ALLOCATOR */
@ -112,19 +112,19 @@ extern "C" {
#ifndef SLJIT_ARGUMENT_CHECKS
/* Disabled by default */
#define SLJIT_ARGUMENT_CHECKS 0
#endif
#endif /* SLJIT_ARGUMENT_CHECKS */
/* Debug checks (assertions, etc.). */
#ifndef SLJIT_DEBUG
/* Enabled by default */
#define SLJIT_DEBUG 1
#endif
#endif /* SLJIT_DEBUG */
/* Verbose operations. */
#ifndef SLJIT_VERBOSE
/* Enabled by default */
#define SLJIT_VERBOSE 1
#endif
#endif /* SLJIT_VERBOSE */
/*
SLJIT_IS_FPU_AVAILABLE
@ -137,6 +137,6 @@ extern "C" {
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif /* __cplusplus */
#endif /* SLJIT_CONFIG_H_ */


@ -169,7 +169,7 @@
#if (defined SLJIT_CONFIG_ARM_V6 && SLJIT_CONFIG_ARM_V6) || (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7) \
|| (defined SLJIT_CONFIG_ARM_THUMB2 && SLJIT_CONFIG_ARM_THUMB2)
#define SLJIT_CONFIG_ARM_32 1
#endif
#endif /* SLJIT_CONFIG_ARM_V6 || SLJIT_CONFIG_ARM_V7 || SLJIT_CONFIG_ARM_THUMB2 */
#if (defined SLJIT_CONFIG_X86_32 && SLJIT_CONFIG_X86_32) || (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64)
#define SLJIT_CONFIG_X86 1


@ -27,20 +27,6 @@
#ifndef SLJIT_CONFIG_INTERNAL_H_
#define SLJIT_CONFIG_INTERNAL_H_
#if (defined SLJIT_VERBOSE && SLJIT_VERBOSE) \
|| (defined SLJIT_DEBUG && SLJIT_DEBUG && (!defined(SLJIT_ASSERT) || !defined(SLJIT_UNREACHABLE)))
#include <stdio.h>
#endif
#if (defined SLJIT_DEBUG && SLJIT_DEBUG \
&& (!defined(SLJIT_ASSERT) || !defined(SLJIT_UNREACHABLE) || !defined(SLJIT_HALT_PROCESS)))
#include <stdlib.h>
#endif
#ifdef __cplusplus
extern "C" {
#endif
/*
SLJIT defines the following architecture dependent types and macros:
@ -64,16 +50,26 @@ extern "C" {
SLJIT_MASKED_SHIFT : all word shifts are always masked
SLJIT_MASKED_SHIFT32 : all 32 bit shifts are always masked
SLJIT_INDIRECT_CALL : see SLJIT_FUNC_ADDR() for more information
SLJIT_UPPER_BITS_IGNORED : 32 bit operations ignores the upper bits of source registers
SLJIT_UPPER_BITS_ZERO_EXTENDED : 32 bit operations clears the upper bits of destination registers
SLJIT_UPPER_BITS_SIGN_EXTENDED : 32 bit operations replicates the sign bit in the upper bits of destination registers
SLJIT_UPPER_BITS_PRESERVED : 32 bit operations preserves the upper bits of destination registers
Constants:
SLJIT_NUMBER_OF_REGISTERS : number of available registers
SLJIT_NUMBER_OF_SCRATCH_REGISTERS : number of available scratch registers
SLJIT_NUMBER_OF_SAVED_REGISTERS : number of available saved registers
SLJIT_NUMBER_OF_FLOAT_REGISTERS : number of available floating point registers
SLJIT_NUMBER_OF_SCRATCH_FLOAT_REGISTERS : number of available floating point scratch registers
SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS : number of available floating point saved registers
SLJIT_NUMBER_OF_SCRATCH_FLOAT_REGISTERS : number of available scratch floating point registers
SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS : number of available saved floating point registers
SLJIT_NUMBER_OF_VECTOR_REGISTERS : number of available vector registers
SLJIT_NUMBER_OF_SCRATCH_VECTOR_REGISTERS : number of available scratch vector registers
SLJIT_NUMBER_OF_SAVED_VECTOR_REGISTERS : number of available saved vector registers
SLJIT_NUMBER_OF_TEMPORARY_REGISTERS : number of available temporary registers
SLJIT_NUMBER_OF_TEMPORARY_FLOAT_REGISTERS : number of available temporary floating point registers
SLJIT_NUMBER_OF_TEMPORARY_VECTOR_REGISTERS : number of available temporary vector registers
SLJIT_SEPARATE_VECTOR_REGISTERS : if this macro is defined, the vector registers do not
overlap with floating point registers
SLJIT_WORD_SHIFT : the shift required to apply when accessing a sljit_sw/sljit_uw array by index
SLJIT_F32_SHIFT : the shift required to apply when accessing
a single precision floating point array by index
@ -98,16 +94,33 @@ extern "C" {
SLJIT_TMP_R(i) : accessing temporary registers
SLJIT_TMP_FR0 .. FR9 : accessing temporary floating point registers
SLJIT_TMP_FR(i) : accessing temporary floating point registers
SLJIT_TMP_VR0 .. VR9 : accessing temporary vector registers
SLJIT_TMP_VR(i) : accessing temporary vector registers
SLJIT_TMP_DEST_REG : a temporary register for results
SLJIT_TMP_MEM_REG : a temporary base register for accessing memory
(can be the same as SLJIT_TMP_DEST_REG)
SLJIT_TMP_DEST_FREG : a temporary register for float results
SLJIT_TMP_DEST_VREG : a temporary register for vector results
SLJIT_FUNC : calling convention attribute for both calling JIT from C and C calling back from JIT
SLJIT_W(number) : defining 64 bit constants on 64 bit architectures (platform independent helper)
SLJIT_F64_SECOND(reg) : provides the register index of the second 32 bit part of a 64 bit
floating point register when SLJIT_HAS_F64_AS_F32_PAIR returns non-zero
*/
#if (defined SLJIT_VERBOSE && SLJIT_VERBOSE) \
|| (defined SLJIT_DEBUG && SLJIT_DEBUG && (!defined(SLJIT_ASSERT) || !defined(SLJIT_UNREACHABLE)))
#include <stdio.h>
#endif
#if (defined SLJIT_DEBUG && SLJIT_DEBUG \
&& (!defined(SLJIT_ASSERT) || !defined(SLJIT_UNREACHABLE) || !defined(SLJIT_HALT_PROCESS)))
#include <stdlib.h>
#endif
#ifdef __cplusplus
extern "C" {
#endif
/***********************************************************/
/* Intel Control-flow Enforcement Technology (CET) support. */
/***********************************************************/
@ -285,7 +298,7 @@ extern "C" {
#elif defined(_WIN32)
#define SLJIT_CACHE_FLUSH(from, to) \
FlushInstructionCache(GetCurrentProcess(), (void*)(from), (char*)(to) - (char*)(from))
FlushInstructionCache(GetCurrentProcess(), (void*)(from), (size_t)((char*)(to) - (char*)(from)))
#elif (defined(__GNUC__) && (__GNUC__ >= 5 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))) || defined(__clang__)
@ -553,7 +566,7 @@ determine the next executed instruction after return. */
#if (defined SLJIT_EXECUTABLE_ALLOCATOR && SLJIT_EXECUTABLE_ALLOCATOR)
SLJIT_API_FUNC_ATTRIBUTE void* sljit_malloc_exec(sljit_uw size);
SLJIT_API_FUNC_ATTRIBUTE void sljit_free_exec(void* ptr);
SLJIT_API_FUNC_ATTRIBUTE void sljit_free_unused_memory_exec(void);
/* Note: sljitLir.h also defines sljit_free_unused_memory_exec() function. */
#define SLJIT_BUILTIN_MALLOC_EXEC(size, exec_allocator_data) sljit_malloc_exec(size)
#define SLJIT_BUILTIN_FREE_EXEC(ptr, exec_allocator_data) sljit_free_exec(ptr)
@ -591,10 +604,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_TMP_DEST_REG SLJIT_TMP_R0
#define SLJIT_TMP_MEM_REG SLJIT_TMP_R0
#define SLJIT_TMP_DEST_FREG SLJIT_TMP_FR0
#define SLJIT_LOCALS_OFFSET_BASE (8 * SSIZE_OF(sw))
#define SLJIT_LOCALS_OFFSET_BASE (8 * (sljit_s32)sizeof(sljit_sw))
#define SLJIT_PREF_SHIFT_REG SLJIT_R2
#define SLJIT_MASKED_SHIFT 1
#define SLJIT_MASKED_SHIFT32 1
#define SLJIT_UPPER_BITS_IGNORED 1
#define SLJIT_UPPER_BITS_ZERO_EXTENDED 1
#elif (defined SLJIT_CONFIG_X86_64 && SLJIT_CONFIG_X86_64)
@ -609,7 +624,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#else /* _WIN64 */
#define SLJIT_NUMBER_OF_SAVED_REGISTERS 8
#define SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS 10
#define SLJIT_LOCALS_OFFSET_BASE (4 * SSIZE_OF(sw))
#define SLJIT_LOCALS_OFFSET_BASE (4 * (sljit_s32)sizeof(sljit_sw))
#endif /* !_WIN64 */
#define SLJIT_TMP_DEST_REG SLJIT_TMP_R0
#define SLJIT_TMP_MEM_REG SLJIT_TMP_R0
@ -617,6 +632,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_PREF_SHIFT_REG SLJIT_R3
#define SLJIT_MASKED_SHIFT 1
#define SLJIT_MASKED_SHIFT32 1
#define SLJIT_UPPER_BITS_IGNORED 1
#define SLJIT_UPPER_BITS_ZERO_EXTENDED 1
#elif (defined SLJIT_CONFIG_ARM_32 && SLJIT_CONFIG_ARM_32)
@ -645,6 +662,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_LOCALS_OFFSET_BASE (2 * (sljit_s32)sizeof(sljit_sw))
#define SLJIT_MASKED_SHIFT 1
#define SLJIT_MASKED_SHIFT32 1
#define SLJIT_UPPER_BITS_IGNORED 1
#define SLJIT_UPPER_BITS_ZERO_EXTENDED 1
#elif (defined SLJIT_CONFIG_PPC && SLJIT_CONFIG_PPC)
@ -665,6 +684,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#else
#define SLJIT_LOCALS_OFFSET_BASE (3 * (sljit_s32)sizeof(sljit_sw))
#endif /* SLJIT_CONFIG_PPC_64 || _AIX */
#define SLJIT_UPPER_BITS_IGNORED 1
#elif (defined SLJIT_CONFIG_MIPS && SLJIT_CONFIG_MIPS)
@ -686,6 +706,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_TMP_DEST_FREG SLJIT_TMP_FR0
#define SLJIT_MASKED_SHIFT 1
#define SLJIT_MASKED_SHIFT32 1
#define SLJIT_UPPER_BITS_SIGN_EXTENDED 1
#elif (defined SLJIT_CONFIG_RISCV && SLJIT_CONFIG_RISCV)
@ -695,12 +716,19 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_NUMBER_OF_FLOAT_REGISTERS 30
#define SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS 12
#define SLJIT_NUMBER_OF_TEMPORARY_FLOAT_REGISTERS 2
#define SLJIT_SEPARATE_VECTOR_REGISTERS 1
#define SLJIT_NUMBER_OF_VECTOR_REGISTERS 30
#define SLJIT_NUMBER_OF_SAVED_VECTOR_REGISTERS 0
#define SLJIT_NUMBER_OF_TEMPORARY_VECTOR_REGISTERS 2
#define SLJIT_TMP_DEST_REG SLJIT_TMP_R1
#define SLJIT_TMP_MEM_REG SLJIT_TMP_R1
#define SLJIT_TMP_DEST_FREG SLJIT_TMP_FR0
#define SLJIT_TMP_DEST_VREG SLJIT_TMP_VR0
#define SLJIT_LOCALS_OFFSET_BASE 0
#define SLJIT_MASKED_SHIFT 1
#define SLJIT_MASKED_SHIFT32 1
#define SLJIT_UPPER_BITS_IGNORED 1
#define SLJIT_UPPER_BITS_SIGN_EXTENDED 1
#elif (defined SLJIT_CONFIG_S390X && SLJIT_CONFIG_S390X)
@ -736,6 +764,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_TMP_DEST_FREG SLJIT_TMP_FR0
#define SLJIT_LOCALS_OFFSET_BASE SLJIT_S390X_DEFAULT_STACK_FRAME_SIZE
#define SLJIT_MASKED_SHIFT 1
#define SLJIT_UPPER_BITS_IGNORED 1
#define SLJIT_UPPER_BITS_PRESERVED 1
#elif (defined SLJIT_CONFIG_LOONGARCH && SLJIT_CONFIG_LOONGARCH)
@ -751,6 +781,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_LOCALS_OFFSET_BASE 0
#define SLJIT_MASKED_SHIFT 1
#define SLJIT_MASKED_SHIFT32 1
#define SLJIT_UPPER_BITS_SIGN_EXTENDED 1
#elif (defined SLJIT_CONFIG_UNSUPPORTED && SLJIT_CONFIG_UNSUPPORTED)
@ -768,6 +799,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#endif
#if !(defined SLJIT_SEPARATE_VECTOR_REGISTERS && SLJIT_SEPARATE_VECTOR_REGISTERS)
#define SLJIT_NUMBER_OF_VECTOR_REGISTERS (SLJIT_NUMBER_OF_FLOAT_REGISTERS)
#define SLJIT_NUMBER_OF_SAVED_VECTOR_REGISTERS (SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS)
#define SLJIT_NUMBER_OF_TEMPORARY_VECTOR_REGISTERS (SLJIT_NUMBER_OF_TEMPORARY_FLOAT_REGISTERS)
#define SLJIT_TMP_DEST_VREG (SLJIT_TMP_DEST_FREG)
#endif /* !SLJIT_SEPARATE_VECTOR_REGISTERS */
#define SLJIT_LOCALS_OFFSET (SLJIT_LOCALS_OFFSET_BASE)
#define SLJIT_NUMBER_OF_SCRATCH_REGISTERS \
@ -776,12 +814,27 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_NUMBER_OF_SCRATCH_FLOAT_REGISTERS \
(SLJIT_NUMBER_OF_FLOAT_REGISTERS - SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS)
#define SLJIT_NUMBER_OF_SCRATCH_VECTOR_REGISTERS \
(SLJIT_NUMBER_OF_VECTOR_REGISTERS - SLJIT_NUMBER_OF_SAVED_VECTOR_REGISTERS)
#if (defined SLJIT_UPPER_BITS_ZERO_EXTENDED && SLJIT_UPPER_BITS_ZERO_EXTENDED) \
+ (defined SLJIT_UPPER_BITS_SIGN_EXTENDED && SLJIT_UPPER_BITS_SIGN_EXTENDED) \
+ (defined SLJIT_UPPER_BITS_PRESERVED && SLJIT_UPPER_BITS_PRESERVED) > 1
#error "Invalid upper bits defintion"
#endif
#if (defined SLJIT_UPPER_BITS_PRESERVED && SLJIT_UPPER_BITS_PRESERVED) \
&& !(defined SLJIT_UPPER_BITS_IGNORED && SLJIT_UPPER_BITS_IGNORED)
#error "Upper bits preserved requires bits ignored"
#endif
/**********************************/
/* Temporary register management. */
/**********************************/
#define SLJIT_TMP_REGISTER_BASE (SLJIT_NUMBER_OF_REGISTERS + 2)
#define SLJIT_TMP_FREGISTER_BASE (SLJIT_NUMBER_OF_FLOAT_REGISTERS + 1)
#define SLJIT_TMP_VREGISTER_BASE (SLJIT_NUMBER_OF_VECTOR_REGISTERS + 1)
/* WARNING: Accessing temporary registers is not recommended, because they
are also used by the JIT compiler for various computations. Using them
@ -815,6 +868,18 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_sw sljit_exec_offset(void *code);
#define SLJIT_TMP_FR9 (SLJIT_TMP_FREGISTER_BASE + 9)
#define SLJIT_TMP_FR(i) (SLJIT_TMP_FREGISTER_BASE + (i))
#define SLJIT_TMP_VR0 (SLJIT_TMP_VREGISTER_BASE + 0)
#define SLJIT_TMP_VR1 (SLJIT_TMP_VREGISTER_BASE + 1)
#define SLJIT_TMP_VR2 (SLJIT_TMP_VREGISTER_BASE + 2)
#define SLJIT_TMP_VR3 (SLJIT_TMP_VREGISTER_BASE + 3)
#define SLJIT_TMP_VR4 (SLJIT_TMP_VREGISTER_BASE + 4)
#define SLJIT_TMP_VR5 (SLJIT_TMP_VREGISTER_BASE + 5)
#define SLJIT_TMP_VR6 (SLJIT_TMP_VREGISTER_BASE + 6)
#define SLJIT_TMP_VR7 (SLJIT_TMP_VREGISTER_BASE + 7)
#define SLJIT_TMP_VR8 (SLJIT_TMP_VREGISTER_BASE + 8)
#define SLJIT_TMP_VR9 (SLJIT_TMP_VREGISTER_BASE + 9)
#define SLJIT_TMP_VR(i) (SLJIT_TMP_VREGISTER_BASE + (i))
/********************************/
/* CPU status flags management. */
/********************************/


@ -87,7 +87,7 @@ of sljitConfigInternal.h */
#ifdef __cplusplus
extern "C" {
#endif
#endif /* __cplusplus */
/* Version numbers. */
#define SLJIT_MAJOR_VERSION 0
@ -251,7 +251,7 @@ extern "C" {
#define SLJIT_FS7 (SLJIT_NUMBER_OF_FLOAT_REGISTERS - 7)
#define SLJIT_FS8 (SLJIT_NUMBER_OF_FLOAT_REGISTERS - 8)
#define SLJIT_FS9 (SLJIT_NUMBER_OF_FLOAT_REGISTERS - 9)
/* All S registers provided by the architecture can be accessed by SLJIT_FS(i)
/* All FS registers provided by the architecture can be accessed by SLJIT_FS(i)
The i parameter must be >= 0 and < SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS. */
#define SLJIT_FS(i) (SLJIT_NUMBER_OF_FLOAT_REGISTERS - (i))
@ -262,6 +262,52 @@ extern "C" {
#define SLJIT_RETURN_FREG SLJIT_FR0
/* --------------------------------------------------------------------- */
/* Vector registers */
/* --------------------------------------------------------------------- */
/* Vector registers are storage areas, which are used for Single Instruction
Multiple Data (SIMD) computations. The VR and VS register sets overlap
in the same way as R and S register sets. See above.
The storage space of vector registers often overlap with floating point
registers. In this case setting the value of SLJIT_VR(i) destroys the
value of SLJIT_FR(i) and vice versa. See SLJIT_SEPARATE_VECTOR_REGISTERS
macro. */
/* Vector scratch registers. */
#define SLJIT_VR0 1
#define SLJIT_VR1 2
#define SLJIT_VR2 3
#define SLJIT_VR3 4
#define SLJIT_VR4 5
#define SLJIT_VR5 6
#define SLJIT_VR6 7
#define SLJIT_VR7 8
#define SLJIT_VR8 9
#define SLJIT_VR9 10
/* All VR registers provided by the architecture can be accessed by SLJIT_VR(i)
The i parameter must be >= 0 and < SLJIT_NUMBER_OF_VECTOR_REGISTERS. */
#define SLJIT_VR(i) (1 + (i))
/* Vector saved registers. */
#define SLJIT_VS0 (SLJIT_NUMBER_OF_VECTOR_REGISTERS)
#define SLJIT_VS1 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 1)
#define SLJIT_VS2 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 2)
#define SLJIT_VS3 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 3)
#define SLJIT_VS4 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 4)
#define SLJIT_VS5 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 5)
#define SLJIT_VS6 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 6)
#define SLJIT_VS7 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 7)
#define SLJIT_VS8 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 8)
#define SLJIT_VS9 (SLJIT_NUMBER_OF_VECTOR_REGISTERS - 9)
/* All VS registers provided by the architecture can be accessed by SLJIT_VS(i)
The i parameter must be >= 0 and < SLJIT_NUMBER_OF_SAVED_VECTOR_REGISTERS. */
#define SLJIT_VS(i) (SLJIT_NUMBER_OF_VECTOR_REGISTERS - (i))
/* Vector registers >= SLJIT_FIRST_SAVED_VECTOR_REG are saved registers. */
#define SLJIT_FIRST_SAVED_VECTOR_REG (SLJIT_VS0 - SLJIT_NUMBER_OF_SAVED_VECTOR_REGISTERS + 1)
/* --------------------------------------------------------------------- */
/* Argument type definitions */
/* --------------------------------------------------------------------- */
@ -483,6 +529,15 @@ struct sljit_compiler {
sljit_s32 fscratches;
/* Available float saved registers. */
sljit_s32 fsaveds;
#if (defined SLJIT_SEPARATE_VECTOR_REGISTERS && SLJIT_SEPARATE_VECTOR_REGISTERS) \
|| (defined SLJIT_ARGUMENT_CHECKS && SLJIT_ARGUMENT_CHECKS) \
|| (defined SLJIT_DEBUG && SLJIT_DEBUG) \
|| (defined SLJIT_VERBOSE && SLJIT_VERBOSE)
/* Available vector scratch registers. */
sljit_s32 vscratches;
/* Available vector saved registers. */
sljit_s32 vsaveds;
#endif /* SLJIT_SEPARATE_VECTOR_REGISTERS || SLJIT_ARGUMENT_CHECKS || SLJIT_DEBUG || SLJIT_VERBOSE */
/* Local stack size. */
sljit_s32 local_size;
/* Maximum code size. */
@ -563,6 +618,7 @@ struct sljit_compiler {
FILE* verbose;
#endif /* SLJIT_VERBOSE */
/* Note: SLJIT_DEBUG enables SLJIT_ARGUMENT_CHECKS. */
#if (defined SLJIT_ARGUMENT_CHECKS && SLJIT_ARGUMENT_CHECKS) \
|| (defined SLJIT_DEBUG && SLJIT_DEBUG)
/* Flags specified by the last arithmetic instruction.
@ -577,6 +633,13 @@ struct sljit_compiler {
#if (defined SLJIT_ARGUMENT_CHECKS && SLJIT_ARGUMENT_CHECKS) \
|| (defined SLJIT_DEBUG && SLJIT_DEBUG) \
|| (defined SLJIT_VERBOSE && SLJIT_VERBOSE)
#if !(defined SLJIT_SEPARATE_VECTOR_REGISTERS && SLJIT_SEPARATE_VECTOR_REGISTERS)
/* Available float scratch registers. */
sljit_s32 real_fscratches;
/* Available float saved registers. */
sljit_s32 real_fsaveds;
#endif /* !SLJIT_SEPARATE_VECTOR_REGISTERS */
/* Trust arguments when an API function is called.
Used internally for calling API functions. */
sljit_s32 skip_checks;
@ -634,7 +697,7 @@ static SLJIT_INLINE void* sljit_compiler_get_user_data(struct sljit_compiler *co
#if (defined SLJIT_VERBOSE && SLJIT_VERBOSE)
/* Passing NULL disables verbose. */
SLJIT_API_FUNC_ATTRIBUTE void sljit_compiler_verbose(struct sljit_compiler *compiler, FILE* verbose);
#endif
#endif /* SLJIT_VERBOSE */
/* Option bits for sljit_generate_code. */
@ -680,7 +743,9 @@ static SLJIT_INLINE sljit_uw sljit_get_generated_code_size(struct sljit_compiler
support while others (e.g. move with update) are emulated if not available.
However, even when a feature is emulated, specialized code paths may be
faster than the emulation. Some limitations are emulated as well so their
general case is supported but it has extra performance costs. */
general case is supported but it has extra performance costs.
Note: sljitConfigInternal.h also provides several feature detection macros. */
/* [Not emulated] Floating-point support is available. */
#define SLJIT_HAS_FPU 0
@ -715,20 +780,22 @@ static SLJIT_INLINE sljit_uw sljit_get_generated_code_size(struct sljit_compiler
a simd operation represents the same 128 bit register, and both SLJIT_FR0
and SLJIT_FR1 are overwritten. */
#define SLJIT_SIMD_REGS_ARE_PAIRS 13
/* [Not emulated] Atomic support is available (fine-grained). */
/* [Not emulated] Atomic support is available. */
#define SLJIT_HAS_ATOMIC 14
/* [Not emulated] Memory barrier support is available. */
#define SLJIT_HAS_MEMORY_BARRIER 15
#if (defined SLJIT_CONFIG_X86 && SLJIT_CONFIG_X86)
/* [Not emulated] AVX support is available on x86. */
#define SLJIT_HAS_AVX 100
/* [Not emulated] AVX2 support is available on x86. */
#define SLJIT_HAS_AVX2 101
#endif
#endif /* SLJIT_CONFIG_X86 */
#if (defined SLJIT_CONFIG_LOONGARCH)
/* [Not emulated] LASX support is available on LoongArch */
#define SLJIT_HAS_LASX 201
#endif
#endif /* SLJIT_CONFIG_LOONGARCH */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type);
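As an illustration of how these feature bits are intended to be used, here is a minimal hedged sketch; it is not part of the diff, and it assumes "compiler" is an already-created struct sljit_compiler pointer:

/* Illustrative sketch only: pick a code path based on a feature probe. */
if (sljit_has_cpu_feature(SLJIT_HAS_FPU)) {
    /* ... emit floating point instructions ... */
} else {
    /* ... fall back to an integer/soft-float code path ... */
}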
@ -749,42 +816,65 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_cmp_info(sljit_s32 type);
with an error code. */
/*
The executable code is a function from the viewpoint of the C
language. The function calls must conform to the ABI (Application
Binary Interface) of the platform, which specify the purpose of
machine registers and stack handling among other things. The
sljit_emit_enter function emits the necessary instructions for
setting up a new context for the executable code. This is often
called as function prologue. Furthermore the options argument
can be used to pass configuration options to the compiler. The
The executable code is a callable function from the viewpoint
of the C language. Function calls must conform with the ABI
(Application Binary Interface) of the target platform, which
specify the purpose of machine registers and stack handling
among other things. The sljit_emit_enter function emits the
necessary instructions for setting up an entry point for the
executable code. This is often called as function prologue.
The "options" argument can be used to pass configuration options
to the sljit compiler which affects the generated code, until
another sljit_emit_enter or sljit_set_context is called. The
available options are listed before sljit_emit_enter.
The function argument list is specified by the SLJIT_ARGSx
(SLJIT_ARGS0 .. SLJIT_ARGS4) macros. Currently maximum four
arguments are supported. See the description of SLJIT_ARGSx
macros about argument passing. Furthermore the register set
used by the function must be declared as well. The number of
scratch and saved registers available to the function must
be passed to sljit_emit_enter. Only R registers between R0
and "scratches" argument can be used later. E.g. if "scratches"
is set to two, the scratch register set will be limited to
SLJIT_R0 and SLJIT_R1. The S registers and the floating point
registers ("fscratches" and "fsaveds") are specified in a
similar manner. The sljit_emit_enter is also capable of
allocating a stack space for local data. The "local_size"
argument contains the size in bytes of this local area, and
it can be accessed using SLJIT_MEM1(SLJIT_SP). The memory
area between SLJIT_SP (inclusive) and SLJIT_SP + local_size
(exclusive) can be modified freely until the function returns.
The stack space is not initialized to zero.
macros about argument passing.
The register set used by the function must be declared as well.
The number of scratch and saved registers available to the
function must be passed to sljit_emit_enter. Only R registers
between R0 and "scratches" argument can be used later. E.g.
if "scratches" is set to two, the scratch register set will
be limited to SLJIT_R0 and SLJIT_R1. The S registers are
declared in a similar manner, but their count is specified
by "saveds" argument. The floating point scratch and saved
registers can be set by using "scratches" and "saveds" argument
as well, but their value must be passed to the SLJIT_ENTER_FLOAT
macro, see below.
The sljit_emit_enter is also capable of allocating a stack
space for local data. The "local_size" argument contains the
size in bytes of this local area, and it can be accessed using
SLJIT_MEM1(SLJIT_SP). The memory area between SLJIT_SP (inclusive)
and SLJIT_SP + local_size (exclusive) can be modified freely
until the function returns. The allocated stack space is an
uninitialized memory area.
Floating point scratch and saved registers must be specified
by the SLJIT_ENTER_FLOAT macro, which result value should be
combined with scratches / saveds argument.
Examples:
To use three scratch and four floating point scratch
registers, the "scratches" argument must be set to:
3 | SLJIT_ENTER_FLOAT(4)
To use six saved and five floating point saved
registers, the "saveds" argument must be set to:
6 | SLJIT_ENTER_FLOAT(5)
Note: the following conditions must be met:
0 <= scratches <= SLJIT_NUMBER_OF_REGISTERS
0 <= saveds <= SLJIT_NUMBER_OF_SAVED_REGISTERS
scratches + saveds <= SLJIT_NUMBER_OF_REGISTERS
0 <= fscratches <= SLJIT_NUMBER_OF_FLOAT_REGISTERS
0 <= fsaveds <= SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS
fscratches + fsaveds <= SLJIT_NUMBER_OF_FLOAT_REGISTERS
0 <= float scratches <= SLJIT_NUMBER_OF_FLOAT_REGISTERS
0 <= float saveds <= SLJIT_NUMBER_OF_SAVED_FLOAT_REGISTERS
float scratches + float saveds <= SLJIT_NUMBER_OF_FLOAT_REGISTERS
Note: the compiler can use saved registers as scratch registers,
but the opposite is not supported
@ -793,6 +883,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_cmp_info(sljit_s32 type);
overwrites the previous context.
*/
/* The following options are available for sljit_emit_enter. */
/* Saved registers between SLJIT_S0 and SLJIT_S(n - 1) (inclusive)
are not saved / restored on function enter / return. Instead,
these registers can be used to pass / return data (such as
@ -808,17 +900,27 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_cmp_info(sljit_s32 type);
and all arguments must be stored in scratch registers. */
#define SLJIT_ENTER_REG_ARG 0x00000004
/* The local_size must be >= 0 and <= SLJIT_MAX_LOCAL_SIZE. */
#define SLJIT_MAX_LOCAL_SIZE 1048576
#if (defined SLJIT_CONFIG_X86 && SLJIT_CONFIG_X86)
/* Use VEX prefix for all SIMD operations on x86. */
#define SLJIT_ENTER_USE_VEX 0x00010000
#endif /* !SLJIT_CONFIG_X86 */
/* Macros for other sljit_emit_enter arguments. */
/* Floating point scratch and saved registers can be
specified by SLJIT_ENTER_FLOAT. */
#define SLJIT_ENTER_FLOAT(regs) ((regs) << 8)
/* Vector scratch and saved registers can be specified
by SLJIT_ENTER_VECTOR. */
#define SLJIT_ENTER_VECTOR(regs) ((regs) << 16)
/* The local_size must be >= 0 and <= SLJIT_MAX_LOCAL_SIZE. */
#define SLJIT_MAX_LOCAL_SIZE 1048576
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size);
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size);
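To make the reworked signature concrete, a minimal sketch of a call (not taken from the diff): the float register counts are folded into the scratches argument via SLJIT_ENTER_FLOAT, and the specific register counts, argument types and local size below are illustrative assumptions.

/* Illustrative sketch: a function taking two word arguments, using three
   word scratch registers, two saved registers, four float scratch
   registers, no float saved registers, and 64 bytes of local stack.
   Assumes "compiler" is a valid struct sljit_compiler pointer. */
sljit_emit_enter(compiler, 0, SLJIT_ARGS2(W, W, W),
    3 | SLJIT_ENTER_FLOAT(4), 2, 64);

Vector register counts would be requested the same way through SLJIT_ENTER_VECTOR, per the macros above.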
/* The SLJIT compiler has a current context (which contains the local
stack space size, number of used registers, etc.) which is initialized
@ -834,8 +936,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
the previous context. */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size);
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size);
/* Return to the caller function. The sljit_emit_return_void function
does not return with any value. The sljit_emit_return function returns
@ -1092,16 +1194,21 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_return_to(struct sljit_compiler *c
the behaviour is undefined. */
#define SLJIT_DIV_SW (SLJIT_OP0_BASE + 7)
#define SLJIT_DIV_S32 (SLJIT_DIV_SW | SLJIT_32)
/* Flags: - (does not modify flags)
May return with SLJIT_ERR_UNSUPPORTED if SLJIT_HAS_MEMORY_BARRIER
feature is not supported (calling sljit_has_cpu_feature() with
this feature option returns with 0). */
#define SLJIT_MEMORY_BARRIER (SLJIT_OP0_BASE + 8)
/* Flags: - (does not modify flags)
ENDBR32 instruction for x86-32 and ENDBR64 instruction for x86-64
when Intel Control-flow Enforcement Technology (CET) is enabled.
No instructions are emitted for other architectures. */
#define SLJIT_ENDBR (SLJIT_OP0_BASE + 8)
#define SLJIT_ENDBR (SLJIT_OP0_BASE + 9)
/* Flags: - (may destroy flags)
Skip stack frames before return when Intel Control-flow
Enforcement Technology (CET) is enabled. No instructions
are emitted for other architectures. */
#define SLJIT_SKIP_FRAMES_BEFORE_RETURN (SLJIT_OP0_BASE + 9)
#define SLJIT_SKIP_FRAMES_BEFORE_RETURN (SLJIT_OP0_BASE + 10)
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compiler, sljit_s32 op);
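Since SLJIT_MEMORY_BARRIER may be unsupported on some targets, a hedged sketch of the expected usage pattern (assumptions: "compiler" is a valid compiler instance; the fallback is left to the caller):

/* Illustrative sketch only: emit a full barrier when the target has one. */
if (sljit_has_cpu_feature(SLJIT_HAS_MEMORY_BARRIER))
    sljit_emit_op0(compiler, SLJIT_MEMORY_BARRIER);
else {
    /* ... no barrier instruction on this target; handle as needed ... */
}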
@ -1890,21 +1997,21 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_fmem_update(struct sljit_compiler
/* The following options are used by several simd operations. */
/* Load data into a simd register, this is the default */
/* Load data into a vector register, this is the default */
#define SLJIT_SIMD_LOAD 0x000000
/* Store data from a simd register */
/* Store data from a vector register */
#define SLJIT_SIMD_STORE 0x000001
/* The simd register contains floating point values */
/* The vector register contains floating point values */
#define SLJIT_SIMD_FLOAT 0x000400
/* Tests whether the operation is available */
#define SLJIT_SIMD_TEST 0x000800
/* Move data to/from a 64 bit (8 byte) long SIMD register */
/* Move data to/from a 64 bit (8 byte) long vector register */
#define SLJIT_SIMD_REG_64 (3 << 12)
/* Move data to/from a 128 bit (16 byte) long SIMD register */
/* Move data to/from a 128 bit (16 byte) long vector register */
#define SLJIT_SIMD_REG_128 (4 << 12)
/* Move data to/from a 256 bit (32 byte) long SIMD register */
/* Move data to/from a 256 bit (32 byte) long vector register */
#define SLJIT_SIMD_REG_256 (5 << 12)
/* Move data to/from a 512 bit (64 byte) long SIMD register */
/* Move data to/from a 512 bit (64 byte) long vector register */
#define SLJIT_SIMD_REG_512 (6 << 12)
/* Element size is 8 bit long (this is the default), usually cannot be combined with SLJIT_SIMD_FLOAT */
#define SLJIT_SIMD_ELEM_8 (0 << 18)
@ -1919,7 +2026,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_fmem_update(struct sljit_compiler
/* Element size is 256 bit long */
#define SLJIT_SIMD_ELEM_256 (5 << 18)
/* The following options are used by sljit_emit_simd_mov(). */
/* The following options are used by sljit_emit_simd_mov()
and sljit_emit_simd_op2(). */
/* Memory address is unaligned (this is the default) */
#define SLJIT_SIMD_MEM_UNALIGNED (0 << 24)
@ -1936,7 +2044,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_fmem_update(struct sljit_compiler
/* Memory address is 512 bit aligned */
#define SLJIT_SIMD_MEM_ALIGNED_512 (6 << 24)
/* Moves data between a simd register and memory.
/* Moves data between a vector register and memory.
If the operation is not supported, it returns with
SLJIT_ERR_UNSUPPORTED. If SLJIT_SIMD_TEST is passed,
@ -1944,21 +2052,21 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_fmem_update(struct sljit_compiler
type must be a combination of SLJIT_SIMD_* and
SLJIT_SIMD_MEM_* options
freg is the source or destination simd register
vreg is the source or destination vector register
of the operation
srcdst must be a memory operand or a simd register
srcdst must be a memory operand or a vector register
Note:
The alignment and element size must be
less or equal than simd register size.
less or equal than vector register size.
Flags: - (does not modify flags) */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 srcdst, sljit_sw srcdstw);
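For orientation, a hedged sketch of a 128 bit unaligned vector load through the renamed vreg argument; the register choices are assumptions and "compiler" is assumed to be a valid struct sljit_compiler pointer:

/* Illustrative sketch: load 16 bytes from the address in SLJIT_R0 into
   vector register SLJIT_VR0, treated as 8 bit elements. */
sljit_emit_simd_mov(compiler,
    SLJIT_SIMD_LOAD | SLJIT_SIMD_REG_128 | SLJIT_SIMD_ELEM_8 | SLJIT_SIMD_MEM_UNALIGNED,
    SLJIT_VR0, SLJIT_MEM1(SLJIT_R0), 0);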
/* Replicates a scalar value to all lanes of a simd
/* Replicates a scalar value to all lanes of a vector
register.
If the operation is not supported, it returns with
@ -1967,7 +2075,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
type must be a combination of SLJIT_SIMD_* options
except SLJIT_SIMD_STORE.
freg is the destination simd register of the operation
vreg is the destination vector register of the operation
src is the value which is replicated
Note:
@ -1977,7 +2085,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
Flags: - (does not modify flags) */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw);
/* The following options are used by sljit_emit_simd_lane_mov(). */
@ -1987,7 +2095,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
/* Sign extend the integer value stored from the lane. */
#define SLJIT_SIMD_LANE_SIGNED 0x000004
/* Moves data between a simd register lane and a register or
/* Moves data between a vector register lane and a register or
memory. If the srcdst argument is a register, it must be
a floating point register when SLJIT_SIMD_FLOAT is specified,
or a general purpose register otherwise.
@ -2003,7 +2111,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
is set and SLJIT_SIMD_FLOAT is not set
SLJIT_SIMD_LANE_ZERO - when SLJIT_SIMD_LOAD
is specified
freg is the source or destination simd register
vreg is the source or destination vector register
of the operation
lane_index is the index of the lane
srcdst is the destination operand for loads, and
@ -2015,11 +2123,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
Flags: - (does not modify flags) */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg, sljit_s32 lane_index,
sljit_s32 vreg, sljit_s32 lane_index,
sljit_s32 srcdst, sljit_sw srcdstw);
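A hedged sketch of a lane store with the new vreg naming (register and lane choices are placeholders, not from the diff):

/* Illustrative sketch: copy the 32 bit integer in lane 2 of SLJIT_VR0
   into the general purpose register SLJIT_R1. */
sljit_emit_simd_lane_mov(compiler,
    SLJIT_SIMD_STORE | SLJIT_SIMD_REG_128 | SLJIT_SIMD_ELEM_32,
    SLJIT_VR0, 2, SLJIT_R1, 0);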
/* Replicates a scalar value from a lane to all lanes
of a simd register.
of a vector register.
If the operation is not supported, it returns with
SLJIT_ERR_UNSUPPORTED. If SLJIT_SIMD_TEST is passed,
@ -2027,14 +2135,14 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
type must be a combination of SLJIT_SIMD_* options
except SLJIT_SIMD_STORE.
freg is the destination simd register of the operation
src is the simd register which lane is replicated
vreg is the destination vector register of the operation
src is the vector register which lane is replicated
src_lane_index is the lane index of the src register
Flags: - (does not modify flags) */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_s32 src_lane_index);
/* The following options are used by sljit_emit_simd_load_extend(). */
@ -2048,7 +2156,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
/* Extend data to 64 bit */
#define SLJIT_SIMD_EXTEND_64 (3 << 24)
/* Extend elements and stores them in a simd register.
/* Extend elements and stores them in a vector register.
The extension operation increases the size of the
elements (e.g. from 16 bit to 64 bit). For integer
values, the extension can be signed or unsigned.
@ -2059,15 +2167,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
type must be a combination of SLJIT_SIMD_*, and
SLJIT_SIMD_EXTEND_* options except SLJIT_SIMD_STORE
freg is the destination simd register of the operation
src must be a memory operand or a simd register.
vreg is the destination vector register of the operation
src must be a memory operand or a vector register.
In the latter case, the source elements are stored
in the lower half of the register.
Flags: - (does not modify flags) */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw);
/* Extract the highest bit (usually the sign bit) from
@ -2079,16 +2187,16 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
type must be a combination of SLJIT_SIMD_* and SLJIT_32
options except SLJIT_SIMD_LOAD
freg is the source simd register of the operation
vreg is the source vector register of the operation
dst is the destination operand
Flags: - (does not modify flags) */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 dst, sljit_sw dstw);
/* The following options are used by sljit_emit_simd_op2(). */
/* The following operations are used by sljit_emit_simd_op2(). */
/* Binary 'and' operation */
#define SLJIT_SIMD_OP2_AND 0x000001
@ -2096,23 +2204,40 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
#define SLJIT_SIMD_OP2_OR 0x000002
/* Binary 'xor' operation */
#define SLJIT_SIMD_OP2_XOR 0x000003
/* Shuffle bytes of src1 using the indices in src2 */
#define SLJIT_SIMD_OP2_SHUFFLE 0x000004
/* Perform simd operations using simd registers.
/* Perform simd operations using vector registers.
If the operation is not supported, it returns with
SLJIT_ERR_UNSUPPORTED. If SLJIT_SIMD_TEST is passed,
it does not emit any instructions.
type must be a combination of SLJIT_SIMD_* and SLJIT_SIMD_OP2_
options except SLJIT_SIMD_LOAD and SLJIT_SIMD_STORE
dst_freg is the destination register of the operation
src1_freg is the first source register of the operation
src1_freg is the second source register of the operation
type must be a combination of SLJIT_SIMD_*, SLJIT_SIMD_MEM_*
and SLJIT_SIMD_OP2_* options except SLJIT_SIMD_LOAD
and SLJIT_SIMD_STORE
dst_vreg is the destination register of the operation
src1_vreg is the first source register of the operation
src2 is the second source operand of the operation
Flags: - (does not modify flags) */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 dst_freg, sljit_s32 src1_freg, sljit_s32 src2_freg);
sljit_s32 dst_vreg, sljit_s32 src1_vreg, sljit_s32 src2, sljit_sw src2w);
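A brief sketch of the extended op2 form, which now takes a general second source operand instead of a third register; the operands below are illustrative assumptions:

/* Illustrative sketch: SLJIT_VR0 = SLJIT_VR1 XOR the 16 bytes addressed
   by SLJIT_R2 (unaligned access is the default). */
sljit_emit_simd_op2(compiler,
    SLJIT_SIMD_OP2_XOR | SLJIT_SIMD_REG_128 | SLJIT_SIMD_ELEM_8,
    SLJIT_VR0, SLJIT_VR1, SLJIT_MEM1(SLJIT_R2), 0);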
/* The following operations are used by sljit_emit_atomic_load() and
sljit_emit_atomic_store() operations. */
/* Tests whether the atomic operation is available (does not generate
any instructions). When a load form is allowed, its corresponding
store form is allowed and vice versa. */
#define SLJIT_ATOMIC_TEST 0x10000
/* The compiler must generate compare and swap instruction.
When this bit is set, calling sljit_emit_atomic_load() is optional. */
#define SLJIT_ATOMIC_USE_CAS 0x20000
/* The compiler must generate load-acquire and store-release instructions.
When this bit is set, the temp_reg for sljit_emit_atomic_store is not used. */
#define SLJIT_ATOMIC_USE_LS 0x40000
/* The sljit_emit_atomic_load and sljit_emit_atomic_store operation pair
can perform an atomic read-modify-write operation. First, an unsigned
@ -2121,23 +2246,17 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
sljit_emit_atomic_store. A thread can only perform a single atomic
operation at a time.
Note: atomic operations are experimental, and not implemented
for all cpus.
The following conditions must be satisfied, or the operation
is undefined:
- the address provided in mem_reg must be divisible by the size of
the value (only naturally aligned updates are supported)
- no memory writes are allowed between the load and store operations
regardless of its target address (currently read operations are
allowed, but this might change in the future)
- no memory operations are allowed between the load and store operations
- the memory operation (op) and the base address (stored in mem_reg)
passed to the load/store operations must be the same (the mem_reg
can be a different register, only its value must be the same)
- an store must always follow a load for the same transaction.
- a store must always follow a load for the same transaction.
op must be between SLJIT_MOV and SLJIT_MOV_P, excluding all
signed loads such as SLJIT_MOV32_S16
op must be between SLJIT_MOV and SLJIT_MOV_P
dst_reg is the register where the data will be loaded into
mem_reg is the base address of the memory load (it cannot be
SLJIT_SP or a virtual register on x86-32)
@ -2151,18 +2270,19 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
allows performing an atomic read-modify-write operation. See the
description of sljit_emit_atomic_load.
op must be between SLJIT_MOV and SLJIT_MOV_P, excluding all signed
loads such as SLJIT_MOV32_S16
op must be between SLJIT_MOV and SLJIT_MOV_P
src_reg is the register which value is stored into the memory
mem_reg is the base address of the memory store (it cannot be
SLJIT_SP or a virtual register on x86-32)
temp_reg is a not preserved scratch register, which must be
initialized with the value loaded into the dst_reg during the
corresponding sljit_emit_atomic_load operation, or the operation
is undefined
temp_reg is a scratch register, which must be initialized with
the value loaded into the dst_reg during the corresponding
sljit_emit_atomic_load operation, or the operation is undefined.
The temp_reg register preserves its value, if the memory store
is successful. Otherwise, its value is undefined.
Flags: ATOMIC_STORED is set if the operation is successful,
otherwise the memory remains unchanged. */
Flags: ATOMIC_STORED
if ATOMIC_STORED flag is set, it represents that the memory
is updated with a new value. Otherwise the memory is unchanged. */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler *compiler, sljit_s32 op,
sljit_s32 src_reg,
sljit_s32 mem_reg,
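To make the load/store contract above concrete, a hedged sketch of an atomic increment; the register assignments and the omitted retry branch are assumptions, not taken from the diff:

/* Illustrative sketch: atomically increment the word at [SLJIT_R1].
   SLJIT_R0 receives the loaded value and doubles as temp_reg for the
   store; SLJIT_R2 holds the updated value that is written back. */
sljit_emit_atomic_load(compiler, SLJIT_MOV, SLJIT_R0, SLJIT_R1);
sljit_emit_op2(compiler, SLJIT_ADD, SLJIT_R2, 0, SLJIT_R0, 0, SLJIT_IMM, 1);
sljit_emit_atomic_store(compiler, SLJIT_MOV | SLJIT_SET_ATOMIC_STORED,
    SLJIT_R2, SLJIT_R1, SLJIT_R0);
/* If the ATOMIC_STORED flag is not set, the store failed and the caller
   would branch back to the load and retry (branch omitted here). */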
@ -2457,10 +2577,10 @@ SLJIT_API_FUNC_ATTRIBUTE void sljit_set_function_context(void** func_ptr, struct
it is sometimes desired to free all unused memory regions, e.g.
before the application terminates. */
SLJIT_API_FUNC_ATTRIBUTE void sljit_free_unused_memory_exec(void);
#endif
#endif /* SLJIT_EXECUTABLE_ALLOCATOR */
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif /* __cplusplus */
#endif /* SLJIT_LIR_H_ */


@ -114,6 +114,7 @@ static const sljit_u8 freg_ebit_map[((SLJIT_NUMBER_OF_FLOAT_REGISTERS + 2) << 1)
#define CLZ 0xe16f0f10
#define CMN 0xe1600000
#define CMP 0xe1400000
#define DMB_SY 0xf57ff05f
#define EOR 0xe0200000
#define LDR 0xe5100000
#define LDR_POST 0xe4100000
@ -180,6 +181,7 @@ static const sljit_u8 freg_ebit_map[((SLJIT_NUMBER_OF_FLOAT_REGISTERS + 2) << 1)
#define VST1_s 0xf4800000
#define VSTR_F32 0xed000a00
#define VSUB_F32 0xee300a40
#define VTBL 0xf3b00800
#if (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7)
/* Arm v7 specific instructions. */
@ -198,11 +200,28 @@ static sljit_s32 function_check_is_freg(struct sljit_compiler *compiler, sljit_s
if (is_32 && fr >= SLJIT_F64_SECOND(SLJIT_FR0))
fr -= SLJIT_F64_SECOND(0);
return (fr >= SLJIT_FR0 && fr < (SLJIT_FR0 + compiler->fscratches))
|| (fr > (SLJIT_FS0 - compiler->fsaveds) && fr <= SLJIT_FS0)
return (fr >= SLJIT_FR0 && fr < (SLJIT_FR0 + compiler->real_fscratches))
|| (fr > (SLJIT_FS0 - compiler->real_fsaveds) && fr <= SLJIT_FS0)
|| (fr >= SLJIT_TMP_FREGISTER_BASE && fr < (SLJIT_TMP_FREGISTER_BASE + SLJIT_NUMBER_OF_TEMPORARY_FLOAT_REGISTERS));
}
static sljit_s32 function_check_is_vreg(struct sljit_compiler *compiler, sljit_s32 vr, sljit_s32 type)
{
sljit_s32 vr_low = vr;
if (compiler->scratches == -1)
return 0;
if (SLJIT_SIMD_GET_REG_SIZE(type) == 4) {
vr += (vr & 0x1);
vr_low = vr - 1;
}
return (vr >= SLJIT_VR0 && vr < (SLJIT_VR0 + compiler->vscratches))
|| (vr_low > (SLJIT_VS0 - compiler->vsaveds) && vr_low <= SLJIT_VS0)
|| (vr >= SLJIT_TMP_VREGISTER_BASE && vr < (SLJIT_TMP_VREGISTER_BASE + SLJIT_NUMBER_OF_TEMPORARY_VECTOR_REGISTERS));
}
#endif /* SLJIT_ARGUMENT_CHECKS */
#if (defined SLJIT_CONFIG_ARM_V6 && SLJIT_CONFIG_ARM_V6)
@ -364,7 +383,7 @@ static sljit_uw patch_pc_relative_loads(sljit_uw *last_pc_patch, sljit_uw *code_
while (last_pc_patch < code_ptr) {
/* Data transfer instruction with Rn == r15. */
if ((*last_pc_patch & 0x0e0f0000) == 0x040f0000) {
if ((*last_pc_patch & 0x0e4f0000) == 0x040f0000) {
diff = (sljit_uw)(const_pool - last_pc_patch);
ind = (*last_pc_patch) & 0xfff;
@ -476,6 +495,14 @@ static SLJIT_INLINE sljit_s32 emit_imm(struct sljit_compiler *compiler, sljit_s3
static SLJIT_INLINE sljit_s32 detect_jump_type(struct sljit_jump *jump, sljit_uw *code_ptr, sljit_uw *code, sljit_sw executable_offset)
{
sljit_sw diff;
sljit_uw target_addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_uw orig_addr = jump->addr;
SLJIT_UNUSED_ARG(executable_offset);
#if (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7)
jump->addr = jump_addr;
#endif
if (jump->flags & SLJIT_REWRITABLE_JUMP)
return 0;
@ -486,12 +513,17 @@ static SLJIT_INLINE sljit_s32 detect_jump_type(struct sljit_jump *jump, sljit_uw
#endif /* SLJIT_CONFIG_ARM_V6 */
if (jump->flags & JUMP_ADDR)
diff = ((sljit_sw)jump->u.target - (sljit_sw)(code_ptr + 2) - executable_offset);
target_addr = jump->u.target;
else {
SLJIT_ASSERT(jump->u.label != NULL);
diff = ((sljit_sw)(code + jump->u.label->size) - (sljit_sw)(code_ptr + 2));
target_addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
if (jump->u.label->size > orig_addr)
jump_addr = (sljit_uw)(code + orig_addr);
}
diff = (sljit_sw)target_addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr + 8, executable_offset);
/* Branch to Thumb code has not been optimized yet. */
if (diff & 0x3)
return 0;
@ -503,13 +535,10 @@ static SLJIT_INLINE sljit_s32 detect_jump_type(struct sljit_jump *jump, sljit_uw
jump->flags |= PATCH_B;
return 1;
}
}
else {
if (diff <= 0x01ffffff && diff >= -0x02000000) {
} else if (diff <= 0x01ffffff && diff >= -0x02000000) {
*code_ptr = (B - CONDITIONAL) | (*code_ptr & COND_MASK);
jump->flags |= PATCH_B;
}
}
#else /* !SLJIT_CONFIG_ARM_V6 */
if (diff <= 0x01ffffff && diff >= -0x02000000) {
*code_ptr = ((jump->flags & IS_BL) ? (BL - CONDITIONAL) : (B - CONDITIONAL)) | (*code_ptr & COND_MASK);
@ -714,16 +743,21 @@ static void set_const_value(sljit_uw addr, sljit_sw executable_offset, sljit_uw
static SLJIT_INLINE sljit_sw mov_addr_get_length(struct sljit_jump *jump, sljit_ins *code_ptr, sljit_ins *code, sljit_sw executable_offset)
{
sljit_uw addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_sw diff;
SLJIT_UNUSED_ARG(executable_offset);
if (jump->flags & JUMP_ADDR)
addr = jump->u.target;
else
else {
addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
if (jump->u.label->size > jump->addr)
jump_addr = (sljit_uw)(code + jump->addr);
}
/* The pc+8 offset is represented by the 2 * SSIZE_OF(ins) below. */
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(code_ptr, executable_offset);
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr, executable_offset);
if ((diff & 0x3) == 0 && diff <= (0x3fc + 2 * SSIZE_OF(ins)) && diff >= (-0x3fc + 2 * SSIZE_OF(ins))) {
jump->flags |= PATCH_B;
@ -784,6 +818,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
if (!(jump->flags & (SLJIT_REWRITABLE_JUMP | JUMP_ADDR))) {
/* Unit size: instruction. */
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr - 2;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if (diff <= (0x01ffffff / SSIZE_OF(ins)) && diff >= (-0x02000000 / SSIZE_OF(ins)))
total_size = 1 - 1;
@ -796,6 +834,11 @@ static void reduce_code_size(struct sljit_compiler *compiler)
if (!(jump->flags & JUMP_ADDR)) {
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if (diff <= 0xff + 2 && diff >= -0xff + 2)
total_size = 0;
}
@ -917,7 +960,6 @@ SLJIT_API_FUNC_ATTRIBUTE void* sljit_generate_code(struct sljit_compiler *compil
jump->addr = (sljit_uw)code_ptr;
#else /* !SLJIT_CONFIG_ARM_V6 */
word_count += jump->flags >> JUMP_SIZE_SHIFT;
jump->addr = (sljit_uw)code_ptr;
if (!detect_jump_type(jump, code_ptr, code, executable_offset)) {
code_ptr[2] = code_ptr[0];
addr = ((code_ptr[0] & 0xf) << 12);
@ -1131,6 +1173,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_COPY_F32:
case SLJIT_HAS_COPY_F64:
case SLJIT_HAS_ATOMIC:
#if (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7)
case SLJIT_HAS_MEMORY_BARRIER:
#endif /* SLJIT_CONFIG_ARM_V7 */
return 1;
case SLJIT_HAS_CTZ:
@ -1225,9 +1270,11 @@ static sljit_s32 emit_op(struct sljit_compiler *compiler, sljit_s32 op, sljit_s3
sljit_s32 src2, sljit_sw src2w);
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_uw imm, offset;
sljit_s32 i, tmp, size, word_arg_count;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
@ -1240,11 +1287,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
#endif
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
imm = 0;
tmp = SLJIT_S0 - saveds;
for (i = SLJIT_S0 - saved_arg_count; i > tmp; i--)
imm |= (sljit_uw)1 << reg_map[i];
@ -1391,15 +1442,21 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 size;
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
size = GET_SAVED_REGISTERS_SIZE(scratches, saveds - SLJIT_KEPT_SAVEDS_COUNT(options), 1);
/* Doubles are saved, so alignment is unaffected. */
@ -2364,6 +2421,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
| (saved_reg_list[0] << 12) /* ldr rX, [sp], #8/16 */);
}
return SLJIT_SUCCESS;
case SLJIT_MEMORY_BARRIER:
#if (defined SLJIT_CONFIG_ARM_V7 && SLJIT_CONFIG_ARM_V7)
return push_inst(compiler, DMB_SY);
#else /* !SLJIT_CONFIG_ARM_V7 */
return SLJIT_ERR_UNSUPPORTED;
#endif /* SLJIT_CONFIG_ARM_V7 */
case SLJIT_ENDBR:
case SLJIT_SKIP_FRAMES_BEFORE_RETURN:
return SLJIT_SUCCESS;
@ -2630,7 +2693,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_get_register_index(sljit_s32 type, slji
if (type == SLJIT_FLOAT_REGISTER || type == SLJIT_SIMD_REG_64)
return freg_map[reg];
if (type != SLJIT_SIMD_REG_128)
if (type == SLJIT_SIMD_REG_128)
return freg_map[reg] & ~0x1;
return -1;
@ -3105,9 +3168,9 @@ SLJIT_API_FUNC_ATTRIBUTE struct sljit_jump* sljit_emit_jump(struct sljit_compile
if (type >= SLJIT_FAST_CALL)
PTR_FAIL_IF(prepare_blx(compiler));
jump->addr = compiler->size;
PTR_FAIL_IF(push_inst_with_unique_literal(compiler, ((EMIT_DATA_TRANSFER(WORD_SIZE | LOAD_DATA, 1,
type <= SLJIT_JUMP ? TMP_PC : TMP_REG1, TMP_PC, 0)) & ~COND_MASK) | get_cc(compiler, type), 0));
jump->addr = compiler->size - 1;
if (jump->flags & SLJIT_REWRITABLE_JUMP)
compiler->patches++;
@ -3907,7 +3970,7 @@ static SLJIT_INLINE sljit_s32 simd_get_quad_reg_index(sljit_s32 freg)
#define SLJIT_QUAD_OTHER_HALF(freg) ((((freg) & 0x1) << 1) - 1)
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3916,7 +3979,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_mov(compiler, type, freg, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_mov(compiler, type, vreg, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -3930,16 +3993,16 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (!(srcdst & SLJIT_MEM)) {
if (reg_size == 4)
srcdst = simd_get_quad_reg_index(srcdst);
if (type & SLJIT_SIMD_STORE)
ins = VD(srcdst) | VN(freg) | VM(freg);
ins = VD(srcdst) | VN(vreg) | VM(vreg);
else
ins = VD(freg) | VN(srcdst) | VM(srcdst);
ins = VD(vreg) | VN(srcdst) | VM(srcdst);
if (reg_size == 4)
ins |= (sljit_ins)1 << 6;
@ -3952,7 +4015,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
if (elem_size > 3)
elem_size = 3;
ins = ((type & SLJIT_SIMD_STORE) ? VST1 : VLD1) | VD(freg)
ins = ((type & SLJIT_SIMD_STORE) ? VST1 : VLD1) | VD(vreg)
| (sljit_ins)((reg_size == 3) ? (0x7 << 8) : (0xa << 8));
SLJIT_ASSERT(reg_size >= alignment);
@ -4060,7 +4123,7 @@ static sljit_ins simd_get_imm(sljit_s32 elem_size, sljit_uw value)
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4068,7 +4131,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
sljit_ins ins, imm;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_replicate(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_replicate(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -4082,24 +4145,24 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (src == SLJIT_IMM && srcw == 0)
return push_inst(compiler, VMOV_i | ((reg_size == 4) ? (1 << 6) : 0) | VD(freg));
return push_inst(compiler, VMOV_i | ((reg_size == 4) ? (1 << 6) : 0) | VD(vreg));
if (SLJIT_UNLIKELY(elem_size == 3)) {
SLJIT_ASSERT(type & SLJIT_SIMD_FLOAT);
if (src & SLJIT_MEM) {
FAIL_IF(emit_fop_mem(compiler, FPU_LOAD | SLJIT_32, freg, src, srcw));
src = freg;
} else if (freg != src)
FAIL_IF(push_inst(compiler, VORR | VD(freg) | VN(src) | VM(src)));
FAIL_IF(emit_fop_mem(compiler, FPU_LOAD | SLJIT_32, vreg, src, srcw));
src = vreg;
} else if (vreg != src)
FAIL_IF(push_inst(compiler, VORR | VD(vreg) | VN(src) | VM(src)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
if (freg != src)
return push_inst(compiler, VORR | VD(freg) | VN(src) | VM(src));
if (vreg != src)
return push_inst(compiler, VORR | VD(vreg) | VN(src) | VM(src));
return SLJIT_SUCCESS;
}
@ -4111,7 +4174,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
ins |= (sljit_ins)1 << 5;
return push_inst(compiler, VLD1_r | ins | VD(freg) | RN(src) | 0xf);
return push_inst(compiler, VLD1_r | ins | VD(vreg) | RN(src) | 0xf);
}
if (type & SLJIT_SIMD_FLOAT) {
@ -4121,7 +4184,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
ins |= (sljit_ins)1 << 6;
return push_inst(compiler, VDUP_s | ins | VD(freg) | (sljit_ins)freg_map[src]);
return push_inst(compiler, VDUP_s | ins | VD(vreg) | (sljit_ins)freg_map[src]);
}
if (src == SLJIT_IMM) {
@ -4134,7 +4197,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
imm |= (sljit_ins)1 << 6;
return push_inst(compiler, VMOV_i | imm | VD(freg));
return push_inst(compiler, VMOV_i | imm | VD(vreg));
}
FAIL_IF(load_immediate(compiler, TMP_REG1, (sljit_uw)srcw));
@ -4156,11 +4219,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
ins |= (sljit_ins)1 << 21;
return push_inst(compiler, VDUP | ins | VN(freg) | RD(src));
return push_inst(compiler, VDUP | ins | VN(vreg) | RD(src));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg, sljit_s32 lane_index,
sljit_s32 vreg, sljit_s32 lane_index,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4168,7 +4231,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, freg, lane_index, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, vreg, lane_index, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -4182,7 +4245,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (type & SLJIT_SIMD_LANE_ZERO) {
ins = (reg_size == 3) ? 0 : ((sljit_ins)1 << 6);
@ -4190,62 +4253,62 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (type & SLJIT_SIMD_FLOAT) {
if (elem_size == 3 && !(srcdst & SLJIT_MEM)) {
if (lane_index == 1)
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
if (srcdst != freg)
FAIL_IF(push_inst(compiler, VORR | VD(freg) | VN(srcdst) | VM(srcdst)));
if (srcdst != vreg)
FAIL_IF(push_inst(compiler, VORR | VD(vreg) | VN(srcdst) | VM(srcdst)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
return push_inst(compiler, VMOV_i | VD(freg));
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
return push_inst(compiler, VMOV_i | VD(vreg));
}
if (srcdst == freg || (elem_size == 3 && srcdst == (freg + SLJIT_QUAD_OTHER_HALF(freg)))) {
FAIL_IF(push_inst(compiler, VORR | ins | VD(TMP_FREG2) | VN(freg) | VM(freg)));
if (srcdst == vreg || (elem_size == 3 && srcdst == (vreg + SLJIT_QUAD_OTHER_HALF(vreg)))) {
FAIL_IF(push_inst(compiler, VORR | ins | VD(TMP_FREG2) | VN(vreg) | VM(vreg)));
srcdst = TMP_FREG2;
srcdstw = 0;
}
}
FAIL_IF(push_inst(compiler, VMOV_i | ins | VD(freg)));
FAIL_IF(push_inst(compiler, VMOV_i | ins | VD(vreg)));
}
if (reg_size == 4 && lane_index >= (0x8 >> elem_size)) {
lane_index -= (0x8 >> elem_size);
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
}
if (srcdst & SLJIT_MEM) {
if (elem_size == 3)
return emit_fop_mem(compiler, ((type & SLJIT_SIMD_STORE) ? 0 : FPU_LOAD) | SLJIT_32, freg, srcdst, srcdstw);
return emit_fop_mem(compiler, ((type & SLJIT_SIMD_STORE) ? 0 : FPU_LOAD) | SLJIT_32, vreg, srcdst, srcdstw);
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &srcdst, srcdstw));
lane_index = lane_index << elem_size;
ins = (sljit_ins)((elem_size << 10) | (lane_index << 5));
return push_inst(compiler, ((type & SLJIT_SIMD_STORE) ? VST1_s : VLD1_s) | ins | VD(freg) | RN(srcdst) | 0xf);
return push_inst(compiler, ((type & SLJIT_SIMD_STORE) ? VST1_s : VLD1_s) | ins | VD(vreg) | RN(srcdst) | 0xf);
}
if (type & SLJIT_SIMD_FLOAT) {
if (elem_size == 3) {
if (type & SLJIT_SIMD_STORE)
return push_inst(compiler, VORR | VD(srcdst) | VN(freg) | VM(freg));
return push_inst(compiler, VMOV_F32 | SLJIT_32 | VD(freg) | VM(srcdst));
return push_inst(compiler, VORR | VD(srcdst) | VN(vreg) | VM(vreg));
return push_inst(compiler, VMOV_F32 | SLJIT_32 | VD(vreg) | VM(srcdst));
}
if (type & SLJIT_SIMD_STORE) {
if (freg_ebit_map[freg] == 0) {
if (freg_ebit_map[vreg] == 0) {
if (lane_index == 1)
freg = SLJIT_F64_SECOND(freg);
vreg = SLJIT_F64_SECOND(vreg);
return push_inst(compiler, VMOV_F32 | VD(srcdst) | VM(freg));
return push_inst(compiler, VMOV_F32 | VD(srcdst) | VM(vreg));
}
FAIL_IF(push_inst(compiler, VMOV_s | (1 << 20) | ((sljit_ins)lane_index << 21) | VN(freg) | RD(TMP_REG1)));
FAIL_IF(push_inst(compiler, VMOV_s | (1 << 20) | ((sljit_ins)lane_index << 21) | VN(vreg) | RD(TMP_REG1)));
return push_inst(compiler, VMOV | VN(srcdst) | RD(TMP_REG1));
}
FAIL_IF(push_inst(compiler, VMOV | (1 << 20) | VN(srcdst) | RD(TMP_REG1)));
return push_inst(compiler, VMOV_s | ((sljit_ins)lane_index << 21) | VN(freg) | RD(TMP_REG1));
return push_inst(compiler, VMOV_s | ((sljit_ins)lane_index << 21) | VN(vreg) | RD(TMP_REG1));
}
if (srcdst == SLJIT_IMM) {
@ -4273,11 +4336,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
ins |= (1 << 23);
}
return push_inst(compiler, VMOV_s | ins | VN(freg) | RD(srcdst));
return push_inst(compiler, VMOV_s | ins | VN(vreg) | RD(srcdst));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_s32 src_lane_index)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4285,7 +4348,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, freg, src, src_lane_index));
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, vreg, src, src_lane_index));
if (reg_size != 3 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -4297,7 +4360,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
return SLJIT_SUCCESS;
if (reg_size == 4) {
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
src = simd_get_quad_reg_index(src);
if (src_lane_index >= (0x8 >> elem_size)) {
@ -4307,13 +4370,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
}
if (elem_size == 3) {
if (freg != src)
FAIL_IF(push_inst(compiler, VORR | VD(freg) | VN(src) | VM(src)));
if (vreg != src)
FAIL_IF(push_inst(compiler, VORR | VD(vreg) | VN(src) | VM(src)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
if (freg != src)
return push_inst(compiler, VORR | VD(freg) | VN(src) | VM(src));
if (vreg != src)
return push_inst(compiler, VORR | VD(vreg) | VN(src) | VM(src));
return SLJIT_SUCCESS;
}
@ -4322,11 +4385,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
if (reg_size == 4)
ins |= (sljit_ins)1 << 6;
return push_inst(compiler, VDUP_s | ins | VD(freg) | VM(src));
return push_inst(compiler, VDUP_s | ins | VD(vreg) | VM(src));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4335,7 +4398,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
sljit_s32 dst_reg;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_extend(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_extend(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -4349,20 +4412,20 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (src & SLJIT_MEM) {
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src, srcw));
if (reg_size == 4 && elem2_size - elem_size == 1)
FAIL_IF(push_inst(compiler, VLD1 | (0x7 << 8) | VD(freg) | RN(src) | 0xf));
FAIL_IF(push_inst(compiler, VLD1 | (0x7 << 8) | VD(vreg) | RN(src) | 0xf));
else
FAIL_IF(push_inst(compiler, VLD1_s | (sljit_ins)((reg_size - elem2_size + elem_size) << 10) | VD(freg) | RN(src) | 0xf));
src = freg;
FAIL_IF(push_inst(compiler, VLD1_s | (sljit_ins)((reg_size - elem2_size + elem_size) << 10) | VD(vreg) | RN(src) | 0xf));
src = vreg;
} else if (reg_size == 4)
src = simd_get_quad_reg_index(src);
if (!(type & SLJIT_SIMD_FLOAT)) {
dst_reg = (reg_size == 4) ? freg : TMP_FREG2;
dst_reg = (reg_size == 4) ? vreg : TMP_FREG2;
do {
FAIL_IF(push_inst(compiler, VSHLL | ((type & SLJIT_SIMD_EXTEND_SIGNED) ? 0 : (1 << 24))
@ -4371,27 +4434,27 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
} while (++elem_size < elem2_size);
if (dst_reg == TMP_FREG2)
return push_inst(compiler, VORR | VD(freg) | VN(TMP_FREG2) | VM(TMP_FREG2));
return push_inst(compiler, VORR | VD(vreg) | VN(TMP_FREG2) | VM(TMP_FREG2));
return SLJIT_SUCCESS;
}
/* No SIMD variant, must use VFP instead. */
SLJIT_ASSERT(reg_size == 4);
if (freg == src) {
freg += SLJIT_QUAD_OTHER_HALF(freg);
FAIL_IF(push_inst(compiler, VCVT_F64_F32 | VD(freg) | VM(src) | 0x20));
freg += SLJIT_QUAD_OTHER_HALF(freg);
return push_inst(compiler, VCVT_F64_F32 | VD(freg) | VM(src));
if (vreg == src) {
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
FAIL_IF(push_inst(compiler, VCVT_F64_F32 | VD(vreg) | VM(src) | 0x20));
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
return push_inst(compiler, VCVT_F64_F32 | VD(vreg) | VM(src));
}
FAIL_IF(push_inst(compiler, VCVT_F64_F32 | VD(freg) | VM(src)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
return push_inst(compiler, VCVT_F64_F32 | VD(freg) | VM(src) | 0x20);
FAIL_IF(push_inst(compiler, VCVT_F64_F32 | VD(vreg) | VM(src)));
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
return push_inst(compiler, VCVT_F64_F32 | VD(vreg) | VM(src) | 0x20);
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 dst, sljit_sw dstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4400,7 +4463,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
sljit_s32 dst_r;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_sign(compiler, type, freg, dst, dstw));
CHECK(check_sljit_emit_simd_sign(compiler, type, vreg, dst, dstw));
ADJUST_LOCAL_OFFSET(dst, dstw);
@ -4433,12 +4496,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
}
if (reg_size == 4) {
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
ins |= (sljit_ins)1 << 6;
}
SLJIT_ASSERT((freg_map[TMP_FREG2] & 0x1) == 0);
FAIL_IF(push_inst(compiler, ins | VD(TMP_FREG2) | VM(freg)));
FAIL_IF(push_inst(compiler, ins | VD(TMP_FREG2) | VM(vreg)));
if (reg_size == 4 && elem_size > 0)
FAIL_IF(push_inst(compiler, VMOVN | ((sljit_ins)(elem_size - 1) << 18) | VD(TMP_FREG2) | VM(TMP_FREG2)));
@ -4468,14 +4531,16 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 dst_freg, sljit_s32 src1_freg, sljit_s32 src2_freg)
sljit_s32 dst_vreg, sljit_s32 src1_vreg, sljit_s32 src2, sljit_sw src2w)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
sljit_s32 elem_size = SLJIT_SIMD_GET_ELEM_SIZE(type);
sljit_ins ins = 0;
sljit_s32 alignment;
sljit_ins ins = 0, load_ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_freg, src1_freg, src2_freg));
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_vreg, src1_vreg, src2, src2w));
ADJUST_LOCAL_OFFSET(src2, src2w);
if (reg_size != 3 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -4483,6 +4548,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
if ((type & SLJIT_SIMD_FLOAT) && (elem_size < 2 || elem_size > 3))
return SLJIT_ERR_UNSUPPORTED;
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
switch (SLJIT_SIMD_GET_OPCODE(type)) {
case SLJIT_SIMD_OP2_AND:
ins = VAND;
@ -4493,19 +4561,51 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
case SLJIT_SIMD_OP2_XOR:
ins = VEOR;
break;
case SLJIT_SIMD_OP2_SHUFFLE:
ins = VTBL;
break;
}
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
if (src2 & SLJIT_MEM) {
if (elem_size > 3)
elem_size = 3;
load_ins = VLD1 | (sljit_ins)((reg_size == 3) ? (0x7 << 8) : (0xa << 8));
alignment = SLJIT_SIMD_GET_ELEM2_SIZE(type);
SLJIT_ASSERT(reg_size >= alignment);
if (alignment == 3)
load_ins |= 0x10;
else if (alignment >= 4)
load_ins |= 0x20;
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src2, src2w));
FAIL_IF(push_inst(compiler, load_ins | VD(TMP_FREG2) | RN(src2) | ((sljit_ins)elem_size) << 6 | 0xf));
src2 = TMP_FREG2;
}
if (reg_size == 4) {
dst_freg = simd_get_quad_reg_index(dst_freg);
src1_freg = simd_get_quad_reg_index(src1_freg);
src2_freg = simd_get_quad_reg_index(src2_freg);
dst_vreg = simd_get_quad_reg_index(dst_vreg);
src1_vreg = simd_get_quad_reg_index(src1_vreg);
src2 = simd_get_quad_reg_index(src2);
if (SLJIT_SIMD_GET_OPCODE(type) == SLJIT_SIMD_OP2_SHUFFLE) {
ins |= (sljit_ins)1 << 8;
FAIL_IF(push_inst(compiler, ins | VD(dst_vreg != src1_vreg ? dst_vreg : TMP_FREG2) | VN(src1_vreg) | VM(src2)));
src2 += SLJIT_QUAD_OTHER_HALF(src2);
FAIL_IF(push_inst(compiler, ins | VD(dst_vreg + SLJIT_QUAD_OTHER_HALF(dst_vreg)) | VN(src1_vreg) | VM(src2)));
if (dst_vreg == src1_vreg)
return push_inst(compiler, VORR | VD(dst_vreg) | VN(TMP_FREG2) | VM(TMP_FREG2));
return SLJIT_SUCCESS;
}
ins |= (sljit_ins)1 << 6;
}
return push_inst(compiler, ins | VD(dst_freg) | VN(src1_freg) | VM(src2_freg));
return push_inst(compiler, ins | VD(dst_vreg) | VN(src1_vreg) | VM(src2));
}
#undef FPU_LOAD
@ -4519,7 +4619,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_load(compiler, op, dst_reg, mem_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
case SLJIT_MOV_S16:
case SLJIT_MOV_S32:
return SLJIT_ERR_UNSUPPORTED;
case SLJIT_MOV_U8:
ins = LDREXB;
break;
@ -4531,6 +4639,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
break;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, ins | RN(mem_reg) | RD(dst_reg));
}
@ -4547,7 +4658,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_store(compiler, op, src_reg, mem_reg, temp_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
case SLJIT_MOV_S16:
case SLJIT_MOV_S32:
return SLJIT_ERR_UNSUPPORTED;
case SLJIT_MOV_U8:
ins = STREXB;
break;
@ -4559,6 +4678,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
break;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, ins | RN(mem_reg) | RD(TMP_REG1) | RM(src_reg)));
if (op & SLJIT_SET_ATOMIC_STORED)
return push_inst(compiler, CMP | SET_FLAGS | SRC2_IMM | RN(TMP_REG1));


@ -91,6 +91,7 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 3] = {
#define CLZ 0xdac01000
#define CSEL 0x9a800000
#define CSINC 0x9a800400
#define DMB_SY 0xd5033fbf
#define DUP_e 0x0e000400
#define DUP_g 0x0e000c00
#define EOR 0xca000000
@ -171,6 +172,7 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 3] = {
#define SUBI 0xd1000000
#define SUBS 0xeb000000
#define TBZ 0x36000000
#define TBL_v 0x0e000000
#define UBFM 0xd3400000
#define UCVTF 0x9e630000
#define UDIV 0x9ac00800
@ -208,7 +210,11 @@ static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_i
{
sljit_sw diff;
sljit_uw target_addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_uw orig_addr = jump->addr;
SLJIT_UNUSED_ARG(executable_offset);
jump->addr = jump_addr;
if (jump->flags & SLJIT_REWRITABLE_JUMP)
goto exit;
@ -216,10 +222,13 @@ static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_i
target_addr = jump->u.target;
else {
SLJIT_ASSERT(jump->u.label != NULL);
target_addr = (sljit_uw)(code + jump->u.label->size) + (sljit_uw)executable_offset;
target_addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
if (jump->u.label->size > orig_addr)
jump_addr = (sljit_uw)(code + orig_addr);
}
diff = (sljit_sw)target_addr - (sljit_sw)code_ptr - executable_offset;
diff = (sljit_sw)target_addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr, executable_offset);
if (jump->flags & IS_COND) {
diff += SSIZE_OF(ins);
@ -271,16 +280,21 @@ exit:
static SLJIT_INLINE sljit_sw mov_addr_get_length(struct sljit_jump *jump, sljit_ins *code_ptr, sljit_ins *code, sljit_sw executable_offset)
{
sljit_uw addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_sw diff;
SLJIT_UNUSED_ARG(executable_offset);
SLJIT_ASSERT(jump->flags < ((sljit_uw)4 << JUMP_SIZE_SHIFT));
if (jump->flags & JUMP_ADDR)
addr = jump->u.target;
else
else {
addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(code_ptr, executable_offset);
if (jump->u.label->size > jump->addr)
jump_addr = (sljit_uw)(code + jump->addr);
}
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr, executable_offset);
if (diff <= 0xfffff && diff >= -0x100000) {
jump->flags |= PATCH_B;
@ -422,6 +436,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
} else {
/* Unit size: instruction. */
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if ((jump->flags & IS_COND) && (diff + 1) <= (0xfffff / SSIZE_OF(ins)) && (diff + 1) >= (-0x100000 / SSIZE_OF(ins)))
total_size = 0;
@ -439,6 +457,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
if (!(jump->flags & JUMP_ADDR)) {
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if (diff <= (0xfffff / SSIZE_OF(ins)) && diff >= (-0x100000 / SSIZE_OF(ins)))
total_size = 0;
@ -516,7 +538,6 @@ SLJIT_API_FUNC_ATTRIBUTE void* sljit_generate_code(struct sljit_compiler *compil
if (next_min_addr == next_jump_addr) {
if (!(jump->flags & JUMP_MOV_ADDR)) {
word_count = word_count - 1 + (jump->flags >> JUMP_SIZE_SHIFT);
jump->addr = (sljit_uw)code_ptr;
code_ptr = detect_jump_type(jump, code_ptr, code, executable_offset);
SLJIT_ASSERT((jump->flags & PATCH_COND) || ((sljit_uw)code_ptr - jump->addr < (jump->flags >> JUMP_SIZE_SHIFT) * sizeof(sljit_ins)));
} else {
@ -593,6 +614,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_COPY_F32:
case SLJIT_HAS_COPY_F64:
case SLJIT_HAS_ATOMIC:
case SLJIT_HAS_MEMORY_BARRIER:
return 1;
default:
@ -1208,16 +1230,23 @@ static sljit_s32 emit_op_mem(struct sljit_compiler *compiler, sljit_s32 flags, s
/* --------------------------------------------------------------------- */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 prev, fprev, saved_regs_size, i, tmp;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
sljit_ins offs;
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
saved_regs_size = GET_SAVED_REGISTERS_SIZE(scratches, saveds - saved_arg_count, 2);
saved_regs_size += GET_SAVED_FLOAT_REGISTERS_SIZE(fscratches, fsaveds, f64);
@ -1383,15 +1412,21 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 saved_regs_size;
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
saved_regs_size = GET_SAVED_REGISTERS_SIZE(scratches, saveds - SLJIT_KEPT_SAVEDS_COUNT(options), 2);
saved_regs_size += GET_SAVED_FLOAT_REGISTERS_SIZE(fscratches, fsaveds, f64);
@ -1537,7 +1572,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
op = GET_OPCODE(op);
switch (op) {
case SLJIT_BREAKPOINT:
return push_inst(compiler, BRK);
return push_inst(compiler, BRK | (0xf000 << 5));
case SLJIT_NOP:
return push_inst(compiler, NOP);
case SLJIT_LMUL_UW:
@ -1554,6 +1589,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
case SLJIT_DIV_UW:
case SLJIT_DIV_SW:
return push_inst(compiler, ((op == SLJIT_DIV_UW ? UDIV : SDIV) ^ inv_bits) | RD(SLJIT_R0) | RN(SLJIT_R0) | RM(SLJIT_R1));
case SLJIT_MEMORY_BARRIER:
return push_inst(compiler, DMB_SY);
case SLJIT_ENDBR:
case SLJIT_SKIP_FRAMES_BEFORE_RETURN:
return SLJIT_SUCCESS;
@ -2775,7 +2812,7 @@ static sljit_s32 sljit_emit_simd_mem_offset(struct sljit_compiler *compiler, slj
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -2783,7 +2820,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_mov(compiler, type, freg, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_mov(compiler, type, vreg, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -2798,9 +2835,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
if (!(srcdst & SLJIT_MEM)) {
if (type & SLJIT_SIMD_STORE)
ins = VD(srcdst) | VN(freg) | VM(freg);
ins = VD(srcdst) | VN(vreg) | VM(vreg);
else
ins = VD(freg) | VN(srcdst) | VM(srcdst);
ins = VD(vreg) | VN(srcdst) | VM(srcdst);
if (reg_size == 4)
ins |= (1 << 30);
@ -2818,7 +2855,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
if (reg_size == 4)
ins |= (1 << 30);
return push_inst(compiler, ins | ((sljit_ins)elem_size << 10) | RN(srcdst) | VT(freg));
return push_inst(compiler, ins | ((sljit_ins)elem_size << 10) | RN(srcdst) | VT(vreg));
}
static sljit_ins simd_get_imm(sljit_s32 elem_size, sljit_uw value)
@ -2923,7 +2960,7 @@ static sljit_ins simd_get_imm(sljit_s32 elem_size, sljit_uw value)
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -2931,7 +2968,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
sljit_ins ins, imm;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_replicate(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_replicate(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -2952,7 +2989,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
ins |= (sljit_ins)1 << 30;
return push_inst(compiler, LD1R | ins | RN(src) | VT(freg));
return push_inst(compiler, LD1R | ins | RN(src) | VT(vreg));
}
ins = (sljit_ins)1 << (16 + elem_size);
@ -2962,9 +2999,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (type & SLJIT_SIMD_FLOAT) {
if (src == SLJIT_IMM)
return push_inst(compiler, MOVI | (ins & ((sljit_ins)1 << 30)) | VD(freg));
return push_inst(compiler, MOVI | (ins & ((sljit_ins)1 << 30)) | VD(vreg));
return push_inst(compiler, DUP_e | ins | VD(freg) | VN(src));
return push_inst(compiler, DUP_e | ins | VD(vreg) | VN(src));
}
if (src == SLJIT_IMM) {
@ -2976,18 +3013,18 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (imm != ~(sljit_ins)0) {
imm |= ins & ((sljit_ins)1 << 30);
return push_inst(compiler, MOVI | imm | VD(freg));
return push_inst(compiler, MOVI | imm | VD(vreg));
}
FAIL_IF(load_immediate(compiler, TMP_REG2, srcw));
src = TMP_REG2;
}
return push_inst(compiler, DUP_g | ins | VD(freg) | RN(src));
return push_inst(compiler, DUP_g | ins | VD(vreg) | RN(src));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg, sljit_s32 lane_index,
sljit_s32 vreg, sljit_s32 lane_index,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -2995,7 +3032,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, freg, lane_index, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, vreg, lane_index, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -3011,13 +3048,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (type & SLJIT_SIMD_LANE_ZERO) {
ins = (reg_size == 3) ? 0 : ((sljit_ins)1 << 30);
if ((type & SLJIT_SIMD_FLOAT) && freg == srcdst) {
FAIL_IF(push_inst(compiler, ORR_v | ins | VD(TMP_FREG1) | VN(freg) | VM(freg)));
if ((type & SLJIT_SIMD_FLOAT) && vreg == srcdst) {
FAIL_IF(push_inst(compiler, ORR_v | ins | VD(TMP_FREG1) | VN(vreg) | VM(vreg)));
srcdst = TMP_FREG1;
srcdstw = 0;
}
FAIL_IF(push_inst(compiler, MOVI | ins | VD(freg)));
FAIL_IF(push_inst(compiler, MOVI | ins | VD(vreg)));
}
if (srcdst & SLJIT_MEM) {
@ -3033,14 +3070,14 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
lane_index = lane_index << elem_size;
ins |= (sljit_ins)(((lane_index & 0x8) << 27) | ((lane_index & 0x7) << 10));
return push_inst(compiler, ((type & SLJIT_SIMD_STORE) ? ST1_s : LD1_s) | ins | RN(srcdst) | VT(freg));
return push_inst(compiler, ((type & SLJIT_SIMD_STORE) ? ST1_s : LD1_s) | ins | RN(srcdst) | VT(vreg));
}
if (type & SLJIT_SIMD_FLOAT) {
if (type & SLJIT_SIMD_STORE)
ins = INS_e | ((sljit_ins)1 << (16 + elem_size)) | ((sljit_ins)lane_index << (11 + elem_size)) | VD(srcdst) | VN(freg);
ins = INS_e | ((sljit_ins)1 << (16 + elem_size)) | ((sljit_ins)lane_index << (11 + elem_size)) | VD(srcdst) | VN(vreg);
else
ins = INS_e | ((((sljit_ins)lane_index << 1) | 1) << (16 + elem_size)) | VD(freg) | VN(srcdst);
ins = INS_e | ((((sljit_ins)lane_index << 1) | 1) << (16 + elem_size)) | VD(vreg) | VN(srcdst);
return push_inst(compiler, ins);
}
@ -3054,7 +3091,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
}
if (type & SLJIT_SIMD_STORE) {
ins = RD(srcdst) | VN(freg);
ins = RD(srcdst) | VN(vreg);
if ((type & SLJIT_SIMD_LANE_SIGNED) && (elem_size < 2 || (elem_size == 2 && !(type & SLJIT_32)))) {
ins |= SMOV;
@ -3064,7 +3101,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
} else
ins |= UMOV;
} else
ins = INS | VD(freg) | RN(srcdst);
ins = INS | VD(vreg) | RN(srcdst);
if (elem_size == 3)
ins |= (sljit_ins)1 << 30;
@ -3073,7 +3110,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_s32 src_lane_index)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3081,7 +3118,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, freg, src, src_lane_index));
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, vreg, src, src_lane_index));
if (reg_size != 3 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -3097,11 +3134,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
if (reg_size == 4)
ins |= (sljit_ins)1 << 30;
return push_inst(compiler, DUP_e | ins | VD(freg) | VN(src));
return push_inst(compiler, DUP_e | ins | VD(vreg) | VN(src));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3109,7 +3146,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
sljit_s32 elem2_size = SLJIT_SIMD_GET_ELEM2_SIZE(type);
CHECK_ERROR();
CHECK(check_sljit_emit_simd_extend(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_extend(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -3126,28 +3163,28 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src, srcw));
if (reg_size == 4 && elem2_size - elem_size == 1)
FAIL_IF(push_inst(compiler, LD1 | ((sljit_ins)elem_size << 10) | RN(src) | VT(freg)));
FAIL_IF(push_inst(compiler, LD1 | ((sljit_ins)elem_size << 10) | RN(src) | VT(vreg)));
else
FAIL_IF(push_inst(compiler, LD1_s | ((sljit_ins)0x2000 << (reg_size - elem2_size + elem_size)) | RN(src) | VT(freg)));
src = freg;
FAIL_IF(push_inst(compiler, LD1_s | ((sljit_ins)0x2000 << (reg_size - elem2_size + elem_size)) | RN(src) | VT(vreg)));
src = vreg;
}
if (type & SLJIT_SIMD_FLOAT) {
SLJIT_ASSERT(reg_size == 4);
return push_inst(compiler, FCVTL | (1 << 22) | VD(freg) | VN(src));
return push_inst(compiler, FCVTL | (1 << 22) | VD(vreg) | VN(src));
}
do {
FAIL_IF(push_inst(compiler, ((type & SLJIT_SIMD_EXTEND_SIGNED) ? SSHLL : USHLL)
| ((sljit_ins)1 << (19 + elem_size)) | VD(freg) | VN(src)));
src = freg;
| ((sljit_ins)1 << (19 + elem_size)) | VD(vreg) | VN(src)));
src = vreg;
} while (++elem_size < elem2_size);
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 dst, sljit_sw dstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3156,7 +3193,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
sljit_s32 dst_r;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_sign(compiler, type, freg, dst, dstw));
CHECK(check_sljit_emit_simd_sign(compiler, type, vreg, dst, dstw));
ADJUST_LOCAL_OFFSET(dst, dstw);
@ -3191,7 +3228,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
if (reg_size == 4)
ins |= (1 << 30);
FAIL_IF(push_inst(compiler, ins | VD(TMP_FREG1) | VN(freg)));
FAIL_IF(push_inst(compiler, ins | VD(TMP_FREG1) | VN(vreg)));
if (reg_size == 4 && elem_size > 0)
FAIL_IF(push_inst(compiler, XTN | ((sljit_ins)(elem_size - 1) << 22) | VD(TMP_FREG1) | VN(TMP_FREG1)));
@ -3224,14 +3261,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 dst_freg, sljit_s32 src1_freg, sljit_s32 src2_freg)
sljit_s32 dst_vreg, sljit_s32 src1_vreg, sljit_s32 src2, sljit_sw src2w)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
sljit_s32 elem_size = SLJIT_SIMD_GET_ELEM_SIZE(type);
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_freg, src1_freg, src2_freg));
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_vreg, src1_vreg, src2, src2w));
ADJUST_LOCAL_OFFSET(src2, src2w);
if (reg_size != 3 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -3239,6 +3277,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
if ((type & SLJIT_SIMD_FLOAT) && (elem_size < 2 || elem_size > 3))
return SLJIT_ERR_UNSUPPORTED;
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
switch (SLJIT_SIMD_GET_OPCODE(type)) {
case SLJIT_SIMD_OP2_AND:
ins = AND_v;
@ -3249,15 +3290,24 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
case SLJIT_SIMD_OP2_XOR:
ins = EOR_v;
break;
case SLJIT_SIMD_OP2_SHUFFLE:
ins = TBL_v;
break;
}
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
if (src2 & SLJIT_MEM) {
if (elem_size > 3)
elem_size = 3;
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src2, src2w));
push_inst(compiler, LD1 | (reg_size == 4 ? (1 << 30) : 0) | ((sljit_ins)elem_size << 10) | RN(src2) | VT(TMP_FREG1));
src2 = TMP_FREG1;
}
if (reg_size == 4)
ins |= (sljit_ins)1 << 30;
return push_inst(compiler, ins | VD(dst_freg) | VN(src1_freg) | VM(src2_freg));
return push_inst(compiler, ins | VD(dst_vreg) | VN(src1_vreg) | VM(src2));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler *compiler, sljit_s32 op,
@ -3269,39 +3319,55 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_load(compiler, op, dst_reg, mem_reg));
#ifndef __ARM_FEATURE_ATOMICS
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
#endif /* ARM_FEATURE_ATOMICS */
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
case SLJIT_MOV_S16:
case SLJIT_MOV_S32:
return SLJIT_ERR_UNSUPPORTED;
case SLJIT_MOV32:
case SLJIT_MOV_U32:
#ifdef __ARM_FEATURE_ATOMICS
switch (GET_OPCODE(op)) {
case SLJIT_MOV32:
case SLJIT_MOV_U32:
if (!(op & SLJIT_ATOMIC_USE_LS))
ins = LDR ^ (1 << 30);
break;
case SLJIT_MOV_U16:
ins = LDRH;
break;
case SLJIT_MOV_U8:
ins = LDRB;
break;
default:
ins = LDR;
break;
}
#else /* !__ARM_FEATURE_ATOMICS */
switch (GET_OPCODE(op)) {
case SLJIT_MOV32:
case SLJIT_MOV_U32:
else
#endif /* ARM_FEATURE_ATOMICS */
ins = LDXR ^ (1 << 30);
break;
case SLJIT_MOV_U8:
#ifdef __ARM_FEATURE_ATOMICS
if (!(op & SLJIT_ATOMIC_USE_LS))
ins = LDRB;
else
#endif /* ARM_FEATURE_ATOMICS */
ins = LDXRB;
break;
case SLJIT_MOV_U16:
#ifdef __ARM_FEATURE_ATOMICS
if (!(op & SLJIT_ATOMIC_USE_LS))
ins = LDRH;
else
#endif /* ARM_FEATURE_ATOMICS */
ins = LDXRH;
break;
default:
#ifdef __ARM_FEATURE_ATOMICS
if (!(op & SLJIT_ATOMIC_USE_LS))
ins = LDR;
else
#endif /* ARM_FEATURE_ATOMICS */
ins = LDXR;
break;
}
#endif /* ARM_FEATURE_ATOMICS */
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, ins | RN(mem_reg) | RT(dst_reg));
}
@ -3311,18 +3377,22 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
sljit_s32 temp_reg)
{
sljit_ins ins;
sljit_s32 tmp = temp_reg;
sljit_ins cmp = 0;
sljit_ins inv_bits = W_OP;
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_store(compiler, op, src_reg, mem_reg, temp_reg));
#ifdef __ARM_FEATURE_ATOMICS
if (!(op & SLJIT_ATOMIC_USE_LS)) {
if (op & SLJIT_SET_ATOMIC_STORED)
cmp = (SUBS ^ W_OP) | RD(TMP_ZERO);
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
case SLJIT_MOV_S16:
case SLJIT_MOV_S32:
return SLJIT_ERR_UNSUPPORTED;
case SLJIT_MOV32:
case SLJIT_MOV_U32:
ins = CAS ^ (1 << 30);
@ -3335,31 +3405,37 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
break;
default:
ins = CAS;
inv_bits = 0;
if (cmp)
cmp ^= W_OP;
break;
}
if (cmp) {
FAIL_IF(push_inst(compiler, (MOV ^ inv_bits) | RM(temp_reg) | RD(TMP_REG1)));
tmp = TMP_REG1;
}
FAIL_IF(push_inst(compiler, ins | RM(tmp) | RN(mem_reg) | RD(src_reg)));
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
if (cmp)
FAIL_IF(push_inst(compiler, ((MOV ^ W_OP) ^ (cmp & W_OP)) | RM(temp_reg) | RD(TMP_REG2)));
FAIL_IF(push_inst(compiler, ins | RM(temp_reg) | RN(mem_reg) | RD(src_reg)));
if (!cmp)
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, cmp | RM(tmp) | RN(temp_reg)));
FAIL_IF(push_inst(compiler, (CSET ^ inv_bits) | RD(tmp)));
return push_inst(compiler, cmp | RM(tmp) | RN(TMP_ZERO));
return push_inst(compiler, cmp | RM(TMP_REG2) | RN(temp_reg));
}
#else /* !__ARM_FEATURE_ATOMICS */
SLJIT_UNUSED_ARG(tmp);
SLJIT_UNUSED_ARG(inv_bits);
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
#endif /* __ARM_FEATURE_ATOMICS */
if (op & SLJIT_SET_ATOMIC_STORED)
cmp = (SUBI ^ W_OP) | (1 << 29);
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
case SLJIT_MOV_S16:
case SLJIT_MOV_S32:
return SLJIT_ERR_UNSUPPORTED;
case SLJIT_MOV32:
case SLJIT_MOV_U32:
ins = STXR ^ (1 << 30);
@ -3375,9 +3451,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
break;
}
FAIL_IF(push_inst(compiler, ins | RM(TMP_REG1) | RN(mem_reg) | RT(src_reg)));
return cmp ? push_inst(compiler, cmp | RD(TMP_ZERO) | RN(TMP_REG1)) : SLJIT_SUCCESS;
#endif /* __ARM_FEATURE_ATOMICS */
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, ins | RM(TMP_REG2) | RN(mem_reg) | RT(src_reg)));
if (!cmp)
return SLJIT_SUCCESS;
return push_inst(compiler, cmp | RD(TMP_ZERO) | RN(TMP_REG2));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_get_local_base(struct sljit_compiler *compiler, sljit_s32 dst, sljit_sw dstw, sljit_sw offset)


@ -138,6 +138,7 @@ static const sljit_u8 freg_ebit_map[((SLJIT_NUMBER_OF_FLOAT_REGISTERS + 2) << 1)
#define CMPI_W 0xf1b00f00
#define CMP_X 0x4500
#define CMP_W 0xebb00f00
#define DMB_SY 0xf3bf8f5f
#define EORI 0xf0800000
#define EORS 0x4040
#define EOR_W 0xea800000
@ -253,6 +254,7 @@ static const sljit_u8 freg_ebit_map[((SLJIT_NUMBER_OF_FLOAT_REGISTERS + 2) << 1)
#define VST1_s 0xf9800000
#define VSTR_F32 0xed000a00
#define VSUB_F32 0xee300a40
#define VTBL 0xffb00800
#if (defined SLJIT_ARGUMENT_CHECKS && SLJIT_ARGUMENT_CHECKS)
@ -264,11 +266,28 @@ static sljit_s32 function_check_is_freg(struct sljit_compiler *compiler, sljit_s
if (is_32 && fr >= SLJIT_F64_SECOND(SLJIT_FR0))
fr -= SLJIT_F64_SECOND(0);
return (fr >= SLJIT_FR0 && fr < (SLJIT_FR0 + compiler->fscratches))
|| (fr > (SLJIT_FS0 - compiler->fsaveds) && fr <= SLJIT_FS0)
return (fr >= SLJIT_FR0 && fr < (SLJIT_FR0 + compiler->real_fscratches))
|| (fr > (SLJIT_FS0 - compiler->real_fsaveds) && fr <= SLJIT_FS0)
|| (fr >= SLJIT_TMP_FREGISTER_BASE && fr < (SLJIT_TMP_FREGISTER_BASE + SLJIT_NUMBER_OF_TEMPORARY_FLOAT_REGISTERS));
}
static sljit_s32 function_check_is_vreg(struct sljit_compiler *compiler, sljit_s32 vr, sljit_s32 type)
{
sljit_s32 vr_low = vr;
if (compiler->scratches == -1)
return 0;
if (SLJIT_SIMD_GET_REG_SIZE(type) == 4) {
vr += (vr & 0x1);
vr_low = vr - 1;
}
return (vr >= SLJIT_VR0 && vr < (SLJIT_VR0 + compiler->vscratches))
|| (vr_low > (SLJIT_VS0 - compiler->vsaveds) && vr_low <= SLJIT_VS0)
|| (vr >= SLJIT_TMP_VREGISTER_BASE && vr < (SLJIT_TMP_VREGISTER_BASE + SLJIT_NUMBER_OF_TEMPORARY_VECTOR_REGISTERS));
}
#endif /* SLJIT_ARGUMENT_CHECKS */
static sljit_s32 push_inst16(struct sljit_compiler *compiler, sljit_ins inst)
@ -320,7 +339,12 @@ static SLJIT_INLINE void modify_imm32_const(sljit_u16 *inst, sljit_uw new_imm)
static SLJIT_INLINE sljit_u16* detect_jump_type(struct sljit_jump *jump, sljit_u16 *code_ptr, sljit_u16 *code, sljit_sw executable_offset)
{
sljit_sw diff;
sljit_uw target_addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_uw orig_addr = jump->addr;
SLJIT_UNUSED_ARG(executable_offset);
jump->addr = jump_addr;
if (jump->flags & SLJIT_REWRITABLE_JUMP)
goto exit;
@ -328,12 +352,17 @@ static SLJIT_INLINE sljit_u16* detect_jump_type(struct sljit_jump *jump, sljit_u
/* Branch to ARM code is not optimized yet. */
if (!(jump->u.target & 0x1))
goto exit;
diff = (sljit_sw)jump->u.target - (sljit_sw)(code_ptr + 2) - executable_offset;
target_addr = jump->u.target;
} else {
SLJIT_ASSERT(jump->u.label != NULL);
diff = (sljit_sw)(code + jump->u.label->size) - (sljit_sw)(code_ptr + 2);
target_addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
if (jump->u.label->size > orig_addr)
jump_addr = (sljit_uw)(code + orig_addr);
}
diff = (sljit_sw)target_addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr + 4, executable_offset);
if (jump->flags & IS_COND) {
SLJIT_ASSERT(!(jump->flags & IS_BL));
/* Size of the prefix IT instruction. */
@ -380,16 +409,21 @@ exit:
static SLJIT_INLINE sljit_sw mov_addr_get_length(struct sljit_jump *jump, sljit_u16 *code_ptr, sljit_u16 *code, sljit_sw executable_offset)
{
sljit_uw addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_sw diff;
SLJIT_UNUSED_ARG(executable_offset);
if (jump->flags & JUMP_ADDR)
addr = jump->u.target;
else
else {
addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
if (jump->u.label->size > jump->addr)
jump_addr = (sljit_uw)(code + jump->addr);
}
/* The pc+4 offset is represented by the 2 * SSIZE_OF(sljit_u16) below. */
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(code_ptr, executable_offset);
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr, executable_offset);
/* Note: ADR with imm8 does not set the last bit (Thumb2 flag). */
@ -517,6 +551,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
if (!(jump->flags & (SLJIT_REWRITABLE_JUMP | JUMP_ADDR))) {
/* Unit size: instruction. */
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr - 2;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if (jump->flags & IS_COND) {
diff++;
@ -540,6 +578,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
if (!(jump->flags & JUMP_ADDR)) {
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if (diff <= (0xffd / SSIZE_OF(u16)) && diff >= (-0xfff / SSIZE_OF(u16)))
total_size = 1;
@ -612,7 +654,6 @@ SLJIT_API_FUNC_ATTRIBUTE void* sljit_generate_code(struct sljit_compiler *compil
if (next_min_addr == next_jump_addr) {
if (!(jump->flags & JUMP_MOV_ADDR)) {
half_count = half_count - 1 + (jump->flags >> JUMP_SIZE_SHIFT);
jump->addr = (sljit_uw)code_ptr;
code_ptr = detect_jump_type(jump, code_ptr, code, executable_offset);
SLJIT_ASSERT((sljit_uw)code_ptr - jump->addr <
((jump->flags >> JUMP_SIZE_SHIFT) + ((jump->flags & 0xf0) <= PATCH_TYPE2)) * sizeof(sljit_u16));
@ -694,6 +735,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_COPY_F32:
case SLJIT_HAS_COPY_F64:
case SLJIT_HAS_ATOMIC:
case SLJIT_HAS_MEMORY_BARRIER:
return 1;
default:
@ -1367,9 +1409,11 @@ static SLJIT_INLINE sljit_s32 emit_op_mem(struct sljit_compiler *compiler, sljit
/* --------------------------------------------------------------------- */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 size, i, tmp, word_arg_count;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
sljit_uw offset;
@ -1383,8 +1427,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
#endif
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
tmp = SLJIT_S0 - saveds;
for (i = SLJIT_S0 - saved_arg_count; i > tmp; i--)
@ -1577,15 +1626,21 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 size;
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
size = GET_SAVED_REGISTERS_SIZE(scratches, saveds - SLJIT_KEPT_SAVEDS_COUNT(options), 1);
/* Doubles are saved, so alignment is unaffected. */
@ -1904,6 +1959,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
}
return SLJIT_SUCCESS;
#endif /* __ARM_FEATURE_IDIV || __ARM_ARCH_EXT_IDIV__ */
case SLJIT_MEMORY_BARRIER:
return push_inst32(compiler, DMB_SY);
case SLJIT_ENDBR:
case SLJIT_SKIP_FRAMES_BEFORE_RETURN:
return SLJIT_SUCCESS;
@ -2204,7 +2261,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_get_register_index(sljit_s32 type, slji
if (type == SLJIT_FLOAT_REGISTER || type == SLJIT_SIMD_REG_64)
return freg_map[reg];
if (type != SLJIT_SIMD_REG_128)
if (type == SLJIT_SIMD_REG_128)
return freg_map[reg] & ~0x1;
return -1;
@ -3582,7 +3639,7 @@ static SLJIT_INLINE sljit_s32 simd_get_quad_reg_index(sljit_s32 freg)
#define SLJIT_QUAD_OTHER_HALF(freg) ((((freg) & 0x1) << 1) - 1)
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3591,7 +3648,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_mov(compiler, type, freg, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_mov(compiler, type, vreg, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -3605,16 +3662,16 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (!(srcdst & SLJIT_MEM)) {
if (reg_size == 4)
srcdst = simd_get_quad_reg_index(srcdst);
if (type & SLJIT_SIMD_STORE)
ins = VD4(srcdst) | VN4(freg) | VM4(freg);
ins = VD4(srcdst) | VN4(vreg) | VM4(vreg);
else
ins = VD4(freg) | VN4(srcdst) | VM4(srcdst);
ins = VD4(vreg) | VN4(srcdst) | VM4(srcdst);
if (reg_size == 4)
ins |= (sljit_ins)1 << 6;
@ -3627,7 +3684,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
if (elem_size > 3)
elem_size = 3;
ins = ((type & SLJIT_SIMD_STORE) ? VST1 : VLD1) | VD4(freg)
ins = ((type & SLJIT_SIMD_STORE) ? VST1 : VLD1) | VD4(vreg)
| (sljit_ins)((reg_size == 3) ? (0x7 << 8) : (0xa << 8));
SLJIT_ASSERT(reg_size >= alignment);
@ -3735,7 +3792,7 @@ static sljit_ins simd_get_imm(sljit_s32 elem_size, sljit_uw value)
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3743,7 +3800,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
sljit_ins ins, imm;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_replicate(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_replicate(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -3757,24 +3814,24 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (src == SLJIT_IMM && srcw == 0)
return push_inst32(compiler, VMOV_i | ((reg_size == 4) ? (1 << 6) : 0) | VD4(freg));
return push_inst32(compiler, VMOV_i | ((reg_size == 4) ? (1 << 6) : 0) | VD4(vreg));
if (SLJIT_UNLIKELY(elem_size == 3)) {
SLJIT_ASSERT(type & SLJIT_SIMD_FLOAT);
if (src & SLJIT_MEM) {
FAIL_IF(emit_fop_mem(compiler, FPU_LOAD | SLJIT_32, freg, src, srcw));
src = freg;
} else if (freg != src)
FAIL_IF(push_inst32(compiler, VORR | VD4(freg) | VN4(src) | VM4(src)));
FAIL_IF(emit_fop_mem(compiler, FPU_LOAD | SLJIT_32, vreg, src, srcw));
src = vreg;
} else if (vreg != src)
FAIL_IF(push_inst32(compiler, VORR | VD4(vreg) | VN4(src) | VM4(src)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
if (freg != src)
return push_inst32(compiler, VORR | VD4(freg) | VN4(src) | VM4(src));
if (vreg != src)
return push_inst32(compiler, VORR | VD4(vreg) | VN4(src) | VM4(src));
return SLJIT_SUCCESS;
}
@ -3786,7 +3843,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
ins |= 1 << 5;
return push_inst32(compiler, VLD1_r | ins | VD4(freg) | RN4(src) | 0xf);
return push_inst32(compiler, VLD1_r | ins | VD4(vreg) | RN4(src) | 0xf);
}
if (type & SLJIT_SIMD_FLOAT) {
@ -3796,7 +3853,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
ins |= (sljit_ins)1 << 6;
return push_inst32(compiler, VDUP_s | ins | VD4(freg) | (sljit_ins)freg_map[src]);
return push_inst32(compiler, VDUP_s | ins | VD4(vreg) | (sljit_ins)freg_map[src]);
}
if (src == SLJIT_IMM) {
@ -3809,7 +3866,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
imm |= (sljit_ins)1 << 6;
return push_inst32(compiler, VMOV_i | imm | VD4(freg));
return push_inst32(compiler, VMOV_i | imm | VD4(vreg));
}
FAIL_IF(load_immediate(compiler, TMP_REG1, (sljit_uw)srcw));
@ -3831,11 +3888,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 4)
ins |= (sljit_ins)1 << 21;
return push_inst32(compiler, VDUP | ins | VN4(freg) | RT4(src));
return push_inst32(compiler, VDUP | ins | VN4(vreg) | RT4(src));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg, sljit_s32 lane_index,
sljit_s32 vreg, sljit_s32 lane_index,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3843,7 +3900,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, freg, lane_index, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, vreg, lane_index, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -3857,7 +3914,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (type & SLJIT_SIMD_LANE_ZERO) {
ins = (reg_size == 3) ? 0 : ((sljit_ins)1 << 6);
@ -3865,62 +3922,62 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (type & SLJIT_SIMD_FLOAT) {
if (elem_size == 3 && !(srcdst & SLJIT_MEM)) {
if (lane_index == 1)
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
if (srcdst != freg)
FAIL_IF(push_inst32(compiler, VORR | VD4(freg) | VN4(srcdst) | VM4(srcdst)));
if (srcdst != vreg)
FAIL_IF(push_inst32(compiler, VORR | VD4(vreg) | VN4(srcdst) | VM4(srcdst)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
return push_inst32(compiler, VMOV_i | VD4(freg));
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
return push_inst32(compiler, VMOV_i | VD4(vreg));
}
if (srcdst == freg || (elem_size == 3 && srcdst == (freg + SLJIT_QUAD_OTHER_HALF(freg)))) {
FAIL_IF(push_inst32(compiler, VORR | ins | VD4(TMP_FREG2) | VN4(freg) | VM4(freg)));
if (srcdst == vreg || (elem_size == 3 && srcdst == (vreg + SLJIT_QUAD_OTHER_HALF(vreg)))) {
FAIL_IF(push_inst32(compiler, VORR | ins | VD4(TMP_FREG2) | VN4(vreg) | VM4(vreg)));
srcdst = TMP_FREG2;
srcdstw = 0;
}
}
FAIL_IF(push_inst32(compiler, VMOV_i | ins | VD4(freg)));
FAIL_IF(push_inst32(compiler, VMOV_i | ins | VD4(vreg)));
}
if (reg_size == 4 && lane_index >= (0x8 >> elem_size)) {
lane_index -= (0x8 >> elem_size);
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
}
if (srcdst & SLJIT_MEM) {
if (elem_size == 3)
return emit_fop_mem(compiler, ((type & SLJIT_SIMD_STORE) ? 0 : FPU_LOAD) | SLJIT_32, freg, srcdst, srcdstw);
return emit_fop_mem(compiler, ((type & SLJIT_SIMD_STORE) ? 0 : FPU_LOAD) | SLJIT_32, vreg, srcdst, srcdstw);
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &srcdst, srcdstw));
lane_index = lane_index << elem_size;
ins = (sljit_ins)((elem_size << 10) | (lane_index << 5));
return push_inst32(compiler, ((type & SLJIT_SIMD_STORE) ? VST1_s : VLD1_s) | ins | VD4(freg) | RN4(srcdst) | 0xf);
return push_inst32(compiler, ((type & SLJIT_SIMD_STORE) ? VST1_s : VLD1_s) | ins | VD4(vreg) | RN4(srcdst) | 0xf);
}
if (type & SLJIT_SIMD_FLOAT) {
if (elem_size == 3) {
if (type & SLJIT_SIMD_STORE)
return push_inst32(compiler, VORR | VD4(srcdst) | VN4(freg) | VM4(freg));
return push_inst32(compiler, VMOV_F32 | SLJIT_32 | VD4(freg) | VM4(srcdst));
return push_inst32(compiler, VORR | VD4(srcdst) | VN4(vreg) | VM4(vreg));
return push_inst32(compiler, VMOV_F32 | SLJIT_32 | VD4(vreg) | VM4(srcdst));
}
if (type & SLJIT_SIMD_STORE) {
if (freg_ebit_map[freg] == 0) {
if (freg_ebit_map[vreg] == 0) {
if (lane_index == 1)
freg = SLJIT_F64_SECOND(freg);
vreg = SLJIT_F64_SECOND(vreg);
return push_inst32(compiler, VMOV_F32 | VD4(srcdst) | VM4(freg));
return push_inst32(compiler, VMOV_F32 | VD4(srcdst) | VM4(vreg));
}
FAIL_IF(push_inst32(compiler, VMOV_s | (1 << 20) | ((sljit_ins)lane_index << 21) | VN4(freg) | RT4(TMP_REG1)));
FAIL_IF(push_inst32(compiler, VMOV_s | (1 << 20) | ((sljit_ins)lane_index << 21) | VN4(vreg) | RT4(TMP_REG1)));
return push_inst32(compiler, VMOV | VN4(srcdst) | RT4(TMP_REG1));
}
FAIL_IF(push_inst32(compiler, VMOV | (1 << 20) | VN4(srcdst) | RT4(TMP_REG1)));
return push_inst32(compiler, VMOV_s | ((sljit_ins)lane_index << 21) | VN4(freg) | RT4(TMP_REG1));
return push_inst32(compiler, VMOV_s | ((sljit_ins)lane_index << 21) | VN4(vreg) | RT4(TMP_REG1));
}
if (srcdst == SLJIT_IMM) {
@ -3948,11 +4005,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
ins |= (1 << 23);
}
return push_inst32(compiler, VMOV_s | ins | VN4(freg) | RT4(srcdst));
return push_inst32(compiler, VMOV_s | ins | VN4(vreg) | RT4(srcdst));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_s32 src_lane_index)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3960,7 +4017,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, freg, src, src_lane_index));
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, vreg, src, src_lane_index));
if (reg_size != 3 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -3972,7 +4029,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
return SLJIT_SUCCESS;
if (reg_size == 4) {
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
src = simd_get_quad_reg_index(src);
if (src_lane_index >= (0x8 >> elem_size)) {
@ -3982,13 +4039,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
}
if (elem_size == 3) {
if (freg != src)
FAIL_IF(push_inst32(compiler, VORR | VD4(freg) | VN4(src) | VM4(src)));
if (vreg != src)
FAIL_IF(push_inst32(compiler, VORR | VD4(vreg) | VN4(src) | VM4(src)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
if (freg != src)
return push_inst32(compiler, VORR | VD4(freg) | VN4(src) | VM4(src));
if (vreg != src)
return push_inst32(compiler, VORR | VD4(vreg) | VN4(src) | VM4(src));
return SLJIT_SUCCESS;
}
@ -3997,11 +4054,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
if (reg_size == 4)
ins |= (sljit_ins)1 << 6;
return push_inst32(compiler, VDUP_s | ins | VD4(freg) | VM4(src));
return push_inst32(compiler, VDUP_s | ins | VD4(vreg) | VM4(src));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4010,7 +4067,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
sljit_s32 dst_reg;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_extend(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_extend(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -4024,20 +4081,20 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
return SLJIT_SUCCESS;
if (reg_size == 4)
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
if (src & SLJIT_MEM) {
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src, srcw));
if (reg_size == 4 && elem2_size - elem_size == 1)
FAIL_IF(push_inst32(compiler, VLD1 | (0x7 << 8) | VD4(freg) | RN4(src) | 0xf));
FAIL_IF(push_inst32(compiler, VLD1 | (0x7 << 8) | VD4(vreg) | RN4(src) | 0xf));
else
FAIL_IF(push_inst32(compiler, VLD1_s | (sljit_ins)((reg_size - elem2_size + elem_size) << 10) | VD4(freg) | RN4(src) | 0xf));
src = freg;
FAIL_IF(push_inst32(compiler, VLD1_s | (sljit_ins)((reg_size - elem2_size + elem_size) << 10) | VD4(vreg) | RN4(src) | 0xf));
src = vreg;
} else if (reg_size == 4)
src = simd_get_quad_reg_index(src);
if (!(type & SLJIT_SIMD_FLOAT)) {
dst_reg = (reg_size == 4) ? freg : TMP_FREG2;
dst_reg = (reg_size == 4) ? vreg : TMP_FREG2;
do {
FAIL_IF(push_inst32(compiler, VSHLL | ((type & SLJIT_SIMD_EXTEND_SIGNED) ? 0 : (1 << 28))
@ -4046,27 +4103,27 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
} while (++elem_size < elem2_size);
if (dst_reg == TMP_FREG2)
return push_inst32(compiler, VORR | VD4(freg) | VN4(TMP_FREG2) | VM4(TMP_FREG2));
return push_inst32(compiler, VORR | VD4(vreg) | VN4(TMP_FREG2) | VM4(TMP_FREG2));
return SLJIT_SUCCESS;
}
/* No SIMD variant, must use VFP instead. */
SLJIT_ASSERT(reg_size == 4);
if (freg == src) {
freg += SLJIT_QUAD_OTHER_HALF(freg);
FAIL_IF(push_inst32(compiler, VCVT_F64_F32 | VD4(freg) | VM4(src) | 0x20));
freg += SLJIT_QUAD_OTHER_HALF(freg);
return push_inst32(compiler, VCVT_F64_F32 | VD4(freg) | VM4(src));
if (vreg == src) {
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
FAIL_IF(push_inst32(compiler, VCVT_F64_F32 | VD4(vreg) | VM4(src) | 0x20));
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
return push_inst32(compiler, VCVT_F64_F32 | VD4(vreg) | VM4(src));
}
FAIL_IF(push_inst32(compiler, VCVT_F64_F32 | VD4(freg) | VM4(src)));
freg += SLJIT_QUAD_OTHER_HALF(freg);
return push_inst32(compiler, VCVT_F64_F32 | VD4(freg) | VM4(src) | 0x20);
FAIL_IF(push_inst32(compiler, VCVT_F64_F32 | VD4(vreg) | VM4(src)));
vreg += SLJIT_QUAD_OTHER_HALF(vreg);
return push_inst32(compiler, VCVT_F64_F32 | VD4(vreg) | VM4(src) | 0x20);
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 dst, sljit_sw dstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4075,7 +4132,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
sljit_s32 dst_r;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_sign(compiler, type, freg, dst, dstw));
CHECK(check_sljit_emit_simd_sign(compiler, type, vreg, dst, dstw));
ADJUST_LOCAL_OFFSET(dst, dstw);
@ -4108,12 +4165,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
}
if (reg_size == 4) {
freg = simd_get_quad_reg_index(freg);
vreg = simd_get_quad_reg_index(vreg);
ins |= (sljit_ins)1 << 6;
}
SLJIT_ASSERT((freg_map[TMP_FREG2] & 0x1) == 0);
FAIL_IF(push_inst32(compiler, ins | VD4(TMP_FREG2) | VM4(freg)));
FAIL_IF(push_inst32(compiler, ins | VD4(TMP_FREG2) | VM4(vreg)));
if (reg_size == 4 && elem_size > 0)
FAIL_IF(push_inst32(compiler, VMOVN | ((sljit_ins)(elem_size - 1) << 18) | VD4(TMP_FREG2) | VM4(TMP_FREG2)));
@ -4143,14 +4200,16 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 dst_freg, sljit_s32 src1_freg, sljit_s32 src2_freg)
sljit_s32 dst_vreg, sljit_s32 src1_vreg, sljit_s32 src2, sljit_sw src2w)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
sljit_s32 elem_size = SLJIT_SIMD_GET_ELEM_SIZE(type);
sljit_ins ins = 0;
sljit_s32 alignment;
sljit_ins ins = 0, load_ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_freg, src1_freg, src2_freg));
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_vreg, src1_vreg, src2, src2w));
ADJUST_LOCAL_OFFSET(src2, src2w);
if (reg_size != 3 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -4158,6 +4217,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
if ((type & SLJIT_SIMD_FLOAT) && (elem_size < 2 || elem_size > 3))
return SLJIT_ERR_UNSUPPORTED;
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
switch (SLJIT_SIMD_GET_OPCODE(type)) {
case SLJIT_SIMD_OP2_AND:
ins = VAND;
@ -4168,19 +4230,51 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
case SLJIT_SIMD_OP2_XOR:
ins = VEOR;
break;
case SLJIT_SIMD_OP2_SHUFFLE:
ins = VTBL;
break;
}
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
if (src2 & SLJIT_MEM) {
if (elem_size > 3)
elem_size = 3;
load_ins = VLD1 | (sljit_ins)((reg_size == 3) ? (0x7 << 8) : (0xa << 8));
alignment = SLJIT_SIMD_GET_ELEM2_SIZE(type);
SLJIT_ASSERT(reg_size >= alignment);
if (alignment == 3)
load_ins |= 0x10;
else if (alignment >= 4)
load_ins |= 0x20;
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src2, src2w));
FAIL_IF(push_inst32(compiler, load_ins | VD4(TMP_FREG2) | RN4(src2) | ((sljit_ins)elem_size) << 6 | 0xf));
src2 = TMP_FREG2;
}
if (reg_size == 4) {
dst_freg = simd_get_quad_reg_index(dst_freg);
src1_freg = simd_get_quad_reg_index(src1_freg);
src2_freg = simd_get_quad_reg_index(src2_freg);
dst_vreg = simd_get_quad_reg_index(dst_vreg);
src1_vreg = simd_get_quad_reg_index(src1_vreg);
src2 = simd_get_quad_reg_index(src2);
if (SLJIT_SIMD_GET_OPCODE(type) == SLJIT_SIMD_OP2_SHUFFLE) {
ins |= (sljit_ins)1 << 8;
FAIL_IF(push_inst32(compiler, ins | VD4(dst_vreg != src1_vreg ? dst_vreg : TMP_FREG2) | VN4(src1_vreg) | VM4(src2)));
src2 += SLJIT_QUAD_OTHER_HALF(src2);
FAIL_IF(push_inst32(compiler, ins | VD4(dst_vreg + SLJIT_QUAD_OTHER_HALF(dst_vreg)) | VN4(src1_vreg) | VM4(src2)));
if (dst_vreg == src1_vreg)
return push_inst32(compiler, VORR | VD4(dst_vreg) | VN4(TMP_FREG2) | VM4(TMP_FREG2));
return SLJIT_SUCCESS;
}
ins |= (sljit_ins)1 << 6;
}
return push_inst32(compiler, ins | VD4(dst_freg) | VN4(src1_freg) | VM4(src2_freg));
return push_inst32(compiler, ins | VD4(dst_vreg) | VN4(src1_vreg) | VM4(src2));
}
#undef FPU_LOAD
@ -4194,7 +4288,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_load(compiler, op, dst_reg, mem_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
case SLJIT_MOV_S16:
case SLJIT_MOV_S32:
return SLJIT_ERR_UNSUPPORTED;
case SLJIT_MOV_U8:
ins = LDREXB;
break;
@ -4206,6 +4308,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
break;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst32(compiler, ins | RN4(mem_reg) | RT4(dst_reg));
}
@ -4222,7 +4327,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_store(compiler, op, src_reg, mem_reg, temp_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
case SLJIT_MOV_S16:
case SLJIT_MOV_S32:
return SLJIT_ERR_UNSUPPORTED;
case SLJIT_MOV_U8:
ins = STREXB | RM4(TMP_REG1);
break;
@ -4234,6 +4347,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
break;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
FAIL_IF(push_inst32(compiler, ins | RN4(mem_reg) | RT4(src_reg)));
if (op & SLJIT_SET_ATOMIC_STORED)
return push_inst32(compiler, CMPI_W | RN4(TMP_REG1));

View File

@ -250,6 +250,9 @@ lower parts in the instruction word, denoted by the “L” and “H” suffixes
#define AMCAS_W OPC_3R(0x70B2)
#define AMCAS_D OPC_3R(0x70B3)
/* Memory barrier instructions */
#define DBAR OPC_3R(0x70e4)
/* Other instructions */
#define BREAK OPC_3R(0x54)
#define DBGCALL OPC_3R(0x55)
@ -348,6 +351,7 @@ lower parts in the instruction word, denoted by the “L” and “H” suffixes
#define VREPLGR2VR OPC_2R(0x1ca7c0)
#define VREPLVE OPC_3R(0xe244)
#define VREPLVEI OPC_2R(0x1cbde0)
#define VSHUF_B OPC_4R(0xd5)
#define XVPERMI OPC_2RI8(0x1dfa)
#define I12_MAX (0x7ff)
@ -386,6 +390,8 @@ static sljit_u32 hwcap_feature_list = 0;
#define GET_CFG2 0
#define GET_HWCAP 1
#define LOONGARCH_SUPPORT_AMCAS (LOONGARCH_CFG2_LAMCAS & get_cpu_features(GET_CFG2))
static SLJIT_INLINE sljit_u32 get_cpu_features(sljit_u32 feature_type)
{
if (cfg2_feature_list == 0)
@ -405,14 +411,15 @@ static sljit_s32 push_inst(struct sljit_compiler *compiler, sljit_ins ins)
return SLJIT_SUCCESS;
}
static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_ins *code, sljit_sw executable_offset)
static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_ins *code_ptr, sljit_ins *code, sljit_sw executable_offset)
{
sljit_sw diff;
sljit_uw target_addr;
sljit_ins *inst;
inst = (sljit_ins *)jump->addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_uw orig_addr = jump->addr;
SLJIT_UNUSED_ARG(executable_offset);
jump->addr = jump_addr;
if (jump->flags & SLJIT_REWRITABLE_JUMP)
goto exit;
@ -420,20 +427,23 @@ static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_i
target_addr = jump->u.target;
else {
SLJIT_ASSERT(jump->u.label != NULL);
target_addr = (sljit_uw)(code + jump->u.label->size) + (sljit_uw)executable_offset;
target_addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
if (jump->u.label->size > orig_addr)
jump_addr = (sljit_uw)(code + orig_addr);
}
diff = (sljit_sw)target_addr - (sljit_sw)inst - executable_offset;
diff = (sljit_sw)target_addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr, executable_offset);
if (jump->flags & IS_COND) {
diff += SSIZE_OF(ins);
if (diff >= BRANCH16_MIN && diff <= BRANCH16_MAX) {
inst--;
inst[0] = (inst[0] & 0xfc0003ff) ^ 0x4000000;
code_ptr--;
code_ptr[0] = (code_ptr[0] & 0xfc0003ff) ^ 0x4000000;
jump->flags |= PATCH_B;
jump->addr = (sljit_uw)inst;
return inst;
jump->addr = (sljit_uw)code_ptr;
return code_ptr;
}
diff -= SSIZE_OF(ins);
@ -441,60 +451,65 @@ static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_i
if (diff >= JUMP_MIN && diff <= JUMP_MAX) {
if (jump->flags & IS_COND) {
inst[-1] |= (sljit_ins)IMM_I16(2);
code_ptr[-1] |= (sljit_ins)IMM_I16(2);
}
jump->flags |= PATCH_J;
return inst;
return code_ptr;
}
if (diff >= S32_MIN && diff <= S32_MAX) {
if (jump->flags & IS_COND)
inst[-1] |= (sljit_ins)IMM_I16(3);
code_ptr[-1] |= (sljit_ins)IMM_I16(3);
jump->flags |= PATCH_REL32;
inst[1] = inst[0];
return inst + 1;
code_ptr[1] = code_ptr[0];
return code_ptr + 1;
}
if (target_addr <= (sljit_uw)S32_MAX) {
if (jump->flags & IS_COND)
inst[-1] |= (sljit_ins)IMM_I16(3);
code_ptr[-1] |= (sljit_ins)IMM_I16(3);
jump->flags |= PATCH_ABS32;
inst[1] = inst[0];
return inst + 1;
code_ptr[1] = code_ptr[0];
return code_ptr + 1;
}
if (target_addr <= S52_MAX) {
if (jump->flags & IS_COND)
inst[-1] |= (sljit_ins)IMM_I16(4);
code_ptr[-1] |= (sljit_ins)IMM_I16(4);
jump->flags |= PATCH_ABS52;
inst[2] = inst[0];
return inst + 2;
code_ptr[2] = code_ptr[0];
return code_ptr + 2;
}
exit:
if (jump->flags & IS_COND)
inst[-1] |= (sljit_ins)IMM_I16(5);
inst[3] = inst[0];
return inst + 3;
code_ptr[-1] |= (sljit_ins)IMM_I16(5);
code_ptr[3] = code_ptr[0];
return code_ptr + 3;
}
static SLJIT_INLINE sljit_sw mov_addr_get_length(struct sljit_jump *jump, sljit_ins *code_ptr, sljit_ins *code, sljit_sw executable_offset)
{
sljit_uw addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_sw diff;
SLJIT_UNUSED_ARG(executable_offset);
SLJIT_ASSERT(jump->flags < ((sljit_uw)6 << JUMP_SIZE_SHIFT));
if (jump->flags & JUMP_ADDR)
addr = jump->u.target;
else
else {
addr = (sljit_uw)SLJIT_ADD_EXEC_OFFSET(code + jump->u.label->size, executable_offset);
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(code_ptr, executable_offset);
if (jump->u.label->size > jump->addr)
jump_addr = (sljit_uw)(code + jump->addr);
}
diff = (sljit_sw)addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr, executable_offset);
if (diff >= S32_MIN && diff <= S32_MAX) {
SLJIT_ASSERT(jump->flags >= ((sljit_uw)1 << JUMP_SIZE_SHIFT));
@ -617,6 +632,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
} else {
/* Unit size: instruction. */
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if ((jump->flags & IS_COND) && (diff + 1) <= (BRANCH16_MAX / SSIZE_OF(ins)) && (diff + 1) >= (BRANCH16_MIN / SSIZE_OF(ins)))
total_size = 0;
@ -635,6 +654,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
if (!(jump->flags & JUMP_ADDR)) {
/* Real size minus 1. Unit size: instruction. */
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if (diff >= (S32_MIN / SSIZE_OF(ins)) && diff <= (S32_MAX / SSIZE_OF(ins)))
total_size = 1;
@ -710,8 +733,7 @@ SLJIT_API_FUNC_ATTRIBUTE void* sljit_generate_code(struct sljit_compiler *compil
if (next_min_addr == next_jump_addr) {
if (!(jump->flags & JUMP_MOV_ADDR)) {
word_count = word_count - 1 + (jump->flags >> JUMP_SIZE_SHIFT);
jump->addr = (sljit_uw)code_ptr;
code_ptr = detect_jump_type(jump, code, executable_offset);
code_ptr = detect_jump_type(jump, code_ptr, code, executable_offset);
SLJIT_ASSERT((jump->flags & PATCH_B) || ((sljit_uw)code_ptr - jump->addr < (jump->flags >> JUMP_SIZE_SHIFT) * sizeof(sljit_ins)));
} else {
word_count += jump->flags >> JUMP_SIZE_SHIFT;
@ -804,9 +826,6 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_SIMD:
return (LOONGARCH_HWCAP_LSX & get_cpu_features(GET_HWCAP));
case SLJIT_HAS_ATOMIC:
return (LOONGARCH_CFG2_LAMCAS & get_cpu_features(GET_CFG2));
case SLJIT_HAS_CLZ:
case SLJIT_HAS_CTZ:
case SLJIT_HAS_REV:
@ -814,6 +833,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_PREFETCH:
case SLJIT_HAS_COPY_F32:
case SLJIT_HAS_COPY_F64:
case SLJIT_HAS_ATOMIC:
case SLJIT_HAS_MEMORY_BARRIER:
return 1;
default:
@ -889,16 +910,22 @@ static sljit_s32 load_immediate(struct sljit_compiler *compiler, sljit_s32 dst_r
static sljit_s32 emit_op_mem(struct sljit_compiler *compiler, sljit_s32 flags, sljit_s32 reg, sljit_s32 arg, sljit_sw argw);
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 i, tmp, offset;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
local_size += GET_SAVED_REGISTERS_SIZE(scratches, saveds - saved_arg_count, 1);
local_size += GET_SAVED_FLOAT_REGISTERS_SIZE(fscratches, fsaveds, f64);
@ -973,13 +1000,20 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
#undef STACK_MAX_DISTANCE
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
sljit_s32 fscratches;
sljit_s32 fsaveds;
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
local_size += GET_SAVED_REGISTERS_SIZE(scratches, saveds - SLJIT_KEPT_SAVEDS_COUNT(options), 1);
local_size += GET_SAVED_FLOAT_REGISTERS_SIZE(fscratches, fsaveds, f64);
@ -1884,6 +1918,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
return push_inst(compiler, ((op & SLJIT_32)? DIV_WU: DIV_DU) | RD(SLJIT_R0) | RJ(SLJIT_R0) | RK(SLJIT_R1));
case SLJIT_DIV_SW:
return push_inst(compiler, INST(DIV, op) | RD(SLJIT_R0) | RJ(SLJIT_R0) | RK(SLJIT_R1));
case SLJIT_MEMORY_BARRIER:
return push_inst(compiler, DBAR);
case SLJIT_ENDBR:
case SLJIT_SKIP_FRAMES_BEFORE_RETURN:
return SLJIT_SUCCESS;
@ -2644,10 +2680,8 @@ static sljit_ins get_jump_instruction(sljit_s32 type)
{
switch (type) {
case SLJIT_EQUAL:
case SLJIT_ATOMIC_NOT_STORED:
return BNE | RJ(EQUAL_FLAG) | RD(TMP_ZERO);
case SLJIT_NOT_EQUAL:
case SLJIT_ATOMIC_STORED:
return BEQ | RJ(EQUAL_FLAG) | RD(TMP_ZERO);
case SLJIT_LESS:
case SLJIT_GREATER:
@ -2655,6 +2689,7 @@ static sljit_ins get_jump_instruction(sljit_s32 type)
case SLJIT_SIG_GREATER:
case SLJIT_OVERFLOW:
case SLJIT_CARRY:
case SLJIT_ATOMIC_STORED:
return BEQ | RJ(OTHER_FLAG) | RD(TMP_ZERO);
case SLJIT_GREATER_EQUAL:
case SLJIT_LESS_EQUAL:
@ -2662,6 +2697,7 @@ static sljit_ins get_jump_instruction(sljit_s32 type)
case SLJIT_SIG_LESS_EQUAL:
case SLJIT_NOT_OVERFLOW:
case SLJIT_NOT_CARRY:
case SLJIT_ATOMIC_NOT_STORED:
return BNE | RJ(OTHER_FLAG) | RD(TMP_ZERO);
case SLJIT_F_EQUAL:
case SLJIT_ORDERED_EQUAL:
@ -2933,7 +2969,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op_flags(struct sljit_compiler *co
break;
case SLJIT_ATOMIC_STORED:
case SLJIT_ATOMIC_NOT_STORED:
FAIL_IF(push_inst(compiler, SLTUI | RD(dst_r) | RJ(EQUAL_FLAG) | IMM_I12(1)));
FAIL_IF(push_inst(compiler, SLTUI | RD(dst_r) | RJ(OTHER_FLAG) | IMM_I12(1)));
src_r = dst_r;
invert ^= 0x1;
break;
@ -3162,14 +3198,14 @@ static sljit_s32 sljit_emit_simd_mem_offset(struct sljit_compiler *compiler, slj
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_mov(compiler, type, freg, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_mov(compiler, type, vreg, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -3184,9 +3220,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
if (!(srcdst & SLJIT_MEM)) {
if (type & SLJIT_SIMD_STORE)
ins = FRD(srcdst) | FRJ(freg) | FRK(freg);
ins = FRD(srcdst) | FRJ(vreg) | FRK(vreg);
else
ins = FRD(freg) | FRJ(srcdst) | FRK(srcdst);
ins = FRD(vreg) | FRJ(srcdst) | FRK(srcdst);
if (reg_size == 5)
ins |= VOR_V | (sljit_ins)1 << 26;
@ -3202,15 +3238,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
ins = (type & SLJIT_SIMD_STORE) ? XVST : XVLD;
if (FAST_IS_REG(srcdst) && srcdst >= 0 && (srcdstw >= I12_MIN && srcdstw <= I12_MAX))
return push_inst(compiler, ins | FRD(freg) | RJ((sljit_u8)srcdst) | IMM_I12(srcdstw));
return push_inst(compiler, ins | FRD(vreg) | RJ((sljit_u8)srcdst) | IMM_I12(srcdstw));
else {
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &srcdst, srcdstw));
return push_inst(compiler, ins | FRD(freg) | RJ(srcdst) | IMM_I12(0));
return push_inst(compiler, ins | FRD(vreg) | RJ(srcdst) | IMM_I12(0));
}
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3218,7 +3254,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_replicate(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_replicate(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -3237,7 +3273,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (reg_size == 5)
ins = (sljit_ins)1 << 25;
return push_inst(compiler, VLDREPL | ins | FRD(freg) | RJ(src) | (sljit_ins)1 << (23 - elem_size));
return push_inst(compiler, VLDREPL | ins | FRD(vreg) | RJ(src) | (sljit_ins)1 << (23 - elem_size));
}
if (reg_size == 5)
@ -3245,13 +3281,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (type & SLJIT_SIMD_FLOAT) {
if (src == SLJIT_IMM)
return push_inst(compiler, VREPLGR2VR | ins | FRD(freg) | RJ(TMP_ZERO) | (sljit_ins)elem_size << 10);
return push_inst(compiler, VREPLGR2VR | ins | FRD(vreg) | RJ(TMP_ZERO) | (sljit_ins)elem_size << 10);
FAIL_IF(push_inst(compiler, VREPLVE | ins | FRD(freg) | FRJ(src) | RK(TMP_ZERO) | (sljit_ins)elem_size << 15));
FAIL_IF(push_inst(compiler, VREPLVE | ins | FRD(vreg) | FRJ(src) | RK(TMP_ZERO) | (sljit_ins)elem_size << 15));
if (reg_size == 5) {
ins = (sljit_ins)(0x44 << 10);
return push_inst(compiler, XVPERMI | ins | FRD(freg) | FRJ(freg));
return push_inst(compiler, XVPERMI | ins | FRD(vreg) | FRJ(vreg));
}
return SLJIT_SUCCESS;
@ -3264,11 +3300,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
src = TMP_REG2;
}
return push_inst(compiler, ins | FRD(freg) | RJ(src));
return push_inst(compiler, ins | FRD(vreg) | RJ(src));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg, sljit_s32 lane_index,
sljit_s32 vreg, sljit_s32 lane_index,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3276,7 +3312,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, freg, lane_index, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, vreg, lane_index, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -3298,13 +3334,13 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (type & SLJIT_SIMD_LANE_ZERO) {
ins = (reg_size == 5) ? ((sljit_ins)1 << 26) : 0;
if ((type & SLJIT_SIMD_FLOAT) && freg == srcdst) {
FAIL_IF(push_inst(compiler, VOR_V | ins | FRD(TMP_FREG1) | FRJ(freg) | FRK(freg)));
if ((type & SLJIT_SIMD_FLOAT) && vreg == srcdst) {
FAIL_IF(push_inst(compiler, VOR_V | ins | FRD(TMP_FREG1) | FRJ(vreg) | FRK(vreg)));
srcdst = TMP_FREG1;
srcdstw = 0;
}
FAIL_IF(push_inst(compiler, VXOR_V | ins | FRD(freg) | FRJ(freg) | FRK(freg)));
FAIL_IF(push_inst(compiler, VXOR_V | ins | FRD(vreg) | FRJ(vreg) | FRK(vreg)));
}
if (srcdst & SLJIT_MEM) {
@ -3315,7 +3351,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (type & SLJIT_SIMD_STORE) {
ins |= (sljit_ins)lane_index << 18 | (sljit_ins)(1 << (23 - elem_size));
return push_inst(compiler, VSTELM | ins | FRD(freg) | RJ(srcdst));
return push_inst(compiler, VSTELM | ins | FRD(vreg) | RJ(srcdst));
} else {
emit_op_mem(compiler, (elem_size == 3 ? WORD_DATA : (elem_size == 2 ? INT_DATA : (elem_size == 1 ? HALF_DATA : BYTE_DATA))) | LOAD_DATA, TMP_REG1, srcdst | SLJIT_MEM, 0);
srcdst = TMP_REG1;
@ -3323,20 +3359,20 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (reg_size == 5) {
if (elem_size < 2) {
FAIL_IF(push_inst(compiler, VOR_V | (sljit_ins)1 << 26 | FRD(TMP_FREG1) | FRJ(freg) | FRK(freg)));
FAIL_IF(push_inst(compiler, VOR_V | (sljit_ins)1 << 26 | FRD(TMP_FREG1) | FRJ(vreg) | FRK(vreg)));
if (lane_index >= (2 << (3 - elem_size))) {
FAIL_IF(push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(TMP_FREG1) | FRJ(freg) | IMM_I8(1)));
FAIL_IF(push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(TMP_FREG1) | FRJ(vreg) | IMM_I8(1)));
FAIL_IF(push_inst(compiler, VINSGR2VR | ins | FRD(TMP_FREG1) | RJ(srcdst) | IMM_V(lane_index % (2 << (3 - elem_size)))));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(freg) | FRJ(TMP_FREG1) | IMM_I8(2));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(vreg) | FRJ(TMP_FREG1) | IMM_I8(2));
} else {
FAIL_IF(push_inst(compiler, VINSGR2VR | ins | FRD(freg) | RJ(srcdst) | IMM_V(lane_index)));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(freg) | FRJ(TMP_FREG1) | IMM_I8(18));
FAIL_IF(push_inst(compiler, VINSGR2VR | ins | FRD(vreg) | RJ(srcdst) | IMM_V(lane_index)));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(vreg) | FRJ(TMP_FREG1) | IMM_I8(18));
}
} else
ins = (sljit_ins)(0x3f ^ (0x3f >> elem_size)) << 10 | (sljit_ins)1 << 26;
}
return push_inst(compiler, VINSGR2VR | ins | FRD(freg) | RJ(srcdst) | IMM_V(lane_index));
return push_inst(compiler, VINSGR2VR | ins | FRD(vreg) | RJ(srcdst) | IMM_V(lane_index));
}
}
@ -3344,11 +3380,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
ins = (reg_size == 5) ? (sljit_ins)(0x3f ^ (0x3f >> elem_size)) << 10 | (sljit_ins)1 << 26 : (sljit_ins)(0x3f ^ (0x1f >> elem_size)) << 10;
if (type & SLJIT_SIMD_STORE) {
FAIL_IF(push_inst(compiler, VPICKVE2GR_U | ins | RD(TMP_REG1) | FRJ(freg) | IMM_V(lane_index)));
FAIL_IF(push_inst(compiler, VPICKVE2GR_U | ins | RD(TMP_REG1) | FRJ(vreg) | IMM_V(lane_index)));
return push_inst(compiler, VINSGR2VR | ins | FRD(srcdst) | RJ(TMP_REG1) | IMM_V(0));
} else {
FAIL_IF(push_inst(compiler, VPICKVE2GR_U | ins | RD(TMP_REG1) | FRJ(srcdst) | IMM_V(0)));
return push_inst(compiler, VINSGR2VR | ins | FRD(freg) | RJ(TMP_REG1) | IMM_V(lane_index));
return push_inst(compiler, VINSGR2VR | ins | FRD(vreg) | RJ(TMP_REG1) | IMM_V(lane_index));
}
}
@ -3373,8 +3409,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
else
ins |= VPICKVE2GR_U;
FAIL_IF(push_inst(compiler, VOR_V | (sljit_ins)1 << 26 | FRD(TMP_FREG1) | FRJ(freg) | FRK(freg)));
FAIL_IF(push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(TMP_FREG1) | FRJ(freg) | IMM_I8(1)));
FAIL_IF(push_inst(compiler, VOR_V | (sljit_ins)1 << 26 | FRD(TMP_FREG1) | FRJ(vreg) | FRK(vreg)));
FAIL_IF(push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(TMP_FREG1) | FRJ(vreg) | IMM_I8(1)));
return push_inst(compiler, ins | RD(srcdst) | FRJ(TMP_FREG1) | IMM_V(lane_index % (2 << (3 - elem_size))));
}
} else {
@ -3383,33 +3419,33 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
}
}
return push_inst(compiler, ins | RD(srcdst) | FRJ(freg) | IMM_V(lane_index));
return push_inst(compiler, ins | RD(srcdst) | FRJ(vreg) | IMM_V(lane_index));
} else {
ins = (sljit_ins)(0x3f ^ (0x1f >> elem_size)) << 10;
if (reg_size == 5) {
if (elem_size < 2) {
FAIL_IF(push_inst(compiler, VOR_V | (sljit_ins)1 << 26 | FRD(TMP_FREG1) | FRJ(freg) | FRK(freg)));
FAIL_IF(push_inst(compiler, VOR_V | (sljit_ins)1 << 26 | FRD(TMP_FREG1) | FRJ(vreg) | FRK(vreg)));
if (lane_index >= (2 << (3 - elem_size))) {
FAIL_IF(push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(TMP_FREG1) | FRJ(freg) | IMM_I8(1)));
FAIL_IF(push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(TMP_FREG1) | FRJ(vreg) | IMM_I8(1)));
FAIL_IF(push_inst(compiler, VINSGR2VR | ins | FRD(TMP_FREG1) | RJ(srcdst) | IMM_V(lane_index % (2 << (3 - elem_size)))));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(freg) | FRJ(TMP_FREG1) | IMM_I8(2));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(vreg) | FRJ(TMP_FREG1) | IMM_I8(2));
} else {
FAIL_IF(push_inst(compiler, VINSGR2VR | ins | FRD(freg) | RJ(srcdst) | IMM_V(lane_index)));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(freg) | FRJ(TMP_FREG1) | IMM_I8(18));
FAIL_IF(push_inst(compiler, VINSGR2VR | ins | FRD(vreg) | RJ(srcdst) | IMM_V(lane_index)));
return push_inst(compiler, XVPERMI | (sljit_ins)1 << 18 | FRD(vreg) | FRJ(TMP_FREG1) | IMM_I8(18));
}
} else
ins = (sljit_ins)(0x3f ^ (0x3f >> elem_size)) << 10 | (sljit_ins)1 << 26;
}
return push_inst(compiler, VINSGR2VR | ins | FRD(freg) | RJ(srcdst) | IMM_V(lane_index));
return push_inst(compiler, VINSGR2VR | ins | FRD(vreg) | RJ(srcdst) | IMM_V(lane_index));
}
return SLJIT_ERR_UNSUPPORTED;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_s32 src_lane_index)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3417,7 +3453,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, freg, src, src_lane_index));
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, vreg, src, src_lane_index));
if (reg_size != 5 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -3431,18 +3467,18 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
ins = (sljit_ins)(0x3f ^ (0x1f >> elem_size)) << 10;
if (reg_size == 5) {
FAIL_IF(push_inst(compiler, VREPLVEI | (sljit_ins)1 << 26 | ins | FRD(freg) | FRJ(src) | IMM_V(src_lane_index % (2 << (3 - elem_size)))));
FAIL_IF(push_inst(compiler, VREPLVEI | (sljit_ins)1 << 26 | ins | FRD(vreg) | FRJ(src) | IMM_V(src_lane_index % (2 << (3 - elem_size)))));
ins = (src_lane_index < (2 << (3 - elem_size))) ? (sljit_ins)(0x44 << 10) : (sljit_ins)(0xee << 10);
return push_inst(compiler, XVPERMI | ins | FRD(freg) | FRJ(freg));
return push_inst(compiler, XVPERMI | ins | FRD(vreg) | FRJ(vreg));
}
return push_inst(compiler, VREPLVEI | ins | FRD(freg) | FRJ(src) | IMM_V(src_lane_index));
return push_inst(compiler, VREPLVEI | ins | FRD(vreg) | FRJ(src) | IMM_V(src_lane_index));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3451,7 +3487,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_extend(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_extend(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -3471,12 +3507,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
ins = (type & SLJIT_SIMD_STORE) ? XVST : XVLD;
if (FAST_IS_REG(src) && src >= 0 && (srcw >= I12_MIN && srcw <= I12_MAX))
FAIL_IF(push_inst(compiler, ins | FRD(freg) | RJ(src) | IMM_I12(srcw)));
FAIL_IF(push_inst(compiler, ins | FRD(vreg) | RJ(src) | IMM_I12(srcw)));
else {
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src, srcw));
FAIL_IF(push_inst(compiler, ins | FRD(freg) | RJ(src) | IMM_I12(0)));
FAIL_IF(push_inst(compiler, ins | FRD(vreg) | RJ(src) | IMM_I12(0)));
}
src = freg;
src = vreg;
}
if (type & SLJIT_SIMD_FLOAT) {
@ -3489,7 +3525,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
FAIL_IF(push_inst(compiler, XVPERMI | FRD(src) | FRJ(src) | IMM_I8(16)));
}
return push_inst(compiler, VFCVTL_D_S | ins | FRD(freg) | FRJ(src));
return push_inst(compiler, VFCVTL_D_S | ins | FRD(vreg) | FRJ(src));
}
ins = (type & SLJIT_SIMD_EXTEND_SIGNED) ? VSLLWIL : (VSLLWIL | (sljit_ins)1 << 18);
@ -3501,15 +3537,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
if (reg_size == 5)
FAIL_IF(push_inst(compiler, XVPERMI | FRD(src) | FRJ(src) | IMM_I8(16)));
FAIL_IF(push_inst(compiler, ins | ((sljit_ins)1 << (13 + elem_size)) | FRD(freg) | FRJ(src)));
src = freg;
FAIL_IF(push_inst(compiler, ins | ((sljit_ins)1 << (13 + elem_size)) | FRD(vreg) | FRJ(src)));
src = vreg;
} while (++elem_size < elem2_size);
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 dst, sljit_sw dstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3518,7 +3554,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
sljit_s32 dst_r;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_sign(compiler, type, freg, dst, dstw));
CHECK(check_sljit_emit_simd_sign(compiler, type, vreg, dst, dstw));
ADJUST_LOCAL_OFFSET(dst, dstw);
@ -3539,7 +3575,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
if (reg_size == 5)
ins = (sljit_ins)1 << 26;
FAIL_IF(push_inst(compiler, VMSKLTZ | ins | (sljit_ins)(elem_size << 10) | FRD(TMP_FREG1) | FRJ(freg)));
FAIL_IF(push_inst(compiler, VMSKLTZ | ins | (sljit_ins)(elem_size << 10) | FRD(TMP_FREG1) | FRJ(vreg)));
FAIL_IF(push_inst(compiler, VPICKVE2GR_U | (sljit_ins)(0x3c << 10) | RD(dst_r) | FRJ(TMP_FREG1)));
@ -3556,14 +3592,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 dst_freg, sljit_s32 src1_freg, sljit_s32 src2_freg)
sljit_s32 dst_vreg, sljit_s32 src1_vreg, sljit_s32 src2, sljit_sw src2w)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
sljit_s32 elem_size = SLJIT_SIMD_GET_ELEM_SIZE(type);
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_freg, src1_freg, src2_freg));
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_vreg, src1_vreg, src2, src2w));
ADJUST_LOCAL_OFFSET(src2, src2w);
if (reg_size != 5 && reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -3577,6 +3614,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
if (src2 & SLJIT_MEM) {
FAIL_IF(sljit_emit_simd_mem_offset(compiler, &src2, src2w));
FAIL_IF(push_inst(compiler, (reg_size == 4 ? VLD : XVLD) | FRD(TMP_FREG1) | RJ(src2) | IMM_I12(0)));
src2 = TMP_FREG1;
}
switch (SLJIT_SIMD_GET_OPCODE(type)) {
case SLJIT_SIMD_OP2_AND:
ins = VAND_V;
@ -3587,12 +3630,17 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
case SLJIT_SIMD_OP2_XOR:
ins = VXOR_V;
break;
case SLJIT_SIMD_OP2_SHUFFLE:
if (reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
return push_inst(compiler, VSHUF_B | FRD(dst_vreg) | FRJ(src1_vreg) | FRK(src1_vreg) | FRA(src2));
}
if (reg_size == 5)
ins |= (sljit_ins)1 << 26;
return push_inst(compiler, ins | FRD(dst_freg) | FRJ(src1_freg) | FRK(src2_freg));
return push_inst(compiler, ins | FRD(dst_vreg) | FRJ(src1_vreg) | FRK(src2));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler *compiler,
@ -3605,14 +3653,45 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_load(compiler, op, dst_reg, mem_reg));
if ((op & SLJIT_ATOMIC_USE_LS) || !LOONGARCH_SUPPORT_AMCAS) {
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_P:
ins = LL_D;
break;
case SLJIT_MOV_S32:
case SLJIT_MOV32:
ins = LL_W;
break;
default:
return SLJIT_ERR_UNSUPPORTED;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, ins | RD(dst_reg) | RJ(mem_reg));
}
switch(GET_OPCODE(op)) {
case SLJIT_MOV_S8:
ins = LD_B;
break;
case SLJIT_MOV_U8:
ins = LD_BU;
break;
case SLJIT_MOV_S16:
ins = LD_H;
break;
case SLJIT_MOV_U16:
ins = LD_HU;
break;
case SLJIT_MOV32:
case SLJIT_MOV_S32:
ins = LD_W;
break;
case SLJIT_MOV_U32:
@ -3623,6 +3702,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
break;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, ins | RD(dst_reg) | RJ(mem_reg) | IMM_I12(0));
}
@ -3639,16 +3721,48 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_store(compiler, op, src_reg, mem_reg, temp_reg));
if ((op & SLJIT_ATOMIC_USE_LS) || !LOONGARCH_SUPPORT_AMCAS) {
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_P:
ins = SC_D;
break;
case SLJIT_MOV_S32:
case SLJIT_MOV32:
ins = SC_W;
break;
default:
return SLJIT_ERR_UNSUPPORTED;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, ADD_D | RD(OTHER_FLAG) | RJ(src_reg) | RK(TMP_ZERO)));
return push_inst(compiler, ins | RD(OTHER_FLAG) | RJ(mem_reg));
}
switch (GET_OPCODE(op)) {
case SLJIT_MOV_S8:
ins = AMCAS_B;
break;
case SLJIT_MOV_U8:
ins = AMCAS_B;
unsign = BSTRPICK_D | (7 << 16);
break;
case SLJIT_MOV_S16:
ins = AMCAS_H;
break;
case SLJIT_MOV_U16:
ins = AMCAS_H;
unsign = BSTRPICK_D | (15 << 16);
break;
case SLJIT_MOV32:
case SLJIT_MOV_S32:
ins = AMCAS_W;
break;
case SLJIT_MOV_U32:
@ -3660,9 +3774,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
break;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
if (op & SLJIT_SET_ATOMIC_STORED) {
FAIL_IF(push_inst(compiler, XOR | RD(TMP_REG1) | RJ(temp_reg) | RK(TMP_ZERO)));
tmp = TMP_REG1;
FAIL_IF(push_inst(compiler, XOR | RD(TMP_REG3) | RJ(temp_reg) | RK(TMP_ZERO)));
tmp = TMP_REG3;
}
FAIL_IF(push_inst(compiler, ins | RD(tmp) | RJ(mem_reg) | RK(src_reg)));
if (!(op & SLJIT_SET_ATOMIC_STORED))
@ -3671,8 +3788,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
if (unsign)
FAIL_IF(push_inst(compiler, unsign | RD(tmp) | RJ(tmp)));
FAIL_IF(push_inst(compiler, XOR | RD(EQUAL_FLAG) | RJ(tmp) | RK(temp_reg)));
return push_inst(compiler, SLTUI | RD(EQUAL_FLAG) | RJ(EQUAL_FLAG) | IMM_I12(1));
FAIL_IF(push_inst(compiler, XOR | RD(OTHER_FLAG) | RJ(tmp) | RK(temp_reg)));
return push_inst(compiler, SLTUI | RD(OTHER_FLAG) | RJ(OTHER_FLAG) | IMM_I12(1));
}
static SLJIT_INLINE sljit_s32 emit_const(struct sljit_compiler *compiler, sljit_s32 dst, sljit_sw init_value, sljit_ins last_ins)

View File

@ -249,6 +249,8 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 4] = {
#define LDL (HI(26))
#define LDR (HI(27))
#define LDC1 (HI(53))
#define LL (HI(48))
#define LLD (HI(52))
#define LUI (HI(15))
#define LW (HI(35))
#define LWL (HI(34))
@ -288,6 +290,8 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 4] = {
#define ROTR (HI(0) | (1 << 21) | LO(2))
#define ROTRV (HI(0) | (1 << 6) | LO(6))
#endif /* SLJIT_MIPS_REV >= 2 */
#define SC (HI(56))
#define SCD (HI(60))
#define SD (HI(63))
#define SDL (HI(44))
#define SDR (HI(45))
@ -308,6 +312,7 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 4] = {
#define SWL (HI(42))
#define SWR (HI(46))
#define SWC1 (HI(57))
#define SYNC (HI(0) | LO(15))
#define TRUNC_W_S (HI(17) | FMT_S | LO(13))
#if defined(SLJIT_MIPS_REV) && SLJIT_MIPS_REV >= 2
#define WSBH (HI(31) | (2 << 6) | LO(32))
@ -381,11 +386,21 @@ static sljit_s32 function_check_is_freg(struct sljit_compiler *compiler, sljit_s
if (is_32 && fr >= SLJIT_F64_SECOND(SLJIT_FR0))
fr -= SLJIT_F64_SECOND(0);
return (fr >= SLJIT_FR0 && fr < (SLJIT_FR0 + compiler->fscratches))
|| (fr > (SLJIT_FS0 - compiler->fsaveds) && fr <= SLJIT_FS0)
return (fr >= SLJIT_FR0 && fr < (SLJIT_FR0 + compiler->real_fscratches))
|| (fr > (SLJIT_FS0 - compiler->real_fsaveds) && fr <= SLJIT_FS0)
|| (fr >= SLJIT_TMP_FREGISTER_BASE && fr < (SLJIT_TMP_FREGISTER_BASE + SLJIT_NUMBER_OF_TEMPORARY_FLOAT_REGISTERS));
}
static sljit_s32 function_check_is_vreg(struct sljit_compiler *compiler, sljit_s32 vr, sljit_s32 type)
{
SLJIT_UNUSED_ARG(compiler);
SLJIT_UNUSED_ARG(vr);
SLJIT_UNUSED_ARG(type);
/* SIMD is not supported. */
return 0;
}
#endif /* SLJIT_CONFIG_MIPS_32 && SLJIT_ARGUMENT_CHECKS */
static void get_cpu_features(void)
@ -857,6 +872,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_CLZ:
case SLJIT_HAS_CMOV:
case SLJIT_HAS_PREFETCH:
case SLJIT_HAS_ATOMIC:
case SLJIT_HAS_MEMORY_BARRIER:
return 1;
case SLJIT_HAS_CTZ:
@ -928,17 +945,22 @@ static sljit_s32 emit_stack_frame_release(struct sljit_compiler *compiler, sljit
#endif
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches = ENTER_GET_FLOAT_REGS(scratches);
sljit_s32 fsaveds = ENTER_GET_FLOAT_REGS(saveds);
sljit_ins base;
sljit_s32 i, tmp, offset;
sljit_s32 arg_count, word_arg_count, float_arg_count;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
local_size += GET_SAVED_REGISTERS_SIZE(scratches, saveds - saved_arg_count, 1);
#if (defined SLJIT_CONFIG_MIPS_32 && SLJIT_CONFIG_MIPS_32)
@ -1138,12 +1160,18 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches = ENTER_GET_FLOAT_REGS(scratches);
sljit_s32 fsaveds = ENTER_GET_FLOAT_REGS(saveds);
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
local_size += GET_SAVED_REGISTERS_SIZE(scratches, saveds - SLJIT_KEPT_SAVEDS_COUNT(options), 1);
#if (defined SLJIT_CONFIG_MIPS_32 && SLJIT_CONFIG_MIPS_32)
@ -2462,6 +2490,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
FAIL_IF(push_inst(compiler, MFLO | D(SLJIT_R0), DR(SLJIT_R0)));
return (op >= SLJIT_DIV_UW) ? SLJIT_SUCCESS : push_inst(compiler, MFHI | D(SLJIT_R1), DR(SLJIT_R1));
#endif /* SLJIT_MIPS_REV >= 6 */
case SLJIT_MEMORY_BARRIER:
#if (defined SLJIT_MIPS_REV && SLJIT_MIPS_REV >= 1)
return push_inst(compiler, SYNC, UNMOVABLE_INS);
#else /* SLJIT_MIPS_REV < 1 */
return SLJIT_ERR_UNSUPPORTED;
#endif /* SLJIT_MIPS_REV >= 1 */
case SLJIT_ENDBR:
case SLJIT_SKIP_FRAMES_BEFORE_RETURN:
return SLJIT_SUCCESS;
@ -3312,6 +3346,7 @@ SLJIT_API_FUNC_ATTRIBUTE struct sljit_jump* sljit_emit_jump(struct sljit_compile
case SLJIT_SIG_GREATER:
case SLJIT_OVERFLOW:
case SLJIT_CARRY:
case SLJIT_ATOMIC_STORED:
BR_Z(OTHER_FLAG);
break;
case SLJIT_GREATER_EQUAL:
@ -3320,6 +3355,7 @@ SLJIT_API_FUNC_ATTRIBUTE struct sljit_jump* sljit_emit_jump(struct sljit_compile
case SLJIT_SIG_LESS_EQUAL:
case SLJIT_NOT_OVERFLOW:
case SLJIT_NOT_CARRY:
case SLJIT_ATOMIC_NOT_STORED:
BR_NZ(OTHER_FLAG);
break;
case SLJIT_F_NOT_EQUAL:
@ -4209,6 +4245,80 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_fmem(struct sljit_compiler *compil
#undef TO_ARGW_HI
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler *compiler, sljit_s32 op,
sljit_s32 dst_reg,
sljit_s32 mem_reg)
{
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_load(compiler, op, dst_reg, mem_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_P:
#if (defined SLJIT_CONFIG_MIPS_64 && SLJIT_CONFIG_MIPS_64)
ins = LLD;
break;
#endif /* SLJIT_CONFIG_MIPS_64 */
case SLJIT_MOV_S32:
case SLJIT_MOV32:
ins = LL;
break;
default:
return SLJIT_ERR_UNSUPPORTED;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, ins | T(dst_reg) | S(mem_reg), DR(dst_reg));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler *compiler, sljit_s32 op,
sljit_s32 src_reg,
sljit_s32 mem_reg,
sljit_s32 temp_reg)
{
sljit_ins ins;
/* temp_reg == mem_reg is undefined so use another temp register */
SLJIT_UNUSED_ARG(temp_reg);
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_store(compiler, op, src_reg, mem_reg, temp_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_P:
#if (defined SLJIT_CONFIG_MIPS_64 && SLJIT_CONFIG_MIPS_64)
ins = SCD;
break;
#endif /* SLJIT_CONFIG_MIPS_64 */
case SLJIT_MOV_S32:
case SLJIT_MOV32:
op |= SLJIT_32;
ins = SC;
break;
default:
return SLJIT_ERR_UNSUPPORTED;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
FAIL_IF(push_inst(compiler, SELECT_OP(DADDU, ADDU) | S(src_reg) | TA(0) | DA(OTHER_FLAG), OTHER_FLAG));
return push_inst(compiler, ins | TA(OTHER_FLAG) | S(mem_reg), OTHER_FLAG);
}
SLJIT_API_FUNC_ATTRIBUTE struct sljit_const* sljit_emit_const(struct sljit_compiler *compiler, sljit_s32 dst, sljit_sw dstw, sljit_sw init_value)
{
struct sljit_const *const_;


@ -187,10 +187,12 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 3] = {
#define LD (HI(58) | 0)
#define LFD (HI(50))
#define LFS (HI(48))
#define LDARX (HI(31) | LO(84))
#if defined(_ARCH_PWR7) && _ARCH_PWR7
#define LDBRX (HI(31) | LO(532))
#endif /* POWER7 */
#define LHBRX (HI(31) | LO(790))
#define LWARX (HI(31) | LO(20))
#define LWBRX (HI(31) | LO(534))
#define LWZ (HI(32))
#define MFCR (HI(31) | LO(19))
@ -231,6 +233,7 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 3] = {
#if defined(_ARCH_PWR7) && _ARCH_PWR7
#define STDBRX (HI(31) | LO(660))
#endif /* POWER7 */
#define STDCX (HI(31) | LO(214))
#define STDU (HI(62) | 1)
#define STDUX (HI(31) | LO(181))
#define STFD (HI(54))
@ -239,12 +242,14 @@ static const sljit_u8 freg_map[SLJIT_NUMBER_OF_FLOAT_REGISTERS + 3] = {
#define STHBRX (HI(31) | LO(918))
#define STW (HI(36))
#define STWBRX (HI(31) | LO(662))
#define STWCX (HI(31) | LO(150))
#define STWU (HI(37))
#define STWUX (HI(31) | LO(183))
#define SUBF (HI(31) | LO(40))
#define SUBFC (HI(31) | LO(8))
#define SUBFE (HI(31) | LO(136))
#define SUBFIC (HI(8))
#define SYNC (HI(31) | LO(598))
#define XOR (HI(31) | LO(316))
#define XORI (HI(26))
#define XORIS (HI(27))
@ -314,7 +319,11 @@ static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_i
{
sljit_sw diff;
sljit_uw target_addr;
sljit_uw jump_addr = (sljit_uw)code_ptr;
sljit_uw orig_addr = jump->addr;
SLJIT_UNUSED_ARG(executable_offset);
jump->addr = jump_addr;
#if (defined SLJIT_PASS_ENTRY_ADDR_TO_CALL && SLJIT_PASS_ENTRY_ADDR_TO_CALL) && (defined SLJIT_CONFIG_PPC_32 && SLJIT_CONFIG_PPC_32)
if (jump->flags & (SLJIT_REWRITABLE_JUMP | IS_CALL))
goto exit;
@ -328,6 +337,9 @@ static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_i
else {
SLJIT_ASSERT(jump->u.label != NULL);
target_addr = (sljit_uw)(code + jump->u.label->size) + (sljit_uw)executable_offset;
if (jump->u.label->size > orig_addr)
jump_addr = (sljit_uw)(code + orig_addr);
}
#if (defined SLJIT_PASS_ENTRY_ADDR_TO_CALL && SLJIT_PASS_ENTRY_ADDR_TO_CALL) && (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64)
@ -335,7 +347,7 @@ static SLJIT_INLINE sljit_ins* detect_jump_type(struct sljit_jump *jump, sljit_i
goto keep_address;
#endif
diff = (sljit_sw)target_addr - (sljit_sw)code_ptr - executable_offset;
diff = (sljit_sw)target_addr - (sljit_sw)SLJIT_ADD_EXEC_OFFSET(jump_addr, executable_offset);
if (jump->flags & IS_COND) {
if (diff <= 0x7fff && diff >= -0x8000) {
@ -547,6 +559,10 @@ static void reduce_code_size(struct sljit_compiler *compiler)
} else {
/* Unit size: instruction. */
diff = (sljit_sw)jump->u.label->size - (sljit_sw)jump->addr;
if (jump->u.label->size > jump->addr) {
SLJIT_ASSERT(jump->u.label->size - size_reduce >= jump->addr);
diff -= (sljit_sw)size_reduce;
}
if (jump->flags & IS_COND) {
if (diff <= (0x7fff / SSIZE_OF(ins)) && diff >= (-0x8000 / SSIZE_OF(ins)))
@ -592,6 +608,9 @@ SLJIT_API_FUNC_ATTRIBUTE void* sljit_generate_code(struct sljit_compiler *compil
sljit_ins *buf_ptr;
sljit_ins *buf_end;
sljit_uw word_count;
#if (defined SLJIT_DEBUG && SLJIT_DEBUG)
sljit_uw jump_addr;
#endif
SLJIT_NEXT_DEFINE_TYPES;
sljit_sw executable_offset;
@ -648,9 +667,11 @@ SLJIT_API_FUNC_ATTRIBUTE void* sljit_generate_code(struct sljit_compiler *compil
if (next_min_addr == next_jump_addr) {
if (!(jump->flags & JUMP_MOV_ADDR)) {
word_count += jump->flags >> JUMP_SIZE_SHIFT;
jump->addr = (sljit_uw)code_ptr;
#if (defined SLJIT_DEBUG && SLJIT_DEBUG)
jump_addr = (sljit_uw)code_ptr;
#endif
code_ptr = detect_jump_type(jump, code_ptr, code, executable_offset);
SLJIT_ASSERT(((sljit_uw)code_ptr - jump->addr <= (jump->flags >> JUMP_SIZE_SHIFT) * sizeof(sljit_ins)));
SLJIT_ASSERT(((sljit_uw)code_ptr - jump_addr <= (jump->flags >> JUMP_SIZE_SHIFT) * sizeof(sljit_ins)));
} else {
jump->addr = (sljit_uw)code_ptr;
#if (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64)
@ -748,6 +769,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_CLZ:
case SLJIT_HAS_ROT:
case SLJIT_HAS_PREFETCH:
case SLJIT_HAS_ATOMIC:
case SLJIT_HAS_MEMORY_BARRIER:
return 1;
case SLJIT_HAS_CTZ:
@ -845,9 +868,11 @@ static sljit_s32 emit_op_mem(struct sljit_compiler *compiler, sljit_s32 inp_flag
#define STACK_MAX_DISTANCE (0x8000 - SSIZE_OF(sw) - LR_SAVE_OFFSET)
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches = ENTER_GET_FLOAT_REGS(scratches);
sljit_s32 fsaveds = ENTER_GET_FLOAT_REGS(saveds);
sljit_s32 i, tmp, base, offset;
sljit_s32 word_arg_count = 0;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
@ -856,9 +881,11 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
#endif
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
local_size += GET_SAVED_REGISTERS_SIZE(scratches, saveds - saved_arg_count, 0)
+ GET_SAVED_FLOAT_REGISTERS_SIZE(fscratches, fsaveds, f64);
@ -962,13 +989,18 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
sljit_s32 fscratches = ENTER_GET_FLOAT_REGS(scratches);
sljit_s32 fsaveds = ENTER_GET_FLOAT_REGS(saveds);
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
local_size += GET_SAVED_REGISTERS_SIZE(scratches, saveds - SLJIT_KEPT_SAVEDS_COUNT(options), 0)
+ GET_SAVED_FLOAT_REGISTERS_SIZE(fscratches, fsaveds, f64);
@ -1399,6 +1431,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
#else
return push_inst(compiler, (op == SLJIT_DIV_UW ? DIVWU : DIVW) | D(SLJIT_R0) | A(SLJIT_R0) | B(SLJIT_R1));
#endif
case SLJIT_MEMORY_BARRIER:
return push_inst(compiler, SYNC);
case SLJIT_ENDBR:
case SLJIT_SKIP_FRAMES_BEFORE_RETURN:
return SLJIT_SUCCESS;
@ -2422,6 +2456,7 @@ static sljit_ins get_bo_bi_flags(struct sljit_compiler *compiler, sljit_s32 type
/* fallthrough */
case SLJIT_EQUAL:
case SLJIT_ATOMIC_STORED:
return (12 << 21) | (2 << 16);
case SLJIT_CARRY:
@ -2430,6 +2465,7 @@ static sljit_ins get_bo_bi_flags(struct sljit_compiler *compiler, sljit_s32 type
/* fallthrough */
case SLJIT_NOT_EQUAL:
case SLJIT_ATOMIC_NOT_STORED:
return (4 << 21) | (2 << 16);
case SLJIT_LESS:
@ -2686,10 +2722,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op_flags(struct sljit_compiler *co
break;
case SLJIT_EQUAL:
case SLJIT_ATOMIC_STORED:
bit = 2;
break;
case SLJIT_NOT_EQUAL:
case SLJIT_ATOMIC_NOT_STORED:
bit = 2;
invert = 1;
break;
@ -3106,6 +3144,78 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_fmem_update(struct sljit_compiler
return push_inst(compiler, INST_CODE_AND_DST(inst, DOUBLE_DATA, freg) | A(mem & REG_MASK) | IMM(memw));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler *compiler, sljit_s32 op,
sljit_s32 dst_reg,
sljit_s32 mem_reg)
{
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_load(compiler, op, dst_reg, mem_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_P:
#if (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64)
ins = LDARX;
break;
#endif /* SLJIT_CONFIG_PPC_64 */
case SLJIT_MOV_U32:
case SLJIT_MOV32:
ins = LWARX;
break;
default:
return SLJIT_ERR_UNSUPPORTED;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, ins | D(dst_reg) | B(mem_reg));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler *compiler, sljit_s32 op,
sljit_s32 src_reg,
sljit_s32 mem_reg,
sljit_s32 temp_reg)
{
sljit_ins ins;
/* temp_reg == mem_reg is undefined so use another temp register */
SLJIT_UNUSED_ARG(temp_reg);
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_store(compiler, op, src_reg, mem_reg, temp_reg));
if (op & SLJIT_ATOMIC_USE_CAS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV:
case SLJIT_MOV_P:
#if (defined SLJIT_CONFIG_PPC_64 && SLJIT_CONFIG_PPC_64)
ins = STDCX | 0x1;
break;
#endif /* SLJIT_CONFIG_PPC_64 */
case SLJIT_MOV_U32:
case SLJIT_MOV32:
ins = STWCX | 0x1;
break;
default:
return SLJIT_ERR_UNSUPPORTED;
}
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, ins | D(src_reg) | B(mem_reg));
}
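On POWER the same pair becomes lwarx/ldarx and stwcx./stdcx. (the trailing `| 0x1` selects the record form, whose CR0 result the new SLJIT_ATOMIC_STORED branch condition reads), and both SLJIT_HAS_ATOMIC and SLJIT_HAS_MEMORY_BARRIER are now advertised through sljit_has_cpu_feature(). A hedged sketch of how a front end might probe this before choosing a code shape; the helper names are illustrative only.

#include "sljitLir.h"

/* Returns nonzero when the target can emit load-linked/store-conditional
   style atomics for 32-bit words. SLJIT_ATOMIC_TEST only validates the
   operand combination; no instruction is emitted. */
static int have_llsc_atomics32(struct sljit_compiler *compiler)
{
    if (!sljit_has_cpu_feature(SLJIT_HAS_ATOMIC))
        return 0;
    return sljit_emit_atomic_load(compiler,
        SLJIT_MOV32 | SLJIT_ATOMIC_TEST | SLJIT_ATOMIC_USE_LS,
        SLJIT_R0, SLJIT_R1) == SLJIT_SUCCESS;
}

/* A full barrier is likewise gated on a feature bit; on this backend
   SLJIT_MEMORY_BARRIER is emitted as a sync instruction. */
static sljit_s32 emit_full_barrier(struct sljit_compiler *compiler)
{
    if (!sljit_has_cpu_feature(SLJIT_HAS_MEMORY_BARRIER))
        return SLJIT_ERR_UNSUPPORTED;
    return sljit_emit_op0(compiler, SLJIT_MEMORY_BARRIER);
}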
SLJIT_API_FUNC_ATTRIBUTE struct sljit_const* sljit_emit_const(struct sljit_compiler *compiler, sljit_s32 dst, sljit_sw dstw, sljit_sw init_value)
{
struct sljit_const *const_;


@ -1638,6 +1638,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_has_cpu_feature(sljit_s32 feature_type)
case SLJIT_HAS_COPY_F64:
case SLJIT_HAS_SIMD:
case SLJIT_HAS_ATOMIC:
case SLJIT_HAS_MEMORY_BARRIER:
return 1;
case SLJIT_HAS_CTZ:
@ -1660,19 +1661,26 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_cmp_info(sljit_s32 type)
/* --------------------------------------------------------------------- */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
sljit_s32 offset, i, tmp;
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
/* Saved registers are stored in callee allocated save area. */
SLJIT_ASSERT(gpr(SLJIT_FIRST_SAVED_REG) == r6 && gpr(SLJIT_S0) == r13);
scratches = ENTER_GET_REGS(scratches);
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
offset = 2 * SSIZE_OF(sw);
if (saveds + scratches >= SLJIT_NUMBER_OF_REGISTERS) {
if (saved_arg_count == 0) {
@ -1756,12 +1764,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
compiler->local_size = (local_size + SLJIT_S390X_DEFAULT_STACK_FRAME_SIZE + 0xf) & ~0xf;
return SLJIT_SUCCESS;
@ -1923,7 +1931,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
return SLJIT_SUCCESS;
case SLJIT_DIV_S32:
case SLJIT_DIVMOD_S32:
FAIL_IF(push_inst(compiler, lhi(tmp0, 0)));
FAIL_IF(push_inst(compiler, 0xeb00000000dc /* srak */ | R36A(tmp0) | R32A(arg0) | (31 << 16)));
FAIL_IF(push_inst(compiler, lr(tmp1, arg0)));
FAIL_IF(push_inst(compiler, dr(tmp0, arg1)));
FAIL_IF(push_inst(compiler, lr(arg0, tmp1))); /* quotient */
@ -1950,6 +1958,8 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_op0(struct sljit_compiler *compile
return push_inst(compiler, lgr(arg1, tmp0)); /* remainder */
return SLJIT_SUCCESS;
case SLJIT_MEMORY_BARRIER:
return push_inst(compiler, 0x0700 /* bcr */ | (0xe << 4) | 0);
case SLJIT_ENDBR:
return SLJIT_SUCCESS;
case SLJIT_SKIP_FRAMES_BEFORE_RETURN:
@ -2475,15 +2485,10 @@ static sljit_s32 sljit_emit_sub(struct sljit_compiler *compiler, sljit_s32 op,
ins = (op & SLJIT_32) ? 0xc20d00000000 /* cfi */ : 0xc20c00000000 /* cgfi */;
return emit_ri(compiler, ins, src1, src1, src1w, src2w, RIL_A);
}
}
else {
if ((op & SLJIT_32) || is_u32(src2w)) {
} else if ((op & SLJIT_32) || is_u32(src2w)) {
ins = (op & SLJIT_32) ? 0xc20f00000000 /* clfi */ : 0xc20e00000000 /* clgfi */;
return emit_ri(compiler, ins, src1, src1, src1w, src2w, RIL_A);
}
if (is_s16(src2w))
return emit_rie_d(compiler, 0xec00000000db /* alghsik */, (sljit_s32)tmp0, src1, src1w, src2w);
}
}
else if (src2 & SLJIT_MEM) {
if ((op & SLJIT_32) && ((src2 & OFFS_REG_MASK) || is_u12(src2w))) {
@ -3182,7 +3187,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_get_register_index(sljit_s32 type, slji
if (type == SLJIT_GP_REGISTER)
return (sljit_s32)gpr(reg);
if (type != SLJIT_FLOAT_REGISTER)
if (type != SLJIT_FLOAT_REGISTER && type != SLJIT_SIMD_REG_128)
return -1;
return (sljit_s32)freg_map[reg];
@ -3934,7 +3939,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_mem(struct sljit_compiler *compile
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3944,7 +3949,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_mov(compiler, type, freg, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_mov(compiler, type, vreg, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -3959,15 +3964,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
if (!(srcdst & SLJIT_MEM)) {
if (type & SLJIT_SIMD_STORE)
ins = F36(srcdst) | F32(freg);
ins = F36(srcdst) | F32(vreg);
else
ins = F36(freg) | F32(srcdst);
ins = F36(vreg) | F32(srcdst);
return push_inst(compiler, 0xe70000000056 /* vlr */ | ins);
}
FAIL_IF(make_addr_bx(compiler, &addr, srcdst, srcdstw, tmp1));
ins = F36(freg) | R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset);
ins = F36(vreg) | R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset);
if (alignment >= 4)
ins |= 4 << 12;
@ -3978,7 +3983,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_mov(struct sljit_compiler *co
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -3988,7 +3993,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
sljit_sw sign_ext;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_replicate(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_replicate(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -4003,15 +4008,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (src & SLJIT_MEM) {
FAIL_IF(make_addr_bx(compiler, &addr, src, srcw, tmp1));
return push_inst(compiler, 0xe70000000005 /* vlrep */ | F36(freg)
return push_inst(compiler, 0xe70000000005 /* vlrep */ | F36(vreg)
| R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset) | ((sljit_ins)elem_size << 12));
}
if (type & SLJIT_SIMD_FLOAT) {
if (src == SLJIT_IMM)
return push_inst(compiler, 0xe70000000044 /* vgbm */ | F36(freg));
return push_inst(compiler, 0xe70000000044 /* vgbm */ | F36(vreg));
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(freg) | F32(src) | ((sljit_ins)elem_size << 12));
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(vreg) | F32(src) | ((sljit_ins)elem_size << 12));
}
if (src == SLJIT_IMM) {
@ -4043,10 +4048,10 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
if (sign_ext != 0x10000) {
if (sign_ext == 0 || sign_ext == -1)
return push_inst(compiler, 0xe70000000044 /* vgbm */ | F36(freg)
return push_inst(compiler, 0xe70000000044 /* vgbm */ | F36(vreg)
| (sign_ext == 0 ? 0 : ((sljit_ins)0xffff << 16)));
return push_inst(compiler, 0xe70000000045 /* vrepi */ | F36(freg)
return push_inst(compiler, 0xe70000000045 /* vrepi */ | F36(vreg)
| ((sljit_ins)srcw << 16) | ((sljit_ins)elem_size << 12));
}
@ -4055,12 +4060,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_replicate(struct sljit_compil
} else
reg = gpr(src);
FAIL_IF(push_inst(compiler, 0xe70000000022 /* vlvg */ | F36(freg) | R32A(reg) | ((sljit_ins)elem_size << 12)));
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(freg) | F32(freg) | ((sljit_ins)elem_size << 12));
FAIL_IF(push_inst(compiler, 0xe70000000022 /* vlvg */ | F36(vreg) | R32A(reg) | ((sljit_ins)elem_size << 12)));
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(vreg) | F32(vreg) | ((sljit_ins)elem_size << 12));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg, sljit_s32 lane_index,
sljit_s32 vreg, sljit_s32 lane_index,
sljit_s32 srcdst, sljit_sw srcdstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4070,7 +4075,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
sljit_ins ins = 0;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, freg, lane_index, srcdst, srcdstw));
CHECK(check_sljit_emit_simd_lane_mov(compiler, type, vreg, lane_index, srcdst, srcdstw));
ADJUST_LOCAL_OFFSET(srcdst, srcdstw);
@ -4085,20 +4090,20 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (srcdst & SLJIT_MEM) {
FAIL_IF(make_addr_bx(compiler, &addr, srcdst, srcdstw, tmp1));
ins = F36(freg) | R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset);
ins = F36(vreg) | R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset);
}
if (type & SLJIT_SIMD_LANE_ZERO) {
if ((srcdst & SLJIT_MEM) && lane_index == ((1 << (3 - elem_size)) - 1))
return push_inst(compiler, 0xe70000000004 /* vllez */ | ins | ((sljit_ins)elem_size << 12));
if ((type & SLJIT_SIMD_FLOAT) && freg == srcdst) {
FAIL_IF(push_inst(compiler, 0xe70000000056 /* vlr */ | F36(TMP_FREG1) | F32(freg)));
if ((type & SLJIT_SIMD_FLOAT) && vreg == srcdst) {
FAIL_IF(push_inst(compiler, 0xe70000000056 /* vlr */ | F36(TMP_FREG1) | F32(vreg)));
srcdst = TMP_FREG1;
srcdstw = 0;
}
FAIL_IF(push_inst(compiler, 0xe70000000044 /* vgbm */ | F36(freg)));
FAIL_IF(push_inst(compiler, 0xe70000000044 /* vgbm */ | F36(vreg)));
}
if (srcdst & SLJIT_MEM) {
@ -4126,19 +4131,19 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
if (type & SLJIT_SIMD_FLOAT) {
if (type & SLJIT_SIMD_STORE)
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(srcdst) | F32(freg) | ((sljit_ins)lane_index << 16) | ((sljit_ins)elem_size << 12));
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(srcdst) | F32(vreg) | ((sljit_ins)lane_index << 16) | ((sljit_ins)elem_size << 12));
if (elem_size == 3) {
if (lane_index == 0)
ins = F32(srcdst) | F28(freg) | (1 << 12);
ins = F32(srcdst) | F28(vreg) | (1 << 12);
else
ins = F32(freg) | F28(srcdst);
ins = F32(vreg) | F28(srcdst);
return push_inst(compiler, 0xe70000000084 /* vpdi */ | F36(freg) | ins);
return push_inst(compiler, 0xe70000000084 /* vpdi */ | F36(vreg) | ins);
}
FAIL_IF(push_inst(compiler, 0xe70000000021 /* vlgv */ | R36A(tmp0) | F32(srcdst) | ((sljit_ins)2 << 12)));
return push_inst(compiler, 0xe70000000022 /* vlvg */ | F36(freg) | R32A(tmp0) | ((sljit_ins)lane_index << 16) | ((sljit_ins)2 << 12));
return push_inst(compiler, 0xe70000000022 /* vlvg */ | F36(vreg) | R32A(tmp0) | ((sljit_ins)lane_index << 16) | ((sljit_ins)2 << 12));
}
if (srcdst == SLJIT_IMM) {
@ -4167,7 +4172,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
}
if (ins != 0)
return push_inst(compiler, ins | F36(freg) | ((sljit_ins)srcdstw << 16) | ((sljit_ins)lane_index << 12));
return push_inst(compiler, ins | F36(vreg) | ((sljit_ins)srcdstw << 16) | ((sljit_ins)lane_index << 12));
push_load_imm_inst(compiler, tmp0, srcdstw);
reg = tmp0;
@ -4177,9 +4182,9 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
ins = ((sljit_ins)lane_index << 16) | ((sljit_ins)elem_size << 12);
if (!(type & SLJIT_SIMD_STORE))
return push_inst(compiler, 0xe70000000022 /* vlvg */ | F36(freg) | R32A(reg) | ins);
return push_inst(compiler, 0xe70000000022 /* vlvg */ | F36(vreg) | R32A(reg) | ins);
FAIL_IF(push_inst(compiler, 0xe70000000021 /* vlgv */ | R36A(reg) | F32(freg) | ins));
FAIL_IF(push_inst(compiler, 0xe70000000021 /* vlgv */ | R36A(reg) | F32(vreg) | ins));
if (!(type & SLJIT_SIMD_LANE_SIGNED) || elem_size >= 3)
return SLJIT_SUCCESS;
@ -4200,14 +4205,14 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_mov(struct sljit_compile
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_s32 src_lane_index)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
sljit_s32 elem_size = SLJIT_SIMD_GET_ELEM_SIZE(type);
CHECK_ERROR();
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, freg, src, src_lane_index));
CHECK(check_sljit_emit_simd_lane_replicate(compiler, type, vreg, src, src_lane_index));
if (reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -4218,12 +4223,12 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_lane_replicate(struct sljit_c
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(freg) | F32(src)
return push_inst(compiler, 0xe7000000004d /* vrep */ | F36(vreg) | F32(src)
| ((sljit_ins)src_lane_index << 16) | ((sljit_ins)elem_size << 12));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 src, sljit_sw srcw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4233,7 +4238,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
sljit_ins ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_extend(compiler, type, freg, src, srcw));
CHECK(check_sljit_emit_simd_extend(compiler, type, vreg, src, srcw));
ADJUST_LOCAL_OFFSET(src, srcw);
@ -4248,7 +4253,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
if (src & SLJIT_MEM) {
FAIL_IF(make_addr_bx(compiler, &addr, src, srcw, tmp1));
ins = F36(freg) | R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset);
ins = F36(vreg) | R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset);
switch (elem2_size - elem_size) {
case 1:
@ -4263,27 +4268,27 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_extend(struct sljit_compiler
}
FAIL_IF(push_inst(compiler, ins));
src = freg;
src = vreg;
}
if (type & SLJIT_SIMD_FLOAT) {
FAIL_IF(push_inst(compiler, 0xe700000000d5 /* vuplh */ | F36(freg) | F32(src) | (2 << 12)));
FAIL_IF(push_inst(compiler, 0xe70000000030 /* vesl */ | F36(freg) | F32(freg) | (32 << 16) | (3 << 12)));
return push_inst(compiler, 0xe700000000c4 /* vfll */ | F36(freg) | F32(freg) | (2 << 12));
FAIL_IF(push_inst(compiler, 0xe700000000d5 /* vuplh */ | F36(vreg) | F32(src) | (2 << 12)));
FAIL_IF(push_inst(compiler, 0xe70000000030 /* vesl */ | F36(vreg) | F32(vreg) | (32 << 16) | (3 << 12)));
return push_inst(compiler, 0xe700000000c4 /* vfll */ | F36(vreg) | F32(vreg) | (2 << 12));
}
ins = ((type & SLJIT_SIMD_EXTEND_SIGNED) ? 0xe700000000d7 /* vuph */ : 0xe700000000d5 /* vuplh */) | F36(freg);
ins = ((type & SLJIT_SIMD_EXTEND_SIGNED) ? 0xe700000000d7 /* vuph */ : 0xe700000000d5 /* vuplh */) | F36(vreg);
do {
FAIL_IF(push_inst(compiler, ins | F32(src) | ((sljit_ins)elem_size << 12)));
src = freg;
src = vreg;
} while (++elem_size < elem2_size);
return SLJIT_SUCCESS;
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 freg,
sljit_s32 vreg,
sljit_s32 dst, sljit_sw dstw)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
@ -4291,7 +4296,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
sljit_gpr dst_r;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_sign(compiler, type, freg, dst, dstw));
CHECK(check_sljit_emit_simd_sign(compiler, type, vreg, dst, dstw));
ADJUST_LOCAL_OFFSET(dst, dstw);
@ -4324,7 +4329,7 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
if (elem_size != 0)
FAIL_IF(push_inst(compiler, 0xe70000000022 /* vlvg */ | F36(TMP_FREG1) | R32A(tmp0) | (1 << 16) | (3 << 12)));
FAIL_IF(push_inst(compiler, 0xe70000000085 /* vbperm */ | F36(TMP_FREG1) | F32(freg) | F28(TMP_FREG1)));
FAIL_IF(push_inst(compiler, 0xe70000000085 /* vbperm */ | F36(TMP_FREG1) | F32(vreg) | F28(TMP_FREG1)));
dst_r = FAST_IS_REG(dst) ? gpr(dst) : tmp0;
FAIL_IF(push_inst(compiler, 0xe70000000021 /* vlgv */ | R36A(dst_r) | F32(TMP_FREG1)
@ -4337,14 +4342,17 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_sign(struct sljit_compiler *c
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *compiler, sljit_s32 type,
sljit_s32 dst_freg, sljit_s32 src1_freg, sljit_s32 src2_freg)
sljit_s32 dst_vreg, sljit_s32 src1_vreg, sljit_s32 src2, sljit_sw src2w)
{
sljit_s32 reg_size = SLJIT_SIMD_GET_REG_SIZE(type);
sljit_s32 elem_size = SLJIT_SIMD_GET_ELEM_SIZE(type);
sljit_ins ins = 0;
sljit_s32 alignment;
struct addr addr;
sljit_ins ins = 0, load_ins;
CHECK_ERROR();
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_freg, src1_freg, src2_freg));
CHECK(check_sljit_emit_simd_op2(compiler, type, dst_vreg, src1_vreg, src2, src2w));
ADJUST_LOCAL_OFFSET(src2, src2w);
if (reg_size != 4)
return SLJIT_ERR_UNSUPPORTED;
@ -4365,12 +4373,29 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_simd_op2(struct sljit_compiler *co
case SLJIT_SIMD_OP2_XOR:
ins = 0xe7000000006d /* vx */;
break;
case SLJIT_SIMD_OP2_SHUFFLE:
ins = 0xe7000000008c /* vperm */;
break;
}
if (type & SLJIT_SIMD_TEST)
return SLJIT_SUCCESS;
if (src2 & SLJIT_MEM) {
FAIL_IF(make_addr_bx(compiler, &addr, src2, src2w, tmp1));
load_ins = 0xe70000000006 /* vl */ | F36(TMP_FREG1) | R32A(addr.index) | R28A(addr.base) | disp_s20(addr.offset);
alignment = SLJIT_SIMD_GET_ELEM2_SIZE(type);
return push_inst(compiler, ins | F36(dst_freg) | F32(src1_freg) | F28(src2_freg));
if (alignment >= 4)
load_ins |= 4 << 12;
else if (alignment == 3)
load_ins |= 3 << 12;
FAIL_IF(push_inst(compiler, load_ins));
src2 = TMP_FREG1;
}
if (SLJIT_SIMD_GET_OPCODE(type) == SLJIT_SIMD_OP2_SHUFFLE)
return push_inst(compiler, ins | F36(dst_vreg) | F32(src1_vreg) | F28(src1_vreg) | F12(src2));
return push_inst(compiler, ins | F36(dst_vreg) | F32(src1_vreg) | F28(src2));
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler *compiler, sljit_s32 op,
@ -4380,8 +4405,22 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_load(struct sljit_compiler
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_load(compiler, op, dst_reg, mem_reg));
if (op & SLJIT_ATOMIC_USE_LS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV32:
case SLJIT_MOV_U32:
case SLJIT_MOV:
case SLJIT_MOV_P:
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
SLJIT_SKIP_CHECKS(compiler);
return sljit_emit_op1(compiler, op, dst_reg, 0, SLJIT_MEM1(mem_reg), 0);
return sljit_emit_op1(compiler, op & ~SLJIT_ATOMIC_USE_CAS, dst_reg, 0, SLJIT_MEM1(mem_reg), 0);
default:
return SLJIT_ERR_UNSUPPORTED;
}
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler *compiler, sljit_s32 op,
@ -4389,44 +4428,33 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_atomic_store(struct sljit_compiler
sljit_s32 mem_reg,
sljit_s32 temp_reg)
{
sljit_ins mask;
sljit_ins ins;
sljit_gpr tmp_r = gpr(temp_reg);
sljit_gpr mem_r = gpr(mem_reg);
CHECK_ERROR();
CHECK(check_sljit_emit_atomic_store(compiler, op, src_reg, mem_reg, temp_reg));
if (op & SLJIT_ATOMIC_USE_LS)
return SLJIT_ERR_UNSUPPORTED;
switch (GET_OPCODE(op)) {
case SLJIT_MOV32:
case SLJIT_MOV_U32:
return push_inst(compiler, 0xba000000 /* cs */ | R20A(tmp_r) | R16A(gpr(src_reg)) | R12A(mem_r));
case SLJIT_MOV_U8:
mask = 0xff;
ins = 0xba000000 /* cs */ | R20A(tmp_r) | R16A(gpr(src_reg)) | R12A(mem_r);
break;
case SLJIT_MOV_U16:
mask = 0xffff;
case SLJIT_MOV:
case SLJIT_MOV_P:
ins = 0xeb0000000030 /* csg */ | R36A(tmp_r) | R32A(gpr(src_reg)) | R28A(mem_r);
break;
default:
return push_inst(compiler, 0xeb0000000030 /* csg */ | R36A(tmp_r) | R32A(gpr(src_reg)) | R28A(mem_r));
return SLJIT_ERR_UNSUPPORTED;
}
/* tmp0 = (src_reg ^ tmp_r) & mask */
FAIL_IF(push_inst(compiler, 0xa50f0000 /* llill */ | R20A(tmp1) | mask));
FAIL_IF(push_inst(compiler, 0xb9e70000 /* xgrk */ | R4A(tmp0) | R0A(gpr(src_reg)) | R12A(tmp_r)));
FAIL_IF(push_inst(compiler, 0xa7090000 /* lghi */ | R20A(tmp_r) | 0xfffc));
FAIL_IF(push_inst(compiler, 0xb9800000 /* ngr */ | R4A(tmp0) | R0A(tmp1)));
if (op & SLJIT_ATOMIC_TEST)
return SLJIT_SUCCESS;
/* tmp0 = tmp0 << (((mem_r ^ 0x3) & 0x3) << 3) */
FAIL_IF(push_inst(compiler, 0xa50f0000 /* llill */ | R20A(tmp1) | (sljit_ins)((mask == 0xff) ? 0x18 : 0x10)));
FAIL_IF(push_inst(compiler, 0xb9800000 /* ngr */ | R4A(tmp_r) | R0A(mem_r)));
FAIL_IF(push_inst(compiler, 0xec0000000057 /* rxsbg */ | R36A(tmp1) | R32A(mem_r) | (59 << 24) | (60 << 16) | (3 << 8)));
FAIL_IF(push_inst(compiler, 0xeb000000000d /* sllg */ | R36A(tmp0) | R32A(tmp0) | R28A(tmp1)));
/* Already computed: tmp_r = mem_r & ~0x3 */
FAIL_IF(push_inst(compiler, 0x58000000 /* l */ | R20A(tmp1) | R12A(tmp_r)));
FAIL_IF(push_inst(compiler, 0x1700 /* x */ | R4A(tmp0) | R0A(tmp1)));
return push_inst(compiler, 0xba000000 /* cs */ | R20A(tmp1) | R16A(tmp0) | R12A(tmp_r));
return push_inst(compiler, ins);
}
/* --------------------------------------------------------------------- */


@ -311,8 +311,8 @@ static sljit_u8* detect_far_jump_type(struct sljit_jump *jump, sljit_u8 *code_pt
#define ENTER_TMP_TO_S 0x00002
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 word_arg_count, saved_arg_count, float_arg_count;
sljit_s32 size, args_size, types, status;
@ -323,8 +323,10 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
#endif
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
/* Emit ENDBR32 at function entry if needed. */
FAIL_IF(emit_endbranch(compiler));
@ -536,14 +538,16 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 args_size;
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
arg_types >>= SLJIT_ARG_SHIFT;
args_size = 0;


@ -454,14 +454,16 @@ typedef struct {
#endif /* _WIN64 */
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_uw size;
sljit_s32 word_arg_count = 0;
sljit_s32 saved_arg_count = SLJIT_KEPT_SAVEDS_COUNT(options);
sljit_s32 saved_regs_size, tmp, i;
#ifdef _WIN64
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 saved_float_regs_size;
sljit_s32 saved_float_regs_offset = 0;
sljit_s32 float_arg_count = 0;
@ -469,8 +471,15 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
sljit_u8 *inst;
CHECK_ERROR();
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_emit_enter(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
#ifdef _WIN64
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
#endif /* _WIN64 */
if (options & SLJIT_ENTER_REG_ARG)
arg_types = 0;
@ -630,19 +639,27 @@ SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_emit_enter(struct sljit_compiler *compi
}
SLJIT_API_FUNC_ATTRIBUTE sljit_s32 sljit_set_context(struct sljit_compiler *compiler,
sljit_s32 options, sljit_s32 arg_types, sljit_s32 scratches, sljit_s32 saveds,
sljit_s32 fscratches, sljit_s32 fsaveds, sljit_s32 local_size)
sljit_s32 options, sljit_s32 arg_types,
sljit_s32 scratches, sljit_s32 saveds, sljit_s32 local_size)
{
sljit_s32 saved_regs_size;
#ifdef _WIN64
sljit_s32 fscratches;
sljit_s32 fsaveds;
sljit_s32 saved_float_regs_size;
#endif /* _WIN64 */
CHECK_ERROR();
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size));
set_set_context(compiler, options, arg_types, scratches, saveds, fscratches, fsaveds, local_size);
CHECK(check_sljit_set_context(compiler, options, arg_types, scratches, saveds, local_size));
set_emit_enter(compiler, options, arg_types, scratches, saveds, local_size);
scratches = ENTER_GET_REGS(scratches);
#ifdef _WIN64
saveds = ENTER_GET_REGS(saveds);
fscratches = compiler->fscratches;
fsaveds = compiler->fsaveds;
local_size += SLJIT_LOCALS_OFFSET;
saved_float_regs_size = GET_SAVED_FLOAT_REGISTERS_SIZE(fscratches, fsaveds, sse2_reg);


@ -7,6 +7,8 @@
# into 3rdparty/pcre2/ , following the instructions found in the NON-AUTOTOOLS-BUILD
# file. Documentation, tests, demos etc. are not imported.
set -e
if [ $# -ne 2 ]; then
echo "Usage: $0 pcre2_tarball_dir/ \$QTDIR/src/3rdparty/pcre2/"
exit 1
@ -44,12 +46,14 @@ copy_file "src/pcre2.h.generic" "src/pcre2.h"
copy_file "src/pcre2_chartables.c.dist" "src/pcre2_chartables.c"
FILES="
AUTHORS
LICENCE
AUTHORS.md
LICENCE.md
src/pcre2_auto_possess.c
src/pcre2_chkdint.c
src/pcre2_compile.c
src/pcre2_compile.h
src/pcre2_compile_class.c
src/pcre2_config.c
src/pcre2_context.c
src/pcre2_dfa_match.c
@ -69,6 +73,7 @@ FILES="
src/pcre2_pattern_info.c
src/pcre2_script_run.c
src/pcre2_serialize.c
src/pcre2_jit_char_inc.h
src/pcre2_jit_neon_inc.h
src/pcre2_jit_simd_inc.h
src/pcre2_string_utils.c
@ -79,41 +84,42 @@ FILES="
src/pcre2_ucd.c
src/pcre2_ucp.h
src/pcre2_ucptables.c
src/pcre2_util.h
src/pcre2_valid_utf.c
src/pcre2_xclass.c
src/sljit/sljitConfigCPU.h
src/sljit/sljitConfig.h
src/sljit/sljitConfigInternal.h
src/sljit/sljitLir.c
src/sljit/sljitLir.h
src/sljit/sljitNativeARM_32.c
src/sljit/sljitNativeARM_64.c
src/sljit/sljitNativeARM_T2_32.c
src/sljit/sljitNativeLOONGARCH_64.c
src/sljit/sljitNativeMIPS_32.c
src/sljit/sljitNativeMIPS_64.c
src/sljit/sljitNativeMIPS_common.c
src/sljit/sljitNativePPC_32.c
src/sljit/sljitNativePPC_64.c
src/sljit/sljitNativePPC_common.c
src/sljit/sljitNativeRISCV_32.c
src/sljit/sljitNativeRISCV_64.c
src/sljit/sljitNativeRISCV_common.c
src/sljit/sljitNativeS390X.c
src/sljit/sljitNativeX86_32.c
src/sljit/sljitNativeX86_64.c
src/sljit/sljitNativeX86_common.c
src/sljit/sljitSerialize.c
src/sljit/sljitUtils.c
src/sljit/allocator_src/sljitExecAllocatorPosix.c
src/sljit/allocator_src/sljitProtExecAllocatorPosix.c
src/sljit/allocator_src/sljitWXExecAllocatorPosix.c
src/sljit/allocator_src/sljitProtExecAllocatorNetBSD.c
src/sljit/allocator_src/sljitExecAllocatorWindows.c
src/sljit/allocator_src/sljitExecAllocatorFreeBSD.c
src/sljit/allocator_src/sljitExecAllocatorApple.c
src/sljit/allocator_src/sljitWXExecAllocatorWindows.c
src/sljit/allocator_src/sljitExecAllocatorCore.c
deps/sljit/sljit_src/sljitConfigCPU.h
deps/sljit/sljit_src/sljitConfig.h
deps/sljit/sljit_src/sljitConfigInternal.h
deps/sljit/sljit_src/sljitLir.c
deps/sljit/sljit_src/sljitLir.h
deps/sljit/sljit_src/sljitNativeARM_32.c
deps/sljit/sljit_src/sljitNativeARM_64.c
deps/sljit/sljit_src/sljitNativeARM_T2_32.c
deps/sljit/sljit_src/sljitNativeLOONGARCH_64.c
deps/sljit/sljit_src/sljitNativeMIPS_32.c
deps/sljit/sljit_src/sljitNativeMIPS_64.c
deps/sljit/sljit_src/sljitNativeMIPS_common.c
deps/sljit/sljit_src/sljitNativePPC_32.c
deps/sljit/sljit_src/sljitNativePPC_64.c
deps/sljit/sljit_src/sljitNativePPC_common.c
deps/sljit/sljit_src/sljitNativeRISCV_32.c
deps/sljit/sljit_src/sljitNativeRISCV_64.c
deps/sljit/sljit_src/sljitNativeRISCV_common.c
deps/sljit/sljit_src/sljitNativeS390X.c
deps/sljit/sljit_src/sljitNativeX86_32.c
deps/sljit/sljit_src/sljitNativeX86_64.c
deps/sljit/sljit_src/sljitNativeX86_common.c
deps/sljit/sljit_src/sljitSerialize.c
deps/sljit/sljit_src/sljitUtils.c
deps/sljit/sljit_src/allocator_src/sljitExecAllocatorPosix.c
deps/sljit/sljit_src/allocator_src/sljitProtExecAllocatorPosix.c
deps/sljit/sljit_src/allocator_src/sljitWXExecAllocatorPosix.c
deps/sljit/sljit_src/allocator_src/sljitProtExecAllocatorNetBSD.c
deps/sljit/sljit_src/allocator_src/sljitExecAllocatorWindows.c
deps/sljit/sljit_src/allocator_src/sljitExecAllocatorFreeBSD.c
deps/sljit/sljit_src/allocator_src/sljitExecAllocatorApple.c
deps/sljit/sljit_src/allocator_src/sljitWXExecAllocatorWindows.c
deps/sljit/sljit_src/allocator_src/sljitExecAllocatorCore.c
"
for i in $FILES; do


@ -8,13 +8,13 @@
"Description": "The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.",
"Homepage": "http://www.pcre.org/",
"Version": "10.44",
"DownloadLocation": "https://github.com/PCRE2Project/pcre2/releases/download/pcre2-10.44/pcre2-10.44.tar.bz2",
"Version": "10.45",
"DownloadLocation": "https://github.com/PCRE2Project/pcre2/releases/download/pcre2-10.45/pcre2-10.45.tar.bz2",
"PURL": "pkg:github/PCRE2Project/pcre2@pcre2-$<VERSION>",
"CPE": "cpe:2.3:a:pcre:pcre2:$<VERSION>:*:*:*:*:*:*:*",
"License": "BSD 3-clause \"New\" or \"Revised\" License with PCRE2 binary-like Packages Exception",
"LicenseId": "LicenseRef-BSD-3-Clause-with-PCRE2-Binary-Like-Packages-Exception",
"LicenseFile": "LICENCE",
"LicenseFile": "LICENCE.md",
"Copyright": ["Copyright (c) 1997-2024 University of Cambridge",
"Copyright (c) 2010-2024 Zoltan Herczeg"]
},
@ -24,11 +24,11 @@
"QDocModule": "qtcore",
"QtUsage": "Optionally used in Qt Core (QRegularExpression). Configure with -system-pcre or -no-pcre to avoid.",
"Path": "src/sljit",
"Path": "deps/sljit",
"Description": "The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.",
"Homepage": "http://www.pcre.org/",
"Version": "10.44",
"DownloadLocation": "https://github.com/PCRE2Project/pcre2/releases/download/pcre2-10.44/pcre2-10.44.tar.bz2",
"Version": "10.45",
"DownloadLocation": "https://github.com/PCRE2Project/pcre2/releases/download/pcre2-10.45/pcre2-10.45.tar.bz2",
"PURL": "pkg:github/PCRE2Project/pcre2@$<VERSION>",
"CPE": "cpe:2.3:a:pcre:pcre2:$<VERSION>:*:*:*:*:*:*:*",
"License": "BSD 2-clause \"Simplified\" License",


@ -42,9 +42,9 @@ POSSIBILITY OF SUCH DAMAGE.
/* The current PCRE version information. */
#define PCRE2_MAJOR 10
#define PCRE2_MINOR 44
#define PCRE2_MINOR 45
#define PCRE2_PRERELEASE
#define PCRE2_DATE 2024-06-07
#define PCRE2_DATE 2025-02-05
/* When an application links to a PCRE DLL in Windows, the symbols that are
imported have to be identified as such. When building PCRE2, the appropriate
@ -143,6 +143,7 @@ D is inspected during pcre2_dfa_match() execution
#define PCRE2_EXTENDED_MORE 0x01000000u /* C */
#define PCRE2_LITERAL 0x02000000u /* C */
#define PCRE2_MATCH_INVALID_UTF 0x04000000u /* J M D */
#define PCRE2_ALT_EXTENDED_CLASS 0x08000000u /* C */
/* An additional compile options word is available in the compile context. */
@ -159,6 +160,10 @@ D is inspected during pcre2_dfa_match() execution
#define PCRE2_EXTRA_ASCII_BSW 0x00000400u /* C */
#define PCRE2_EXTRA_ASCII_POSIX 0x00000800u /* C */
#define PCRE2_EXTRA_ASCII_DIGIT 0x00001000u /* C */
#define PCRE2_EXTRA_PYTHON_OCTAL 0x00002000u /* C */
#define PCRE2_EXTRA_NO_BS0 0x00004000u /* C */
#define PCRE2_EXTRA_NEVER_CALLOUT 0x00008000u /* C */
#define PCRE2_EXTRA_TURKISH_CASING 0x00010000u /* C */
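The extra-options word gains four new bits, set through the existing pcre2_set_compile_extra_options() call on a compile context. A minimal sketch follows; the pattern and the choice of option are illustrative only.

#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>

/* Compile a pattern with callouts disallowed via the new
   PCRE2_EXTRA_NEVER_CALLOUT bit. (That this rejects (?C...) items at compile
   time is inferred from the new PCRE2_ERROR_CALLOUT_CALLER_DISABLED code
   further down, not stated by this header.) */
static pcre2_code *compile_without_callouts(const char *pattern)
{
    int errorcode;
    PCRE2_SIZE erroroffset;
    pcre2_compile_context *ccontext = pcre2_compile_context_create(NULL);

    if (ccontext == NULL)
        return NULL;
    pcre2_set_compile_extra_options(ccontext, PCRE2_EXTRA_NEVER_CALLOUT);
    pcre2_code *re = pcre2_compile((PCRE2_SPTR)pattern, PCRE2_ZERO_TERMINATED,
        0, &errorcode, &erroroffset, ccontext);
    pcre2_compile_context_free(ccontext);
    return re;
}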
/* These are for pcre2_jit_compile(). */
@ -166,6 +171,7 @@ D is inspected during pcre2_dfa_match() execution
#define PCRE2_JIT_PARTIAL_SOFT 0x00000002u
#define PCRE2_JIT_PARTIAL_HARD 0x00000004u
#define PCRE2_JIT_INVALID_UTF 0x00000100u
#define PCRE2_JIT_TEST_ALLOC 0x00000200u
/* These are for pcre2_match(), pcre2_dfa_match(), pcre2_jit_match(), and
pcre2_substitute(). Some are allowed only for one of the functions, and in
@ -318,9 +324,25 @@ pcre2_pattern_convert(). */
#define PCRE2_ERROR_ALPHA_ASSERTION_UNKNOWN 195
#define PCRE2_ERROR_SCRIPT_RUN_NOT_AVAILABLE 196
#define PCRE2_ERROR_TOO_MANY_CAPTURES 197
#define PCRE2_ERROR_CONDITION_ATOMIC_ASSERTION_EXPECTED 198
#define PCRE2_ERROR_MISSING_OCTAL_DIGIT 198
#define PCRE2_ERROR_BACKSLASH_K_IN_LOOKAROUND 199
#define PCRE2_ERROR_MAX_VAR_LOOKBEHIND_EXCEEDED 200
#define PCRE2_ERROR_PATTERN_COMPILED_SIZE_TOO_BIG 201
#define PCRE2_ERROR_OVERSIZE_PYTHON_OCTAL 202
#define PCRE2_ERROR_CALLOUT_CALLER_DISABLED 203
#define PCRE2_ERROR_EXTRA_CASING_REQUIRES_UNICODE 204
#define PCRE2_ERROR_TURKISH_CASING_REQUIRES_UTF 205
#define PCRE2_ERROR_EXTRA_CASING_INCOMPATIBLE 206
#define PCRE2_ERROR_ECLASS_NEST_TOO_DEEP 207
#define PCRE2_ERROR_ECLASS_INVALID_OPERATOR 208
#define PCRE2_ERROR_ECLASS_UNEXPECTED_OPERATOR 209
#define PCRE2_ERROR_ECLASS_EXPECTED_OPERAND 210
#define PCRE2_ERROR_ECLASS_MIXED_OPERATORS 211
#define PCRE2_ERROR_ECLASS_HINT_SQUARE_BRACKET 212
#define PCRE2_ERROR_PERL_ECLASS_UNEXPECTED_EXPR 213
#define PCRE2_ERROR_PERL_ECLASS_EMPTY_EXPR 214
#define PCRE2_ERROR_PERL_ECLASS_MISSING_CLOSE 215
#define PCRE2_ERROR_PERL_ECLASS_UNEXPECTED_CHAR 216
/* "Expected" matching error codes: no match and partial match. */
@ -407,6 +429,9 @@ released, the numbers must not be changed. */
#define PCRE2_ERROR_INTERNAL_DUPMATCH (-65)
#define PCRE2_ERROR_DFA_UINVALID_UTF (-66)
#define PCRE2_ERROR_INVALIDOFFSET (-67)
#define PCRE2_ERROR_JIT_UNSUPPORTED (-68)
#define PCRE2_ERROR_REPLACECASE (-69)
#define PCRE2_ERROR_TOOLARGEREPLACE (-70)
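The new negative codes follow the existing split (compile-time diagnostics are positive, matching and general errors negative) and are turned into text with the usual pcre2_get_error_message(); for instance:

#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>

/* Print the textual form of one of the new error codes. */
static void show_replace_error(void)
{
    PCRE2_UCHAR buffer[120];
    pcre2_get_error_message(PCRE2_ERROR_TOOLARGEREPLACE, buffer,
        sizeof(buffer) / sizeof(buffer[0]));
    printf("%s\n", (char *)buffer);
}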
/* Request types for pcre2_pattern_info() */
@ -460,6 +485,30 @@ released, the numbers must not be changed. */
#define PCRE2_CONFIG_COMPILED_WIDTHS 14
#define PCRE2_CONFIG_TABLES_LENGTH 15
/* Optimization directives for pcre2_set_optimize().
For binary compatibility, only add to this list; do not renumber. */
#define PCRE2_OPTIMIZATION_NONE 0
#define PCRE2_OPTIMIZATION_FULL 1
#define PCRE2_AUTO_POSSESS 64
#define PCRE2_AUTO_POSSESS_OFF 65
#define PCRE2_DOTSTAR_ANCHOR 66
#define PCRE2_DOTSTAR_ANCHOR_OFF 67
#define PCRE2_START_OPTIMIZE 68
#define PCRE2_START_OPTIMIZE_OFF 69
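pcre2_set_optimize() gives per-pass control over these optimizations on a compile context, alongside the long-standing PCRE2_NO_AUTO_POSSESS / PCRE2_NO_DOTSTAR_ANCHOR / PCRE2_NO_START_OPTIMIZE compile options. A small sketch of a possible caller; the directive choice is illustrative.

#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>

/* Compile with auto-possessification turned off through the new directive
   interface. Like the other context setters this is expected to return 0 on
   success; a nonzero value would mean the directive was not recognized. */
static pcre2_code *compile_no_auto_possess(const char *pattern)
{
    int errorcode;
    PCRE2_SIZE erroroffset;
    pcre2_compile_context *ccontext = pcre2_compile_context_create(NULL);

    if (ccontext == NULL)
        return NULL;
    if (pcre2_set_optimize(ccontext, PCRE2_AUTO_POSSESS_OFF) != 0) {
        pcre2_compile_context_free(ccontext);
        return NULL;
    }
    pcre2_code *re = pcre2_compile((PCRE2_SPTR)pattern, PCRE2_ZERO_TERMINATED,
        0, &errorcode, &erroroffset, ccontext);
    pcre2_compile_context_free(ccontext);
    return re;
}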
/* Types used in pcre2_set_substitute_case_callout().
PCRE2_SUBSTITUTE_CASE_LOWER and PCRE2_SUBSTITUTE_CASE_UPPER are passed to the
callout to indicate that the case of the entire callout input should be
case-transformed. PCRE2_SUBSTITUTE_CASE_TITLE_FIRST is passed to indicate that
only the first character or glyph should be transformed to Unicode titlecase,
and the rest to lowercase. */
#define PCRE2_SUBSTITUTE_CASE_LOWER 1
#define PCRE2_SUBSTITUTE_CASE_UPPER 2
#define PCRE2_SUBSTITUTE_CASE_TITLE_FIRST 3
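The substitute case callout supplies the case transformation used for case-forcing sequences in a pcre2_substitute() replacement, so an application can plug in locale-aware (for example Turkish) casing. The sketch below only shows the registration shape: the transform is a naive ASCII placeholder, and the exact buffer-size and re-invocation contract should be taken from the 10.45 pcre2api documentation rather than from this code.

#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <ctype.h>

/* Placeholder case transform: ASCII-only, 1:1 mapping, no titlecase
   subtleties. output_cap is taken to be the output buffer size in code
   units; the function returns the number of code units the transformed
   string needs (see pcre2api for the authoritative contract). */
static PCRE2_SIZE my_case_callout(PCRE2_SPTR input, PCRE2_SIZE input_len,
    PCRE2_UCHAR *output, PCRE2_SIZE output_cap, int to_case, void *data)
{
    PCRE2_SIZE i;
    (void)data;
    for (i = 0; i < input_len && i < output_cap; i++) {
        int c = input[i];
        if (to_case == PCRE2_SUBSTITUTE_CASE_UPPER ||
            (to_case == PCRE2_SUBSTITUTE_CASE_TITLE_FIRST && i == 0))
            output[i] = (PCRE2_UCHAR)toupper(c);
        else
            output[i] = (PCRE2_UCHAR)tolower(c);
    }
    return input_len;
}

/* Registration mirrors the other match-context callouts. */
static void install_case_callout(pcre2_match_context *mcontext)
{
    pcre2_set_substitute_case_callout(mcontext, my_case_callout, NULL);
}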
/* Types for code units in patterns and subject strings. */
@ -613,7 +662,9 @@ PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
pcre2_set_parens_nest_limit(pcre2_compile_context *, uint32_t); \
PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
pcre2_set_compile_recursion_guard(pcre2_compile_context *, \
int (*)(uint32_t, void *), void *);
int (*)(uint32_t, void *), void *); \
PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
pcre2_set_optimize(pcre2_compile_context *, uint32_t);
#define PCRE2_MATCH_CONTEXT_FUNCTIONS \
PCRE2_EXP_DECL pcre2_match_context *PCRE2_CALL_CONVENTION \
@ -628,6 +679,11 @@ PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
pcre2_set_substitute_callout(pcre2_match_context *, \
int (*)(pcre2_substitute_callout_block *, void *), void *); \
PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
pcre2_set_substitute_case_callout(pcre2_match_context *, \
PCRE2_SIZE (*)(PCRE2_SPTR, PCRE2_SIZE, PCRE2_UCHAR *, PCRE2_SIZE, int, \
void *), \
void *); \
PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
pcre2_set_depth_limit(pcre2_match_context *, uint32_t); \
PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
@ -740,6 +796,7 @@ PCRE2_EXP_DECL void PCRE2_CALL_CONVENTION \
PCRE2_EXP_DECL int PCRE2_CALL_CONVENTION \
pcre2_substring_list_get(pcre2_match_data *, PCRE2_UCHAR ***, PCRE2_SIZE **);
/* Functions for serializing / deserializing compiled patterns. */
#define PCRE2_SERIALIZE_FUNCTIONS \
@ -907,7 +964,9 @@ pcre2_compile are called by application code. */
#define pcre2_set_newline PCRE2_SUFFIX(pcre2_set_newline_)
#define pcre2_set_parens_nest_limit PCRE2_SUFFIX(pcre2_set_parens_nest_limit_)
#define pcre2_set_offset_limit PCRE2_SUFFIX(pcre2_set_offset_limit_)
#define pcre2_set_optimize PCRE2_SUFFIX(pcre2_set_optimize_)
#define pcre2_set_substitute_callout PCRE2_SUFFIX(pcre2_set_substitute_callout_)
#define pcre2_set_substitute_case_callout PCRE2_SUFFIX(pcre2_set_substitute_case_callout_)
#define pcre2_substitute PCRE2_SUFFIX(pcre2_substitute_)
#define pcre2_substring_copy_byname PCRE2_SUFFIX(pcre2_substring_copy_byname_)
#define pcre2_substring_copy_bynumber PCRE2_SUFFIX(pcre2_substring_copy_bynumber_)


@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2022 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -49,6 +49,10 @@ repeats into possessive repeats where possible. */
#include "pcre2_internal.h"
/* This macro represents the maximum size of list[], which is used to keep
track of UCD info in several places; it must be kept in sync with the
value used by GenerateUcd.py. */
#define MAX_LIST 8
/*************************************************
* Tables for auto-possessification *
@ -64,7 +68,7 @@ The Unicode property types (\P and \p) have to be present to fill out the table
because of what their opcode values are, but the table values should always be
zero because property types are handled separately in the code. The last four
columns apply to items that cannot be repeated, so there is no need to have
rows for them. Note that OP_DIGIT etc. are generated only when PCRE_UCP is
rows for them. Note that OP_DIGIT etc. are generated only when PCRE2_UCP is
*not* set. When it is set, \d etc. are converted into OP_(NOT_)PROP codes. */
#define APTROWS (LAST_AUTOTAB_LEFT_OP - FIRST_AUTOTAB_OP + 1)
@ -123,21 +127,21 @@ opcode is used to select the column. The values are as follows:
*/
static const uint8_t propposstab[PT_TABSIZE][PT_TABSIZE] = {
/* ANY LAMP GC PC SC SCX ALNUM SPACE PXSPACE WORD CLIST UCNC BIDICL BOOL */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_ANY */
{ 0, 3, 0, 0, 0, 0, 3, 1, 1, 0, 0, 0, 0, 0 }, /* PT_LAMP */
{ 0, 0, 2, 4, 0, 0, 9, 10, 10, 11, 0, 0, 0, 0 }, /* PT_GC */
{ 0, 0, 5, 2, 0, 0, 15, 16, 16, 17, 0, 0, 0, 0 }, /* PT_PC */
{ 0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_SC */
{ 0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_SCX */
{ 0, 3, 6, 12, 0, 0, 3, 1, 1, 0, 0, 0, 0, 0 }, /* PT_ALNUM */
{ 0, 1, 7, 13, 0, 0, 1, 3, 3, 1, 0, 0, 0, 0 }, /* PT_SPACE */
{ 0, 1, 7, 13, 0, 0, 1, 3, 3, 1, 0, 0, 0, 0 }, /* PT_PXSPACE */
{ 0, 0, 8, 14, 0, 0, 0, 1, 1, 3, 0, 0, 0, 0 }, /* PT_WORD */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_CLIST */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0 }, /* PT_UCNC */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_BIDICL */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } /* PT_BOOL */
/* LAMP GC PC SC SCX ALNUM SPACE PXSPACE WORD CLIST UCNC BIDICL BOOL */
{ 3, 0, 0, 0, 0, 3, 1, 1, 0, 0, 0, 0, 0 }, /* PT_LAMP */
{ 0, 2, 4, 0, 0, 9, 10, 10, 11, 0, 0, 0, 0 }, /* PT_GC */
{ 0, 5, 2, 0, 0, 15, 16, 16, 17, 0, 0, 0, 0 }, /* PT_PC */
{ 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_SC */
{ 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_SCX */
{ 3, 6, 12, 0, 0, 3, 1, 1, 0, 0, 0, 0, 0 }, /* PT_ALNUM */
{ 1, 7, 13, 0, 0, 1, 3, 3, 1, 0, 0, 0, 0 }, /* PT_SPACE */
{ 1, 7, 13, 0, 0, 1, 3, 3, 1, 0, 0, 0, 0 }, /* PT_PXSPACE */
{ 0, 8, 14, 0, 0, 0, 1, 1, 3, 0, 0, 0, 0 }, /* PT_WORD */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_CLIST */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0 }, /* PT_UCNC */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }, /* PT_BIDICL */
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } /* PT_BOOL */
/* PT_ANY does not need a record. */
};
/* This table is used to check whether auto-possessification is possible
@ -199,7 +203,7 @@ static BOOL
check_char_prop(uint32_t c, unsigned int ptype, unsigned int pdata,
BOOL negated)
{
BOOL ok;
BOOL ok, rc;
const uint32_t *p;
const ucd_record *prop = GET_UCD(c);
@ -240,12 +244,13 @@ switch(ptype)
{
HSPACE_CASES:
VSPACE_CASES:
return negated;
rc = negated;
break;
default:
return (PRIV(ucp_gentype)[prop->chartype] == ucp_Z) == negated;
rc = (PRIV(ucp_gentype)[prop->chartype] == ucp_Z) == negated;
}
break; /* Control never reaches here */
return rc;
case PT_WORD:
return (PRIV(ucp_gentype)[prop->chartype] == ucp_L ||
@ -259,7 +264,8 @@ switch(ptype)
if (c < *p) return !negated;
if (c == *p++) return negated;
}
break; /* Control never reaches here */
PCRE2_DEBUG_UNREACHABLE(); /* Control should never reach here */
break;
/* Haven't yet thought these through. */
@ -328,6 +334,7 @@ get_chr_property_list(PCRE2_SPTR code, BOOL utf, BOOL ucp, const uint8_t *fcc,
PCRE2_UCHAR c = *code;
PCRE2_UCHAR base;
PCRE2_SPTR end;
PCRE2_SPTR class_end;
uint32_t chr;
#ifdef SUPPORT_UNICODE
@ -450,10 +457,12 @@ switch(c)
code += 2;
do {
if (clist_dest >= list + 8)
if (clist_dest >= list + MAX_LIST)
{
/* Early return if there is not enough space. This should never
happen, since all clists are shorter than 5 character now. */
/* Early return if there is not enough space. Reaching this point means
GenerateUcd.py has generated a list with more than 5 characters, and
that will need to be addressed going forward. */
PCRE2_DEBUG_UNREACHABLE(); /* Remove if it ever triggers */
list[2] = code[0];
list[3] = code[1];
return code;
@ -473,11 +482,13 @@ switch(c)
case OP_CLASS:
#ifdef SUPPORT_WIDE_CHARS
case OP_XCLASS:
if (c == OP_XCLASS)
case OP_ECLASS:
if (c == OP_XCLASS || c == OP_ECLASS)
end = code + GET(code, 0) - 1;
else
#endif
end = code + 32 / sizeof(PCRE2_UCHAR);
class_end = end;
switch(*end)
{
@ -505,6 +516,7 @@ switch(c)
break;
}
list[2] = (uint32_t)(end - code);
list[3] = (uint32_t)(end - class_end);
return end;
}
@ -537,7 +549,7 @@ compare_opcodes(PCRE2_SPTR code, BOOL utf, BOOL ucp, const compile_block *cb,
const uint32_t *base_list, PCRE2_SPTR base_end, int *rec_limit)
{
PCRE2_UCHAR c;
uint32_t list[8];
uint32_t list[MAX_LIST];
const uint32_t *chr_ptr;
const uint32_t *ochr_ptr;
const uint32_t *list_ptr;
@ -581,7 +593,7 @@ for(;;)
continue;
}
/* At the end of a branch, skip to the end of the group. */
/* At the end of a branch, skip to the end of the group and process it. */
if (c == OP_ALT)
{
@ -638,19 +650,29 @@ for(;;)
return FALSE;
break;
/* Atomic sub-patterns and assertions can always auto-possessify their
last iterator except for variable length lookbehinds. However, if the
group was entered as a result of checking a previous iterator, this is
not possible. */
/* Atomic sub-patterns and forward assertions can always auto-possessify
their last iterator. However, if the group was entered as a result of
checking a previous iterator, this is not possible. */
case OP_ASSERT:
case OP_ASSERT_NOT:
case OP_ONCE:
return !entered_a_group;
/* Fixed-length lookbehinds can be treated the same way, but variable
length lookbehinds must not auto-possessify their last iterator. Note
that in order to identify a variable length lookbehind we must check
through all branches, because some may be of fixed length. */
case OP_ASSERTBACK:
case OP_ASSERTBACK_NOT:
return (bracode[1+LINK_SIZE] == OP_VREVERSE)? FALSE : !entered_a_group;
do
{
if (bracode[1+LINK_SIZE] == OP_VREVERSE) return FALSE; /* Variable */
bracode += GET(bracode, 1);
}
while (*bracode == OP_ALT);
return !entered_a_group; /* Not variable length */
/* Non-atomic assertions - don't possessify last iterator. This needs
more thought. */
@ -748,12 +770,12 @@ for(;;)
if (base_list[0] == OP_CLASS)
#endif
{
set1 = (uint8_t *)(base_end - base_list[2]);
set1 = (const uint8_t *)(base_end - base_list[2]);
list_ptr = list;
}
else
{
set1 = (uint8_t *)(code - list[2]);
set1 = (const uint8_t *)(code - list[2]);
list_ptr = base_list;
}
@ -762,13 +784,14 @@ for(;;)
{
case OP_CLASS:
case OP_NCLASS:
set2 = (uint8_t *)
set2 = (const uint8_t *)
((list_ptr == list ? code : base_end) - list_ptr[2]);
break;
#ifdef SUPPORT_WIDE_CHARS
case OP_XCLASS:
xclass_flags = (list_ptr == list ? code : base_end) - list_ptr[2] + LINK_SIZE;
xclass_flags = (list_ptr == list ? code : base_end) -
list_ptr[2] + LINK_SIZE;
if ((*xclass_flags & XCL_HASPROP) != 0) return FALSE;
if ((*xclass_flags & XCL_MAP) == 0)
{
@ -777,7 +800,7 @@ for(;;)
/* Might be an empty repeat. */
continue;
}
set2 = (uint8_t *)(xclass_flags + 1);
set2 = (const uint8_t *)(xclass_flags + 1);
break;
#endif
@ -785,21 +808,21 @@ for(;;)
invert_bits = TRUE;
/* Fall through */
case OP_DIGIT:
set2 = (uint8_t *)(cb->cbits + cbit_digit);
set2 = (const uint8_t *)(cb->cbits + cbit_digit);
break;
case OP_NOT_WHITESPACE:
invert_bits = TRUE;
/* Fall through */
case OP_WHITESPACE:
set2 = (uint8_t *)(cb->cbits + cbit_space);
set2 = (const uint8_t *)(cb->cbits + cbit_space);
break;
case OP_NOT_WORDCHAR:
invert_bits = TRUE;
/* Fall through */
case OP_WORDCHAR:
set2 = (uint8_t *)(cb->cbits + cbit_word);
set2 = (const uint8_t *)(cb->cbits + cbit_word);
break;
default:
@ -1084,7 +1107,7 @@ for(;;)
case OP_CLASS:
if (chr > 255) break;
class_bitset = (uint8_t *)
class_bitset = (const uint8_t *)
((list_ptr == list ? code : base_end) - list_ptr[2]);
if ((class_bitset[chr >> 3] & (1u << (chr & 7))) != 0) return FALSE;
break;
@ -1092,9 +1115,18 @@ for(;;)
#ifdef SUPPORT_WIDE_CHARS
case OP_XCLASS:
if (PRIV(xclass)(chr, (list_ptr == list ? code : base_end) -
list_ptr[2] + LINK_SIZE, utf)) return FALSE;
list_ptr[2] + LINK_SIZE, (const uint8_t*)cb->start_code, utf))
return FALSE;
break;
#endif
case OP_ECLASS:
if (PRIV(eclass)(chr,
(list_ptr == list ? code : base_end) - list_ptr[2] + LINK_SIZE,
(list_ptr == list ? code : base_end) - list_ptr[3],
(const uint8_t*)cb->start_code, utf))
return FALSE;
break;
#endif /* SUPPORT_WIDE_CHARS */
default:
return FALSE;
@ -1109,8 +1141,8 @@ for(;;)
if (list[1] == 0) return TRUE;
}
/* Control never reaches here. There used to be a fail-safe return FALSE; here,
but some compilers complain about an unreachable statement. */
PCRE2_DEBUG_UNREACHABLE(); /* Control should never reach here */
return FALSE; /* Avoid compiler warnings */
}
@ -1140,7 +1172,7 @@ PRIV(auto_possessify)(PCRE2_UCHAR *code, const compile_block *cb)
PCRE2_UCHAR c;
PCRE2_SPTR end;
PCRE2_UCHAR *repeat_opcode;
uint32_t list[8];
uint32_t list[MAX_LIST];
int rec_limit = 1000; /* Was 10,000 but clang+ASAN uses a lot of stack. */
BOOL utf = (cb->external_options & PCRE2_UTF) != 0;
BOOL ucp = (cb->external_options & PCRE2_UCP) != 0;
@ -1149,7 +1181,11 @@ for (;;)
{
c = *code;
if (c >= OP_TABLE_LENGTH) return -1; /* Something gone wrong */
if (c >= OP_TABLE_LENGTH)
{
PCRE2_DEBUG_UNREACHABLE();
return -1; /* Something gone wrong */
}
if (c >= OP_STAR && c <= OP_TYPEPOSUPTO)
{
@ -1198,10 +1234,14 @@ for (;;)
}
c = *code;
}
else if (c == OP_CLASS || c == OP_NCLASS || c == OP_XCLASS)
else if (c == OP_CLASS || c == OP_NCLASS
#ifdef SUPPORT_WIDE_CHARS
|| c == OP_XCLASS || c == OP_ECLASS
#endif
)
{
#ifdef SUPPORT_WIDE_CHARS
if (c == OP_XCLASS)
if (c == OP_XCLASS || c == OP_ECLASS)
repeat_opcode = code + GET(code, 1);
else
#endif
@ -1211,7 +1251,7 @@ for (;;)
if (c >= OP_CRSTAR && c <= OP_CRMINRANGE)
{
/* The return from get_chr_property_list() will never be NULL when
*code (aka c) is one of the three class opcodes. However, gcc with
*code (aka c) is one of the four class opcodes. However, gcc with
-fanalyzer notes that a NULL return is possible, and grumbles. Hence we
put in a check. */
@ -1279,6 +1319,7 @@ for (;;)
#ifdef SUPPORT_WIDE_CHARS
case OP_XCLASS:
case OP_ECLASS:
code += GET(code, 1);
break;
#endif

View File

@ -74,9 +74,7 @@ if (__builtin_mul_overflow(a, b, &m)) return TRUE;
#else
INT64_OR_DOUBLE m;
#ifdef PCRE2_DEBUG
if (a < 0 || b < 0) abort();
#endif
PCRE2_ASSERT(a >= 0 && b >= 0);
m = (INT64_OR_DOUBLE)a * (INT64_OR_DOUBLE)b;
@ -93,4 +91,4 @@ if (m > PCRE2_SIZE_MAX) return TRUE;
return FALSE;
}
/* End of pcre_chkdint.c */
/* End of pcre2_chkdint.c */

File diff suppressed because it is too large

280
src/3rdparty/pcre2/src/pcre2_compile.h vendored Normal file
View File

@ -0,0 +1,280 @@
/*************************************************
* Perl-Compatible Regular Expressions *
*************************************************/
/* PCRE2 is a library of functions to support regular expressions whose syntax
and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the University of Cambridge nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
-----------------------------------------------------------------------------
*/
#ifndef PCRE2_COMPILE_H_IDEMPOTENT_GUARD
#define PCRE2_COMPILE_H_IDEMPOTENT_GUARD
#include "pcre2_internal.h"
/* Compile time error code numbers. They are given names so that they can more
easily be tracked. When a new number is added, the tables called eint1 and
eint2 in pcre2posix.c may need to be updated, and a new error text must be
added to compile_error_texts in pcre2_error.c. Also, the error codes in
pcre2.h.in must be updated - their values are exactly 100 greater than these
values. */
enum { ERR0 = COMPILE_ERROR_BASE,
ERR1, ERR2, ERR3, ERR4, ERR5, ERR6, ERR7, ERR8, ERR9, ERR10,
ERR11, ERR12, ERR13, ERR14, ERR15, ERR16, ERR17, ERR18, ERR19, ERR20,
ERR21, ERR22, ERR23, ERR24, ERR25, ERR26, ERR27, ERR28, ERR29, ERR30,
ERR31, ERR32, ERR33, ERR34, ERR35, ERR36, ERR37, ERR38, ERR39, ERR40,
ERR41, ERR42, ERR43, ERR44, ERR45, ERR46, ERR47, ERR48, ERR49, ERR50,
ERR51, ERR52, ERR53, ERR54, ERR55, ERR56, ERR57, ERR58, ERR59, ERR60,
ERR61, ERR62, ERR63, ERR64, ERR65, ERR66, ERR67, ERR68, ERR69, ERR70,
ERR71, ERR72, ERR73, ERR74, ERR75, ERR76, ERR77, ERR78, ERR79, ERR80,
ERR81, ERR82, ERR83, ERR84, ERR85, ERR86, ERR87, ERR88, ERR89, ERR90,
ERR91, ERR92, ERR93, ERR94, ERR95, ERR96, ERR97, ERR98, ERR99, ERR100,
ERR101,ERR102,ERR103,ERR104,ERR105,ERR106,ERR107,ERR108,ERR109,ERR110,
ERR111,ERR112,ERR113,ERR114,ERR115,ERR116 };
/* Code values for parsed patterns, which are stored in a vector of 32-bit
unsigned ints. Values less than META_END are literal data values. The coding
for identifying the item is in the top 16-bits, leaving 16 bits for the
additional data that some of them need. The META_CODE, META_DATA, and META_DIFF
macros are used to manipulate parsed pattern elements.
NOTE: When these definitions are changed, the table of extra lengths for each
code (meta_extra_lengths) must be updated to remain in step. */
#define META_END 0x80000000u /* End of pattern */
#define META_ALT 0x80010000u /* alternation */
#define META_ATOMIC 0x80020000u /* atomic group */
#define META_BACKREF 0x80030000u /* Back ref */
#define META_BACKREF_BYNAME 0x80040000u /* \k'name' */
#define META_BIGVALUE 0x80050000u /* Next is a literal > META_END */
#define META_CALLOUT_NUMBER 0x80060000u /* (?C with numerical argument */
#define META_CALLOUT_STRING 0x80070000u /* (?C with string argument */
#define META_CAPTURE 0x80080000u /* Capturing parenthesis */
#define META_CIRCUMFLEX 0x80090000u /* ^ metacharacter */
#define META_CLASS 0x800a0000u /* start non-empty class */
#define META_CLASS_EMPTY 0x800b0000u /* empty class */
#define META_CLASS_EMPTY_NOT 0x800c0000u /* negative empty class */
#define META_CLASS_END 0x800d0000u /* end of non-empty class */
#define META_CLASS_NOT 0x800e0000u /* start non-empty negative class */
#define META_COND_ASSERT 0x800f0000u /* (?(?assertion)... */
#define META_COND_DEFINE 0x80100000u /* (?(DEFINE)... */
#define META_COND_NAME 0x80110000u /* (?(<name>)... */
#define META_COND_NUMBER 0x80120000u /* (?(digits)... */
#define META_COND_RNAME 0x80130000u /* (?(R&name)... */
#define META_COND_RNUMBER 0x80140000u /* (?(Rdigits)... */
#define META_COND_VERSION 0x80150000u /* (?(VERSION<op>x.y)... */
#define META_OFFSET 0x80160000u /* Setting offset for various
META codes (e.g. META_SCS_NAME) */
#define META_SCS 0x80170000u /* (*scan_substring:... */
#define META_SCS_NAME 0x80180000u /* Next <name> of scan_substring */
#define META_SCS_NUMBER 0x80190000u /* Next digits of scan_substring */
#define META_DOLLAR 0x801a0000u /* $ metacharacter */
#define META_DOT 0x801b0000u /* . metacharacter */
#define META_ESCAPE 0x801c0000u /* \d and friends */
#define META_KET 0x801d0000u /* closing parenthesis */
#define META_NOCAPTURE 0x801e0000u /* no capture parens */
#define META_OPTIONS 0x801f0000u /* (?i) and friends */
#define META_POSIX 0x80200000u /* POSIX class item */
#define META_POSIX_NEG 0x80210000u /* negative POSIX class item */
#define META_RANGE_ESCAPED 0x80220000u /* range with at least one escape */
#define META_RANGE_LITERAL 0x80230000u /* range defined literally */
#define META_RECURSE 0x80240000u /* Recursion */
#define META_RECURSE_BYNAME 0x80250000u /* (?&name) */
#define META_SCRIPT_RUN 0x80260000u /* (*script_run:...) */
/* These must be kept together to make it easy to check that an assertion
is present where expected in a conditional group. */
#define META_LOOKAHEAD 0x80270000u /* (?= */
#define META_LOOKAHEADNOT 0x80280000u /* (?! */
#define META_LOOKBEHIND 0x80290000u /* (?<= */
#define META_LOOKBEHINDNOT 0x802a0000u /* (?<! */
/* These cannot be conditions */
#define META_LOOKAHEAD_NA 0x802b0000u /* (*napla: */
#define META_LOOKBEHIND_NA 0x802c0000u /* (*naplb: */
/* These must be kept in this order, with consecutive values, and the _ARG
versions of COMMIT, PRUNE, SKIP, and THEN immediately after their non-argument
versions. */
#define META_MARK 0x802d0000u /* (*MARK) */
#define META_ACCEPT 0x802e0000u /* (*ACCEPT) */
#define META_FAIL 0x802f0000u /* (*FAIL) */
#define META_COMMIT 0x80300000u /* These */
#define META_COMMIT_ARG 0x80310000u /* pairs */
#define META_PRUNE 0x80320000u /* must */
#define META_PRUNE_ARG 0x80330000u /* be */
#define META_SKIP 0x80340000u /* kept */
#define META_SKIP_ARG 0x80350000u /* in */
#define META_THEN 0x80360000u /* this */
#define META_THEN_ARG 0x80370000u /* order */
/* These must be kept in groups of adjacent 3 values, and all together. */
#define META_ASTERISK 0x80380000u /* * */
#define META_ASTERISK_PLUS 0x80390000u /* *+ */
#define META_ASTERISK_QUERY 0x803a0000u /* *? */
#define META_PLUS 0x803b0000u /* + */
#define META_PLUS_PLUS 0x803c0000u /* ++ */
#define META_PLUS_QUERY 0x803d0000u /* +? */
#define META_QUERY 0x803e0000u /* ? */
#define META_QUERY_PLUS 0x803f0000u /* ?+ */
#define META_QUERY_QUERY 0x80400000u /* ?? */
#define META_MINMAX 0x80410000u /* {n,m} repeat */
#define META_MINMAX_PLUS 0x80420000u /* {n,m}+ repeat */
#define META_MINMAX_QUERY 0x80430000u /* {n,m}? repeat */
/* These meta codes must be kept in a group, with the OR/SUB/XOR in
this order, and AND/NOT at the start/end. */
#define META_ECLASS_AND 0x80440000u /* && (or &) in a class */
#define META_ECLASS_OR 0x80450000u /* || (or |, +) in a class */
#define META_ECLASS_SUB 0x80460000u /* -- (or -) in a class */
#define META_ECLASS_XOR 0x80470000u /* ~~ (or ^) in a class */
#define META_ECLASS_NOT 0x80480000u /* ! in a class */
/* Convenience aliases. */
#define META_FIRST_QUANTIFIER META_ASTERISK
#define META_LAST_QUANTIFIER META_MINMAX_QUERY
/* This is a special "meta code" that is used only to distinguish (*asr: from
(*sr: in the table of alphabetic assertions. It is never stored in the parsed
pattern because (*asr: is turned into (*sr:(*atomic: at that stage. There is
therefore no need for it to have a length entry, so use a high value. */
#define META_ATOMIC_SCRIPT_RUN 0x8fff0000u
/* Macros for manipulating elements of the parsed pattern vector. */
#define META_CODE(x) (x & 0xffff0000u)
#define META_DATA(x) (x & 0x0000ffffu)
#define META_DIFF(x,y) ((x-y)>>16)
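As a quick illustration (not part of the upstream header): a parsed element packs its identifying code into the top 16 bits and item-specific data into the low 16 bits, so the three macros above decompose it directly. Pairing META_CAPTURE with a group number below is an assumed example, not a quote from the compiler.

/* Illustrative sketch only - not upstream code. */
uint32_t element = META_CAPTURE | 3u;                /* hypothetical element: capture group 3 */
uint32_t code    = META_CODE(element);               /* == META_CAPTURE */
uint32_t data    = META_DATA(element);               /* == 3 */
uint32_t steps   = META_DIFF(META_ATOMIC, META_ALT); /* == 1: distance in units of 0x10000 */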
/* Extended class management flags. */
#define CLASS_IS_ECLASS 0x1
/* Macro for the highest character value. */
#if PCRE2_CODE_UNIT_WIDTH == 8
#define MAX_UCHAR_VALUE 0xffu
#elif PCRE2_CODE_UNIT_WIDTH == 16
#define MAX_UCHAR_VALUE 0xffffu
#else
#define MAX_UCHAR_VALUE 0xffffffffu
#endif
#define GET_MAX_CHAR_VALUE(utf) \
((utf) ? MAX_UTF_CODE_POINT : MAX_UCHAR_VALUE)
/* Macro for setting individual bits in class bitmaps. */
#define SETBIT(a,b) a[(b) >> 3] |= (uint8_t)(1u << ((b) & 0x7))
/* Macro for 8 bit specific checks. */
#if PCRE2_CODE_UNIT_WIDTH == 8
#define SELECT_VALUE8(value8, value) (value8)
#else
#define SELECT_VALUE8(value8, value) (value)
#endif
/* Macro for aligning data. */
#define CLIST_ALIGN_TO(base, align) \
((base + ((size_t)(align) - 1)) & ~((size_t)(align) - 1))
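A small worked check of the alignment macro (illustrative, not from the sources): with align = 8, the expression rounds base up to the next multiple of 8.

/* Illustrative only: (13 + 7) & ~7 == 16, and (16 + 7) & ~7 == 16. */
size_t rounded_up      = CLIST_ALIGN_TO((size_t)13, 8);  /* 16 */
size_t already_aligned = CLIST_ALIGN_TO((size_t)16, 8);  /* 16 */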
/* Structure for holding information about an OP_ECLASS internal operand.
An "operand" here could be just a single OP_[X]CLASS, or it could be some
complex expression; but it's some sequence of ECL_* codes which pushes one
value to the stack. */
typedef struct {
/* The position of the operand - or NULL if (lengthptr != NULL). */
PCRE2_UCHAR *code_start;
PCRE2_SIZE length;
/* The operand's type if it is a single code (ECL_XCLASS, ECL_ANY, ECL_NONE);
otherwise zero if the operand is not atomic. */
uint8_t op_single_type;
/* Regardless of whether it's a single code or not, we fully constant-fold
the bitmap for code points < 256. */
class_bits_storage bits;
} eclass_op_info;
/* Macros for the definitions below, to prevent name collisions. */
#define _pcre2_posix_class_maps PCRE2_SUFFIX(_pcre2_posix_class_maps)
#define _pcre2_update_classbits PCRE2_SUFFIX(_pcre2_update_classbits_)
#define _pcre2_compile_class_nested PCRE2_SUFFIX(_pcre2_compile_class_nested_)
#define _pcre2_compile_class_not_nested PCRE2_SUFFIX(_pcre2_compile_class_not_nested_)
/* Indices of the POSIX classes in posix_names, posix_name_lengths,
posix_class_maps, and posix_substitutes. They must be kept in sync. */
#define PC_DIGIT 7
#define PC_GRAPH 8
#define PC_PRINT 9
#define PC_PUNCT 10
#define PC_XDIGIT 13
extern const int PRIV(posix_class_maps)[];
/* Set bits in classbits according to the property type */
void PRIV(update_classbits)(uint32_t ptype, uint32_t pdata, BOOL negated,
uint8_t *classbits);
/* Compile the META codes from start_ptr...end_ptr, writing a single OP_CLASS,
OP_NCLASS, OP_XCLASS, or OP_ALLANY into pcode. */
uint32_t *PRIV(compile_class_not_nested)(uint32_t options, uint32_t xoptions,
uint32_t *start_ptr, PCRE2_UCHAR **pcode, BOOL negate_class, BOOL* has_bitmap,
int *errorcodeptr, compile_block *cb, PCRE2_SIZE *lengthptr);
/* Compile the META codes in pptr into opcodes written to pcode. The pptr must
start at a META_CLASS or META_CLASS_NOT.
The pptr will be left pointing at the matching META_CLASS_END. */
BOOL PRIV(compile_class_nested)(uint32_t options, uint32_t xoptions,
uint32_t **pptr, PCRE2_UCHAR **pcode, int *errorcodeptr,
compile_block *cb, PCRE2_SIZE *lengthptr);
#endif /* PCRE2_COMPILE_H_IDEMPOTENT_GUARD */
/* End of pcre2_compile.h */

File diff suppressed because it is too large

View File

@ -224,8 +224,8 @@ switch (what)
XSTRING when PCRE2_PRERELEASE is not empty, an unwanted space is inserted.
There are problems using an "obvious" approach like this:
XSTRING(PCRE2_MAJOR) "." XSTRING(PCRE_MINOR)
XSTRING(PCRE2_PRERELEASE) " " XSTRING(PCRE_DATE)
XSTRING(PCRE2_MAJOR) "." XSTRING(PCRE2_MINOR)
XSTRING(PCRE2_PRERELEASE) " " XSTRING(PCRE2_DATE)
because, when PCRE2_PRERELEASE is empty, this leads to an attempted expansion
of STRING(). The C standard states: "If (before argument substitution) any

View File

@ -130,7 +130,7 @@ return gcontext;
/* A default compile context is set up to save having to initialize at run time
when no context is supplied to the compile function. */
const pcre2_compile_context PRIV(default_compile_context) = {
pcre2_compile_context PRIV(default_compile_context) = {
{ default_malloc, default_free, NULL }, /* Default memory handling */
NULL, /* Stack guard */
NULL, /* Stack guard data */
@ -141,7 +141,8 @@ const pcre2_compile_context PRIV(default_compile_context) = {
NEWLINE_DEFAULT, /* Newline convention */
PARENS_NEST_LIMIT, /* As it says */
0, /* Extra options */
MAX_VARLOOKBEHIND /* As it says */
MAX_VARLOOKBEHIND, /* As it says */
PCRE2_OPTIMIZATION_ALL /* All optimizations enabled */
};
/* The create function copies the default into the new memory, but must
@ -163,7 +164,7 @@ return ccontext;
/* A default match context is set up to save having to initialize at run time
when no context is supplied to a match function. */
const pcre2_match_context PRIV(default_match_context) = {
pcre2_match_context PRIV(default_match_context) = {
{ default_malloc, default_free, NULL },
#ifdef SUPPORT_JIT
NULL, /* JIT callback */
@ -173,6 +174,8 @@ const pcre2_match_context PRIV(default_match_context) = {
NULL, /* Callout data */
NULL, /* Substitute callout function */
NULL, /* Substitute callout data */
NULL, /* Substitute case callout function */
NULL, /* Substitute case callout data */
PCRE2_UNSET, /* Offset limit */
HEAP_LIMIT,
MATCH_LIMIT,
@ -197,7 +200,7 @@ return mcontext;
/* A default convert context is set up to save having to initialize at run time
when no context is supplied to the convert function. */
const pcre2_convert_context PRIV(default_convert_context) = {
pcre2_convert_context PRIV(default_convert_context) = {
{ default_malloc, default_free, NULL }, /* Default memory handling */
#ifdef _WIN32
CHAR_BACKSLASH, /* Default path separator */
@ -409,6 +412,38 @@ ccontext->stack_guard_data = user_data;
return 0;
}
PCRE2_EXP_DEFN int PCRE2_CALL_CONVENTION
pcre2_set_optimize(pcre2_compile_context *ccontext, uint32_t directive)
{
if (ccontext == NULL)
return PCRE2_ERROR_NULL;
switch (directive)
{
case PCRE2_OPTIMIZATION_NONE:
ccontext->optimization_flags = 0;
break;
case PCRE2_OPTIMIZATION_FULL:
ccontext->optimization_flags = PCRE2_OPTIMIZATION_ALL;
break;
default:
if (directive >= PCRE2_AUTO_POSSESS && directive <= PCRE2_START_OPTIMIZE_OFF)
{
/* Even directive numbers starting from 64 switch a bit on;
* Odd directive numbers starting from 65 switch a bit off */
if ((directive & 1) != 0)
ccontext->optimization_flags &= ~(1u << ((directive >> 1) - 32));
else
ccontext->optimization_flags |= 1u << ((directive >> 1) - 32);
return 0;
}
return PCRE2_ERROR_BADOPTION;
}
return 0;
}
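Purely as an illustration of the directive scheme described in the comment above (this sketch is not part of the change; it assumes the PCRE2_OPTIMIZATION_* and per-feature directive names exposed in pcre2.h):

/* Illustrative sketch only - not upstream code. */
int errorcode; PCRE2_SIZE erroroffset;
pcre2_compile_context *cc = pcre2_compile_context_create(NULL);
pcre2_set_optimize(cc, PCRE2_OPTIMIZATION_NONE);    /* clear every optimization bit */
pcre2_set_optimize(cc, PCRE2_AUTO_POSSESS);         /* even directive: switch one bit back on */
pcre2_set_optimize(cc, PCRE2_START_OPTIMIZE_OFF);   /* odd directive: switch one bit off */
pcre2_code *re = pcre2_compile((PCRE2_SPTR)"a+b", PCRE2_ZERO_TERMINATED, 0,
                               &errorcode, &erroroffset, cc);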
/* ------------ Match context ------------ */
@ -431,6 +466,17 @@ mcontext->substitute_callout_data = substitute_callout_data;
return 0;
}
PCRE2_EXP_DEFN int PCRE2_CALL_CONVENTION
pcre2_set_substitute_case_callout(pcre2_match_context *mcontext,
PCRE2_SIZE (*substitute_case_callout)(PCRE2_SPTR, PCRE2_SIZE, PCRE2_UCHAR *,
PCRE2_SIZE, int, void *),
void *substitute_case_callout_data)
{
mcontext->substitute_case_callout = substitute_case_callout;
mcontext->substitute_case_callout_data = substitute_case_callout_data;
return 0;
}
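For illustration only (not upstream code; the argument semantics are assumed from the pointer signature above: input string and length, output buffer and capacity, requested case operation, user data): a do-nothing callout could be registered roughly like this.

/* Hypothetical sketch: copies the input unchanged and reports its length.
   Needs <string.h> for memcpy. */
static PCRE2_SIZE my_case_callout(PCRE2_SPTR input, PCRE2_SIZE input_len,
  PCRE2_UCHAR *output, PCRE2_SIZE output_size, int to_case, void *data)
{
(void)to_case; (void)data;
if (input_len <= output_size)
  memcpy(output, input, input_len * sizeof(PCRE2_UCHAR));
return input_len;  /* assumed: the length required in the output buffer */
}

/* ... later, on a match context: */
pcre2_set_substitute_case_callout(mcontext, my_case_callout, NULL);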
PCRE2_EXP_DEFN int PCRE2_CALL_CONVENTION
pcre2_set_heap_limit(pcre2_match_context *mcontext, uint32_t limit)
{

View File

@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2023 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -156,6 +156,7 @@ static const uint8_t coptable[] = {
0, /* CLASS */
0, /* NCLASS */
0, /* XCLASS - variable length */
0, /* ECLASS - variable length */
0, /* REF */
0, /* REFI */
0, /* DNREF */
@ -175,6 +176,7 @@ static const uint8_t coptable[] = {
0, /* Assert behind not */
0, /* NA assert */
0, /* NA assert behind */
0, /* Assert scan substring */
0, /* ONCE */
0, /* SCRIPT_RUN */
0, 0, 0, 0, 0, /* BRA, BRAPOS, CBRA, CBRAPOS, COND */
@ -188,7 +190,7 @@ static const uint8_t coptable[] = {
0, 0, /* COMMIT, COMMIT_ARG */
0, 0, 0, /* FAIL, ACCEPT, ASSERT_ACCEPT */
0, 0, 0, /* CLOSE, SKIPZERO, DEFINE */
0, 0 /* \B and \b in UCP mode */
0, 0, /* \B and \b in UCP mode */
};
/* This table identifies those opcodes that inspect a character. It is used to
@ -234,6 +236,7 @@ static const uint8_t poptable[] = {
1, /* CLASS */
1, /* NCLASS */
1, /* XCLASS - variable length */
1, /* ECLASS - variable length */
0, /* REF */
0, /* REFI */
0, /* DNREF */
@ -253,6 +256,7 @@ static const uint8_t poptable[] = {
0, /* Assert behind not */
0, /* NA assert */
0, /* NA assert behind */
0, /* Assert scan substring */
0, /* ONCE */
0, /* SCRIPT_RUN */
0, 0, 0, 0, 0, /* BRA, BRAPOS, CBRA, CBRAPOS, COND */
@ -266,9 +270,13 @@ static const uint8_t poptable[] = {
0, 0, /* COMMIT, COMMIT_ARG */
0, 0, 0, /* FAIL, ACCEPT, ASSERT_ACCEPT */
0, 0, 0, /* CLOSE, SKIPZERO, DEFINE */
1, 1 /* \B and \b in UCP mode */
1, 1, /* \B and \b in UCP mode */
};
/* Compile-time check that these tables have the correct size. */
STATIC_ASSERT(sizeof(coptable) == OP_TABLE_LENGTH, coptable);
STATIC_ASSERT(sizeof(poptable) == OP_TABLE_LENGTH, poptable);
/* These 2 tables allow for compact code for testing for \D, \d, \S, \s, \W,
and \w */
@ -695,7 +703,6 @@ for (;;)
int i, j;
int clen, dlen;
uint32_t c, d;
int forced_fail = 0;
BOOL partial_newline = FALSE;
BOOL could_continue = reset_could_continue;
reset_could_continue = FALSE;
@ -841,19 +848,6 @@ for (;;)
switch (codevalue)
{
/* ========================================================================== */
/* These cases are never obeyed. This is a fudge that causes a compile-
time error if the vectors coptable or poptable, which are indexed by
opcode, are not the correct length. It seems to be the only way to do
such a check at compile time, as the sizeof() operator does not work
in the C preprocessor. */
case OP_TABLE_LENGTH:
case OP_TABLE_LENGTH +
((sizeof(coptable) == OP_TABLE_LENGTH) &&
(sizeof(poptable) == OP_TABLE_LENGTH)):
return 0;
/* ========================================================================== */
/* Reached a closing bracket. If not at the end of the pattern, carry
on with the next opcode. For repeating opcodes, also add the repeat
@ -1179,10 +1173,6 @@ for (;;)
const ucd_record * prop = GET_UCD(c);
switch(code[1])
{
case PT_ANY:
OK = TRUE;
break;
case PT_LAMP:
chartype = prop->chartype;
OK = chartype == ucp_Lu || chartype == ucp_Ll ||
@ -1462,10 +1452,6 @@ for (;;)
const ucd_record * prop = GET_UCD(c);
switch(code[2])
{
case PT_ANY:
OK = TRUE;
break;
case PT_LAMP:
chartype = prop->chartype;
OK = chartype == ucp_Lu || chartype == ucp_Ll || chartype == ucp_Lt;
@ -1727,10 +1713,6 @@ for (;;)
const ucd_record * prop = GET_UCD(c);
switch(code[2])
{
case PT_ANY:
OK = TRUE;
break;
case PT_LAMP:
chartype = prop->chartype;
OK = chartype == ucp_Lu || chartype == ucp_Ll || chartype == ucp_Lt;
@ -2017,10 +1999,6 @@ for (;;)
const ucd_record * prop = GET_UCD(c);
switch(code[1 + IMM2_SIZE + 1])
{
case PT_ANY:
OK = TRUE;
break;
case PT_LAMP:
chartype = prop->chartype;
OK = chartype == ucp_Lu || chartype == ucp_Ll || chartype == ucp_Lt;
@ -2663,35 +2641,54 @@ for (;;)
case OP_CLASS:
case OP_NCLASS:
#ifdef SUPPORT_WIDE_CHARS
case OP_XCLASS:
case OP_ECLASS:
#endif
{
BOOL isinclass = FALSE;
int next_state_offset;
PCRE2_SPTR ecode;
#ifdef SUPPORT_WIDE_CHARS
/* An extended class may have a table or a list of single characters,
ranges, or both, and it may be positive or negative. There's a
function that sorts all this out. */
if (codevalue == OP_XCLASS)
{
ecode = code + GET(code, 1);
if (clen > 0)
isinclass = PRIV(xclass)(c, code + 1 + LINK_SIZE,
(const uint8_t*)mb->start_code, utf);
}
/* A nested set-based class has internal opcodes for performing
set operations. */
else if (codevalue == OP_ECLASS)
{
ecode = code + GET(code, 1);
if (clen > 0)
isinclass = PRIV(eclass)(c, code + 1 + LINK_SIZE, ecode,
(const uint8_t*)mb->start_code, utf);
}
else
#endif /* SUPPORT_WIDE_CHARS */
/* For a simple class, there is always just a 32-byte table, and we
can set isinclass from it. */
if (codevalue != OP_XCLASS)
{
ecode = code + 1 + (32 / sizeof(PCRE2_UCHAR));
if (clen > 0)
{
isinclass = (c > 255)? (codevalue == OP_NCLASS) :
((((uint8_t *)(code + 1))[c/8] & (1u << (c&7))) != 0);
((((const uint8_t *)(code + 1))[c/8] & (1u << (c&7))) != 0);
}
}
/* An extended class may have a table or a list of single characters,
ranges, or both, and it may be positive or negative. There's a
function that sorts all this out. */
else
{
ecode = code + GET(code, 1);
if (clen > 0) isinclass = PRIV(xclass)(c, code + 1 + LINK_SIZE, utf);
}
/* At this point, isinclass is set for all kinds of class, and ecode
points to the byte after the end of the class. If there is a
quantifier, this is where it will be. */
@ -2784,7 +2781,6 @@ for (;;)
though the other "backtracking verbs" are not supported. */
case OP_FAIL:
forced_fail++; /* Count FAILs for multiple states */
break;
case OP_ASSERT:
@ -3058,7 +3054,7 @@ for (;;)
if (codevalue == OP_BRAPOSZERO)
{
allow_zero = TRUE;
codevalue = *(++code); /* Codevalue will be one of above BRAs */
++code; /* The following opcode will be one of the above BRAs */
}
else allow_zero = FALSE;
@ -3271,18 +3267,12 @@ for (;;)
matches that we are going to find. If partial matching has been requested,
check for appropriate conditions.
The "forced_ fail" variable counts the number of (*F) encountered for the
character. If it is equal to the original active_count (saved in
workspace[1]) it means that (*F) was found on every active state. In this
case we don't want to give a partial match.
The "could_continue" variable is true if a state could have continued but
for the fact that the end of the subject was reached. */
if (new_count <= 0)
{
if (could_continue && /* Some could go on, and */
forced_fail != workspace[1] && /* Not all forced fail & */
( /* either... */
(mb->moptions & PCRE2_PARTIAL_HARD) != 0 /* Hard partial */
|| /* or... */
@ -3438,7 +3428,7 @@ if ((re->flags & PCRE2_MODE_MASK) != PCRE2_CODE_UNIT_WIDTH/8)
/* PCRE2_NOTEMPTY and PCRE2_NOTEMPTY_ATSTART are match-time flags in the
options variable for this function. Users of PCRE2 who are not calling the
function directly would like to have a way of setting these flags, in the same
way that they can set pcre2_compile() flags like PCRE2_NO_AUTOPOSSESS with
way that they can set pcre2_compile() flags like PCRE2_NO_AUTO_POSSESS with
constructions like (*NO_AUTOPOSSESS). To enable this, (*NOTEMPTY) and
(*NOTEMPTY_ATSTART) set bits in the pattern's "flag" function which can now be
transferred to the options for this function. The bits are guaranteed to be
@ -3528,8 +3518,7 @@ if (mb->match_limit_depth > re->limit_depth)
if (mb->heap_limit > re->limit_heap)
mb->heap_limit = re->limit_heap;
mb->start_code = (PCRE2_UCHAR *)((uint8_t *)re + sizeof(pcre2_real_code)) +
re->name_count * re->name_entry_size;
mb->start_code = (PCRE2_SPTR)((const uint8_t *)re + re->code_start);
mb->tables = re->tables;
mb->start_subject = subject;
mb->end_subject = end_subject;
@ -3576,7 +3565,9 @@ switch(re->newline_convention)
mb->nltype = NLTYPE_ANYCRLF;
break;
default: return PCRE2_ERROR_INTERNAL;
default:
PCRE2_DEBUG_UNREACHABLE();
return PCRE2_ERROR_INTERNAL;
}
/* Check a UTF string for validity if required. For 8-bit and 16-bit strings,
@ -3705,7 +3696,7 @@ for (;;)
these, for testing and for ensuring that all callouts do actually occur.
The optimizations must also be avoided when restarting a DFA match. */
if ((re->overall_options & PCRE2_NO_START_OPTIMIZE) == 0 &&
if ((re->optimization_flags & PCRE2_OPTIM_START_OPTIMIZE) != 0 &&
(options & PCRE2_DFA_RESTART) == 0)
{
/* If firstline is TRUE, the start of the match is constrained to the first

View File

@ -96,7 +96,7 @@ static const unsigned char compile_error_texts[] =
"length of lookbehind assertion is not limited\0"
"a relative value of zero is not allowed\0"
"conditional subpattern contains more than two branches\0"
"assertion expected after (?( or (?(?C)\0"
"atomic assertion expected after (?( or (?(?C)\0"
"digit expected after (?+ or (?-\0"
/* 30 */
"unknown POSIX class name\0"
@ -161,7 +161,7 @@ static const unsigned char compile_error_texts[] =
"using UCP is disabled by the application\0"
"name is too long in (*MARK), (*PRUNE), (*SKIP), or (*THEN)\0"
"character code point value in \\u.... sequence is too large\0"
"digits missing in \\x{} or \\o{} or \\N{U+}\0"
"digits missing after \\x or in \\x{} or \\o{} or \\N{U+}\0"
"syntax error or number too big in (?(VERSION condition\0"
/* 80 */
"internal error: unknown opcode in auto_possessify()\0"
@ -185,11 +185,29 @@ static const unsigned char compile_error_texts[] =
"(*alpha_assertion) not recognized\0"
"script runs require Unicode support, which this version of PCRE2 does not have\0"
"too many capturing groups (maximum 65535)\0"
"atomic assertion expected after (?( or (?(?C)\0"
"octal digit missing after \\0 (PCRE2_EXTRA_NO_BS0 is set)\0"
"\\K is not allowed in lookarounds (but see PCRE2_EXTRA_ALLOW_LOOKAROUND_BSK)\0"
/* 100 */
"branch too long in variable-length lookbehind assertion\0"
"compiled pattern would be longer than the limit set by the application\0"
"octal value given by \\ddd is greater than \\377 (forbidden by PCRE2_EXTRA_PYTHON_OCTAL)\0"
"using callouts is disabled by the application\0"
"PCRE2_EXTRA_TURKISH_CASING require Unicode (UTF or UCP) mode\0"
/* 105 */
"PCRE2_EXTRA_TURKISH_CASING requires UTF in 8-bit mode\0"
"PCRE2_EXTRA_TURKISH_CASING and PCRE2_EXTRA_CASELESS_RESTRICT are not compatible\0"
"extended character class nesting is too deep\0"
"invalid operator in extended character class\0"
"unexpected operator in extended character class (no preceding operand)\0"
/* 110 */
"expected operand after operator in extended character class\0"
"square brackets needed to clarify operator precedence in extended character class\0"
"missing terminating ] for extended character class (note '[' must be escaped under PCRE2_ALT_EXTENDED_CLASS)\0"
"unexpected expression in extended character class (no preceding operator)\0"
"empty expression in extended character class\0"
/* 115 */
"terminating ] with no following closing parenthesis in (?[...]\0"
"unexpected character in (?[...]) extended character class\0"
;
/* Match-time and UTF error texts are in the same format. */
@ -276,6 +294,10 @@ static const unsigned char match_error_texts[] =
"internal error - duplicate substitution match\0"
"PCRE2_MATCH_INVALID_UTF is not supported for DFA matching\0"
"INTERNAL ERROR: invalid substring offset\0"
"feature is not supported by the JIT compiler\0"
"error performing replacement case transformation\0"
/* 70 */
"replacement too large (longer than PCRE2_SIZE)\0"
;
@ -318,7 +340,7 @@ else if (enumber < 0) /* Match or UTF error */
}
else /* Invalid error number */
{
message = (unsigned char *)"\0"; /* Empty message list */
message = (const unsigned char *)"\0"; /* Empty message list */
n = 1;
}

View File

@ -40,7 +40,7 @@ POSSIBILITY OF SUCH DAMAGE.
/* This module contains an internal function that is used to match a Unicode
extended grapheme sequence. It is used by both pcre2_match() and
pcre2_def_match(). However, it is called only when Unicode support is being
pcre2_dfa_match(). However, it is called only when Unicode support is being
compiled. Nevertheless, we provide a dummy function when there is no Unicode
support, because some compilers do not like functionless source files. */

View File

@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2023 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -76,18 +76,19 @@ for (;;)
if (c == OP_END) return NULL;
/* XCLASS is used for classes that cannot be represented just by a bit map.
This includes negated single high-valued characters. CALLOUT_STR is used for
callouts with string arguments. In both cases the length in the table is
This includes negated single high-valued characters. ECLASS is used for
classes that use set operations internally. CALLOUT_STR is used for
callouts with string arguments. In each case the length in the table is
zero; the actual length is stored in the compiled code. */
if (c == OP_XCLASS) code += GET(code, 1);
if (c == OP_XCLASS || c == OP_ECLASS) code += GET(code, 1);
else if (c == OP_CALLOUT_STR) code += GET(code, 1 + 2*LINK_SIZE);
/* Handle lookbehind */
else if (c == OP_REVERSE || c == OP_VREVERSE)
{
if (number < 0) return (PCRE2_UCHAR *)code;
if (number < 0) return code;
code += PRIV(OP_lengths)[c];
}
@ -97,7 +98,7 @@ for (;;)
c == OP_CBRAPOS || c == OP_SCBRAPOS)
{
int n = (int)GET2(code, 1+LINK_SIZE);
if (n == number) return (PCRE2_UCHAR *)code;
if (n == number) return code;
code += PRIV(OP_lengths)[c];
}

View File

@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2023 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -88,6 +88,12 @@ typedef int BOOL;
#define TRUE 1
#endif
/* Helper macro for static (compile-time) assertions. Can be used inside
functions, or at the top-level of a file. */
#define STATIC_ASSERT_JOIN(a,b) a ## b
#define STATIC_ASSERT(cond, msg) \
typedef int STATIC_ASSERT_JOIN(static_assertion_,msg)[(cond)?1:-1]
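A throwaway example of the negative-array-size trick (illustrative, not from the sources); the real uses appear later in this change, e.g. the coptable/poptable size checks in pcre2_dfa_match.c.

/* Compiles cleanly because the condition is true... */
STATIC_ASSERT(sizeof(uint32_t) == 4, uint32_t_is_four_bytes);
/* ...whereas a false condition would declare an array of size -1 and fail:
STATIC_ASSERT(sizeof(uint32_t) == 8, this_would_not_compile); */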
/* Valgrind (memcheck) support */
#ifdef SUPPORT_VALGRIND
@ -523,29 +529,29 @@ start/end of string field names are. */
three must not be changed, because whichever is set is actually the number of
bytes in a code unit in that mode. */
#define PCRE2_MODE8 0x00000001 /* compiled in 8 bit mode */
#define PCRE2_MODE16 0x00000002 /* compiled in 16 bit mode */
#define PCRE2_MODE32 0x00000004 /* compiled in 32 bit mode */
#define PCRE2_FIRSTSET 0x00000010 /* first_code unit is set */
#define PCRE2_FIRSTCASELESS 0x00000020 /* caseless first code unit */
#define PCRE2_FIRSTMAPSET 0x00000040 /* bitmap of first code units is set */
#define PCRE2_LASTSET 0x00000080 /* last code unit is set */
#define PCRE2_LASTCASELESS 0x00000100 /* caseless last code unit */
#define PCRE2_STARTLINE 0x00000200 /* start after \n for multiline */
#define PCRE2_JCHANGED 0x00000400 /* j option used in pattern */
#define PCRE2_HASCRORLF 0x00000800 /* explicit \r or \n in pattern */
#define PCRE2_HASTHEN 0x00001000 /* pattern contains (*THEN) */
#define PCRE2_MATCH_EMPTY 0x00002000 /* pattern can match empty string */
#define PCRE2_BSR_SET 0x00004000 /* BSR was set in the pattern */
#define PCRE2_NL_SET 0x00008000 /* newline was set in the pattern */
#define PCRE2_NOTEMPTY_SET 0x00010000 /* (*NOTEMPTY) used ) keep */
#define PCRE2_NE_ATST_SET 0x00020000 /* (*NOTEMPTY_ATSTART) used) together */
#define PCRE2_DEREF_TABLES 0x00040000 /* release character tables */
#define PCRE2_NOJIT 0x00080000 /* (*NOJIT) used */
#define PCRE2_HASBKPORX 0x00100000 /* contains \P, \p, or \X */
#define PCRE2_DUPCAPUSED 0x00200000 /* contains (?| */
#define PCRE2_HASBKC 0x00400000 /* contains \C */
#define PCRE2_HASACCEPT 0x00800000 /* contains (*ACCEPT) */
#define PCRE2_MODE8 0x00000001u /* compiled in 8 bit mode */
#define PCRE2_MODE16 0x00000002u /* compiled in 16 bit mode */
#define PCRE2_MODE32 0x00000004u /* compiled in 32 bit mode */
#define PCRE2_FIRSTSET 0x00000010u /* first_code unit is set */
#define PCRE2_FIRSTCASELESS 0x00000020u /* caseless first code unit */
#define PCRE2_FIRSTMAPSET 0x00000040u /* bitmap of first code units is set */
#define PCRE2_LASTSET 0x00000080u /* last code unit is set */
#define PCRE2_LASTCASELESS 0x00000100u /* caseless last code unit */
#define PCRE2_STARTLINE 0x00000200u /* start after \n for multiline */
#define PCRE2_JCHANGED 0x00000400u /* j option used in pattern */
#define PCRE2_HASCRORLF 0x00000800u /* explicit \r or \n in pattern */
#define PCRE2_HASTHEN 0x00001000u /* pattern contains (*THEN) */
#define PCRE2_MATCH_EMPTY 0x00002000u /* pattern can match empty string */
#define PCRE2_BSR_SET 0x00004000u /* BSR was set in the pattern */
#define PCRE2_NL_SET 0x00008000u /* newline was set in the pattern */
#define PCRE2_NOTEMPTY_SET 0x00010000u /* (*NOTEMPTY) used ) keep */
#define PCRE2_NE_ATST_SET 0x00020000u /* (*NOTEMPTY_ATSTART) used) together */
#define PCRE2_DEREF_TABLES 0x00040000u /* release character tables */
#define PCRE2_NOJIT 0x00080000u /* (*NOJIT) used */
#define PCRE2_HASBKPORX 0x00100000u /* contains \P, \p, or \X */
#define PCRE2_DUPCAPUSED 0x00200000u /* contains (?| */
#define PCRE2_HASBKC 0x00400000u /* contains \C */
#define PCRE2_HASACCEPT 0x00800000u /* contains (*ACCEPT) */
#define PCRE2_MODE_MASK (PCRE2_MODE8 | PCRE2_MODE16 | PCRE2_MODE32)
@ -574,6 +580,16 @@ modes. */
#define REQ_CU_MAX 2000
#endif
/* The maximum nesting depth for Unicode character class sets.
Currently fixed. Warning: the interpreter relies on this so it can encode
the operand stack in a uint32_t. A nesting limit of 15 implies (15*2+1)=31
stack operands required, due to the fact that we have two (and only two)
levels of operator precedence. In the UTS#18 syntax, you can write 'x&&y[z]'
and in Perl syntax you can write '(?[ x - y & (z) ])', both of which imply
pushing the match results for x & y to the stack. */
#define ECLASS_NEST_LIMIT 15
/* Offsets for the bitmap tables in the cbits set of tables. Each table
contains a set of bits for a class map. Some classes are built by combining
these tables. */
@ -609,6 +625,13 @@ total length of the tables. */
#define ctypes_offset (cbits_offset + cbit_length) /* Character types */
#define TABLES_LENGTH (ctypes_offset + 256)
/* Private flags used in compile_context.optimization_flags */
#define PCRE2_OPTIM_AUTO_POSSESS 0x00000001u
#define PCRE2_OPTIM_DOTSTAR_ANCHOR 0x00000002u
#define PCRE2_OPTIM_START_OPTIMIZE 0x00000004u
#define PCRE2_OPTIMIZATION_ALL 0x00000007u
/* -------------------- Character and string names ------------------------ */
@ -915,6 +938,7 @@ a positive value. */
#define STRING_naplb0 "naplb\0"
#define STRING_nla0 "nla\0"
#define STRING_nlb0 "nlb\0"
#define STRING_scs0 "scs\0"
#define STRING_sr0 "sr\0"
#define STRING_asr0 "asr\0"
#define STRING_positive_lookahead0 "positive_lookahead\0"
@ -925,6 +949,7 @@ a positive value. */
#define STRING_negative_lookbehind0 "negative_lookbehind\0"
#define STRING_script_run0 "script_run\0"
#define STRING_atomic_script_run "atomic_script_run"
#define STRING_scan_substring0 "scan_substring\0"
#define STRING_alpha0 "alpha\0"
#define STRING_lower0 "lower\0"
@ -965,6 +990,8 @@ a positive value. */
#define STRING_NO_START_OPT_RIGHTPAR "NO_START_OPT)"
#define STRING_NOTEMPTY_RIGHTPAR "NOTEMPTY)"
#define STRING_NOTEMPTY_ATSTART_RIGHTPAR "NOTEMPTY_ATSTART)"
#define STRING_CASELESS_RESTRICT_RIGHTPAR "CASELESS_RESTRICT)"
#define STRING_TURKISH_CASING_RIGHTPAR "TURKISH_CASING)"
#define STRING_LIMIT_HEAP_EQ "LIMIT_HEAP="
#define STRING_LIMIT_MATCH_EQ "LIMIT_MATCH="
#define STRING_LIMIT_DEPTH_EQ "LIMIT_DEPTH="
@ -1216,6 +1243,7 @@ only. */
#define STRING_naplb0 STR_n STR_a STR_p STR_l STR_b "\0"
#define STRING_nla0 STR_n STR_l STR_a "\0"
#define STRING_nlb0 STR_n STR_l STR_b "\0"
#define STRING_scs0 STR_s STR_c STR_s "\0"
#define STRING_sr0 STR_s STR_r "\0"
#define STRING_asr0 STR_a STR_s STR_r "\0"
#define STRING_positive_lookahead0 STR_p STR_o STR_s STR_i STR_t STR_i STR_v STR_e STR_UNDERSCORE STR_l STR_o STR_o STR_k STR_a STR_h STR_e STR_a STR_d "\0"
@ -1226,6 +1254,7 @@ only. */
#define STRING_negative_lookbehind0 STR_n STR_e STR_g STR_a STR_t STR_i STR_v STR_e STR_UNDERSCORE STR_l STR_o STR_o STR_k STR_b STR_e STR_h STR_i STR_n STR_d "\0"
#define STRING_script_run0 STR_s STR_c STR_r STR_i STR_p STR_t STR_UNDERSCORE STR_r STR_u STR_n "\0"
#define STRING_atomic_script_run STR_a STR_t STR_o STR_m STR_i STR_c STR_UNDERSCORE STR_s STR_c STR_r STR_i STR_p STR_t STR_UNDERSCORE STR_r STR_u STR_n
#define STRING_scan_substring0 STR_s STR_c STR_a STR_n STR_UNDERSCORE STR_s STR_u STR_b STR_s STR_t STR_r STR_i STR_n STR_g "\0"
#define STRING_alpha0 STR_a STR_l STR_p STR_h STR_a "\0"
#define STRING_lower0 STR_l STR_o STR_w STR_e STR_r "\0"
@ -1266,6 +1295,8 @@ only. */
#define STRING_NO_START_OPT_RIGHTPAR STR_N STR_O STR_UNDERSCORE STR_S STR_T STR_A STR_R STR_T STR_UNDERSCORE STR_O STR_P STR_T STR_RIGHT_PARENTHESIS
#define STRING_NOTEMPTY_RIGHTPAR STR_N STR_O STR_T STR_E STR_M STR_P STR_T STR_Y STR_RIGHT_PARENTHESIS
#define STRING_NOTEMPTY_ATSTART_RIGHTPAR STR_N STR_O STR_T STR_E STR_M STR_P STR_T STR_Y STR_UNDERSCORE STR_A STR_T STR_S STR_T STR_A STR_R STR_T STR_RIGHT_PARENTHESIS
#define STRING_CASELESS_RESTRICT_RIGHTPAR STR_C STR_A STR_S STR_E STR_L STR_E STR_S STR_S STR_UNDERSCORE STR_R STR_E STR_S STR_T STR_R STR_I STR_C STR_T STR_RIGHT_PARENTHESIS
#define STRING_TURKISH_CASING_RIGHTPAR STR_T STR_U STR_R STR_K STR_I STR_S STR_H STR_UNDERSCORE STR_C STR_A STR_S STR_I STR_N STR_G STR_RIGHT_PARENTHESIS
#define STRING_LIMIT_HEAP_EQ STR_L STR_I STR_M STR_I STR_T STR_UNDERSCORE STR_H STR_E STR_A STR_P STR_EQUALS_SIGN
#define STRING_LIMIT_MATCH_EQ STR_L STR_I STR_M STR_I STR_T STR_UNDERSCORE STR_M STR_A STR_T STR_C STR_H STR_EQUALS_SIGN
#define STRING_LIMIT_DEPTH_EQ STR_L STR_I STR_M STR_I STR_T STR_UNDERSCORE STR_D STR_E STR_P STR_T STR_H STR_EQUALS_SIGN
@ -1290,21 +1321,22 @@ only. */
changed, the autopossessifying table in pcre2_auto_possess.c must be updated to
match. */
#define PT_ANY 0 /* Any property - matches all chars */
#define PT_LAMP 1 /* L& - the union of Lu, Ll, Lt */
#define PT_GC 2 /* Specified general characteristic (e.g. L) */
#define PT_PC 3 /* Specified particular characteristic (e.g. Lu) */
#define PT_SC 4 /* Script only (e.g. Han) */
#define PT_SCX 5 /* Script extensions (includes SC) */
#define PT_ALNUM 6 /* Alphanumeric - the union of L and N */
#define PT_SPACE 7 /* Perl space - general category Z plus 9,10,12,13 */
#define PT_PXSPACE 8 /* POSIX space - Z plus 9,10,11,12,13 */
#define PT_WORD 9 /* Word - L, N, Mn, or Pc */
#define PT_CLIST 10 /* Pseudo-property: match character list */
#define PT_UCNC 11 /* Universal Character nameable character */
#define PT_BIDICL 12 /* Specified bidi class */
#define PT_BOOL 13 /* Boolean property */
#define PT_TABSIZE 14 /* Size of square table for autopossessify tests */
#define PT_LAMP 0 /* L& - the union of Lu, Ll, Lt */
#define PT_GC 1 /* Specified general characteristic (e.g. L) */
#define PT_PC 2 /* Specified particular characteristic (e.g. Lu) */
#define PT_SC 3 /* Script only (e.g. Han) */
#define PT_SCX 4 /* Script extensions (includes SC) */
#define PT_ALNUM 5 /* Alphanumeric - the union of L and N */
#define PT_SPACE 6 /* Perl space - general category Z plus 9,10,12,13 */
#define PT_PXSPACE 7 /* POSIX space - Z plus 9,10,11,12,13 */
#define PT_WORD 8 /* Word - L, N, Mn, or Pc */
#define PT_CLIST 9 /* Pseudo-property: match character list */
#define PT_UCNC 10 /* Universal Character nameable character */
#define PT_BIDICL 11 /* Specified bidi class */
#define PT_BOOL 12 /* Boolean property */
#define PT_ANY 13 /* Must be the last entry!
Any property - matches all chars */
#define PT_TABSIZE PT_ANY /* Size of square table for autopossessify tests */
/* The following special properties are used only in XCLASS items, when POSIX
classes are specified and PCRE2_UCP is set - in other words, for Unicode
@ -1334,6 +1366,94 @@ contain characters with values greater than 255. */
#define XCL_RANGE 2 /* A range (two multibyte chars) follows */
#define XCL_PROP 3 /* Unicode property (2-byte property code follows) */
#define XCL_NOTPROP 4 /* Unicode inverted property (ditto) */
/* This value represents the beginning of character lists. The value
is 16 bits long, and is stored as a high and low byte pair in 8 bit mode.
The lower 12 bits contain information about character lists (see later). */
#define XCL_LIST (sizeof(PCRE2_UCHAR) == 1 ? 0x10 : 0x1000)
/* When a character class contains many characters/ranges,
they are stored in character lists. There are four character
lists which contain characters/ranges within a given range.
The name, character range and item size for each list:
Low16 [0x100 - 0x7fff] 16 bit items
High16 [0x8000 - 0xffff] 16 bit items
Low32 [0x10000 - 0x7fffffff] 32 bit items
High32 [0x80000000 - 0xffffffff] 32 bit items
The Low32 character list is used only when utf encoding or 32 bit
character width is enabled, and the High32 character list is used only
when 32 bit character width is enabled.
Each character list contains items. The lowest bit of an item indicates
whether it is the beginning of a range (bit is cleared) or not (bit is
set). The other bits represent the character shifted left by
one, so its highest bit is discarded. Due to the layout of character
lists, the highest bit of a character is always known:
Low16 and Low32: the highest bit is always zero
High16 and High32: the highest bit is always one
The items are ordered in increasing order, so binary search can be
used to find the lower bound of an input character. The lower bound
is the highest item whose value is less than or equal to the input
character. If the lower bit of the item is cleared, or the character
stored in the item equals the input character, the input
character is in the character list. */
/* Character list constants. */
#define XCL_CHAR_LIST_LOW_16_START 0x100
#define XCL_CHAR_LIST_LOW_16_END 0x7fff
#define XCL_CHAR_LIST_LOW_16_ADD 0x0
#define XCL_CHAR_LIST_HIGH_16_START 0x8000
#define XCL_CHAR_LIST_HIGH_16_END 0xffff
#define XCL_CHAR_LIST_HIGH_16_ADD 0x8000
#define XCL_CHAR_LIST_LOW_32_START 0x10000
#define XCL_CHAR_LIST_LOW_32_END 0x7fffffff
#define XCL_CHAR_LIST_LOW_32_ADD 0x0
#define XCL_CHAR_LIST_HIGH_32_START 0x80000000
#define XCL_CHAR_LIST_HIGH_32_END 0xffffffff
#define XCL_CHAR_LIST_HIGH_32_ADD 0x80000000
/* Mask for getting the descriptors of character list ranges.
Each descriptor has XCL_TYPE_BIT_LEN bits, and can be processed
by XCL_BEGIN_WITH_RANGE and XCL_ITEM_COUNT_MASK macros. */
#define XCL_TYPE_MASK 0xfff
#define XCL_TYPE_BIT_LEN 3
/* If this bit is set, the first item of the character list is the
end of a range, which started before the starting character of the
character list. */
#define XCL_BEGIN_WITH_RANGE 0x4
/* Number of items in the character list: 0, 1, or 2. The value 3
represents that the item count is stored at the beginning of the
character list. The item count has the same width as the items
in the character list (e.g. 16 bit for Low16 and High16 lists). */
#define XCL_ITEM_COUNT_MASK 0x3
/* Shift and flag for constructing character list items. The XCL_CHAR_END
is set when the item is not the beginning of a range. The XCL_CHAR_SHIFT
can be used to encode / decode the character value stored in an item. */
#define XCL_CHAR_END 0x1
#define XCL_CHAR_SHIFT 1
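A worked example of the item encoding (illustrative, not upstream code), assuming the layout described above: the character value is shifted left by XCL_CHAR_SHIFT, and XCL_CHAR_END is set on items that do not begin a range.

/* Illustrative only: encode the range U+0100-U+017F as two Low16 items. */
uint16_t range_start = (uint16_t)(0x0100u << XCL_CHAR_SHIFT);                  /* low bit clear: begins a range */
uint16_t range_end   = (uint16_t)((0x017Fu << XCL_CHAR_SHIFT) | XCL_CHAR_END); /* low bit set: not a range start */
uint32_t decoded     = range_end >> XCL_CHAR_SHIFT;                            /* recovers 0x017F */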
/* Flag bits for an extended class (OP_ECLASS), which is used for complex
character matches such as [\p{Greek} && \p{Ll}]. */
#define ECL_MAP 0x01 /* Flag: a 32-byte map is present */
/* Type tags for the items stored in an extended class (OP_ECLASS). These items
follow the OP_ECLASS's flag char and bitmap, and represent a Reverse Polish
Notation list of operands and operators manipulating a stack of bits. */
#define ECL_AND 1 /* Pop two from the stack, AND, and push result. */
#define ECL_OR 2 /* Pop two from the stack, OR, and push result. */
#define ECL_XOR 3 /* Pop two from the stack, XOR, and push result. */
#define ECL_NOT 4 /* Pop one from the stack, NOT, and push result. */
#define ECL_XCLASS 5 /* XCLASS nested within ECLASS; match and push result. */
#define ECL_ANY 6 /* Temporary, only used during compilation. */
#define ECL_NONE 7 /* Temporary, only used during compilation. */
/* These are escaped items that aren't just an encoding of a particular data
value such as \n. They must have non-zero values, as check_escape() returns 0
@ -1555,102 +1675,105 @@ enum {
character > 255 is encountered. */
OP_XCLASS, /* 112 Extended class for handling > 255 chars within the
class. This does both positive and negative. */
OP_REF, /* 113 Match a back reference, casefully */
OP_REFI, /* 114 Match a back reference, caselessly */
OP_DNREF, /* 115 Match a duplicate name backref, casefully */
OP_DNREFI, /* 116 Match a duplicate name backref, caselessly */
OP_RECURSE, /* 117 Match a numbered subpattern (possibly recursive) */
OP_CALLOUT, /* 118 Call out to external function if provided */
OP_CALLOUT_STR, /* 119 Call out with string argument */
OP_ECLASS, /* 113 Really-extended class, for handling logical
expressions computed over characters. */
OP_REF, /* 114 Match a back reference, casefully */
OP_REFI, /* 115 Match a back reference, caselessly */
OP_DNREF, /* 116 Match a duplicate name backref, casefully */
OP_DNREFI, /* 117 Match a duplicate name backref, caselessly */
OP_RECURSE, /* 118 Match a numbered subpattern (possibly recursive) */
OP_CALLOUT, /* 119 Call out to external function if provided */
OP_CALLOUT_STR, /* 120 Call out with string argument */
OP_ALT, /* 120 Start of alternation */
OP_KET, /* 121 End of group that doesn't have an unbounded repeat */
OP_KETRMAX, /* 122 These two must remain together and in this */
OP_KETRMIN, /* 123 order. They are for groups the repeat for ever. */
OP_KETRPOS, /* 124 Possessive unlimited repeat. */
OP_ALT, /* 121 Start of alternation */
OP_KET, /* 122 End of group that doesn't have an unbounded repeat */
OP_KETRMAX, /* 123 These two must remain together and in this */
OP_KETRMIN, /* 124 order. They are for groups the repeat for ever. */
OP_KETRPOS, /* 125 Possessive unlimited repeat. */
/* The assertions must come before BRA, CBRA, ONCE, and COND. */
OP_REVERSE, /* 125 Move pointer back - used in lookbehind assertions */
OP_VREVERSE, /* 126 Move pointer back - variable */
OP_ASSERT, /* 127 Positive lookahead */
OP_ASSERT_NOT, /* 128 Negative lookahead */
OP_ASSERTBACK, /* 129 Positive lookbehind */
OP_ASSERTBACK_NOT, /* 130 Negative lookbehind */
OP_ASSERT_NA, /* 131 Positive non-atomic lookahead */
OP_ASSERTBACK_NA, /* 132 Positive non-atomic lookbehind */
OP_REVERSE, /* 126 Move pointer back - used in lookbehind assertions */
OP_VREVERSE, /* 127 Move pointer back - variable */
OP_ASSERT, /* 128 Positive lookahead */
OP_ASSERT_NOT, /* 129 Negative lookahead */
OP_ASSERTBACK, /* 130 Positive lookbehind */
OP_ASSERTBACK_NOT, /* 131 Negative lookbehind */
OP_ASSERT_NA, /* 132 Positive non-atomic lookahead */
OP_ASSERTBACK_NA, /* 133 Positive non-atomic lookbehind */
OP_ASSERT_SCS, /* 134 Scan substring */
/* ONCE, SCRIPT_RUN, BRA, BRAPOS, CBRA, CBRAPOS, and COND must come
immediately after the assertions, with ONCE first, as there's a test for >=
ONCE for a subpattern that isn't an assertion. The POS versions must
immediately follow the non-POS versions in each case. */
OP_ONCE, /* 133 Atomic group, contains captures */
OP_SCRIPT_RUN, /* 134 Non-capture, but check characters' scripts */
OP_BRA, /* 135 Start of non-capturing bracket */
OP_BRAPOS, /* 136 Ditto, with unlimited, possessive repeat */
OP_CBRA, /* 137 Start of capturing bracket */
OP_CBRAPOS, /* 138 Ditto, with unlimited, possessive repeat */
OP_COND, /* 139 Conditional group */
OP_ONCE, /* 135 Atomic group, contains captures */
OP_SCRIPT_RUN, /* 136 Non-capture, but check characters' scripts */
OP_BRA, /* 137 Start of non-capturing bracket */
OP_BRAPOS, /* 138 Ditto, with unlimited, possessive repeat */
OP_CBRA, /* 139 Start of capturing bracket */
OP_CBRAPOS, /* 140 Ditto, with unlimited, possessive repeat */
OP_COND, /* 141 Conditional group */
/* These five must follow the previous five, in the same order. There's a
check for >= SBRA to distinguish the two sets. */
OP_SBRA, /* 140 Start of non-capturing bracket, check empty */
OP_SBRAPOS, /* 141 Ditto, with unlimited, possessive repeat */
OP_SCBRA, /* 142 Start of capturing bracket, check empty */
OP_SCBRAPOS, /* 143 Ditto, with unlimited, possessive repeat */
OP_SCOND, /* 144 Conditional group, check empty */
OP_SBRA, /* 142 Start of non-capturing bracket, check empty */
OP_SBRAPOS, /* 143 Ditto, with unlimited, possessive repeat */
OP_SCBRA, /* 144 Start of capturing bracket, check empty */
OP_SCBRAPOS, /* 145 Ditto, with unlimited, possessive repeat */
OP_SCOND, /* 146 Conditional group, check empty */
/* The next two pairs must (respectively) be kept together. */
OP_CREF, /* 145 Used to hold a capture number as condition */
OP_DNCREF, /* 146 Used to point to duplicate names as a condition */
OP_RREF, /* 147 Used to hold a recursion number as condition */
OP_DNRREF, /* 148 Used to point to duplicate names as a condition */
OP_FALSE, /* 149 Always false (used by DEFINE and VERSION) */
OP_TRUE, /* 150 Always true (used by VERSION) */
OP_CREF, /* 147 Used to hold a capture number as condition */
OP_DNCREF, /* 148 Used to point to duplicate names as a condition */
OP_RREF, /* 149 Used to hold a recursion number as condition */
OP_DNRREF, /* 150 Used to point to duplicate names as a condition */
OP_FALSE, /* 151 Always false (used by DEFINE and VERSION) */
OP_TRUE, /* 152 Always true (used by VERSION) */
OP_BRAZERO, /* 151 These two must remain together and in this */
OP_BRAMINZERO, /* 152 order. */
OP_BRAPOSZERO, /* 153 */
OP_BRAZERO, /* 153 These two must remain together and in this */
OP_BRAMINZERO, /* 154 order. */
OP_BRAPOSZERO, /* 155 */
/* These are backtracking control verbs */
OP_MARK, /* 154 always has an argument */
OP_PRUNE, /* 155 */
OP_PRUNE_ARG, /* 156 same, but with argument */
OP_SKIP, /* 157 */
OP_SKIP_ARG, /* 158 same, but with argument */
OP_THEN, /* 159 */
OP_THEN_ARG, /* 160 same, but with argument */
OP_COMMIT, /* 161 */
OP_COMMIT_ARG, /* 162 same, but with argument */
OP_MARK, /* 156 always has an argument */
OP_PRUNE, /* 157 */
OP_PRUNE_ARG, /* 158 same, but with argument */
OP_SKIP, /* 159 */
OP_SKIP_ARG, /* 160 same, but with argument */
OP_THEN, /* 161 */
OP_THEN_ARG, /* 162 same, but with argument */
OP_COMMIT, /* 163 */
OP_COMMIT_ARG, /* 164 same, but with argument */
/* These are forced failure and success verbs. FAIL and ACCEPT do accept an
argument, but these cases can be compiled as, for example, (*MARK:X)(*FAIL)
without the need for a special opcode. */
OP_FAIL, /* 163 */
OP_ACCEPT, /* 164 */
OP_ASSERT_ACCEPT, /* 165 Used inside assertions */
OP_CLOSE, /* 166 Used before OP_ACCEPT to close open captures */
OP_FAIL, /* 165 */
OP_ACCEPT, /* 166 */
OP_ASSERT_ACCEPT, /* 167 Used inside assertions */
OP_CLOSE, /* 168 Used before OP_ACCEPT to close open captures */
/* This is used to skip a subpattern with a {0} quantifier */
OP_SKIPZERO, /* 167 */
OP_SKIPZERO, /* 169 */
/* This is used to identify a DEFINE group during compilation so that it can
be checked for having only one branch. It is changed to OP_FALSE before
compilation finishes. */
OP_DEFINE, /* 168 */
OP_DEFINE, /* 170 */
/* These opcodes replace their normal counterparts in UCP mode when
PCRE2_EXTRA_ASCII_BSW is not set. */
OP_NOT_UCP_WORD_BOUNDARY, /* 169 */
OP_UCP_WORD_BOUNDARY, /* 170 */
OP_NOT_UCP_WORD_BOUNDARY, /* 171 */
OP_UCP_WORD_BOUNDARY, /* 172 */
/* This is not an opcode, but is used to check that tables indexed by opcode
are the correct length, in order to catch updating errors - there have been
@ -1693,19 +1816,21 @@ some cases doesn't actually use these names at all). */
"*+","++", "?+", "{", \
"*", "*?", "+", "+?", "?", "??", "{", "{", \
"*+","++", "?+", "{", \
"class", "nclass", "xclass", "Ref", "Refi", "DnRef", "DnRefi", \
"class", "nclass", "xclass", "eclass", \
"Ref", "Refi", "DnRef", "DnRefi", \
"Recurse", "Callout", "CalloutStr", \
"Alt", "Ket", "KetRmax", "KetRmin", "KetRpos", \
"Reverse", "VReverse", "Assert", "Assert not", \
"Assert back", "Assert back not", \
"Non-atomic assert", "Non-atomic assert back", \
"Scan substring", \
"Once", \
"Script run", \
"Bra", "BraPos", "CBra", "CBraPos", \
"Cond", \
"SBra", "SBraPos", "SCBra", "SCBraPos", \
"SCond", \
"Cond ref", "Cond dnref", "Cond rec", "Cond dnrec", \
"Capture ref", "Capture dnref", "Cond rec", "Cond dnrec", \
"Cond false", "Cond true", \
"Brazero", "Braminzero", "Braposzero", \
"*MARK", "*PRUNE", "*PRUNE", "*SKIP", "*SKIP", \
@ -1766,10 +1891,11 @@ in UTF-8 mode. The code that uses this table must know about such things. */
1+(32/sizeof(PCRE2_UCHAR)), /* CLASS */ \
1+(32/sizeof(PCRE2_UCHAR)), /* NCLASS */ \
0, /* XCLASS - variable length */ \
0, /* ECLASS - variable length */ \
1+IMM2_SIZE, /* REF */ \
1+IMM2_SIZE, /* REFI */ \
1+IMM2_SIZE+1, /* REFI */ \
1+2*IMM2_SIZE, /* DNREF */ \
1+2*IMM2_SIZE, /* DNREFI */ \
1+2*IMM2_SIZE+1, /* DNREFI */ \
1+LINK_SIZE, /* RECURSE */ \
1+2*LINK_SIZE+1, /* CALLOUT */ \
0, /* CALLOUT_STR - variable length */ \
@ -1786,6 +1912,7 @@ in UTF-8 mode. The code that uses this table must know about such things. */
1+LINK_SIZE, /* Assert behind not */ \
1+LINK_SIZE, /* NA Assert */ \
1+LINK_SIZE, /* NA Assert behind */ \
1+LINK_SIZE, /* Scan substring */ \
1+LINK_SIZE, /* ONCE */ \
1+LINK_SIZE, /* SCRIPT_RUN */ \
1+LINK_SIZE, /* BRA */ \
@ -1815,6 +1942,11 @@ in UTF-8 mode. The code that uses this table must know about such things. */
#define RREF_ANY 0xffff
/* Constants used by OP_REFI and OP_DNREFI to control matching behaviour. */
#define REFI_FLAG_CASELESS_RESTRICT 0x1
#define REFI_FLAG_TURKISH_CASING 0x2
/* ---------- Private structures that are mode-independent. ---------- */
@ -1890,6 +2022,14 @@ typedef struct {
#define UCD_SCRIPTX(ch) UCD_SCRIPTX_PROP(GET_UCD(ch))
#define UCD_BPROPS(ch) UCD_BPROPS_PROP(GET_UCD(ch))
#define UCD_BIDICLASS(ch) UCD_BIDICLASS_PROP(GET_UCD(ch))
#define UCD_ANY_I(ch) \
/* match any of the four characters 'i', 'I', U+0130, U+0131 */ \
(((uint32_t)(ch) | 0x20u) == 0x69u || ((uint32_t)(ch) | 1u) == 0x0131u)
#define UCD_DOTTED_I(ch) \
((uint32_t)(ch) == 0x69u || (uint32_t)(ch) == 0x0130u)
#define UCD_FOLD_I_TURKISH(ch) \
((uint32_t)(ch) == 0x0130u ? 0x69u : \
(uint32_t)(ch) == 0x49u ? 0x0131u : (uint32_t)(ch))
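The three macros above capture the Turkish/Azeri special case for the letter I, where dotted and dotless forms pair up differently from the default Unicode folding (U+0130 folds to "i", "I" folds to U+0131). The stand-alone sketch below is illustrative only and not part of the patch: the macro bodies are copied from the lines above, while the test characters and output format are arbitrary demo choices.

/* Illustrative sketch -- not part of the patch. Macro bodies copied from the
   header lines above; everything else is demo scaffolding. */
#include <stdio.h>
#include <stdint.h>

#define UCD_ANY_I(ch) \
  (((uint32_t)(ch) | 0x20u) == 0x69u || ((uint32_t)(ch) | 1u) == 0x0131u)
#define UCD_DOTTED_I(ch) \
  ((uint32_t)(ch) == 0x69u || (uint32_t)(ch) == 0x0130u)
#define UCD_FOLD_I_TURKISH(ch) \
  ((uint32_t)(ch) == 0x0130u ? 0x69u : \
   (uint32_t)(ch) == 0x49u ? 0x0131u : (uint32_t)(ch))

int main(void)
{
static const uint32_t chars[] = { 0x69, 0x49, 0x0130, 0x0131 };  /* i I dotted-I dotless-i */
for (int i = 0; i < 4; i++)
  printf("U+%04X  any_i=%d  dotted=%d  turkish fold -> U+%04X\n",
         (unsigned)chars[i], UCD_ANY_I(chars[i]) ? 1 : 0,
         UCD_DOTTED_I(chars[i]) ? 1 : 0,
         (unsigned)UCD_FOLD_I_TURKISH(chars[i]));
return 0;
}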
/* The "scriptx" and bprops fields contain offsets into vectors of 32-bit words
that form a bitmap representing a list of scripts or boolean properties. These
@ -1955,6 +2095,9 @@ extern const uint8_t PRIV(utf8_table4)[];
#define _pcre2_vspace_list PCRE2_SUFFIX(_pcre2_vspace_list_)
#define _pcre2_ucd_boolprop_sets PCRE2_SUFFIX(_pcre2_ucd_boolprop_sets_)
#define _pcre2_ucd_caseless_sets PCRE2_SUFFIX(_pcre2_ucd_caseless_sets_)
#define _pcre2_ucd_turkish_dotted_i_caseset PCRE2_SUFFIX(_pcre2_ucd_turkish_dotted_i_caseset_)
#define _pcre2_ucd_nocase_ranges PCRE2_SUFFIX(_pcre2_ucd_nocase_ranges_)
#define _pcre2_ucd_nocase_ranges_size PCRE2_SUFFIX(_pcre2_ucd_nocase_ranges_size_)
#define _pcre2_ucd_digit_sets PCRE2_SUFFIX(_pcre2_ucd_digit_sets_)
#define _pcre2_ucd_script_sets PCRE2_SUFFIX(_pcre2_ucd_script_sets_)
#define _pcre2_ucd_records PCRE2_SUFFIX(_pcre2_ucd_records_)
@ -1971,14 +2114,17 @@ extern const uint8_t PRIV(utf8_table4)[];
extern const uint8_t PRIV(OP_lengths)[];
extern const uint32_t PRIV(callout_end_delims)[];
extern const uint32_t PRIV(callout_start_delims)[];
extern const pcre2_compile_context PRIV(default_compile_context);
extern const pcre2_convert_context PRIV(default_convert_context);
extern const pcre2_match_context PRIV(default_match_context);
extern pcre2_compile_context PRIV(default_compile_context);
extern pcre2_convert_context PRIV(default_convert_context);
extern pcre2_match_context PRIV(default_match_context);
extern const uint8_t PRIV(default_tables)[];
extern const uint32_t PRIV(hspace_list)[];
extern const uint32_t PRIV(vspace_list)[];
extern const uint32_t PRIV(ucd_boolprop_sets)[];
extern const uint32_t PRIV(ucd_caseless_sets)[];
extern const uint32_t PRIV(ucd_turkish_dotted_i_caseset);
extern const uint32_t PRIV(ucd_nocase_ranges)[];
extern const uint32_t PRIV(ucd_nocase_ranges_size);
extern const uint32_t PRIV(ucd_digit_sets)[];
extern const uint32_t PRIV(ucd_script_sets)[];
extern const ucd_record PRIV(ucd_records)[];
@ -2039,11 +2185,12 @@ is available. */
#define _pcre2_valid_utf PCRE2_SUFFIX(_pcre2_valid_utf_)
#define _pcre2_was_newline PCRE2_SUFFIX(_pcre2_was_newline_)
#define _pcre2_xclass PCRE2_SUFFIX(_pcre2_xclass_)
#define _pcre2_eclass PCRE2_SUFFIX(_pcre2_eclass_)
extern int _pcre2_auto_possessify(PCRE2_UCHAR *,
const compile_block *);
extern int _pcre2_check_escape(PCRE2_SPTR *, PCRE2_SPTR, uint32_t *,
int *, uint32_t, uint32_t, BOOL, compile_block *);
int *, uint32_t, uint32_t, uint32_t, BOOL, compile_block *);
extern PCRE2_SPTR _pcre2_extuni(uint32_t, PCRE2_SPTR, PCRE2_SPTR, PCRE2_SPTR,
BOOL, int *);
extern PCRE2_SPTR _pcre2_find_bracket(PCRE2_SPTR, BOOL, int);
@ -2066,7 +2213,9 @@ extern int _pcre2_study(pcre2_real_code *);
extern int _pcre2_valid_utf(PCRE2_SPTR, PCRE2_SIZE, PCRE2_SIZE *);
extern BOOL _pcre2_was_newline(PCRE2_SPTR, uint32_t, PCRE2_SPTR,
uint32_t *, BOOL);
extern BOOL _pcre2_xclass(uint32_t, PCRE2_SPTR, BOOL);
extern BOOL _pcre2_xclass(uint32_t, PCRE2_SPTR, const uint8_t *, BOOL);
extern BOOL _pcre2_eclass(uint32_t, PCRE2_SPTR, PCRE2_SPTR,
const uint8_t *, BOOL);
/* This function is needed only when memmove() is not available. */
@ -2079,6 +2228,8 @@ extern void * _pcre2_memmove(void *, const void *, size_t);
extern BOOL PRIV(ckd_smul)(PCRE2_SIZE *, int, int);
#include "pcre2_util.h"
#endif /* PCRE2_INTERNAL_H_IDEMPOTENT_GUARD */
/* End of pcre2_internal.h */


@ -47,7 +47,7 @@ to have access to the hidden structures at all supported widths.
Some of the mode-dependent macros are required at different widths for
different parts of the pcre2test code (in particular, the included
pcre_printint.c file). We undefine them here so that they can be re-defined for
pcre2_printint.c file). We undefine them here so that they can be re-defined for
multiple inclusions. Not all of these are used in pcre2test, but it's easier
just to undefine them all. */
@ -435,7 +435,7 @@ UTF-16 mode. */
c = *eptr; \
if ((c & 0xfc00u) == 0xd800u) GETUTF16LEN(c, eptr, len);
/* Get the next UTF-816character, testing for UTF-16 mode, not advancing the
/* Get the next UTF-16 character, testing for UTF-16 mode, not advancing the
pointer, incrementing length if there is a low surrogate. This is called when
we do not know if we are in UTF-16 mode. */
@ -556,6 +556,11 @@ code that uses them is simpler because it assumes this. */
/* The real general context structure. At present it holds only data for custom
memory control. */
/* WARNING: if this is ever changed, code in pcre2_substitute.c will have to be
changed because it builds a general context "by hand" in order to avoid the
malloc() call in pcre2_general_context_create(). There is also code in
pcre2_match.c that makes the same assumption. */
typedef struct pcre2_real_general_context {
pcre2_memctl memctl;
} pcre2_real_general_context;
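The warning above exists because pcre2_substitute.c and pcre2_match.c build the equivalent of this one-member structure on the stack instead of calling pcre2_general_context_create(). The sketch below is illustrative only; the demo_* names are local stand-ins, not PCRE2 types. It pins down the layout property such hand-built contexts rely on: the context holds nothing but the memory-control block, and that block sits at offset 0.

/* Illustrative sketch -- not part of the patch. */
#include <stddef.h>

typedef struct demo_memctl {
  void *(*malloc_fn)(size_t, void *);
  void  (*free_fn)(void *, void *);
  void  *memory_data;
} demo_memctl;

typedef struct demo_general_context {
  demo_memctl memctl;            /* the only member, as in the real struct */
} demo_general_context;

/* If either of these ever fails, contexts built "by hand" would break. */
_Static_assert(offsetof(demo_general_context, memctl) == 0,
               "memctl must be the first member");
_Static_assert(sizeof(demo_general_context) == sizeof(demo_memctl),
               "the context must hold nothing but the allocator block");

int main(void) { return 0; }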
@ -574,6 +579,7 @@ typedef struct pcre2_real_compile_context {
uint32_t parens_nest_limit;
uint32_t extra_options;
uint32_t max_varlookbehind;
uint32_t optimization_flags;
} pcre2_real_compile_context;
/* The real match context structure. */
@ -588,6 +594,9 @@ typedef struct pcre2_real_match_context {
void *callout_data;
int (*substitute_callout)(pcre2_substitute_callout_block *, void *);
void *substitute_callout_data;
PCRE2_SIZE (*substitute_case_callout)(PCRE2_SPTR, PCRE2_SIZE, PCRE2_UCHAR *,
PCRE2_SIZE, int, void *);
void *substitute_case_callout_data;
PCRE2_SIZE offset_limit;
uint32_t heap_limit;
uint32_t match_limit;
@ -623,6 +632,7 @@ typedef struct pcre2_real_code {
void *executable_jit; /* Pointer to JIT code */
uint8_t start_bitmap[32]; /* Bitmap for starting code unit < 256 */
CODE_BLOCKSIZE_TYPE blocksize; /* Total (bytes) that was malloc-ed */
CODE_BLOCKSIZE_TYPE code_start; /* Byte code start offset */
uint32_t magic_number; /* Paranoid and endianness check */
uint32_t compile_options; /* Options passed to pcre2_compile() */
uint32_t overall_options; /* Options after processing the pattern */
@ -641,6 +651,7 @@ typedef struct pcre2_real_code {
uint16_t top_backref; /* Highest numbered back reference */
uint16_t name_entry_size; /* Size (code units) of table entries */
uint16_t name_count; /* Number of name entries in the table */
uint32_t optimization_flags; /* Optimizations enabled at compile time */
} pcre2_real_code;
/* The real match data structure. Define ovector as large as it can ever
@ -716,6 +727,23 @@ typedef struct named_group {
uint16_t isdup; /* TRUE if a duplicate */
} named_group;
/* Structure for caching sorted ranges. This improves the performance
of translating META code to byte code. */
typedef struct class_ranges {
struct class_ranges *next; /* Next class ranges */
size_t char_lists_size; /* Total size of encoded char lists */
size_t char_lists_start; /* Start offset of encoded char lists */
uint16_t range_list_size; /* Size of ranges array */
uint16_t char_lists_types; /* The XCL_LIST header of char lists */
/* Followed by the list of ranges (start/end pairs) */
} class_ranges;
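The trailing comment describes the usual header-plus-trailing-array layout: each class_ranges block is allocated with its start/end pairs immediately after the fixed fields. A minimal sketch of that allocation pattern follows, using hypothetical demo_ types rather than PCRE2's.

/* Illustrative sketch -- not part of the patch. */
#include <stdint.h>
#include <stdlib.h>

typedef struct demo_ranges {
  struct demo_ranges *next;
  uint16_t range_list_size;      /* number of start/end pairs that follow */
} demo_ranges;

static demo_ranges *demo_ranges_new(uint16_t npairs)
{
/* One allocation: the header immediately followed by 2*npairs range values,
   reachable afterwards as (uint32_t *)(r + 1). */
demo_ranges *r = malloc(sizeof(demo_ranges) + 2u * npairs * sizeof(uint32_t));
if (r != NULL)
  {
  r->next = NULL;
  r->range_list_size = npairs;
  }
return r;
}

int main(void)
{
demo_ranges *r = demo_ranges_new(4);
if (r != NULL) { ((uint32_t *)(r + 1))[0] = 0x41; free(r); }
return 0;
}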
typedef union class_bits_storage {
uint8_t classbits[32];
uint32_t classwords[8];
} class_bits_storage;
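class_bits_storage simply overlays the 256-bit class bitmap so it can be addressed either byte-by-byte (when individual code units are set) or as eight 32-bit words (for bulk tests and copies). A small illustrative sketch, not part of the patch, reusing the same member names:

/* Illustrative sketch -- not part of the patch. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef union class_bits_storage {
  uint8_t  classbits[32];    /* one bit per code unit 0..255 */
  uint32_t classwords[8];    /* the same storage, 32 bits at a time */
} class_bits_storage;

int main(void)
{
class_bits_storage bits;
memset(&bits, 0, sizeof(bits));

/* Set the bit for 'a' (0x61) through the byte view. */
bits.classbits[0x61 / 8] |= (uint8_t)(1u << (0x61 & 7));

/* Word view: quickly test whether any bit in the whole map is set. */
int any = 0;
for (int i = 0; i < 8; i++) if (bits.classwords[i] != 0) any = 1;
printf("any bit set: %d\n", any);
return 0;
}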
/* Structure for passing "static" information around between the functions
doing the compiling, so that they are thread-safe. */
@ -725,14 +753,15 @@ typedef struct compile_block {
const uint8_t *fcc; /* Points to case-flipping table */
const uint8_t *cbits; /* Points to character type table */
const uint8_t *ctypes; /* Points to table of type maps */
PCRE2_SPTR start_workspace; /* The start of working space */
PCRE2_SPTR start_code; /* The start of the compiled code */
PCRE2_UCHAR *start_workspace; /* The start of working space */
PCRE2_UCHAR *start_code; /* The start of the compiled code */
PCRE2_SPTR start_pattern; /* The start of the pattern */
PCRE2_SPTR end_pattern; /* The end of the pattern */
PCRE2_UCHAR *name_table; /* The name/number table */
PCRE2_SIZE workspace_size; /* Size of workspace */
PCRE2_SIZE small_ref_offset[10]; /* Offsets for \1 to \9 */
PCRE2_SIZE erroroffset; /* Offset of error in pattern */
class_bits_storage classbits; /* Temporary store for classbits */
uint16_t names_found; /* Number of entries so far */
uint16_t name_entry_size; /* Size of each entry */
uint16_t parens_depth; /* Depth of nested parentheses */
@ -750,9 +779,9 @@ typedef struct compile_block {
uint32_t backref_map; /* Bitmap of low back refs */
uint32_t nltype; /* Newline type */
uint32_t nllen; /* Newline string length */
uint32_t class_range_start; /* Overall class range start */
uint32_t class_range_end; /* Overall class range end */
PCRE2_UCHAR nl[4]; /* Newline string when fixed length */
uint8_t class_op_used[ECLASS_NEST_LIMIT]; /* Operation used for
extended classes */
uint32_t req_varyopt; /* "After variable item" flag for reqbyte */
uint32_t max_varlookbehind; /* Limit for variable lookbehinds */
int max_lookbehind; /* Maximum lookbehind encountered (characters) */
@ -760,6 +789,11 @@ typedef struct compile_block {
BOOL had_pruneorskip; /* (*PRUNE) or (*SKIP) encountered */
BOOL had_recurse; /* Had a pattern recursion or subroutine call */
BOOL dupnames; /* Duplicate names exist */
#ifdef SUPPORT_WIDE_CHARS
class_ranges *cranges; /* First class range. */
class_ranges *next_cranges; /* Next class range. */
size_t char_lists_size; /* Current size of character lists */
#endif
} compile_block;
/* Structure for keeping the properties of the in-memory stack used
@ -793,7 +827,7 @@ typedef struct heapframe {
to RRMATCH(), but which do not need to be copied to new frames. */
PCRE2_SPTR ecode; /* The current position in the pattern */
PCRE2_SPTR temp_sptr[2]; /* Used for short-term PCRE_SPTR values */
PCRE2_SPTR temp_sptr[2]; /* Used for short-term PCRE2_SPTR values */
PCRE2_SIZE length; /* Used for character, string, or code lengths */
PCRE2_SIZE back_frame; /* Amount to subtract on RRETURN */
PCRE2_SIZE temp_size; /* Used for short-term PCRE2_SIZE values */
@ -841,11 +875,10 @@ typedef struct heapframe {
PCRE2_SIZE ovector[131072]; /* Must be last in the structure */
} heapframe;
/* This typedef is a check that the size of the heapframe structure is a
multiple of PCRE2_SIZE. See various comments above. */
/* Assert that the size of the heapframe structure is a multiple of PCRE2_SIZE.
See various comments above. */
typedef char check_heapframe_size[
((sizeof(heapframe) % sizeof(PCRE2_SIZE)) == 0)? (+1):(-1)];
STATIC_ASSERT((sizeof(heapframe) % sizeof(PCRE2_SIZE)) == 0, heapframe_size);
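The replaced typedef was the traditional pre-C11 compile-time assertion: an array type whose size collapses to -1 when the condition is false, which the compiler rejects. STATIC_ASSERT is presumably supplied by pcre2_util.h (now included by this header) and maps onto _Static_assert where the compiler provides it. The sketch below shows both idioms on a hypothetical demo_frame type; it is illustrative only.

/* Illustrative sketch -- not part of the patch. */
#include <stddef.h>

typedef struct { size_t a; size_t b; } demo_frame;

/* Pre-C11 idiom: the array size is 1 when the check holds and -1 (a
   compile error) when it does not. */
typedef char check_demo_frame_size[
  ((sizeof(demo_frame) % sizeof(size_t)) == 0) ? 1 : -1];

/* C11 and later: the same check with a readable diagnostic. */
_Static_assert((sizeof(demo_frame) % sizeof(size_t)) == 0,
               "demo_frame size must be a multiple of size_t");

int main(void) { return 0; }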
/* Structure for computing the alignment of heapframe. */

File diff suppressed because it is too large
File diff suppressed because it is too large

@ -83,7 +83,7 @@ Arguments:
Returns: > 0 => success; value is the number of ovector pairs filled
= 0 => success, but ovector is not big enough
-1 => failed to match (PCRE_ERROR_NOMATCH)
-1 => failed to match (PCRE2_ERROR_NOMATCH)
< -1 => some kind of unexpected problem
*/


@ -82,7 +82,7 @@ POSSIBILITY OF SUCH DAMAGE.
# endif
# endif
#if (defined(__GNUC__) && __SANITIZE_ADDRESS__) \
#if (defined(__GNUC__) && defined(__SANITIZE_ADDRESS__) && __SANITIZE_ADDRESS__ ) \
|| (defined(__clang__) \
&& ((__clang_major__ == 3 && __clang_minor__ >= 3) || (__clang_major__ > 3)))
__attribute__((no_sanitize_address))


@ -246,10 +246,10 @@ struct sljit_jump *quit;
struct sljit_jump *partial_quit[2];
vector_compare_type compare_type = vector_compare_match1;
sljit_s32 tmp1_reg_ind = sljit_get_register_index(SLJIT_GP_REGISTER, TMP1);
sljit_s32 data_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR0);
sljit_s32 cmp1_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR1);
sljit_s32 cmp2_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR2);
sljit_s32 tmp_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR3);
sljit_s32 data_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR0);
sljit_s32 cmp1_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR1);
sljit_s32 cmp2_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR2);
sljit_s32 tmp_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR3);
sljit_u32 bit = 0;
int i;
@ -273,17 +273,17 @@ if (common->mode == PCRE2_JIT_COMPLETE)
/* First part (unaligned start) */
value = SLJIT_SIMD_REG_128 | SLJIT_SIMD_ELEM_32 | SLJIT_SIMD_LANE_ZERO;
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR1, 0, SLJIT_IMM, character_to_int32(char1 | bit));
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR1, 0, SLJIT_IMM, character_to_int32(char1 | bit));
if (char1 != char2)
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR2, 0, SLJIT_IMM, character_to_int32(bit != 0 ? bit : char2));
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR2, 0, SLJIT_IMM, character_to_int32(bit != 0 ? bit : char2));
OP1(SLJIT_MOV, TMP2, 0, STR_PTR, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR1, SLJIT_FR1, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR1, SLJIT_VR1, 0);
if (char1 != char2)
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR2, SLJIT_FR2, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR2, SLJIT_VR2, 0);
#if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH != 32
restart = LABEL();
@ -294,12 +294,12 @@ OP2(SLJIT_AND, STR_PTR, 0, STR_PTR, 0, SLJIT_IMM, ~value);
OP2(SLJIT_AND, TMP2, 0, TMP2, 0, SLJIT_IMM, value);
value = (reg_type == SLJIT_SIMD_REG_256) ? SLJIT_SIMD_MEM_ALIGNED_256 : SLJIT_SIMD_MEM_ALIGNED_128;
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_FR0, SLJIT_MEM1(STR_PTR), 0);
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_VR0, SLJIT_MEM1(STR_PTR), 0);
for (i = 0; i < 4; i++)
fast_forward_char_pair_sse2_compare(compiler, compare_type, reg_type, i, data_ind, cmp1_ind, cmp2_ind, tmp_ind);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_FR0, TMP1, 0);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_VR0, TMP1, 0);
OP2(SLJIT_ADD, STR_PTR, 0, STR_PTR, 0, TMP2, 0);
OP2(SLJIT_LSHR, TMP1, 0, TMP1, 0, TMP2, 0);
@ -318,11 +318,11 @@ if (common->mode == PCRE2_JIT_COMPLETE)
add_jump(compiler, &common->failed_match, partial_quit[1]);
value = (reg_type == SLJIT_SIMD_REG_256) ? SLJIT_SIMD_MEM_ALIGNED_256 : SLJIT_SIMD_MEM_ALIGNED_128;
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_FR0, SLJIT_MEM1(STR_PTR), 0);
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_VR0, SLJIT_MEM1(STR_PTR), 0);
for (i = 0; i < 4; i++)
fast_forward_char_pair_sse2_compare(compiler, compare_type, reg_type, i, data_ind, cmp1_ind, cmp2_ind, tmp_ind);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_FR0, TMP1, 0);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_VR0, TMP1, 0);
CMPTO(SLJIT_ZERO, TMP1, 0, SLJIT_IMM, 0, start);
JUMPHERE(quit);
@ -380,10 +380,10 @@ struct sljit_jump *quit;
jump_list *not_found = NULL;
vector_compare_type compare_type = vector_compare_match1;
sljit_s32 tmp1_reg_ind = sljit_get_register_index(SLJIT_GP_REGISTER, TMP1);
sljit_s32 data_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR0);
sljit_s32 cmp1_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR1);
sljit_s32 cmp2_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR2);
sljit_s32 tmp_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR3);
sljit_s32 data_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR0);
sljit_s32 cmp1_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR1);
sljit_s32 cmp2_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR2);
sljit_s32 tmp_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR3);
sljit_u32 bit = 0;
int i;
@ -406,29 +406,29 @@ OP1(SLJIT_MOV, TMP3, 0, STR_PTR, 0);
/* First part (unaligned start) */
value = SLJIT_SIMD_REG_128 | SLJIT_SIMD_ELEM_32 | SLJIT_SIMD_LANE_ZERO;
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR1, 0, SLJIT_IMM, character_to_int32(char1 | bit));
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR1, 0, SLJIT_IMM, character_to_int32(char1 | bit));
if (char1 != char2)
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR2, 0, SLJIT_IMM, character_to_int32(bit != 0 ? bit : char2));
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR2, 0, SLJIT_IMM, character_to_int32(bit != 0 ? bit : char2));
OP1(SLJIT_MOV, STR_PTR, 0, TMP2, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR1, SLJIT_FR1, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR1, SLJIT_VR1, 0);
if (char1 != char2)
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR2, SLJIT_FR2, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR2, SLJIT_VR2, 0);
value = (reg_type == SLJIT_SIMD_REG_256) ? 0x1f : 0xf;
OP2(SLJIT_AND, STR_PTR, 0, STR_PTR, 0, SLJIT_IMM, ~value);
OP2(SLJIT_AND, TMP2, 0, TMP2, 0, SLJIT_IMM, value);
value = (reg_type == SLJIT_SIMD_REG_256) ? SLJIT_SIMD_MEM_ALIGNED_256 : SLJIT_SIMD_MEM_ALIGNED_128;
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_FR0, SLJIT_MEM1(STR_PTR), 0);
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_VR0, SLJIT_MEM1(STR_PTR), 0);
for (i = 0; i < 4; i++)
fast_forward_char_pair_sse2_compare(compiler, compare_type, reg_type, i, data_ind, cmp1_ind, cmp2_ind, tmp_ind);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_FR0, TMP1, 0);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_VR0, TMP1, 0);
OP2(SLJIT_ADD, STR_PTR, 0, STR_PTR, 0, TMP2, 0);
OP2(SLJIT_LSHR, TMP1, 0, TMP1, 0, TMP2, 0);
@ -445,12 +445,12 @@ OP2(SLJIT_ADD, STR_PTR, 0, STR_PTR, 0, SLJIT_IMM, value);
add_jump(compiler, &not_found, CMP(SLJIT_GREATER_EQUAL, STR_PTR, 0, STR_END, 0));
value = (reg_type == SLJIT_SIMD_REG_256) ? SLJIT_SIMD_MEM_ALIGNED_256 : SLJIT_SIMD_MEM_ALIGNED_128;
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_FR0, SLJIT_MEM1(STR_PTR), 0);
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_VR0, SLJIT_MEM1(STR_PTR), 0);
for (i = 0; i < 4; i++)
fast_forward_char_pair_sse2_compare(compiler, compare_type, reg_type, i, data_ind, cmp1_ind, cmp2_ind, tmp_ind);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_FR0, TMP1, 0);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_VR0, TMP1, 0);
CMPTO(SLJIT_ZERO, TMP1, 0, SLJIT_IMM, 0, start);
JUMPHERE(quit);
@ -488,14 +488,14 @@ sljit_u32 bit1 = 0;
sljit_u32 bit2 = 0;
sljit_u32 diff = IN_UCHARS(offs1 - offs2);
sljit_s32 tmp1_reg_ind = sljit_get_register_index(SLJIT_GP_REGISTER, TMP1);
sljit_s32 data1_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR0);
sljit_s32 data2_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR1);
sljit_s32 cmp1a_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR2);
sljit_s32 cmp2a_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR3);
sljit_s32 cmp1b_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR4);
sljit_s32 cmp2b_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR5);
sljit_s32 tmp1_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_FR6);
sljit_s32 tmp2_ind = sljit_get_register_index(SLJIT_FLOAT_REGISTER, SLJIT_TMP_FR0);
sljit_s32 data1_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR0);
sljit_s32 data2_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR1);
sljit_s32 cmp1a_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR2);
sljit_s32 cmp2a_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR3);
sljit_s32 cmp1b_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR4);
sljit_s32 cmp2b_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR5);
sljit_s32 tmp1_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_VR6);
sljit_s32 tmp2_ind = sljit_get_register_index(SLJIT_SIMD_REG_128, SLJIT_TMP_DEST_VREG);
struct sljit_label *start;
#if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH != 32
struct sljit_label *restart;
@ -541,10 +541,10 @@ else
}
value = SLJIT_SIMD_REG_128 | SLJIT_SIMD_ELEM_32 | SLJIT_SIMD_LANE_ZERO;
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR2, 0, TMP1, 0);
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR2, 0, TMP1, 0);
if (char1a != char1b)
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR4, 0, TMP2, 0);
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR4, 0, TMP2, 0);
if (char2a == char2b)
OP1(SLJIT_MOV, TMP1, 0, SLJIT_IMM, character_to_int32(char2a));
@ -566,18 +566,18 @@ else
}
}
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR3, 0, TMP1, 0);
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR3, 0, TMP1, 0);
if (char2a != char2b)
sljit_emit_simd_lane_mov(compiler, value, SLJIT_FR5, 0, TMP2, 0);
sljit_emit_simd_lane_mov(compiler, value, SLJIT_VR5, 0, TMP2, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR2, SLJIT_FR2, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR2, SLJIT_VR2, 0);
if (char1a != char1b)
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR4, SLJIT_FR4, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR4, SLJIT_VR4, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR3, SLJIT_FR3, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR3, SLJIT_VR3, 0);
if (char2a != char2b)
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_FR5, SLJIT_FR5, 0);
sljit_emit_simd_lane_replicate(compiler, reg_type | SLJIT_SIMD_ELEM_32, SLJIT_VR5, SLJIT_VR5, 0);
#if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH != 32
restart = LABEL();
@ -589,11 +589,11 @@ value = (reg_type == SLJIT_SIMD_REG_256) ? ~0x1f : ~0xf;
OP2(SLJIT_AND, STR_PTR, 0, STR_PTR, 0, SLJIT_IMM, value);
value = (reg_type == SLJIT_SIMD_REG_256) ? SLJIT_SIMD_MEM_ALIGNED_256 : SLJIT_SIMD_MEM_ALIGNED_128;
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_FR0, SLJIT_MEM1(STR_PTR), 0);
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_VR0, SLJIT_MEM1(STR_PTR), 0);
jump[0] = CMP(SLJIT_GREATER_EQUAL, TMP1, 0, STR_PTR, 0);
sljit_emit_simd_mov(compiler, reg_type, SLJIT_FR1, SLJIT_MEM1(STR_PTR), -(sljit_sw)diff);
sljit_emit_simd_mov(compiler, reg_type, SLJIT_VR1, SLJIT_MEM1(STR_PTR), -(sljit_sw)diff);
jump[1] = JUMP(SLJIT_JUMP);
JUMPHERE(jump[0]);
@ -668,8 +668,8 @@ for (i = 0; i < 4; i++)
fast_forward_char_pair_sse2_compare(compiler, compare1_type, reg_type, i, data1_ind, cmp1a_ind, cmp1b_ind, tmp1_ind);
}
sljit_emit_simd_op2(compiler, SLJIT_SIMD_OP2_AND | reg_type, SLJIT_FR0, SLJIT_FR0, SLJIT_FR1);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_FR0, TMP1, 0);
sljit_emit_simd_op2(compiler, SLJIT_SIMD_OP2_AND | reg_type, SLJIT_VR0, SLJIT_VR0, SLJIT_VR1, 0);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_VR0, TMP1, 0);
/* Ignore matches before the first STR_PTR. */
OP2(SLJIT_ADD, STR_PTR, 0, STR_PTR, 0, TMP2, 0);
@ -687,8 +687,8 @@ OP2(SLJIT_ADD, STR_PTR, 0, STR_PTR, 0, SLJIT_IMM, value);
add_jump(compiler, &common->failed_match, CMP(SLJIT_GREATER_EQUAL, STR_PTR, 0, STR_END, 0));
value = (reg_type == SLJIT_SIMD_REG_256) ? SLJIT_SIMD_MEM_ALIGNED_256 : SLJIT_SIMD_MEM_ALIGNED_128;
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_FR0, SLJIT_MEM1(STR_PTR), 0);
sljit_emit_simd_mov(compiler, reg_type, SLJIT_FR1, SLJIT_MEM1(STR_PTR), -(sljit_sw)diff);
sljit_emit_simd_mov(compiler, reg_type | value, SLJIT_VR0, SLJIT_MEM1(STR_PTR), 0);
sljit_emit_simd_mov(compiler, reg_type, SLJIT_VR1, SLJIT_MEM1(STR_PTR), -(sljit_sw)diff);
for (i = 0; i < 4; i++)
{
@ -696,8 +696,8 @@ for (i = 0; i < 4; i++)
fast_forward_char_pair_sse2_compare(compiler, compare2_type, reg_type, i, data2_ind, cmp2a_ind, cmp2b_ind, tmp1_ind);
}
sljit_emit_simd_op2(compiler, SLJIT_SIMD_OP2_AND | reg_type, SLJIT_FR0, SLJIT_FR0, SLJIT_FR1);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_FR0, TMP1, 0);
sljit_emit_simd_op2(compiler, SLJIT_SIMD_OP2_AND | reg_type, SLJIT_VR0, SLJIT_VR0, SLJIT_VR1, 0);
sljit_emit_simd_sign(compiler, SLJIT_SIMD_STORE | reg_type | SLJIT_SIMD_ELEM_8, SLJIT_VR0, TMP1, 0);
CMPTO(SLJIT_ZERO, TMP1, 0, SLJIT_IMM, 0, start);
@ -843,12 +843,13 @@ DEFINE_COMPILER;
int_char ic;
struct sljit_jump *partial_quit, *quit;
/* Save temporary registers. */
OP1(SLJIT_MOV, SLJIT_MEM1(SLJIT_SP), LOCALS0, STR_PTR, 0);
OP1(SLJIT_MOV, SLJIT_MEM1(SLJIT_SP), LOCALS1, TMP3, 0);
SLJIT_ASSERT(common->locals_size >= 2 * (int)sizeof(sljit_sw));
OP1(SLJIT_MOV, SLJIT_MEM1(SLJIT_SP), LOCAL0, STR_PTR, 0);
OP1(SLJIT_MOV, SLJIT_MEM1(SLJIT_SP), LOCAL1, TMP3, 0);
/* Prepare function arguments */
OP1(SLJIT_MOV, SLJIT_R0, 0, STR_END, 0);
GET_LOCAL_BASE(SLJIT_R1, 0, LOCALS0);
GET_LOCAL_BASE(SLJIT_R1, 0, LOCAL0);
OP1(SLJIT_MOV, SLJIT_R2, 0, SLJIT_IMM, offset);
if (char1 == char2)
@ -910,8 +911,8 @@ else
}
}
/* Restore registers. */
OP1(SLJIT_MOV, STR_PTR, 0, SLJIT_MEM1(SLJIT_SP), LOCALS0);
OP1(SLJIT_MOV, TMP3, 0, SLJIT_MEM1(SLJIT_SP), LOCALS1);
OP1(SLJIT_MOV, STR_PTR, 0, SLJIT_MEM1(SLJIT_SP), LOCAL0);
OP1(SLJIT_MOV, TMP3, 0, SLJIT_MEM1(SLJIT_SP), LOCAL1);
/* Check return value. */
partial_quit = CMP(SLJIT_EQUAL, SLJIT_RETURN_REG, 0, SLJIT_IMM, 0);
@ -1038,7 +1039,7 @@ SLJIT_ASSERT(diff <= IN_UCHARS(max_fast_forward_char_pair_offset()));
SLJIT_ASSERT(compiler->scratches == 5);
/* Save temporary register STR_PTR. */
OP1(SLJIT_MOV, SLJIT_MEM1(SLJIT_SP), LOCALS0, STR_PTR, 0);
OP1(SLJIT_MOV, SLJIT_MEM1(SLJIT_SP), LOCAL0, STR_PTR, 0);
/* Prepare arguments for the function call. */
if (common->match_end_ptr == 0)
@ -1052,7 +1053,7 @@ else
SELECT(SLJIT_LESS, SLJIT_R0, STR_END, 0, SLJIT_R0);
}
GET_LOCAL_BASE(SLJIT_R1, 0, LOCALS0);
GET_LOCAL_BASE(SLJIT_R1, 0, LOCAL0);
OP1(SLJIT_MOV_S32, SLJIT_R2, 0, SLJIT_IMM, offs1);
OP1(SLJIT_MOV_S32, SLJIT_R3, 0, SLJIT_IMM, offs2);
ic.c.c1 = char1a;
@ -1093,7 +1094,7 @@ if (diff == 1) {
}
/* Restore STR_PTR register. */
OP1(SLJIT_MOV, STR_PTR, 0, SLJIT_MEM1(SLJIT_SP), LOCALS0);
OP1(SLJIT_MOV, STR_PTR, 0, SLJIT_MEM1(SLJIT_SP), LOCAL0);
/* Check return value. */
partial_quit = CMP(SLJIT_EQUAL, SLJIT_RETURN_REG, 0, SLJIT_IMM, 0);


@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2020 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -155,9 +155,9 @@ return yield;
PCRE2_EXP_DEFN void PCRE2_CALL_CONVENTION
pcre2_maketables_free(pcre2_general_context *gcontext, const uint8_t *tables)
{
if (gcontext)
if (gcontext != NULL)
gcontext->memctl.free((void *)tables, gcontext->memctl.memory_data);
else
else
free((void *)tables);
}
#endif
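As the reworked NULL test makes explicit, tables created without a general context are released with plain free(), so the create/free calls must be paired consistently. A hedged usage sketch follows; the 8-bit code unit width and the error handling are illustrative choices, not requirements of the patch.

/* Illustrative sketch -- not part of the patch. */
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>

int main(void)
{
const uint8_t *tables = pcre2_maketables(NULL);   /* NULL context -> malloc() */
if (tables == NULL)
  {
  fprintf(stderr, "pcre2_maketables() failed\n");
  return 1;
  }
/* The tables could now be attached to a compile context with
   pcre2_set_character_tables() before compiling a pattern. */
pcre2_maketables_free(NULL, tables);              /* NULL context -> free() */
return 0;
}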

File diff suppressed because it is too large

@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2022 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -77,14 +77,16 @@ return yield;
* Create a match data block using pattern data *
*************************************************/
/* If no context is supplied, use the memory allocator from the code. */
/* If no context is supplied, use the memory allocator from the code. This code
assumes that a general context contains nothing other than a memory allocator.
If that ever changes, this code will need fixing. */
PCRE2_EXP_DEFN pcre2_match_data * PCRE2_CALL_CONVENTION
pcre2_match_data_create_from_pattern(const pcre2_code *code,
pcre2_general_context *gcontext)
{
if (gcontext == NULL) gcontext = (pcre2_general_context *)code;
return pcre2_match_data_create(((pcre2_real_code *)code)->top_bracket + 1,
return pcre2_match_data_create(((const pcre2_real_code *)code)->top_bracket + 1,
gcontext);
}
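Because the ovector size is taken from top_bracket, a match data block created this way always has room for every capturing group in the pattern. A hedged usage sketch follows; the pattern, subject and minimal error handling are illustrative.

/* Illustrative sketch -- not part of the patch. */
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
PCRE2_SPTR pattern = (PCRE2_SPTR)"(\\w+)=(\\w+)";
PCRE2_SPTR subject = (PCRE2_SPTR)"key=value";
int errorcode = 0;
PCRE2_SIZE erroroffset = 0;

pcre2_code *re = pcre2_compile(pattern, PCRE2_ZERO_TERMINATED, 0,
                               &errorcode, &erroroffset, NULL);
if (re == NULL) return 1;

/* Sized from the compiled pattern: top_bracket + 1 ovector pairs. */
pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
if (md == NULL) { pcre2_code_free(re); return 1; }

int rc = pcre2_match(re, subject, strlen((const char *)subject), 0, 0, md, NULL);
printf("pcre2_match() returned %d\n", rc);

pcre2_match_data_free(md);
pcre2_code_free(re);
return 0;
}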


@ -117,4 +117,4 @@ return 1;
}
#endif /* SUPPORT_UNICODE */
/* End of pcre_ord2utf.c */
/* End of pcre2_ord2utf.c */


@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2018 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -64,7 +64,7 @@ Returns: 0 when data returned
PCRE2_EXP_DEFN int PCRE2_CALL_CONVENTION
pcre2_pattern_info(const pcre2_code *code, uint32_t what, void *where)
{
const pcre2_real_code *re = (pcre2_real_code *)code;
const pcre2_real_code *re = (const pcre2_real_code *)code;
if (where == NULL) /* Requests field length */
{
@ -230,7 +230,8 @@ switch(what)
break;
case PCRE2_INFO_NAMETABLE:
*((PCRE2_SPTR *)where) = (PCRE2_SPTR)((char *)re + sizeof(pcre2_real_code));
*((PCRE2_SPTR *)where) = (PCRE2_SPTR)((const char *)re +
sizeof(pcre2_real_code));
break;
case PCRE2_INFO_NEWLINE:
@ -268,7 +269,7 @@ PCRE2_EXP_DEFN int PCRE2_CALL_CONVENTION
pcre2_callout_enumerate(const pcre2_code *code,
int (*callback)(pcre2_callout_enumerate_block *, void *), void *callout_data)
{
pcre2_real_code *re = (pcre2_real_code *)code;
const pcre2_real_code *re = (const pcre2_real_code *)code;
pcre2_callout_enumerate_block cb;
PCRE2_SPTR cc;
#ifdef SUPPORT_UNICODE
@ -291,7 +292,7 @@ if (re->magic_number != MAGIC_NUMBER) return PCRE2_ERROR_BADMAGIC;
if ((re->flags & (PCRE2_CODE_UNIT_WIDTH/8)) == 0) return PCRE2_ERROR_BADMODE;
cb.version = 0;
cc = (PCRE2_SPTR)((uint8_t *)re + sizeof(pcre2_real_code))
cc = (PCRE2_SPTR)((const uint8_t *)re + sizeof(pcre2_real_code))
+ re->name_count * re->name_entry_size;
while (TRUE)
@ -383,8 +384,9 @@ while (TRUE)
#endif
break;
#if defined SUPPORT_UNICODE || PCRE2_CODE_UNIT_WIDTH != 8
#ifdef SUPPORT_WIDE_CHARS
case OP_XCLASS:
case OP_ECLASS:
cc += GET(cc, 1);
break;
#endif


@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2020 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -127,7 +127,7 @@ dst_bytes += TABLES_LENGTH;
for (i = 0; i < number_of_codes; i++)
{
re = (const pcre2_real_code *)(codes[i]);
(void)memcpy(dst_bytes, (char *)re, re->blocksize);
(void)memcpy(dst_bytes, (const char *)re, re->blocksize);
/* Certain fields in the compiled code block are re-set during
deserialization. In order to ensure that the serialized data stream is always


@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2023 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -114,7 +114,7 @@ uint32_t once_fudge = 0;
BOOL had_recurse = FALSE;
BOOL dupcapused = (re->flags & PCRE2_DUPCAPUSED) != 0;
PCRE2_SPTR nextbranch = code + GET(code, 1);
PCRE2_UCHAR *cc = (PCRE2_UCHAR *)code + 1 + LINK_SIZE;
PCRE2_SPTR cc = code + 1 + LINK_SIZE;
recurse_check this_recurse;
/* If this is a "could be empty" group, its minimum length is 0. */
@ -136,12 +136,13 @@ passes 16-bits, reset to that value and skip the rest of the branch. */
for (;;)
{
int d, min, recno;
PCRE2_UCHAR op, *cs, *ce;
PCRE2_UCHAR op;
PCRE2_SPTR cs, ce;
if (branchlength >= UINT16_MAX)
{
branchlength = UINT16_MAX;
cc = (PCRE2_UCHAR *)nextbranch;
cc = nextbranch;
}
op = *cc;
@ -249,6 +250,7 @@ for (;;)
case OP_ASSERTBACK:
case OP_ASSERTBACK_NOT:
case OP_ASSERT_NA:
case OP_ASSERT_SCS:
case OP_ASSERTBACK_NA:
do cc += GET(cc, 1); while (*cc == OP_ALT);
/* Fall through */
@ -417,15 +419,14 @@ for (;;)
case OP_NCLASS:
#ifdef SUPPORT_WIDE_CHARS
case OP_XCLASS:
case OP_ECLASS:
/* The original code caused an unsigned overflow in 64 bit systems,
so now we use a conditional statement. */
if (op == OP_XCLASS)
if (op == OP_XCLASS || op == OP_ECLASS)
cc += GET(cc, 1);
else
cc += PRIV(OP_lengths)[OP_CLASS];
#else
cc += PRIV(OP_lengths)[OP_CLASS];
#endif
cc += PRIV(OP_lengths)[OP_CLASS];
switch (*cc)
{
@ -479,8 +480,8 @@ for (;;)
if (!dupcapused && (re->overall_options & PCRE2_MATCH_UNSET_BACKREF) == 0)
{
int count = GET2(cc, 1+IMM2_SIZE);
PCRE2_UCHAR *slot =
(PCRE2_UCHAR *)((uint8_t *)re + sizeof(pcre2_real_code)) +
PCRE2_SPTR slot =
(PCRE2_SPTR)((const uint8_t *)re + sizeof(pcre2_real_code)) +
GET2(cc, 1) * re->name_entry_size;
d = INT_MAX;
@ -496,13 +497,12 @@ for (;;)
dd = backref_cache[recno];
else
{
ce = cs = (PCRE2_UCHAR *)PRIV(find_bracket)(startcode, utf, recno);
ce = cs = PRIV(find_bracket)(startcode, utf, recno);
if (cs == NULL) return -2;
do ce += GET(ce, 1); while (*ce == OP_ALT);
dd = 0;
if (!dupcapused ||
(PCRE2_UCHAR *)PRIV(find_bracket)(ce, utf, recno) == NULL)
if (!dupcapused || PRIV(find_bracket)(ce, utf, recno) == NULL)
{
if (cc > cs && cc < ce) /* Simple recursion */
{
@ -539,7 +539,7 @@ for (;;)
}
}
else d = 0;
cc += 1 + 2*IMM2_SIZE;
cc += PRIV(OP_lengths)[*cc];
goto REPEAT_BACK_REFERENCE;
/* Single back reference by number. References by name are converted to by
@ -557,12 +557,11 @@ for (;;)
if ((re->overall_options & PCRE2_MATCH_UNSET_BACKREF) == 0)
{
ce = cs = (PCRE2_UCHAR *)PRIV(find_bracket)(startcode, utf, recno);
ce = cs = PRIV(find_bracket)(startcode, utf, recno);
if (cs == NULL) return -2;
do ce += GET(ce, 1); while (*ce == OP_ALT);
if (!dupcapused ||
(PCRE2_UCHAR *)PRIV(find_bracket)(ce, utf, recno) == NULL)
if (!dupcapused || PRIV(find_bracket)(ce, utf, recno) == NULL)
{
if (cc > cs && cc < ce) /* Simple recursion */
{
@ -593,7 +592,7 @@ for (;;)
backref_cache[0] = recno;
}
cc += 1 + IMM2_SIZE;
cc += PRIV(OP_lengths)[*cc];
/* Handle repeated back references */
@ -643,7 +642,7 @@ for (;;)
pattern contains multiple subpatterns with the same number. */
case OP_RECURSE:
cs = ce = (PCRE2_UCHAR *)startcode + GET(cc, 1);
cs = ce = startcode + GET(cc, 1);
recno = GET2(cs, 1+LINK_SIZE);
if (recno == prev_recurse_recno)
{
@ -755,10 +754,13 @@ for (;;)
new ones get added they are properly considered. */
default:
PCRE2_DEBUG_UNREACHABLE();
return -3;
}
}
/* Control never gets here */
PCRE2_DEBUG_UNREACHABLE(); /* Control should never reach here */
return -3; /* Avoid compiler warnings */
}
@ -919,6 +921,138 @@ if (table_limit != 32) for (c = 24; c < 32; c++) re->start_bitmap[c] = 0xff;
#if defined SUPPORT_UNICODE && PCRE2_CODE_UNIT_WIDTH == 8
/*************************************************
* Set starting bits for a character list. *
*************************************************/
/* This function sets starting bits for a character list. It enumerates
all characters and character ranges in the character list, and sets
the starting bits accordingly.
Arguments:
code pointer to the code
start_bitmap pointer to the starting bitmap
Returns: nothing
*/
static void
study_char_list(PCRE2_SPTR code, uint8_t *start_bitmap,
const uint8_t *char_lists_end)
{
uint32_t type, list_ind;
uint32_t char_list_add = XCL_CHAR_LIST_LOW_16_ADD;
uint32_t range_start = ~(uint32_t)0, range_end = 0;
const uint8_t *next_char;
PCRE2_UCHAR start_buffer[6], end_buffer[6];
PCRE2_UCHAR start, end;
/* Only needed in 8-bit mode at the moment. */
type = (uint32_t)(code[0] << 8) | code[1];
code += 2;
/* Align characters. */
next_char = char_lists_end - (GET(code, 0) << 1);
type &= XCL_TYPE_MASK;
list_ind = 0;
if ((type & XCL_BEGIN_WITH_RANGE) != 0)
range_start = XCL_CHAR_LIST_LOW_16_START;
while (type > 0)
{
uint32_t item_count = type & XCL_ITEM_COUNT_MASK;
if (item_count == XCL_ITEM_COUNT_MASK)
{
if (list_ind <= 1)
{
item_count = *(const uint16_t*)next_char;
next_char += 2;
}
else
{
item_count = *(const uint32_t*)next_char;
next_char += 4;
}
}
while (item_count > 0)
{
if (list_ind <= 1)
{
range_end = *(const uint16_t*)next_char;
next_char += 2;
}
else
{
range_end = *(const uint32_t*)next_char;
next_char += 4;
}
if ((range_end & XCL_CHAR_END) != 0)
{
range_end = char_list_add + (range_end >> XCL_CHAR_SHIFT);
PRIV(ord2utf)(range_end, end_buffer);
end = end_buffer[0];
if (range_start < range_end)
{
PRIV(ord2utf)(range_start, start_buffer);
for (start = start_buffer[0]; start <= end; start++)
start_bitmap[start / 8] |= (1u << (start & 7));
}
else
start_bitmap[end / 8] |= (1u << (end & 7));
range_start = ~(uint32_t)0;
}
else
range_start = char_list_add + (range_end >> XCL_CHAR_SHIFT);
item_count--;
}
list_ind++;
type >>= XCL_TYPE_BIT_LEN;
if (range_start == ~(uint32_t)0)
{
if ((type & XCL_BEGIN_WITH_RANGE) != 0)
{
/* In 8 bit mode XCL_CHAR_LIST_HIGH_32_START is not possible. */
if (list_ind == 1) range_start = XCL_CHAR_LIST_HIGH_16_START;
else range_start = XCL_CHAR_LIST_LOW_32_START;
}
}
else if ((type & XCL_BEGIN_WITH_RANGE) == 0)
{
PRIV(ord2utf)(range_start, start_buffer);
/* In 8 bit mode XCL_CHAR_LIST_LOW_32_END and
XCL_CHAR_LIST_HIGH_32_END are not possible. */
if (list_ind == 1) range_end = XCL_CHAR_LIST_LOW_16_END;
else range_end = XCL_CHAR_LIST_HIGH_16_END;
PRIV(ord2utf)(range_end, end_buffer);
end = end_buffer[0];
for (start = start_buffer[0]; start <= end; start++)
start_bitmap[start / 8] |= (1u << (start & 7));
range_start = ~(uint32_t)0;
}
/* In 8 bit mode XCL_CHAR_LIST_HIGH_32_ADD is not possible. */
if (list_ind == 1) char_list_add = XCL_CHAR_LIST_HIGH_16_ADD;
else char_list_add = XCL_CHAR_LIST_LOW_32_ADD;
}
}
#endif
/*************************************************
* Create bitmap of starting code units *
*************************************************/
@ -980,7 +1114,7 @@ do
{
int rc;
PCRE2_SPTR ncode;
uint8_t *classmap = NULL;
const uint8_t *classmap = NULL;
#ifdef SUPPORT_WIDE_CHARS
PCRE2_UCHAR xclassflags;
#endif
@ -1134,6 +1268,7 @@ do
case OP_ASSERTBACK_NOT:
case OP_ASSERT_NA:
case OP_ASSERTBACK_NA:
case OP_ASSERT_SCS:
ncode += GET(ncode, 1);
while (*ncode == OP_ALT) ncode += GET(ncode, 1);
ncode += 1 + LINK_SIZE;
@ -1252,12 +1387,14 @@ do
tcode += GET(tcode, 1 + 2*LINK_SIZE);
break;
/* Skip over lookbehind and negative lookahead assertions */
/* Skip over lookbehind, negative lookahead, and scan substring
assertions */
case OP_ASSERT_NOT:
case OP_ASSERTBACK:
case OP_ASSERTBACK_NOT:
case OP_ASSERTBACK_NA:
case OP_ASSERT_SCS:
do tcode += GET(tcode, 1); while (*tcode == OP_ALT);
tcode += 1 + LINK_SIZE;
break;
@ -1578,6 +1715,13 @@ do
tcode += 2;
break;
/* Set-based ECLASS: treat it the same as a "complex" XCLASS; give up. */
#ifdef SUPPORT_WIDE_CHARS
case OP_ECLASS:
return SSB_FAIL;
#endif
/* Extended class: if there are any property checks, or if this is a
negative XCLASS without a map, give up. If there are no property checks,
there must be wide characters on the XCLASS list, because otherwise an
@ -1596,7 +1740,7 @@ do
map pointer if there is one, and fall through. */
classmap = ((xclassflags & XCL_MAP) == 0)? NULL :
(uint8_t *)(tcode + 1 + LINK_SIZE + 1);
(const uint8_t *)(tcode + 1 + LINK_SIZE + 1);
/* In UTF-8 mode, scan the character list and set bits for leading bytes,
then jump to handle the map. */
@ -1608,6 +1752,13 @@ do
PCRE2_SPTR p = tcode + 1 + LINK_SIZE + 1 + ((classmap == NULL)? 0:32);
tcode += GET(tcode, 1);
if (*p >= XCL_LIST)
{
study_char_list(p, re->start_bitmap,
((const uint8_t *)re + re->code_start));
goto HANDLE_CLASSMAP;
}
for (;;) switch (*p++)
{
case XCL_SINGLE:
@ -1629,6 +1780,7 @@ do
goto HANDLE_CLASSMAP;
default:
PCRE2_DEBUG_UNREACHABLE();
return SSB_UNKNOWN; /* Internal error, should not occur */
}
}
@ -1665,7 +1817,7 @@ do
case OP_CLASS:
if (*tcode == OP_XCLASS) tcode += GET(tcode, 1); else
{
classmap = (uint8_t *)(++tcode);
classmap = (const uint8_t *)(++tcode);
tcode += 32 / sizeof(PCRE2_UCHAR);
}
@ -1768,8 +1920,7 @@ BOOL ucp = (re->overall_options & PCRE2_UCP) != 0;
/* Find start of compiled code */
code = (PCRE2_UCHAR *)((uint8_t *)re + sizeof(pcre2_real_code)) +
re->name_entry_size * re->name_count;
code = (PCRE2_UCHAR *)((uint8_t *)re + re->code_start);
/* For a pattern that has a first code unit, or a multiline pattern that
matches only at "line start", there is no point in seeking a list of starting
@ -1779,7 +1930,11 @@ if ((re->flags & (PCRE2_FIRSTSET|PCRE2_STARTLINE)) == 0)
{
int depth = 0;
int rc = set_start_bits(re, code, utf, ucp, &depth);
if (rc == SSB_UNKNOWN) return 1;
if (rc == SSB_UNKNOWN)
{
PCRE2_DEBUG_UNREACHABLE();
return 1;
}
/* If a list of starting code units was set up, scan the list to see if only
one or two were listed. Having only one listed is rare because usually a
@ -1852,21 +2007,18 @@ if ((re->flags & (PCRE2_FIRSTSET|PCRE2_STARTLINE)) == 0)
}
}
/* Replace the start code unit bits with a first code unit, but only if it
is not the same as a required later code unit. This is because a search for
a required code unit starts after an explicit first code unit, but at a
code unit found from the bitmap. Patterns such as /a*a/ don't work
if both the start unit and required unit are the same. */
/* Replace the start code unit bits with a first code unit. If it is the
same as a required later code unit, then clear the required later code
unit. This is because a search for a required code unit starts after an
explicit first code unit, but at a code unit found from the bitmap.
Patterns such as /a*a/ don't work if both the start unit and required
unit are the same. */
if (a >= 0 &&
(
(re->flags & PCRE2_LASTSET) == 0 ||
(
re->last_codeunit != (uint32_t)a &&
(b < 0 || re->last_codeunit != (uint32_t)b)
)
))
{
if (a >= 0)
  {
  if ((re->flags & PCRE2_LASTSET) &&
      (re->last_codeunit == (uint32_t)a ||
       (b >= 0 && re->last_codeunit == (uint32_t)b)))
    {
    re->flags &= ~(PCRE2_LASTSET | PCRE2_LASTCASELESS);
    re->last_codeunit = 0;
    }
re->first_codeunit = a;
flags = PCRE2_FIRSTSET;
if (b >= 0) flags |= PCRE2_FIRSTCASELESS;
@ -1898,9 +2050,11 @@ if ((re->flags & (PCRE2_MATCH_EMPTY|PCRE2_HASACCEPT)) == 0 &&
break; /* Leave minlength unchanged (will be zero) */
case -2:
PCRE2_DEBUG_UNREACHABLE();
return 2; /* missing capturing bracket */
case -3:
PCRE2_DEBUG_UNREACHABLE();
return 3; /* unrecognized opcode */
default:

File diff suppressed because it is too large

@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2023 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -486,7 +486,7 @@ pcre2_substring_nametable_scan(const pcre2_code *code, PCRE2_SPTR stringname,
uint16_t bot = 0;
uint16_t top = code->name_count;
uint16_t entrysize = code->name_entry_size;
PCRE2_SPTR nametable = (PCRE2_SPTR)((char *)code + sizeof(pcre2_real_code));
PCRE2_SPTR nametable = (PCRE2_SPTR)((const char *)code + sizeof(pcre2_real_code));
while (top > bot)
{

File diff suppressed because it is too large

@ -132,13 +132,18 @@ enum {
ucp_Hex_Digit,
ucp_IDS_Binary_Operator,
ucp_IDS_Trinary_Operator,
ucp_IDS_Unary_Operator,
ucp_ID_Compat_Math_Continue,
ucp_ID_Compat_Math_Start,
ucp_ID_Continue,
ucp_ID_Start,
ucp_Ideographic,
ucp_InCB,
ucp_Join_Control,
ucp_Logical_Order_Exception,
ucp_Lowercase,
ucp_Math,
ucp_Modifier_Combining_Mark,
ucp_Noncharacter_Code_Point,
ucp_Pattern_Syntax,
ucp_Pattern_White_Space,
@ -219,6 +224,8 @@ enum {
ucp_Latin,
ucp_Greek,
ucp_Cyrillic,
ucp_Armenian,
ucp_Hebrew,
ucp_Arabic,
ucp_Syriac,
ucp_Thaana,
@ -232,15 +239,21 @@ enum {
ucp_Kannada,
ucp_Malayalam,
ucp_Sinhala,
ucp_Thai,
ucp_Tibetan,
ucp_Myanmar,
ucp_Georgian,
ucp_Hangul,
ucp_Ethiopic,
ucp_Cherokee,
ucp_Runic,
ucp_Mongolian,
ucp_Hiragana,
ucp_Katakana,
ucp_Bopomofo,
ucp_Han,
ucp_Yi,
ucp_Gothic,
ucp_Tagalog,
ucp_Hanunoo,
ucp_Buhid,
@ -248,21 +261,33 @@ enum {
ucp_Limbu,
ucp_Tai_Le,
ucp_Linear_B,
ucp_Shavian,
ucp_Cypriot,
ucp_Buginese,
ucp_Coptic,
ucp_Glagolitic,
ucp_Tifinagh,
ucp_Syloti_Nagri,
ucp_Phags_Pa,
ucp_Nko,
ucp_Kayah_Li,
ucp_Lycian,
ucp_Carian,
ucp_Lydian,
ucp_Avestan,
ucp_Samaritan,
ucp_Lisu,
ucp_Javanese,
ucp_Old_Turkic,
ucp_Kaithi,
ucp_Mandaic,
ucp_Chakma,
ucp_Meroitic_Hieroglyphs,
ucp_Sharada,
ucp_Takri,
ucp_Caucasian_Albanian,
ucp_Duployan,
ucp_Elbasan,
ucp_Grantha,
ucp_Khojki,
ucp_Linear_A,
@ -274,7 +299,10 @@ enum {
ucp_Khudawadi,
ucp_Tirhuta,
ucp_Multani,
ucp_Old_Hungarian,
ucp_Adlam,
ucp_Osage,
ucp_Tangut,
ucp_Masaram_Gondi,
ucp_Dogra,
ucp_Gunjala_Gondi,
@ -284,31 +312,28 @@ enum {
ucp_Yezidi,
ucp_Cypro_Minoan,
ucp_Old_Uyghur,
ucp_Toto,
ucp_Garay,
ucp_Gurung_Khema,
ucp_Ol_Onal,
ucp_Sunuwar,
ucp_Todhri,
ucp_Tulu_Tigalari,
/* Scripts which have no characters in other scripts. */
ucp_Unknown,
ucp_Common,
ucp_Armenian,
ucp_Hebrew,
ucp_Thai,
ucp_Lao,
ucp_Tibetan,
ucp_Ethiopic,
ucp_Cherokee,
ucp_Canadian_Aboriginal,
ucp_Ogham,
ucp_Runic,
ucp_Khmer,
ucp_Old_Italic,
ucp_Gothic,
ucp_Deseret,
ucp_Inherited,
ucp_Ugaritic,
ucp_Shavian,
ucp_Osmanya,
ucp_Braille,
ucp_New_Tai_Lue,
ucp_Tifinagh,
ucp_Old_Persian,
ucp_Kharoshthi,
ucp_Balinese,
@ -320,32 +345,22 @@ enum {
ucp_Vai,
ucp_Saurashtra,
ucp_Rejang,
ucp_Lycian,
ucp_Carian,
ucp_Lydian,
ucp_Cham,
ucp_Tai_Tham,
ucp_Tai_Viet,
ucp_Avestan,
ucp_Egyptian_Hieroglyphs,
ucp_Samaritan,
ucp_Lisu,
ucp_Bamum,
ucp_Meetei_Mayek,
ucp_Imperial_Aramaic,
ucp_Old_South_Arabian,
ucp_Inscriptional_Parthian,
ucp_Inscriptional_Pahlavi,
ucp_Old_Turkic,
ucp_Batak,
ucp_Brahmi,
ucp_Meroitic_Cursive,
ucp_Meroitic_Hieroglyphs,
ucp_Miao,
ucp_Sora_Sompeng,
ucp_Caucasian_Albanian,
ucp_Bassa_Vah,
ucp_Elbasan,
ucp_Pahawh_Hmong,
ucp_Mende_Kikakui,
ucp_Mro,
@ -358,13 +373,10 @@ enum {
ucp_Ahom,
ucp_Anatolian_Hieroglyphs,
ucp_Hatran,
ucp_Old_Hungarian,
ucp_SignWriting,
ucp_Bhaiksuki,
ucp_Marchen,
ucp_Newa,
ucp_Osage,
ucp_Tangut,
ucp_Nushu,
ucp_Soyombo,
ucp_Zanabazar_Square,
@ -378,10 +390,10 @@ enum {
ucp_Dives_Akuru,
ucp_Khitan_Small_Script,
ucp_Tangsa,
ucp_Toto,
ucp_Vithkuqi,
ucp_Kawi,
ucp_Nag_Mundari,
ucp_Kirat_Rai,
/* This must be last */
ucp_Script_Count
@ -389,7 +401,7 @@ enum {
/* Size of entries in ucd_script_sets[] */
#define ucd_script_sets_item_size 3
#define ucd_script_sets_item_size 4
#endif /* PCRE2_UCP_H_IDEMPOTENT_GUARD */


@ -199,6 +199,8 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_extendedpictographic0 STR_e STR_x STR_t STR_e STR_n STR_d STR_e STR_d STR_p STR_i STR_c STR_t STR_o STR_g STR_r STR_a STR_p STR_h STR_i STR_c "\0"
#define STRING_extender0 STR_e STR_x STR_t STR_e STR_n STR_d STR_e STR_r "\0"
#define STRING_extpict0 STR_e STR_x STR_t STR_p STR_i STR_c STR_t "\0"
#define STRING_gara0 STR_g STR_a STR_r STR_a "\0"
#define STRING_garay0 STR_g STR_a STR_r STR_a STR_y "\0"
#define STRING_geor0 STR_g STR_e STR_o STR_r "\0"
#define STRING_georgian0 STR_g STR_e STR_o STR_r STR_g STR_i STR_a STR_n "\0"
#define STRING_glag0 STR_g STR_l STR_a STR_g "\0"
@ -219,9 +221,11 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_grlink0 STR_g STR_r STR_l STR_i STR_n STR_k "\0"
#define STRING_gujarati0 STR_g STR_u STR_j STR_a STR_r STR_a STR_t STR_i "\0"
#define STRING_gujr0 STR_g STR_u STR_j STR_r "\0"
#define STRING_gukh0 STR_g STR_u STR_k STR_h "\0"
#define STRING_gunjalagondi0 STR_g STR_u STR_n STR_j STR_a STR_l STR_a STR_g STR_o STR_n STR_d STR_i "\0"
#define STRING_gurmukhi0 STR_g STR_u STR_r STR_m STR_u STR_k STR_h STR_i "\0"
#define STRING_guru0 STR_g STR_u STR_r STR_u "\0"
#define STRING_gurungkhema0 STR_g STR_u STR_r STR_u STR_n STR_g STR_k STR_h STR_e STR_m STR_a "\0"
#define STRING_han0 STR_h STR_a STR_n "\0"
#define STRING_hang0 STR_h STR_a STR_n STR_g "\0"
#define STRING_hangul0 STR_h STR_a STR_n STR_g STR_u STR_l "\0"
@ -242,6 +246,8 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_hmnp0 STR_h STR_m STR_n STR_p "\0"
#define STRING_hung0 STR_h STR_u STR_n STR_g "\0"
#define STRING_idc0 STR_i STR_d STR_c "\0"
#define STRING_idcompatmathcontinue0 STR_i STR_d STR_c STR_o STR_m STR_p STR_a STR_t STR_m STR_a STR_t STR_h STR_c STR_o STR_n STR_t STR_i STR_n STR_u STR_e "\0"
#define STRING_idcompatmathstart0 STR_i STR_d STR_c STR_o STR_m STR_p STR_a STR_t STR_m STR_a STR_t STR_h STR_s STR_t STR_a STR_r STR_t "\0"
#define STRING_idcontinue0 STR_i STR_d STR_c STR_o STR_n STR_t STR_i STR_n STR_u STR_e "\0"
#define STRING_ideo0 STR_i STR_d STR_e STR_o "\0"
#define STRING_ideographic0 STR_i STR_d STR_e STR_o STR_g STR_r STR_a STR_p STR_h STR_i STR_c "\0"
@ -251,7 +257,10 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_idst0 STR_i STR_d STR_s STR_t "\0"
#define STRING_idstart0 STR_i STR_d STR_s STR_t STR_a STR_r STR_t "\0"
#define STRING_idstrinaryoperator0 STR_i STR_d STR_s STR_t STR_r STR_i STR_n STR_a STR_r STR_y STR_o STR_p STR_e STR_r STR_a STR_t STR_o STR_r "\0"
#define STRING_idsu0 STR_i STR_d STR_s STR_u "\0"
#define STRING_idsunaryoperator0 STR_i STR_d STR_s STR_u STR_n STR_a STR_r STR_y STR_o STR_p STR_e STR_r STR_a STR_t STR_o STR_r "\0"
#define STRING_imperialaramaic0 STR_i STR_m STR_p STR_e STR_r STR_i STR_a STR_l STR_a STR_r STR_a STR_m STR_a STR_i STR_c "\0"
#define STRING_incb0 STR_i STR_n STR_c STR_b "\0"
#define STRING_inherited0 STR_i STR_n STR_h STR_e STR_r STR_i STR_t STR_e STR_d "\0"
#define STRING_inscriptionalpahlavi0 STR_i STR_n STR_s STR_c STR_r STR_i STR_p STR_t STR_i STR_o STR_n STR_a STR_l STR_p STR_a STR_h STR_l STR_a STR_v STR_i "\0"
#define STRING_inscriptionalparthian0 STR_i STR_n STR_s STR_c STR_r STR_i STR_p STR_t STR_i STR_o STR_n STR_a STR_l STR_p STR_a STR_r STR_t STR_h STR_i STR_a STR_n "\0"
@ -275,8 +284,10 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_khoj0 STR_k STR_h STR_o STR_j "\0"
#define STRING_khojki0 STR_k STR_h STR_o STR_j STR_k STR_i "\0"
#define STRING_khudawadi0 STR_k STR_h STR_u STR_d STR_a STR_w STR_a STR_d STR_i "\0"
#define STRING_kiratrai0 STR_k STR_i STR_r STR_a STR_t STR_r STR_a STR_i "\0"
#define STRING_kits0 STR_k STR_i STR_t STR_s "\0"
#define STRING_knda0 STR_k STR_n STR_d STR_a "\0"
#define STRING_krai0 STR_k STR_r STR_a STR_i "\0"
#define STRING_kthi0 STR_k STR_t STR_h STR_i "\0"
#define STRING_l0 STR_l "\0"
#define STRING_l_AMPERSAND0 STR_l STR_AMPERSAND "\0"
@ -323,6 +334,7 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_masaramgondi0 STR_m STR_a STR_s STR_a STR_r STR_a STR_m STR_g STR_o STR_n STR_d STR_i "\0"
#define STRING_math0 STR_m STR_a STR_t STR_h "\0"
#define STRING_mc0 STR_m STR_c "\0"
#define STRING_mcm0 STR_m STR_c STR_m "\0"
#define STRING_me0 STR_m STR_e "\0"
#define STRING_medefaidrin0 STR_m STR_e STR_d STR_e STR_f STR_a STR_i STR_d STR_r STR_i STR_n "\0"
#define STRING_medf0 STR_m STR_e STR_d STR_f "\0"
@ -337,6 +349,7 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_mlym0 STR_m STR_l STR_y STR_m "\0"
#define STRING_mn0 STR_m STR_n "\0"
#define STRING_modi0 STR_m STR_o STR_d STR_i "\0"
#define STRING_modifiercombiningmark0 STR_m STR_o STR_d STR_i STR_f STR_i STR_e STR_r STR_c STR_o STR_m STR_b STR_i STR_n STR_i STR_n STR_g STR_m STR_a STR_r STR_k "\0"
#define STRING_mong0 STR_m STR_o STR_n STR_g "\0"
#define STRING_mongolian0 STR_m STR_o STR_n STR_g STR_o STR_l STR_i STR_a STR_n "\0"
#define STRING_mro0 STR_m STR_r STR_o "\0"
@ -379,6 +392,8 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_oldsoutharabian0 STR_o STR_l STR_d STR_s STR_o STR_u STR_t STR_h STR_a STR_r STR_a STR_b STR_i STR_a STR_n "\0"
#define STRING_oldturkic0 STR_o STR_l STR_d STR_t STR_u STR_r STR_k STR_i STR_c "\0"
#define STRING_olduyghur0 STR_o STR_l STR_d STR_u STR_y STR_g STR_h STR_u STR_r "\0"
#define STRING_olonal0 STR_o STR_l STR_o STR_n STR_a STR_l "\0"
#define STRING_onao0 STR_o STR_n STR_a STR_o "\0"
#define STRING_oriya0 STR_o STR_r STR_i STR_y STR_a "\0"
#define STRING_orkh0 STR_o STR_r STR_k STR_h "\0"
#define STRING_orya0 STR_o STR_r STR_y STR_a "\0"
@ -463,6 +478,8 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_sterm0 STR_s STR_t STR_e STR_r STR_m "\0"
#define STRING_sund0 STR_s STR_u STR_n STR_d "\0"
#define STRING_sundanese0 STR_s STR_u STR_n STR_d STR_a STR_n STR_e STR_s STR_e "\0"
#define STRING_sunu0 STR_s STR_u STR_n STR_u "\0"
#define STRING_sunuwar0 STR_s STR_u STR_n STR_u STR_w STR_a STR_r "\0"
#define STRING_sylo0 STR_s STR_y STR_l STR_o "\0"
#define STRING_sylotinagri0 STR_s STR_y STR_l STR_o STR_t STR_i STR_n STR_a STR_g STR_r STR_i "\0"
#define STRING_syrc0 STR_s STR_y STR_r STR_c "\0"
@ -498,7 +515,11 @@ the "loose matching" rules that Unicode advises and Perl uses. */
#define STRING_tirh0 STR_t STR_i STR_r STR_h "\0"
#define STRING_tirhuta0 STR_t STR_i STR_r STR_h STR_u STR_t STR_a "\0"
#define STRING_tnsa0 STR_t STR_n STR_s STR_a "\0"
#define STRING_todhri0 STR_t STR_o STR_d STR_h STR_r STR_i "\0"
#define STRING_todr0 STR_t STR_o STR_d STR_r "\0"
#define STRING_toto0 STR_t STR_o STR_t STR_o "\0"
#define STRING_tulutigalari0 STR_t STR_u STR_l STR_u STR_t STR_i STR_g STR_a STR_l STR_a STR_r STR_i "\0"
#define STRING_tutg0 STR_t STR_u STR_t STR_g "\0"
#define STRING_ugar0 STR_u STR_g STR_a STR_r "\0"
#define STRING_ugaritic0 STR_u STR_g STR_a STR_r STR_i STR_t STR_i STR_c "\0"
#define STRING_uideo0 STR_u STR_i STR_d STR_e STR_o "\0"
@ -690,6 +711,8 @@ const char PRIV(utt_names)[] =
STRING_extendedpictographic0
STRING_extender0
STRING_extpict0
STRING_gara0
STRING_garay0
STRING_geor0
STRING_georgian0
STRING_glag0
@ -710,9 +733,11 @@ const char PRIV(utt_names)[] =
STRING_grlink0
STRING_gujarati0
STRING_gujr0
STRING_gukh0
STRING_gunjalagondi0
STRING_gurmukhi0
STRING_guru0
STRING_gurungkhema0
STRING_han0
STRING_hang0
STRING_hangul0
@ -733,6 +758,8 @@ const char PRIV(utt_names)[] =
STRING_hmnp0
STRING_hung0
STRING_idc0
STRING_idcompatmathcontinue0
STRING_idcompatmathstart0
STRING_idcontinue0
STRING_ideo0
STRING_ideographic0
@ -742,7 +769,10 @@ const char PRIV(utt_names)[] =
STRING_idst0
STRING_idstart0
STRING_idstrinaryoperator0
STRING_idsu0
STRING_idsunaryoperator0
STRING_imperialaramaic0
STRING_incb0
STRING_inherited0
STRING_inscriptionalpahlavi0
STRING_inscriptionalparthian0
@ -766,8 +796,10 @@ const char PRIV(utt_names)[] =
STRING_khoj0
STRING_khojki0
STRING_khudawadi0
STRING_kiratrai0
STRING_kits0
STRING_knda0
STRING_krai0
STRING_kthi0
STRING_l0
STRING_l_AMPERSAND0
@ -814,6 +846,7 @@ const char PRIV(utt_names)[] =
STRING_masaramgondi0
STRING_math0
STRING_mc0
STRING_mcm0
STRING_me0
STRING_medefaidrin0
STRING_medf0
@ -828,6 +861,7 @@ const char PRIV(utt_names)[] =
STRING_mlym0
STRING_mn0
STRING_modi0
STRING_modifiercombiningmark0
STRING_mong0
STRING_mongolian0
STRING_mro0
@ -870,6 +904,8 @@ const char PRIV(utt_names)[] =
STRING_oldsoutharabian0
STRING_oldturkic0
STRING_olduyghur0
STRING_olonal0
STRING_onao0
STRING_oriya0
STRING_orkh0
STRING_orya0
@ -954,6 +990,8 @@ const char PRIV(utt_names)[] =
STRING_sterm0
STRING_sund0
STRING_sundanese0
STRING_sunu0
STRING_sunuwar0
STRING_sylo0
STRING_sylotinagri0
STRING_syrc0
@ -989,7 +1027,11 @@ const char PRIV(utt_names)[] =
STRING_tirh0
STRING_tirhuta0
STRING_tnsa0
STRING_todhri0
STRING_todr0
STRING_toto0
STRING_tulutigalari0
STRING_tutg0
STRING_ugar0
STRING_ugaritic0
STRING_uideo0
@ -1037,7 +1079,7 @@ const char PRIV(utt_names)[] =
const ucp_type_table PRIV(utt)[] = {
{ 0, PT_SCX, ucp_Adlam },
{ 6, PT_SCX, ucp_Adlam },
{ 11, PT_SC, ucp_Caucasian_Albanian },
{ 11, PT_SCX, ucp_Caucasian_Albanian },
{ 16, PT_BOOL, ucp_ASCII_Hex_Digit },
{ 21, PT_SC, ucp_Ahom },
{ 26, PT_BOOL, ucp_Alphabetic },
@ -1046,13 +1088,13 @@ const ucp_type_table PRIV(utt)[] = {
{ 64, PT_ANY, 0 },
{ 68, PT_SCX, ucp_Arabic },
{ 73, PT_SCX, ucp_Arabic },
{ 80, PT_SC, ucp_Armenian },
{ 80, PT_SCX, ucp_Armenian },
{ 89, PT_SC, ucp_Imperial_Aramaic },
{ 94, PT_SC, ucp_Armenian },
{ 94, PT_SCX, ucp_Armenian },
{ 99, PT_BOOL, ucp_ASCII },
{ 105, PT_BOOL, ucp_ASCII_Hex_Digit },
{ 119, PT_SC, ucp_Avestan },
{ 127, PT_SC, ucp_Avestan },
{ 119, PT_SCX, ucp_Avestan },
{ 127, PT_SCX, ucp_Avestan },
{ 132, PT_SC, ucp_Balinese },
{ 137, PT_SC, ucp_Balinese },
{ 146, PT_SC, ucp_Bamum },
@ -1106,11 +1148,11 @@ const ucp_type_table PRIV(utt)[] = {
{ 480, PT_SCX, ucp_Chakma },
{ 485, PT_SC, ucp_Canadian_Aboriginal },
{ 504, PT_SC, ucp_Canadian_Aboriginal },
{ 509, PT_SC, ucp_Carian },
{ 514, PT_SC, ucp_Carian },
{ 509, PT_SCX, ucp_Carian },
{ 514, PT_SCX, ucp_Carian },
{ 521, PT_BOOL, ucp_Cased },
{ 527, PT_BOOL, ucp_Case_Ignorable },
{ 541, PT_SC, ucp_Caucasian_Albanian },
{ 541, PT_SCX, ucp_Caucasian_Albanian },
{ 559, PT_PC, ucp_Cc },
{ 562, PT_PC, ucp_Cf },
{ 565, PT_SCX, ucp_Chakma },
@ -1120,8 +1162,8 @@ const ucp_type_table PRIV(utt)[] = {
{ 621, PT_BOOL, ucp_Changes_When_Lowercased },
{ 643, PT_BOOL, ucp_Changes_When_Titlecased },
{ 665, PT_BOOL, ucp_Changes_When_Uppercased },
{ 687, PT_SC, ucp_Cherokee },
{ 692, PT_SC, ucp_Cherokee },
{ 687, PT_SCX, ucp_Cherokee },
{ 692, PT_SCX, ucp_Cherokee },
{ 701, PT_SC, ucp_Chorasmian },
{ 712, PT_SC, ucp_Chorasmian },
{ 717, PT_BOOL, ucp_Case_Ignorable },
@ -1164,8 +1206,8 @@ const ucp_type_table PRIV(utt)[] = {
{ 963, PT_BOOL, ucp_Emoji_Component },
{ 969, PT_SC, ucp_Egyptian_Hieroglyphs },
{ 974, PT_SC, ucp_Egyptian_Hieroglyphs },
{ 994, PT_SC, ucp_Elbasan },
{ 999, PT_SC, ucp_Elbasan },
{ 994, PT_SCX, ucp_Elbasan },
{ 999, PT_SCX, ucp_Elbasan },
{ 1007, PT_SC, ucp_Elymaic },
{ 1012, PT_SC, ucp_Elymaic },
{ 1020, PT_BOOL, ucp_Emoji_Modifier },
@ -1175,355 +1217,376 @@ const ucp_type_table PRIV(utt)[] = {
{ 1060, PT_BOOL, ucp_Emoji_Modifier_Base },
{ 1078, PT_BOOL, ucp_Emoji_Presentation },
{ 1096, PT_BOOL, ucp_Emoji_Presentation },
{ 1102, PT_SC, ucp_Ethiopic },
{ 1107, PT_SC, ucp_Ethiopic },
{ 1102, PT_SCX, ucp_Ethiopic },
{ 1107, PT_SCX, ucp_Ethiopic },
{ 1116, PT_BOOL, ucp_Extender },
{ 1120, PT_BOOL, ucp_Extended_Pictographic },
{ 1141, PT_BOOL, ucp_Extender },
{ 1150, PT_BOOL, ucp_Extended_Pictographic },
{ 1158, PT_SCX, ucp_Georgian },
{ 1163, PT_SCX, ucp_Georgian },
{ 1172, PT_SCX, ucp_Glagolitic },
{ 1177, PT_SCX, ucp_Glagolitic },
{ 1188, PT_SCX, ucp_Gunjala_Gondi },
{ 1193, PT_SCX, ucp_Masaram_Gondi },
{ 1198, PT_SC, ucp_Gothic },
{ 1203, PT_SC, ucp_Gothic },
{ 1210, PT_SCX, ucp_Grantha },
{ 1215, PT_SCX, ucp_Grantha },
{ 1223, PT_BOOL, ucp_Grapheme_Base },
{ 1236, PT_BOOL, ucp_Grapheme_Extend },
{ 1251, PT_BOOL, ucp_Grapheme_Link },
{ 1264, PT_BOOL, ucp_Grapheme_Base },
{ 1271, PT_SCX, ucp_Greek },
{ 1277, PT_SCX, ucp_Greek },
{ 1282, PT_BOOL, ucp_Grapheme_Extend },
{ 1288, PT_BOOL, ucp_Grapheme_Link },
{ 1295, PT_SCX, ucp_Gujarati },
{ 1304, PT_SCX, ucp_Gujarati },
{ 1309, PT_SCX, ucp_Gunjala_Gondi },
{ 1322, PT_SCX, ucp_Gurmukhi },
{ 1331, PT_SCX, ucp_Gurmukhi },
{ 1336, PT_SCX, ucp_Han },
{ 1340, PT_SCX, ucp_Hangul },
{ 1345, PT_SCX, ucp_Hangul },
{ 1352, PT_SCX, ucp_Han },
{ 1357, PT_SCX, ucp_Hanifi_Rohingya },
{ 1372, PT_SCX, ucp_Hanunoo },
{ 1377, PT_SCX, ucp_Hanunoo },
{ 1385, PT_SC, ucp_Hatran },
{ 1390, PT_SC, ucp_Hatran },
{ 1397, PT_SC, ucp_Hebrew },
{ 1402, PT_SC, ucp_Hebrew },
{ 1409, PT_BOOL, ucp_Hex_Digit },
{ 1413, PT_BOOL, ucp_Hex_Digit },
{ 1422, PT_SCX, ucp_Hiragana },
{ 1427, PT_SCX, ucp_Hiragana },
{ 1436, PT_SC, ucp_Anatolian_Hieroglyphs },
{ 1441, PT_SC, ucp_Pahawh_Hmong },
{ 1446, PT_SC, ucp_Nyiakeng_Puachue_Hmong },
{ 1451, PT_SC, ucp_Old_Hungarian },
{ 1456, PT_BOOL, ucp_ID_Continue },
{ 1460, PT_BOOL, ucp_ID_Continue },
{ 1471, PT_BOOL, ucp_Ideographic },
{ 1476, PT_BOOL, ucp_Ideographic },
{ 1488, PT_BOOL, ucp_ID_Start },
{ 1492, PT_BOOL, ucp_IDS_Binary_Operator },
{ 1497, PT_BOOL, ucp_IDS_Binary_Operator },
{ 1515, PT_BOOL, ucp_IDS_Trinary_Operator },
{ 1520, PT_BOOL, ucp_ID_Start },
{ 1528, PT_BOOL, ucp_IDS_Trinary_Operator },
{ 1547, PT_SC, ucp_Imperial_Aramaic },
{ 1563, PT_SC, ucp_Inherited },
{ 1573, PT_SC, ucp_Inscriptional_Pahlavi },
{ 1594, PT_SC, ucp_Inscriptional_Parthian },
{ 1616, PT_SC, ucp_Old_Italic },
{ 1621, PT_SCX, ucp_Javanese },
{ 1626, PT_SCX, ucp_Javanese },
{ 1635, PT_BOOL, ucp_Join_Control },
{ 1641, PT_BOOL, ucp_Join_Control },
{ 1653, PT_SCX, ucp_Kaithi },
{ 1660, PT_SCX, ucp_Kayah_Li },
{ 1665, PT_SCX, ucp_Katakana },
{ 1670, PT_SCX, ucp_Kannada },
{ 1678, PT_SCX, ucp_Katakana },
{ 1687, PT_SC, ucp_Kawi },
{ 1692, PT_SCX, ucp_Kayah_Li },
{ 1700, PT_SC, ucp_Kharoshthi },
{ 1705, PT_SC, ucp_Kharoshthi },
{ 1716, PT_SC, ucp_Khitan_Small_Script },
{ 1734, PT_SC, ucp_Khmer },
{ 1740, PT_SC, ucp_Khmer },
{ 1745, PT_SCX, ucp_Khojki },
{ 1750, PT_SCX, ucp_Khojki },
{ 1757, PT_SCX, ucp_Khudawadi },
{ 1767, PT_SC, ucp_Khitan_Small_Script },
{ 1772, PT_SCX, ucp_Kannada },
{ 1777, PT_SCX, ucp_Kaithi },
{ 1782, PT_GC, ucp_L },
{ 1784, PT_LAMP, 0 },
{ 1787, PT_SC, ucp_Tai_Tham },
{ 1792, PT_SC, ucp_Lao },
{ 1796, PT_SC, ucp_Lao },
{ 1801, PT_SCX, ucp_Latin },
{ 1807, PT_SCX, ucp_Latin },
{ 1812, PT_LAMP, 0 },
{ 1815, PT_SC, ucp_Lepcha },
{ 1820, PT_SC, ucp_Lepcha },
{ 1827, PT_SCX, ucp_Limbu },
{ 1832, PT_SCX, ucp_Limbu },
{ 1838, PT_SCX, ucp_Linear_A },
{ 1843, PT_SCX, ucp_Linear_B },
{ 1848, PT_SCX, ucp_Linear_A },
{ 1856, PT_SCX, ucp_Linear_B },
{ 1864, PT_SC, ucp_Lisu },
{ 1869, PT_PC, ucp_Ll },
{ 1872, PT_PC, ucp_Lm },
{ 1875, PT_PC, ucp_Lo },
{ 1878, PT_BOOL, ucp_Logical_Order_Exception },
{ 1882, PT_BOOL, ucp_Logical_Order_Exception },
{ 1904, PT_BOOL, ucp_Lowercase },
{ 1910, PT_BOOL, ucp_Lowercase },
{ 1920, PT_PC, ucp_Lt },
{ 1923, PT_PC, ucp_Lu },
{ 1926, PT_SC, ucp_Lycian },
{ 1931, PT_SC, ucp_Lycian },
{ 1938, PT_SC, ucp_Lydian },
{ 1943, PT_SC, ucp_Lydian },
{ 1950, PT_GC, ucp_M },
{ 1952, PT_SCX, ucp_Mahajani },
{ 1961, PT_SCX, ucp_Mahajani },
{ 1966, PT_SC, ucp_Makasar },
{ 1971, PT_SC, ucp_Makasar },
{ 1979, PT_SCX, ucp_Malayalam },
{ 1989, PT_SCX, ucp_Mandaic },
{ 1994, PT_SCX, ucp_Mandaic },
{ 2002, PT_SCX, ucp_Manichaean },
{ 2007, PT_SCX, ucp_Manichaean },
{ 2018, PT_SC, ucp_Marchen },
{ 2023, PT_SC, ucp_Marchen },
{ 2031, PT_SCX, ucp_Masaram_Gondi },
{ 2044, PT_BOOL, ucp_Math },
{ 2049, PT_PC, ucp_Mc },
{ 2052, PT_PC, ucp_Me },
{ 2055, PT_SC, ucp_Medefaidrin },
{ 2067, PT_SC, ucp_Medefaidrin },
{ 2072, PT_SC, ucp_Meetei_Mayek },
{ 2084, PT_SC, ucp_Mende_Kikakui },
{ 2089, PT_SC, ucp_Mende_Kikakui },
{ 2102, PT_SC, ucp_Meroitic_Cursive },
{ 2107, PT_SC, ucp_Meroitic_Hieroglyphs },
{ 2112, PT_SC, ucp_Meroitic_Cursive },
{ 2128, PT_SC, ucp_Meroitic_Hieroglyphs },
{ 2148, PT_SC, ucp_Miao },
{ 2153, PT_SCX, ucp_Malayalam },
{ 2158, PT_PC, ucp_Mn },
{ 2161, PT_SCX, ucp_Modi },
{ 2166, PT_SCX, ucp_Mongolian },
{ 2171, PT_SCX, ucp_Mongolian },
{ 2181, PT_SC, ucp_Mro },
{ 2185, PT_SC, ucp_Mro },
{ 2190, PT_SC, ucp_Meetei_Mayek },
{ 2195, PT_SCX, ucp_Multani },
{ 2200, PT_SCX, ucp_Multani },
{ 2208, PT_SCX, ucp_Myanmar },
{ 2216, PT_SCX, ucp_Myanmar },
{ 2221, PT_GC, ucp_N },
{ 2223, PT_SC, ucp_Nabataean },
{ 2233, PT_SC, ucp_Nag_Mundari },
{ 2238, PT_SC, ucp_Nag_Mundari },
{ 2249, PT_SCX, ucp_Nandinagari },
{ 2254, PT_SCX, ucp_Nandinagari },
{ 2266, PT_SC, ucp_Old_North_Arabian },
{ 2271, PT_SC, ucp_Nabataean },
{ 2276, PT_BOOL, ucp_Noncharacter_Code_Point },
{ 2282, PT_PC, ucp_Nd },
{ 2285, PT_SC, ucp_Newa },
{ 2290, PT_SC, ucp_New_Tai_Lue },
{ 2300, PT_SCX, ucp_Nko },
{ 2304, PT_SCX, ucp_Nko },
{ 2309, PT_PC, ucp_Nl },
{ 2312, PT_PC, ucp_No },
{ 2315, PT_BOOL, ucp_Noncharacter_Code_Point },
{ 2337, PT_SC, ucp_Nushu },
{ 2342, PT_SC, ucp_Nushu },
{ 2348, PT_SC, ucp_Nyiakeng_Puachue_Hmong },
{ 2369, PT_SC, ucp_Ogham },
{ 2374, PT_SC, ucp_Ogham },
{ 2380, PT_SC, ucp_Ol_Chiki },
{ 2388, PT_SC, ucp_Ol_Chiki },
{ 2393, PT_SC, ucp_Old_Hungarian },
{ 2406, PT_SC, ucp_Old_Italic },
{ 2416, PT_SC, ucp_Old_North_Arabian },
{ 2432, PT_SCX, ucp_Old_Permic },
{ 2442, PT_SC, ucp_Old_Persian },
{ 2453, PT_SC, ucp_Old_Sogdian },
{ 2464, PT_SC, ucp_Old_South_Arabian },
{ 2480, PT_SC, ucp_Old_Turkic },
{ 2490, PT_SCX, ucp_Old_Uyghur },
{ 2500, PT_SCX, ucp_Oriya },
{ 2506, PT_SC, ucp_Old_Turkic },
{ 2511, PT_SCX, ucp_Oriya },
{ 2516, PT_SC, ucp_Osage },
{ 2522, PT_SC, ucp_Osage },
{ 2527, PT_SC, ucp_Osmanya },
{ 2532, PT_SC, ucp_Osmanya },
{ 2540, PT_SCX, ucp_Old_Uyghur },
{ 2545, PT_GC, ucp_P },
{ 2547, PT_SC, ucp_Pahawh_Hmong },
{ 2559, PT_SC, ucp_Palmyrene },
{ 2564, PT_SC, ucp_Palmyrene },
{ 2574, PT_BOOL, ucp_Pattern_Syntax },
{ 2581, PT_BOOL, ucp_Pattern_Syntax },
{ 2595, PT_BOOL, ucp_Pattern_White_Space },
{ 2613, PT_BOOL, ucp_Pattern_White_Space },
{ 2619, PT_SC, ucp_Pau_Cin_Hau },
{ 2624, PT_SC, ucp_Pau_Cin_Hau },
{ 2634, PT_PC, ucp_Pc },
{ 2637, PT_BOOL, ucp_Prepended_Concatenation_Mark },
{ 2641, PT_PC, ucp_Pd },
{ 2644, PT_PC, ucp_Pe },
{ 2647, PT_SCX, ucp_Old_Permic },
{ 2652, PT_PC, ucp_Pf },
{ 2655, PT_SCX, ucp_Phags_Pa },
{ 2660, PT_SCX, ucp_Phags_Pa },
{ 2668, PT_SC, ucp_Inscriptional_Pahlavi },
{ 2673, PT_SCX, ucp_Psalter_Pahlavi },
{ 2678, PT_SC, ucp_Phoenician },
{ 2683, PT_SC, ucp_Phoenician },
{ 2694, PT_PC, ucp_Pi },
{ 2697, PT_SC, ucp_Miao },
{ 2702, PT_PC, ucp_Po },
{ 2705, PT_BOOL, ucp_Prepended_Concatenation_Mark },
{ 2732, PT_SC, ucp_Inscriptional_Parthian },
{ 2737, PT_PC, ucp_Ps },
{ 2740, PT_SCX, ucp_Psalter_Pahlavi },
{ 2755, PT_SCX, ucp_Coptic },
{ 2760, PT_SC, ucp_Inherited },
{ 2765, PT_BOOL, ucp_Quotation_Mark },
{ 2771, PT_BOOL, ucp_Quotation_Mark },
{ 2785, PT_BOOL, ucp_Radical },
{ 2793, PT_BOOL, ucp_Regional_Indicator },
{ 2811, PT_SC, ucp_Rejang },
{ 2818, PT_BOOL, ucp_Regional_Indicator },
{ 2821, PT_SC, ucp_Rejang },
{ 2826, PT_SCX, ucp_Hanifi_Rohingya },
{ 2831, PT_SC, ucp_Runic },
{ 2837, PT_SC, ucp_Runic },
{ 2842, PT_GC, ucp_S },
{ 2844, PT_SC, ucp_Samaritan },
{ 2854, PT_SC, ucp_Samaritan },
{ 2859, PT_SC, ucp_Old_South_Arabian },
{ 2864, PT_SC, ucp_Saurashtra },
{ 2869, PT_SC, ucp_Saurashtra },
{ 2880, PT_PC, ucp_Sc },
{ 2883, PT_BOOL, ucp_Soft_Dotted },
{ 2886, PT_BOOL, ucp_Sentence_Terminal },
{ 2903, PT_SC, ucp_SignWriting },
{ 2908, PT_SCX, ucp_Sharada },
{ 2916, PT_SC, ucp_Shavian },
{ 2924, PT_SC, ucp_Shavian },
{ 2929, PT_SCX, ucp_Sharada },
{ 2934, PT_SC, ucp_Siddham },
{ 2939, PT_SC, ucp_Siddham },
{ 2947, PT_SC, ucp_SignWriting },
{ 2959, PT_SCX, ucp_Khudawadi },
{ 2964, PT_SCX, ucp_Sinhala },
{ 2969, PT_SCX, ucp_Sinhala },
{ 2977, PT_PC, ucp_Sk },
{ 2980, PT_PC, ucp_Sm },
{ 2983, PT_PC, ucp_So },
{ 2986, PT_BOOL, ucp_Soft_Dotted },
{ 2997, PT_SCX, ucp_Sogdian },
{ 3002, PT_SCX, ucp_Sogdian },
{ 3010, PT_SC, ucp_Old_Sogdian },
{ 3015, PT_SC, ucp_Sora_Sompeng },
{ 3020, PT_SC, ucp_Sora_Sompeng },
{ 3032, PT_SC, ucp_Soyombo },
{ 3037, PT_SC, ucp_Soyombo },
{ 3045, PT_BOOL, ucp_White_Space },
{ 3051, PT_BOOL, ucp_Sentence_Terminal },
{ 3057, PT_SC, ucp_Sundanese },
{ 3062, PT_SC, ucp_Sundanese },
{ 3072, PT_SCX, ucp_Syloti_Nagri },
{ 3077, PT_SCX, ucp_Syloti_Nagri },
{ 3089, PT_SCX, ucp_Syriac },
{ 3094, PT_SCX, ucp_Syriac },
{ 3101, PT_SCX, ucp_Tagalog },
{ 3109, PT_SCX, ucp_Tagbanwa },
{ 3114, PT_SCX, ucp_Tagbanwa },
{ 3123, PT_SCX, ucp_Tai_Le },
{ 3129, PT_SC, ucp_Tai_Tham },
{ 3137, PT_SC, ucp_Tai_Viet },
{ 3145, PT_SCX, ucp_Takri },
{ 3150, PT_SCX, ucp_Takri },
{ 3156, PT_SCX, ucp_Tai_Le },
{ 3161, PT_SC, ucp_New_Tai_Lue },
{ 3166, PT_SCX, ucp_Tamil },
{ 3172, PT_SCX, ucp_Tamil },
{ 3177, PT_SC, ucp_Tangut },
{ 3182, PT_SC, ucp_Tangsa },
{ 3189, PT_SC, ucp_Tangut },
{ 3196, PT_SC, ucp_Tai_Viet },
{ 3201, PT_SCX, ucp_Telugu },
{ 3206, PT_SCX, ucp_Telugu },
{ 3213, PT_BOOL, ucp_Terminal_Punctuation },
{ 3218, PT_BOOL, ucp_Terminal_Punctuation },
{ 3238, PT_SC, ucp_Tifinagh },
{ 3243, PT_SCX, ucp_Tagalog },
{ 3248, PT_SCX, ucp_Thaana },
{ 3253, PT_SCX, ucp_Thaana },
{ 3260, PT_SC, ucp_Thai },
{ 3265, PT_SC, ucp_Tibetan },
{ 3273, PT_SC, ucp_Tibetan },
{ 3278, PT_SC, ucp_Tifinagh },
{ 3287, PT_SCX, ucp_Tirhuta },
{ 3292, PT_SCX, ucp_Tirhuta },
{ 3300, PT_SC, ucp_Tangsa },
{ 3305, PT_SC, ucp_Toto },
{ 3310, PT_SC, ucp_Ugaritic },
{ 3315, PT_SC, ucp_Ugaritic },
{ 3324, PT_BOOL, ucp_Unified_Ideograph },
{ 3330, PT_BOOL, ucp_Unified_Ideograph },
{ 3347, PT_SC, ucp_Unknown },
{ 3355, PT_BOOL, ucp_Uppercase },
{ 3361, PT_BOOL, ucp_Uppercase },
{ 3371, PT_SC, ucp_Vai },
{ 3375, PT_SC, ucp_Vai },
{ 3380, PT_BOOL, ucp_Variation_Selector },
{ 3398, PT_SC, ucp_Vithkuqi },
{ 3403, PT_SC, ucp_Vithkuqi },
{ 3412, PT_BOOL, ucp_Variation_Selector },
{ 3415, PT_SC, ucp_Wancho },
{ 3422, PT_SC, ucp_Warang_Citi },
{ 3427, PT_SC, ucp_Warang_Citi },
{ 3438, PT_SC, ucp_Wancho },
{ 3443, PT_BOOL, ucp_White_Space },
{ 3454, PT_BOOL, ucp_White_Space },
{ 3461, PT_ALNUM, 0 },
{ 3465, PT_BOOL, ucp_XID_Continue },
{ 3470, PT_BOOL, ucp_XID_Continue },
{ 3482, PT_BOOL, ucp_XID_Start },
{ 3487, PT_BOOL, ucp_XID_Start },
{ 3496, PT_SC, ucp_Old_Persian },
{ 3501, PT_PXSPACE, 0 },
{ 3505, PT_SPACE, 0 },
{ 3509, PT_SC, ucp_Cuneiform },
{ 3514, PT_UCNC, 0 },
{ 3518, PT_WORD, 0 },
{ 3522, PT_SCX, ucp_Yezidi },
{ 3527, PT_SCX, ucp_Yezidi },
{ 3534, PT_SCX, ucp_Yi },
{ 3537, PT_SCX, ucp_Yi },
{ 3542, PT_GC, ucp_Z },
{ 3544, PT_SC, ucp_Zanabazar_Square },
{ 3560, PT_SC, ucp_Zanabazar_Square },
{ 3565, PT_SC, ucp_Inherited },
{ 3570, PT_PC, ucp_Zl },
{ 3573, PT_PC, ucp_Zp },
{ 3576, PT_PC, ucp_Zs },
{ 3579, PT_SC, ucp_Common },
{ 3584, PT_SC, ucp_Unknown }
{ 1158, PT_SCX, ucp_Garay },
{ 1163, PT_SCX, ucp_Garay },
{ 1169, PT_SCX, ucp_Georgian },
{ 1174, PT_SCX, ucp_Georgian },
{ 1183, PT_SCX, ucp_Glagolitic },
{ 1188, PT_SCX, ucp_Glagolitic },
{ 1199, PT_SCX, ucp_Gunjala_Gondi },
{ 1204, PT_SCX, ucp_Masaram_Gondi },
{ 1209, PT_SCX, ucp_Gothic },
{ 1214, PT_SCX, ucp_Gothic },
{ 1221, PT_SCX, ucp_Grantha },
{ 1226, PT_SCX, ucp_Grantha },
{ 1234, PT_BOOL, ucp_Grapheme_Base },
{ 1247, PT_BOOL, ucp_Grapheme_Extend },
{ 1262, PT_BOOL, ucp_Grapheme_Link },
{ 1275, PT_BOOL, ucp_Grapheme_Base },
{ 1282, PT_SCX, ucp_Greek },
{ 1288, PT_SCX, ucp_Greek },
{ 1293, PT_BOOL, ucp_Grapheme_Extend },
{ 1299, PT_BOOL, ucp_Grapheme_Link },
{ 1306, PT_SCX, ucp_Gujarati },
{ 1315, PT_SCX, ucp_Gujarati },
{ 1320, PT_SCX, ucp_Gurung_Khema },
{ 1325, PT_SCX, ucp_Gunjala_Gondi },
{ 1338, PT_SCX, ucp_Gurmukhi },
{ 1347, PT_SCX, ucp_Gurmukhi },
{ 1352, PT_SCX, ucp_Gurung_Khema },
{ 1364, PT_SCX, ucp_Han },
{ 1368, PT_SCX, ucp_Hangul },
{ 1373, PT_SCX, ucp_Hangul },
{ 1380, PT_SCX, ucp_Han },
{ 1385, PT_SCX, ucp_Hanifi_Rohingya },
{ 1400, PT_SCX, ucp_Hanunoo },
{ 1405, PT_SCX, ucp_Hanunoo },
{ 1413, PT_SC, ucp_Hatran },
{ 1418, PT_SC, ucp_Hatran },
{ 1425, PT_SCX, ucp_Hebrew },
{ 1430, PT_SCX, ucp_Hebrew },
{ 1437, PT_BOOL, ucp_Hex_Digit },
{ 1441, PT_BOOL, ucp_Hex_Digit },
{ 1450, PT_SCX, ucp_Hiragana },
{ 1455, PT_SCX, ucp_Hiragana },
{ 1464, PT_SC, ucp_Anatolian_Hieroglyphs },
{ 1469, PT_SC, ucp_Pahawh_Hmong },
{ 1474, PT_SC, ucp_Nyiakeng_Puachue_Hmong },
{ 1479, PT_SCX, ucp_Old_Hungarian },
{ 1484, PT_BOOL, ucp_ID_Continue },
{ 1488, PT_BOOL, ucp_ID_Compat_Math_Continue },
{ 1509, PT_BOOL, ucp_ID_Compat_Math_Start },
{ 1527, PT_BOOL, ucp_ID_Continue },
{ 1538, PT_BOOL, ucp_Ideographic },
{ 1543, PT_BOOL, ucp_Ideographic },
{ 1555, PT_BOOL, ucp_ID_Start },
{ 1559, PT_BOOL, ucp_IDS_Binary_Operator },
{ 1564, PT_BOOL, ucp_IDS_Binary_Operator },
{ 1582, PT_BOOL, ucp_IDS_Trinary_Operator },
{ 1587, PT_BOOL, ucp_ID_Start },
{ 1595, PT_BOOL, ucp_IDS_Trinary_Operator },
{ 1614, PT_BOOL, ucp_IDS_Unary_Operator },
{ 1619, PT_BOOL, ucp_IDS_Unary_Operator },
{ 1636, PT_SC, ucp_Imperial_Aramaic },
{ 1652, PT_BOOL, ucp_InCB },
{ 1657, PT_SC, ucp_Inherited },
{ 1667, PT_SC, ucp_Inscriptional_Pahlavi },
{ 1688, PT_SC, ucp_Inscriptional_Parthian },
{ 1710, PT_SC, ucp_Old_Italic },
{ 1715, PT_SCX, ucp_Javanese },
{ 1720, PT_SCX, ucp_Javanese },
{ 1729, PT_BOOL, ucp_Join_Control },
{ 1735, PT_BOOL, ucp_Join_Control },
{ 1747, PT_SCX, ucp_Kaithi },
{ 1754, PT_SCX, ucp_Kayah_Li },
{ 1759, PT_SCX, ucp_Katakana },
{ 1764, PT_SCX, ucp_Kannada },
{ 1772, PT_SCX, ucp_Katakana },
{ 1781, PT_SC, ucp_Kawi },
{ 1786, PT_SCX, ucp_Kayah_Li },
{ 1794, PT_SC, ucp_Kharoshthi },
{ 1799, PT_SC, ucp_Kharoshthi },
{ 1810, PT_SC, ucp_Khitan_Small_Script },
{ 1828, PT_SC, ucp_Khmer },
{ 1834, PT_SC, ucp_Khmer },
{ 1839, PT_SCX, ucp_Khojki },
{ 1844, PT_SCX, ucp_Khojki },
{ 1851, PT_SCX, ucp_Khudawadi },
{ 1861, PT_SC, ucp_Kirat_Rai },
{ 1870, PT_SC, ucp_Khitan_Small_Script },
{ 1875, PT_SCX, ucp_Kannada },
{ 1880, PT_SC, ucp_Kirat_Rai },
{ 1885, PT_SCX, ucp_Kaithi },
{ 1890, PT_GC, ucp_L },
{ 1892, PT_LAMP, 0 },
{ 1895, PT_SC, ucp_Tai_Tham },
{ 1900, PT_SC, ucp_Lao },
{ 1904, PT_SC, ucp_Lao },
{ 1909, PT_SCX, ucp_Latin },
{ 1915, PT_SCX, ucp_Latin },
{ 1920, PT_LAMP, 0 },
{ 1923, PT_SC, ucp_Lepcha },
{ 1928, PT_SC, ucp_Lepcha },
{ 1935, PT_SCX, ucp_Limbu },
{ 1940, PT_SCX, ucp_Limbu },
{ 1946, PT_SCX, ucp_Linear_A },
{ 1951, PT_SCX, ucp_Linear_B },
{ 1956, PT_SCX, ucp_Linear_A },
{ 1964, PT_SCX, ucp_Linear_B },
{ 1972, PT_SCX, ucp_Lisu },
{ 1977, PT_PC, ucp_Ll },
{ 1980, PT_PC, ucp_Lm },
{ 1983, PT_PC, ucp_Lo },
{ 1986, PT_BOOL, ucp_Logical_Order_Exception },
{ 1990, PT_BOOL, ucp_Logical_Order_Exception },
{ 2012, PT_BOOL, ucp_Lowercase },
{ 2018, PT_BOOL, ucp_Lowercase },
{ 2028, PT_PC, ucp_Lt },
{ 2031, PT_PC, ucp_Lu },
{ 2034, PT_SCX, ucp_Lycian },
{ 2039, PT_SCX, ucp_Lycian },
{ 2046, PT_SCX, ucp_Lydian },
{ 2051, PT_SCX, ucp_Lydian },
{ 2058, PT_GC, ucp_M },
{ 2060, PT_SCX, ucp_Mahajani },
{ 2069, PT_SCX, ucp_Mahajani },
{ 2074, PT_SC, ucp_Makasar },
{ 2079, PT_SC, ucp_Makasar },
{ 2087, PT_SCX, ucp_Malayalam },
{ 2097, PT_SCX, ucp_Mandaic },
{ 2102, PT_SCX, ucp_Mandaic },
{ 2110, PT_SCX, ucp_Manichaean },
{ 2115, PT_SCX, ucp_Manichaean },
{ 2126, PT_SC, ucp_Marchen },
{ 2131, PT_SC, ucp_Marchen },
{ 2139, PT_SCX, ucp_Masaram_Gondi },
{ 2152, PT_BOOL, ucp_Math },
{ 2157, PT_PC, ucp_Mc },
{ 2160, PT_BOOL, ucp_Modifier_Combining_Mark },
{ 2164, PT_PC, ucp_Me },
{ 2167, PT_SC, ucp_Medefaidrin },
{ 2179, PT_SC, ucp_Medefaidrin },
{ 2184, PT_SC, ucp_Meetei_Mayek },
{ 2196, PT_SC, ucp_Mende_Kikakui },
{ 2201, PT_SC, ucp_Mende_Kikakui },
{ 2214, PT_SC, ucp_Meroitic_Cursive },
{ 2219, PT_SCX, ucp_Meroitic_Hieroglyphs },
{ 2224, PT_SC, ucp_Meroitic_Cursive },
{ 2240, PT_SCX, ucp_Meroitic_Hieroglyphs },
{ 2260, PT_SC, ucp_Miao },
{ 2265, PT_SCX, ucp_Malayalam },
{ 2270, PT_PC, ucp_Mn },
{ 2273, PT_SCX, ucp_Modi },
{ 2278, PT_BOOL, ucp_Modifier_Combining_Mark },
{ 2300, PT_SCX, ucp_Mongolian },
{ 2305, PT_SCX, ucp_Mongolian },
{ 2315, PT_SC, ucp_Mro },
{ 2319, PT_SC, ucp_Mro },
{ 2324, PT_SC, ucp_Meetei_Mayek },
{ 2329, PT_SCX, ucp_Multani },
{ 2334, PT_SCX, ucp_Multani },
{ 2342, PT_SCX, ucp_Myanmar },
{ 2350, PT_SCX, ucp_Myanmar },
{ 2355, PT_GC, ucp_N },
{ 2357, PT_SC, ucp_Nabataean },
{ 2367, PT_SC, ucp_Nag_Mundari },
{ 2372, PT_SC, ucp_Nag_Mundari },
{ 2383, PT_SCX, ucp_Nandinagari },
{ 2388, PT_SCX, ucp_Nandinagari },
{ 2400, PT_SC, ucp_Old_North_Arabian },
{ 2405, PT_SC, ucp_Nabataean },
{ 2410, PT_BOOL, ucp_Noncharacter_Code_Point },
{ 2416, PT_PC, ucp_Nd },
{ 2419, PT_SC, ucp_Newa },
{ 2424, PT_SC, ucp_New_Tai_Lue },
{ 2434, PT_SCX, ucp_Nko },
{ 2438, PT_SCX, ucp_Nko },
{ 2443, PT_PC, ucp_Nl },
{ 2446, PT_PC, ucp_No },
{ 2449, PT_BOOL, ucp_Noncharacter_Code_Point },
{ 2471, PT_SC, ucp_Nushu },
{ 2476, PT_SC, ucp_Nushu },
{ 2482, PT_SC, ucp_Nyiakeng_Puachue_Hmong },
{ 2503, PT_SC, ucp_Ogham },
{ 2508, PT_SC, ucp_Ogham },
{ 2514, PT_SC, ucp_Ol_Chiki },
{ 2522, PT_SC, ucp_Ol_Chiki },
{ 2527, PT_SCX, ucp_Old_Hungarian },
{ 2540, PT_SC, ucp_Old_Italic },
{ 2550, PT_SC, ucp_Old_North_Arabian },
{ 2566, PT_SCX, ucp_Old_Permic },
{ 2576, PT_SC, ucp_Old_Persian },
{ 2587, PT_SC, ucp_Old_Sogdian },
{ 2598, PT_SC, ucp_Old_South_Arabian },
{ 2614, PT_SCX, ucp_Old_Turkic },
{ 2624, PT_SCX, ucp_Old_Uyghur },
{ 2634, PT_SCX, ucp_Ol_Onal },
{ 2641, PT_SCX, ucp_Ol_Onal },
{ 2646, PT_SCX, ucp_Oriya },
{ 2652, PT_SCX, ucp_Old_Turkic },
{ 2657, PT_SCX, ucp_Oriya },
{ 2662, PT_SCX, ucp_Osage },
{ 2668, PT_SCX, ucp_Osage },
{ 2673, PT_SC, ucp_Osmanya },
{ 2678, PT_SC, ucp_Osmanya },
{ 2686, PT_SCX, ucp_Old_Uyghur },
{ 2691, PT_GC, ucp_P },
{ 2693, PT_SC, ucp_Pahawh_Hmong },
{ 2705, PT_SC, ucp_Palmyrene },
{ 2710, PT_SC, ucp_Palmyrene },
{ 2720, PT_BOOL, ucp_Pattern_Syntax },
{ 2727, PT_BOOL, ucp_Pattern_Syntax },
{ 2741, PT_BOOL, ucp_Pattern_White_Space },
{ 2759, PT_BOOL, ucp_Pattern_White_Space },
{ 2765, PT_SC, ucp_Pau_Cin_Hau },
{ 2770, PT_SC, ucp_Pau_Cin_Hau },
{ 2780, PT_PC, ucp_Pc },
{ 2783, PT_BOOL, ucp_Prepended_Concatenation_Mark },
{ 2787, PT_PC, ucp_Pd },
{ 2790, PT_PC, ucp_Pe },
{ 2793, PT_SCX, ucp_Old_Permic },
{ 2798, PT_PC, ucp_Pf },
{ 2801, PT_SCX, ucp_Phags_Pa },
{ 2806, PT_SCX, ucp_Phags_Pa },
{ 2814, PT_SC, ucp_Inscriptional_Pahlavi },
{ 2819, PT_SCX, ucp_Psalter_Pahlavi },
{ 2824, PT_SC, ucp_Phoenician },
{ 2829, PT_SC, ucp_Phoenician },
{ 2840, PT_PC, ucp_Pi },
{ 2843, PT_SC, ucp_Miao },
{ 2848, PT_PC, ucp_Po },
{ 2851, PT_BOOL, ucp_Prepended_Concatenation_Mark },
{ 2878, PT_SC, ucp_Inscriptional_Parthian },
{ 2883, PT_PC, ucp_Ps },
{ 2886, PT_SCX, ucp_Psalter_Pahlavi },
{ 2901, PT_SCX, ucp_Coptic },
{ 2906, PT_SC, ucp_Inherited },
{ 2911, PT_BOOL, ucp_Quotation_Mark },
{ 2917, PT_BOOL, ucp_Quotation_Mark },
{ 2931, PT_BOOL, ucp_Radical },
{ 2939, PT_BOOL, ucp_Regional_Indicator },
{ 2957, PT_SC, ucp_Rejang },
{ 2964, PT_BOOL, ucp_Regional_Indicator },
{ 2967, PT_SC, ucp_Rejang },
{ 2972, PT_SCX, ucp_Hanifi_Rohingya },
{ 2977, PT_SCX, ucp_Runic },
{ 2983, PT_SCX, ucp_Runic },
{ 2988, PT_GC, ucp_S },
{ 2990, PT_SCX, ucp_Samaritan },
{ 3000, PT_SCX, ucp_Samaritan },
{ 3005, PT_SC, ucp_Old_South_Arabian },
{ 3010, PT_SC, ucp_Saurashtra },
{ 3015, PT_SC, ucp_Saurashtra },
{ 3026, PT_PC, ucp_Sc },
{ 3029, PT_BOOL, ucp_Soft_Dotted },
{ 3032, PT_BOOL, ucp_Sentence_Terminal },
{ 3049, PT_SC, ucp_SignWriting },
{ 3054, PT_SCX, ucp_Sharada },
{ 3062, PT_SCX, ucp_Shavian },
{ 3070, PT_SCX, ucp_Shavian },
{ 3075, PT_SCX, ucp_Sharada },
{ 3080, PT_SC, ucp_Siddham },
{ 3085, PT_SC, ucp_Siddham },
{ 3093, PT_SC, ucp_SignWriting },
{ 3105, PT_SCX, ucp_Khudawadi },
{ 3110, PT_SCX, ucp_Sinhala },
{ 3115, PT_SCX, ucp_Sinhala },
{ 3123, PT_PC, ucp_Sk },
{ 3126, PT_PC, ucp_Sm },
{ 3129, PT_PC, ucp_So },
{ 3132, PT_BOOL, ucp_Soft_Dotted },
{ 3143, PT_SCX, ucp_Sogdian },
{ 3148, PT_SCX, ucp_Sogdian },
{ 3156, PT_SC, ucp_Old_Sogdian },
{ 3161, PT_SC, ucp_Sora_Sompeng },
{ 3166, PT_SC, ucp_Sora_Sompeng },
{ 3178, PT_SC, ucp_Soyombo },
{ 3183, PT_SC, ucp_Soyombo },
{ 3191, PT_BOOL, ucp_White_Space },
{ 3197, PT_BOOL, ucp_Sentence_Terminal },
{ 3203, PT_SC, ucp_Sundanese },
{ 3208, PT_SC, ucp_Sundanese },
{ 3218, PT_SCX, ucp_Sunuwar },
{ 3223, PT_SCX, ucp_Sunuwar },
{ 3231, PT_SCX, ucp_Syloti_Nagri },
{ 3236, PT_SCX, ucp_Syloti_Nagri },
{ 3248, PT_SCX, ucp_Syriac },
{ 3253, PT_SCX, ucp_Syriac },
{ 3260, PT_SCX, ucp_Tagalog },
{ 3268, PT_SCX, ucp_Tagbanwa },
{ 3273, PT_SCX, ucp_Tagbanwa },
{ 3282, PT_SCX, ucp_Tai_Le },
{ 3288, PT_SC, ucp_Tai_Tham },
{ 3296, PT_SC, ucp_Tai_Viet },
{ 3304, PT_SCX, ucp_Takri },
{ 3309, PT_SCX, ucp_Takri },
{ 3315, PT_SCX, ucp_Tai_Le },
{ 3320, PT_SC, ucp_New_Tai_Lue },
{ 3325, PT_SCX, ucp_Tamil },
{ 3331, PT_SCX, ucp_Tamil },
{ 3336, PT_SCX, ucp_Tangut },
{ 3341, PT_SC, ucp_Tangsa },
{ 3348, PT_SCX, ucp_Tangut },
{ 3355, PT_SC, ucp_Tai_Viet },
{ 3360, PT_SCX, ucp_Telugu },
{ 3365, PT_SCX, ucp_Telugu },
{ 3372, PT_BOOL, ucp_Terminal_Punctuation },
{ 3377, PT_BOOL, ucp_Terminal_Punctuation },
{ 3397, PT_SCX, ucp_Tifinagh },
{ 3402, PT_SCX, ucp_Tagalog },
{ 3407, PT_SCX, ucp_Thaana },
{ 3412, PT_SCX, ucp_Thaana },
{ 3419, PT_SCX, ucp_Thai },
{ 3424, PT_SCX, ucp_Tibetan },
{ 3432, PT_SCX, ucp_Tibetan },
{ 3437, PT_SCX, ucp_Tifinagh },
{ 3446, PT_SCX, ucp_Tirhuta },
{ 3451, PT_SCX, ucp_Tirhuta },
{ 3459, PT_SC, ucp_Tangsa },
{ 3464, PT_SCX, ucp_Todhri },
{ 3471, PT_SCX, ucp_Todhri },
{ 3476, PT_SCX, ucp_Toto },
{ 3481, PT_SCX, ucp_Tulu_Tigalari },
{ 3494, PT_SCX, ucp_Tulu_Tigalari },
{ 3499, PT_SC, ucp_Ugaritic },
{ 3504, PT_SC, ucp_Ugaritic },
{ 3513, PT_BOOL, ucp_Unified_Ideograph },
{ 3519, PT_BOOL, ucp_Unified_Ideograph },
{ 3536, PT_SC, ucp_Unknown },
{ 3544, PT_BOOL, ucp_Uppercase },
{ 3550, PT_BOOL, ucp_Uppercase },
{ 3560, PT_SC, ucp_Vai },
{ 3564, PT_SC, ucp_Vai },
{ 3569, PT_BOOL, ucp_Variation_Selector },
{ 3587, PT_SC, ucp_Vithkuqi },
{ 3592, PT_SC, ucp_Vithkuqi },
{ 3601, PT_BOOL, ucp_Variation_Selector },
{ 3604, PT_SC, ucp_Wancho },
{ 3611, PT_SC, ucp_Warang_Citi },
{ 3616, PT_SC, ucp_Warang_Citi },
{ 3627, PT_SC, ucp_Wancho },
{ 3632, PT_BOOL, ucp_White_Space },
{ 3643, PT_BOOL, ucp_White_Space },
{ 3650, PT_ALNUM, 0 },
{ 3654, PT_BOOL, ucp_XID_Continue },
{ 3659, PT_BOOL, ucp_XID_Continue },
{ 3671, PT_BOOL, ucp_XID_Start },
{ 3676, PT_BOOL, ucp_XID_Start },
{ 3685, PT_SC, ucp_Old_Persian },
{ 3690, PT_PXSPACE, 0 },
{ 3694, PT_SPACE, 0 },
{ 3698, PT_SC, ucp_Cuneiform },
{ 3703, PT_UCNC, 0 },
{ 3707, PT_WORD, 0 },
{ 3711, PT_SCX, ucp_Yezidi },
{ 3716, PT_SCX, ucp_Yezidi },
{ 3723, PT_SCX, ucp_Yi },
{ 3726, PT_SCX, ucp_Yi },
{ 3731, PT_GC, ucp_Z },
{ 3733, PT_SC, ucp_Zanabazar_Square },
{ 3749, PT_SC, ucp_Zanabazar_Square },
{ 3754, PT_SC, ucp_Inherited },
{ 3759, PT_PC, ucp_Zl },
{ 3762, PT_PC, ucp_Zp },
{ 3765, PT_PC, ucp_Zs },
{ 3768, PT_SC, ucp_Common },
{ 3773, PT_SC, ucp_Unknown }
};
const size_t PRIV(utt_size) = sizeof(PRIV(utt)) / sizeof(ucp_type_table);

src/3rdparty/pcre2/src/pcre2_util.h vendored Normal file

@ -0,0 +1,132 @@
/*************************************************
* Perl-Compatible Regular Expressions *
*************************************************/
/* PCRE2 is a library of functions to support regular expressions whose syntax
and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the University of Cambridge nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
-----------------------------------------------------------------------------
*/
#ifndef PCRE2_UTIL_H_IDEMPOTENT_GUARD
#define PCRE2_UTIL_H_IDEMPOTENT_GUARD
/* Assertion macros */
#ifdef PCRE2_DEBUG
#if defined(HAVE_ASSERT_H) && !defined(NDEBUG)
#include <assert.h>
#endif
/* PCRE2_ASSERT(x) can be used to inject an assert() for conditions
that the code below doesn't support. It is a NOP in non-debug builds,
but in debug builds it will print information about the location of
the code where it triggered and then crash.
It is meant to work like assert(), and therefore the expression used
should indicate what the expected state is, and shouldn't have any
side-effects. */
#if defined(HAVE_ASSERT_H) && !defined(NDEBUG)
#define PCRE2_ASSERT(x) assert(x)
#else
#define PCRE2_ASSERT(x) do \
{ \
if (!(x)) \
{ \
fprintf(stderr, "Assertion failed at " __FILE__ ":%d\n", __LINE__); \
abort(); \
} \
} while(0)
#endif
/* PCRE2_UNREACHABLE() can be used to mark locations in the code that
shouldn't be reached. In non-debug builds it is defined as a hint for
the compiler to eliminate any code after it, so it is also useful for
performance reasons, but it should be used with care because if it is
ever reached it will trigger Undefined Behaviour and, if you are lucky,
a crash. In debug builds it will report the location where it was
triggered and crash. One important point to consider when using this
macro is that it is only implemented for a few compilers, and therefore
can't be relied on to always be active either. If it is followed by some
code, make sure that the whole thing stays safe even when the macro is
not there (for example, keep a `break` after it when it is used at the
end of a `case`), and also test your code with a configuration where
the macro will be a NOP. */
#if defined(HAVE_ASSERT_H) && !defined(NDEBUG)
#define PCRE2_UNREACHABLE() \
assert(((void)"Execution reached unexpected point", 0))
#else
#define PCRE2_UNREACHABLE() do \
{ \
fprintf(stderr, "Execution reached unexpected point at " __FILE__ \
":%d\n", __LINE__); \
abort(); \
} while(0)
#endif
/* PCRE2_DEBUG_UNREACHABLE() is a debug-only version of the previous
macro. It is meant to be used in code that handles an error situation
which shouldn't be reached, but that has some sort of fallback code to
handle the error normally. When in doubt you should use this instead
of the previous macro. As in the previous case, it is a good idea to
document as much as possible the reason, and the actions that should
be taken if it ever triggers. */
#define PCRE2_DEBUG_UNREACHABLE() PCRE2_UNREACHABLE()
#endif /* PCRE2_DEBUG */
#ifndef PCRE2_DEBUG_UNREACHABLE
#define PCRE2_DEBUG_UNREACHABLE() do {} while(0)
#endif
#ifndef PCRE2_UNREACHABLE
#ifdef HAVE_BUILTIN_UNREACHABLE
#define PCRE2_UNREACHABLE() __builtin_unreachable()
#elif defined(HAVE_BUILTIN_ASSUME)
#define PCRE2_UNREACHABLE() __assume(0)
#else
#define PCRE2_UNREACHABLE() do {} while(0)
#endif
#endif /* !PCRE2_UNREACHABLE */
#ifndef PCRE2_ASSERT
#define PCRE2_ASSERT(x) do {} while(0)
#endif
#endif /* PCRE2_UTIL_H_IDEMPOTENT_GUARD */
/* End of pcre2_util.h */
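
The assertion macros above are intended to be used as the comments describe: PCRE2_ASSERT() for side-effect-free sanity checks, PCRE2_UNREACHABLE() for code paths that must never execute, and PCRE2_DEBUG_UNREACHABLE() for "impossible" paths that still keep a safe fallback. The following sketch is not part of the upstream sources; the helper function and its values are invented purely to illustrate the pattern, in particular the fallback return that remains after PCRE2_DEBUG_UNREACHABLE() for non-debug builds where the macro is a NOP.

/* Hypothetical illustration only, not PCRE2 code. */
#include <stdio.h>
#include <stdlib.h>
#include "pcre2_util.h"

static int sides_of(int shape)
{
PCRE2_ASSERT(shape >= 0);      /* precondition; the expression has no side effects */
switch (shape)
  {
  case 0: return 3;            /* triangle */
  case 1: return 4;            /* rectangle */
  default:
  PCRE2_DEBUG_UNREACHABLE();   /* "cannot happen": reports and aborts in debug builds */
  return -1;                   /* safe fallback still runs when the macro is a NOP */
  }
}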

View File

@ -7,7 +7,7 @@ and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Original API code Copyright (c) 1997-2012 University of Cambridge
New API code Copyright (c) 2016-2023 University of Cambridge
New API code Copyright (c) 2016-2024 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
@ -38,9 +38,9 @@ POSSIBILITY OF SUCH DAMAGE.
-----------------------------------------------------------------------------
*/
/* This module contains an internal function that is used to match an extended
class. It is used by pcre2_auto_possessify() and by both pcre2_match() and
pcre2_def_match(). */
/* This module contains two internal functions that are used to match
OP_XCLASS and OP_ECLASS. It is used by pcre2_auto_possessify() and by both
pcre2_match() and pcre2_dfa_match(). */
#ifdef HAVE_CONFIG_H
@ -66,114 +66,75 @@ Returns: TRUE if character matches, else FALSE
*/
BOOL
PRIV(xclass)(uint32_t c, PCRE2_SPTR data, BOOL utf)
PRIV(xclass)(uint32_t c, PCRE2_SPTR data, const uint8_t *char_lists_end, BOOL utf)
{
/* Update PRIV(update_classbits) when this function is changed. */
PCRE2_UCHAR t;
BOOL negated = (*data & XCL_NOT) != 0;
BOOL not_negated = (*data & XCL_NOT) == 0;
uint32_t type, max_index, min_index, value;
const uint8_t *next_char;
#if PCRE2_CODE_UNIT_WIDTH == 8
/* In 8 bit mode, this must always be TRUE. Help the compiler to know that. */
utf = TRUE;
#endif
/* Code points < 256 are matched against a bitmap, if one is present. If not,
we still carry on, because there may be ranges that start below 256 in the
additional data. */
/* Code points < 256 are matched against a bitmap, if one is present. */
if (c < 256)
if ((*data++ & XCL_MAP) != 0)
{
if ((*data & XCL_HASPROP) == 0)
{
if ((*data & XCL_MAP) == 0) return negated;
return (((uint8_t *)(data + 1))[c/8] & (1u << (c&7))) != 0;
}
if ((*data & XCL_MAP) != 0 &&
(((uint8_t *)(data + 1))[c/8] & (1u << (c&7))) != 0)
return !negated; /* char found */
if (c < 256)
return (((const uint8_t *)data)[c/8] & (1u << (c&7))) != 0;
/* Skip bitmap. */
data += 32 / sizeof(PCRE2_UCHAR);
}
/* First skip the bit map if present. Then match against the list of Unicode
properties or large chars or ranges that end with a large char. We won't ever
/* Match against the list of Unicode properties. We won't ever
encounter XCL_PROP or XCL_NOTPROP when UTF support is not compiled. */
if ((*data++ & XCL_MAP) != 0) data += 32 / sizeof(PCRE2_UCHAR);
while ((t = *data++) != XCL_END)
{
uint32_t x, y;
if (t == XCL_SINGLE)
{
#ifdef SUPPORT_UNICODE
if (utf)
if (*data == XCL_PROP || *data == XCL_NOTPROP)
{
GETCHARINC(x, data); /* macro generates multiple statements */
}
else
#endif
x = *data++;
if (c == x) return !negated;
}
else if (t == XCL_RANGE)
{
#ifdef SUPPORT_UNICODE
if (utf)
{
GETCHARINC(x, data); /* macro generates multiple statements */
GETCHARINC(y, data); /* macro generates multiple statements */
}
else
#endif
{
x = *data++;
y = *data++;
}
if (c >= x && c <= y) return !negated;
}
/* The UCD record is the same for all properties. */
const ucd_record *prop = GET_UCD(c);
#ifdef SUPPORT_UNICODE
else /* XCL_PROP & XCL_NOTPROP */
do
{
int chartype;
const ucd_record *prop = GET_UCD(c);
BOOL isprop = t == XCL_PROP;
BOOL isprop = (*data++) == XCL_PROP;
BOOL ok;
switch(*data)
{
case PT_ANY:
if (isprop) return !negated;
break;
case PT_LAMP:
chartype = prop->chartype;
if ((chartype == ucp_Lu || chartype == ucp_Ll ||
chartype == ucp_Lt) == isprop) return !negated;
chartype == ucp_Lt) == isprop) return not_negated;
break;
case PT_GC:
if ((data[1] == PRIV(ucp_gentype)[prop->chartype]) == isprop)
return !negated;
return not_negated;
break;
case PT_PC:
if ((data[1] == prop->chartype) == isprop) return !negated;
if ((data[1] == prop->chartype) == isprop) return not_negated;
break;
case PT_SC:
if ((data[1] == prop->script) == isprop) return !negated;
if ((data[1] == prop->script) == isprop) return not_negated;
break;
case PT_SCX:
ok = (data[1] == prop->script ||
MAPBIT(PRIV(ucd_script_sets) + UCD_SCRIPTX_PROP(prop), data[1]) != 0);
if (ok == isprop) return !negated;
if (ok == isprop) return not_negated;
break;
case PT_ALNUM:
chartype = prop->chartype;
if ((PRIV(ucp_gentype)[chartype] == ucp_L ||
PRIV(ucp_gentype)[chartype] == ucp_N) == isprop)
return !negated;
return not_negated;
break;
/* Perl space used to exclude VT, but from Perl 5.18 it is included,
@ -186,12 +147,12 @@ while ((t = *data++) != XCL_END)
{
HSPACE_CASES:
VSPACE_CASES:
if (isprop) return !negated;
if (isprop) return not_negated;
break;
default:
if ((PRIV(ucp_gentype)[prop->chartype] == ucp_Z) == isprop)
return !negated;
return not_negated;
break;
}
break;
@ -201,7 +162,7 @@ while ((t = *data++) != XCL_END)
if ((PRIV(ucp_gentype)[chartype] == ucp_L ||
PRIV(ucp_gentype)[chartype] == ucp_N ||
chartype == ucp_Mn || chartype == ucp_Pc) == isprop)
return !negated;
return not_negated;
break;
case PT_UCNC:
@ -209,24 +170,24 @@ while ((t = *data++) != XCL_END)
{
if ((c == CHAR_DOLLAR_SIGN || c == CHAR_COMMERCIAL_AT ||
c == CHAR_GRAVE_ACCENT) == isprop)
return !negated;
return not_negated;
}
else
{
if ((c < 0xd800 || c > 0xdfff) == isprop)
return !negated;
return not_negated;
}
break;
case PT_BIDICL:
if ((UCD_BIDICLASS_PROP(prop) == data[1]) == isprop)
return !negated;
return not_negated;
break;
case PT_BOOL:
ok = MAPBIT(PRIV(ucd_boolprop_sets) +
UCD_BPROPS_PROP(prop), data[1]) != 0;
if (ok == isprop) return !negated;
if (ok == isprop) return not_negated;
break;
/* The following three properties can occur only in an XCLASS, as there
@ -248,7 +209,7 @@ while ((t = *data++) != XCL_END)
(chartype == ucp_Cf &&
c != 0x061c && c != 0x180e && (c < 0x2066 || c > 0x2069))
)) == isprop)
return !negated;
return not_negated;
break;
/* Printable character: same as graphic, with the addition of Zs, i.e.
@ -262,7 +223,7 @@ while ((t = *data++) != XCL_END)
(chartype == ucp_Cf &&
c != 0x061c && (c < 0x2066 || c > 0x2069))
)) == isprop)
return !negated;
return not_negated;
break;
/* Punctuation: all Unicode punctuation, plus ASCII characters that
@ -273,7 +234,7 @@ while ((t = *data++) != XCL_END)
chartype = prop->chartype;
if ((PRIV(ucp_gentype)[chartype] == ucp_P ||
(c < 128 && PRIV(ucp_gentype)[chartype] == ucp_S)) == isprop)
return !negated;
return not_negated;
break;
/* Perl has two sets of hex digits */
@ -285,24 +246,300 @@ while ((t = *data++) != XCL_END)
(c >= 0xff10 && c <= 0xff19) || /* Fullwidth digits */
(c >= 0xff21 && c <= 0xff26) || /* Fullwidth letters */
(c >= 0xff41 && c <= 0xff46)) == isprop)
return !negated;
return not_negated;
break;
/* This should never occur, but compilers may mutter if there is no
default. */
default:
PCRE2_DEBUG_UNREACHABLE();
return FALSE;
}
data += 2;
}
while (*data == XCL_PROP || *data == XCL_NOTPROP);
}
#else
(void)utf; /* Avoid compiler warning */
#endif /* SUPPORT_UNICODE */
/* Match against large chars or ranges that end with a large char. */
if (*data < XCL_LIST)
{
while ((t = *data++) != XCL_END)
{
uint32_t x, y;
#ifdef SUPPORT_UNICODE
if (utf)
{
GETCHARINC(x, data); /* macro generates multiple statements */
}
else
#endif
x = *data++;
if (t == XCL_SINGLE)
{
/* Since character ranges follow the properties, and they are
sorted, early return is possible for all characters <= x. */
if (c <= x) return (c == x) ? not_negated : !not_negated;
continue;
}
return negated; /* char did not match */
PCRE2_ASSERT(t == XCL_RANGE);
#ifdef SUPPORT_UNICODE
if (utf)
{
GETCHARINC(y, data); /* macro generates multiple statements */
}
else
#endif
y = *data++;
/* Since character ranges follow the properties, and they are
sorted, early return is possible for all characters <= y. */
if (c <= y) return (c >= x) ? not_negated : !not_negated;
}
return !not_negated; /* char did not match */
}
#if PCRE2_CODE_UNIT_WIDTH == 8
type = (uint32_t)(data[0] << 8) | data[1];
data += 2;
#else
type = data[0];
data++;
#endif /* CODE_UNIT_WIDTH */
/* Align characters. */
next_char = char_lists_end - (GET(data, 0) << 1);
type &= XCL_TYPE_MASK;
/* Alignment check. */
PCRE2_ASSERT(((uintptr_t)next_char & 0x1) == 0);
if (c >= XCL_CHAR_LIST_HIGH_16_START)
{
max_index = type & XCL_ITEM_COUNT_MASK;
if (max_index == XCL_ITEM_COUNT_MASK)
{
max_index = *(const uint16_t*)next_char;
PCRE2_ASSERT(max_index >= XCL_ITEM_COUNT_MASK);
next_char += 2;
}
next_char += max_index << 1;
type >>= XCL_TYPE_BIT_LEN;
}
if (c < XCL_CHAR_LIST_LOW_32_START)
{
max_index = type & XCL_ITEM_COUNT_MASK;
c = (uint16_t)((c << XCL_CHAR_SHIFT) | XCL_CHAR_END);
if (max_index == XCL_ITEM_COUNT_MASK)
{
max_index = *(const uint16_t*)next_char;
PCRE2_ASSERT(max_index >= XCL_ITEM_COUNT_MASK);
next_char += 2;
}
if (max_index == 0 || c < *(const uint16_t*)next_char)
return ((type & XCL_BEGIN_WITH_RANGE) != 0) == not_negated;
min_index = 0;
value = ((const uint16_t*)next_char)[--max_index];
if (c >= value)
return (value == c || (value & XCL_CHAR_END) == 0) == not_negated;
max_index--;
/* Binary search of a range. */
while (TRUE)
{
uint32_t mid_index = (min_index + max_index) >> 1;
value = ((const uint16_t*)next_char)[mid_index];
if (c < value)
max_index = mid_index - 1;
else if (((const uint16_t*)next_char)[mid_index + 1] <= c)
min_index = mid_index + 1;
else
return (value == c || (value & XCL_CHAR_END) == 0) == not_negated;
}
}
/* Skip the 16 bit ranges. */
max_index = type & XCL_ITEM_COUNT_MASK;
if (max_index == XCL_ITEM_COUNT_MASK)
{
max_index = *(const uint16_t*)next_char;
PCRE2_ASSERT(max_index >= XCL_ITEM_COUNT_MASK);
next_char += 2;
}
next_char += (max_index << 1);
type >>= XCL_TYPE_BIT_LEN;
/* Alignment check. */
PCRE2_ASSERT(((uintptr_t)next_char & 0x3) == 0);
max_index = type & XCL_ITEM_COUNT_MASK;
#if PCRE2_CODE_UNIT_WIDTH == 32
if (c >= XCL_CHAR_LIST_HIGH_32_START)
{
if (max_index == XCL_ITEM_COUNT_MASK)
{
max_index = *(const uint32_t*)next_char;
PCRE2_ASSERT(max_index >= XCL_ITEM_COUNT_MASK);
next_char += 4;
}
next_char += max_index << 2;
type >>= XCL_TYPE_BIT_LEN;
max_index = type & XCL_ITEM_COUNT_MASK;
}
#endif
c = (uint32_t)((c << XCL_CHAR_SHIFT) | XCL_CHAR_END);
if (max_index == XCL_ITEM_COUNT_MASK)
{
max_index = *(const uint32_t*)next_char;
next_char += 4;
}
if (max_index == 0 || c < *(const uint32_t*)next_char)
return ((type & XCL_BEGIN_WITH_RANGE) != 0) == not_negated;
min_index = 0;
value = ((const uint32_t*)next_char)[--max_index];
if (c >= value)
return (value == c || (value & XCL_CHAR_END) == 0) == not_negated;
max_index--;
/* Binary search of a range. */
while (TRUE)
{
uint32_t mid_index = (min_index + max_index) >> 1;
value = ((const uint32_t*)next_char)[mid_index];
if (c < value)
max_index = mid_index - 1;
else if (((const uint32_t*)next_char)[mid_index + 1] <= c)
min_index = mid_index + 1;
else
return (value == c || (value & XCL_CHAR_END) == 0) == not_negated;
}
}
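
In the rewritten PRIV(xclass) above, code points below 256 are tested against a 32-byte bitmap, and larger code points are looked up in sorted character data, either by the early-exit scan over XCL_SINGLE/XCL_RANGE items or by a binary search over the packed 16/32-bit character lists. The sketch below is not the upstream encoding; it uses plain inclusive start/end ranges and invented names to show the same bitmap-plus-binary-search idea in a self-contained form.

#include <stdint.h>

typedef struct { uint32_t start, end; } crange;   /* inclusive; array sorted by start */

/* Return nonzero if c is set in the 256-bit bitmap (for c < 256) or falls
inside one of the n sorted, non-overlapping ranges. This mirrors the bitmap
test and the binary search in PRIV(xclass), without the packed encoding. */
static int in_class(uint32_t c, const uint8_t bitmap[32],
                    const crange *ranges, uint32_t n)
{
if (c < 256) return (bitmap[c/8] & (1u << (c & 7))) != 0;
uint32_t lo = 0, hi = n;
while (lo < hi)                    /* binary search over range starts */
  {
  uint32_t mid = (lo + hi) / 2;
  if (c < ranges[mid].start) hi = mid;
  else if (c > ranges[mid].end) lo = mid + 1;
  else return 1;                   /* ranges[mid].start <= c <= ranges[mid].end */
  }
return 0;
}

For example, with the two ranges {0x0100,0x017F} and {0x0400,0x04FF}, in_class(0x0152, ...) locates the first range and returns 1, while in_class(0x0200, ...) returns 0.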
/*************************************************
* Match character against an ECLASS *
*************************************************/
/* This function is called to match a character against an extended class
used for describing characters using boolean operations on sets.
Arguments:
c the character
data_start points to the start of the ECLASS data
data_end points one-past-the-last of the ECLASS data
utf TRUE if in UTF mode
Returns: TRUE if character matches, else FALSE
*/
BOOL
PRIV(eclass)(uint32_t c, PCRE2_SPTR data_start, PCRE2_SPTR data_end,
const uint8_t *char_lists_end, BOOL utf)
{
PCRE2_SPTR ptr = data_start;
PCRE2_UCHAR flags;
uint32_t stack = 0;
int stack_depth = 0;
PCRE2_ASSERT(data_start < data_end);
flags = *ptr++;
PCRE2_ASSERT((flags & ECL_MAP) == 0 ||
(data_end - ptr) >= 32 / (int)sizeof(PCRE2_UCHAR));
/* Code points < 256 are matched against a bitmap, if one is present.
Otherwise all codepoints are checked later. */
if ((flags & ECL_MAP) != 0)
{
if (c < 256)
return (((const uint8_t *)ptr)[c/8] & (1u << (c&7))) != 0;
/* Skip the bitmap. */
ptr += 32 / sizeof(PCRE2_UCHAR);
}
/* Do a little loop, until we reach the end of the ECLASS. */
while (ptr < data_end)
{
switch (*ptr)
{
case ECL_AND:
++ptr;
stack = (stack >> 1) & (stack | ~(uint32_t)1u);
PCRE2_ASSERT(stack_depth >= 2);
--stack_depth;
break;
case ECL_OR:
++ptr;
stack = (stack >> 1) | (stack & (uint32_t)1u);
PCRE2_ASSERT(stack_depth >= 2);
--stack_depth;
break;
case ECL_XOR:
++ptr;
stack = (stack >> 1) ^ (stack & (uint32_t)1u);
PCRE2_ASSERT(stack_depth >= 2);
--stack_depth;
break;
case ECL_NOT:
++ptr;
stack ^= (uint32_t)1u;
PCRE2_ASSERT(stack_depth >= 1);
break;
case ECL_XCLASS:
{
uint32_t matched = PRIV(xclass)(c, ptr + 1 + LINK_SIZE, char_lists_end, utf);
ptr += GET(ptr, 1);
stack = (stack << 1) | matched;
++stack_depth;
break;
}
/* This should never occur, but compilers may mutter if there is no
default. */
default:
PCRE2_DEBUG_UNREACHABLE();
return FALSE;
}
}
PCRE2_ASSERT(stack_depth == 1);
(void)stack_depth; /* Ignore unused variable, if assertions are disabled. */
/* The final bit left on the stack now holds the match result. */
return (stack & 1u) != 0;
}
/* End of pcre2_xclass.c */
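
PRIV(eclass) above evaluates the extended class as a postfix boolean expression: every ECL_XCLASS operand pushes its match result as one bit onto a bit stack held in a uint32_t (the least significant bit is the top), and ECL_AND/ECL_OR/ECL_XOR/ECL_NOT combine or flip the top bits with the shift-and-mask expressions seen in the switch. A minimal standalone sketch of that evaluation scheme follows; the textual token encoding is invented for illustration and is not the PCRE2 byte code.

#include <stdint.h>
#include <stdio.h>

/* Token encoding for this sketch only: '0'/'1' push that bit, '&' '|' '^'
pop the top two bits and push the result, '!' flips the top bit. The
operator expressions are the same ones used for ECL_AND/ECL_OR/ECL_XOR/ECL_NOT. */
static int eval_postfix(const char *tokens)
{
uint32_t stack = 0;                       /* bit stack; LSB is the top */
for (; *tokens != 0; tokens++)
  switch (*tokens)
    {
    case '0': case '1':
    stack = (stack << 1) | (uint32_t)(*tokens - '0');   /* push an operand */
    break;
    case '&': stack = (stack >> 1) & (stack | ~(uint32_t)1u); break;
    case '|': stack = (stack >> 1) | (stack & 1u); break;
    case '^': stack = (stack >> 1) ^ (stack & 1u); break;
    case '!': stack ^= 1u; break;
    }
return (int)(stack & 1u);                 /* the surviving top bit is the result */
}

int main(void)
{
printf("%d\n", eval_postfix("10&0!|"));   /* (1 AND 0) OR (NOT 0) -> prints 1 */
return 0;
}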

View File

@ -172,6 +172,7 @@ qt_internal_extend_target(Bootstrap CONDITION CMAKE_CROSSCOMPILING OR NOT QT_FEA
../../3rdparty/pcre2/src/pcre2_chartables.c
../../3rdparty/pcre2/src/pcre2_chkdint.c
../../3rdparty/pcre2/src/pcre2_compile.c
../../3rdparty/pcre2/src/pcre2_compile_class.c
../../3rdparty/pcre2/src/pcre2_config.c
../../3rdparty/pcre2/src/pcre2_context.c
../../3rdparty/pcre2/src/pcre2_dfa_match.c