Removed numpy and scipy files

This commit is contained in:
Slater-Victoroff
2013-05-30 11:11:05 -04:00
parent 84149f8f78
commit e493ada3ff
4378 changed files with 0 additions and 1098666 deletions


@@ -1,59 +0,0 @@
X.flat returns an indexable 1-D iterator (mostly similar to an array but
always 1-D); it has only the .copy and .__array__ attributes of an array.
.typecode() --> .dtype.char
.iscontiguous() --> .flags['CONTIGUOUS'] or .flags.contiguous
.byteswapped() -> .byteswap()
.itemsize() -> .itemsize
.toscalar() -> .item()
If you used typecode characters:
'c' -> 'S1' or 'c'
'b' -> 'B'
'1' -> 'b'
's' -> 'h'
'w' -> 'H'
'u' -> 'I'
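For reference, here is an added sketch (not part of the original compatibility notes) showing that the new-style spellings listed above still work in current NumPy:

```python
import numpy as np

a = np.arange(6, dtype=np.int16).reshape(2, 3)

# Old Numeric spelling          New NumPy spelling
print(a.dtype.char)           # .typecode()     -> .dtype.char ('h' for int16)
print(a.flags['CONTIGUOUS'])  # .iscontiguous() -> .flags['CONTIGUOUS']
print(a.itemsize)             # .itemsize()     -> .itemsize (2 for int16)
print(a[0, 0].item())         # .toscalar()     -> .item()
b = a.byteswap()              # .byteswapped()  -> .byteswap()
print(b[0, 1])                # int16 value 1 with its two bytes swapped: 256
```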
C-level:
some API calls that used to take PyObject * now take PyArrayObject *
(this should only cause warnings during compile, not actual problems).
PyArray_Take
These commands now return a buffer that must be freed once it is used
using PyMemData_FREE(ptr);
a->descr->zero --> PyArray_Zero(a)
a->descr->one --> PyArray_One(a)
Numeric/arrayobject.h --> numpy/oldnumeric.h
# These will actually work and are defines for PyArray_BYTE,
# but you really should change it in your code
PyArray_CHAR --> PyArray_CHAR
(or PyArray_STRING which is more flexible)
PyArray_SBYTE --> PyArray_BYTE
Any uses of character codes will need adjusting....
use PyArray_XXXLTR where XXX is the name of the type.
If you used function pointers directly (why did you do that?),
the arguments have changed. Everything that was an int is now an intp.
Also, arrayobjects should be passed in at the end.
a->descr->cast[i](fromdata, fromstep, todata, tostep, n)
a->descr->cast[i](fromdata, todata, n, PyArrayObject *in, PyArrayObject *out)
Anything but single-stepping is not supported by this function;
use the PyArray_CastXXXX functions instead.


@@ -1,19 +0,0 @@
Thank you for your willingness to help make NumPy the best array system
available.
We have a few simple rules:
* Try hard to keep the Git repository in a buildable state and do not
indiscriminately muck with what others have contributed.
* Simple changes (including bug fixes) and obvious improvements are
always welcome. Changes that fundamentally change behavior need
discussion on numpy-discussion@scipy.org before anything is
done.
* Please add meaningful comments when you check changes in. These
comments form the basis of the change-log.
* Add unit tests to exercise new code, and regression tests
whenever you fix a bug.


@@ -1,139 +0,0 @@
.. -*- rest -*-
.. vim:syntax=rest
.. NB! Keep this document a valid restructured document.
Building and installing NumPy
+++++++++++++++++++++++++++++
:Authors: Numpy Developers <numpy-discussion@scipy.org>
:Discussions to: numpy-discussion@scipy.org
.. Contents::
PREREQUISITES
=============
Building NumPy requires the following software installed:
1) Python__ 2.4.x or newer
On Debian and derivative (Ubuntu): python python-dev
On Windows: the official python installer from Python__ is enough
Make sure that the Python package distutils is installed before
continuing. For example, in Debian GNU/Linux, distutils is included
in the python-dev package.
Python must also be compiled with the zlib module enabled.
2) nose__ (optional) 0.10.3 or later
This is required for testing numpy, but not for using it.
__ http://www.python.org
__ http://somethingaboutorange.com/mrl/projects/nose/
Fortran ABI mismatch
====================
The two most popular open source Fortran compilers are g77 and gfortran.
Unfortunately, they are not ABI compatible, which means that concretely you
should avoid mixing libraries built with one compiler with libraries built
with the other. In particular, if your blas/lapack/atlas is built with g77,
you *must* use g77 when building numpy and scipy; conversely, if your atlas
is built with gfortran, you *must* build numpy/scipy with gfortran.
Choosing the fortran compiler
-----------------------------
To build with g77:
python setup.py build --fcompiler=gnu
To build with gfortran:
python setup.py build --fcompiler=gnu95
How to check the ABI of blas/lapack/atlas
-----------------------------------------
One relatively simple and reliable way to check which compiler was used to
build a library is to run ldd on the library. If libg2c.so is a dependency,
g77 was used. If libgfortran.so is a dependency, gfortran was used. If both
are dependencies, both compilers were used, which is almost always a very
bad idea.
Building with ATLAS support
===========================
Ubuntu 8.10 (Intrepid)
----------------------
You can install the necessary packages for optimized ATLAS with this command:
sudo apt-get install libatlas-base-dev
If you have a recent CPU with SIMD support (SSE, SSE2, etc.), you should
also install the corresponding package for optimal performance. For example,
for SSE2:
sudo apt-get install libatlas3gf-sse2
*NOTE*: Intrepid changed its default Fortran compiler to gfortran, so if you
build your own ATLAS on Intrepid you should rebuild everything from scratch,
including LAPACK.
Ubuntu 8.04 and lower
---------------------
You can install the necessary packages for optimized ATLAS with this command:
sudo apt-get install atlas3-base-dev
If you have a recent CPU with SIMD support (SSE, SSE2, etc.), you should
also install the corresponding package for optimal performance. For example,
for SSE2:
sudo apt-get install atlas3-sse2
Windows 64-bit notes
=====================
Note: only AMD64 is supported (IA64 is not) - AMD64 is the version most people
want.
Free compilers (mingw-w64)
--------------------------
http://mingw-w64.sourceforge.net/
To use the free compilers (mingw-w64), you need to build your own toolchain, as
the mingw project only distributes cross-compilers (cross-compilation is not
supported by numpy). Since this toolchain is still being worked on, serious
compiler bugs can be expected. binutils 2.19 + gcc 4.3.3 + mingw-w64 runtime
gives you a working C compiler (but the C++ is broken). gcc 4.4 will hopefully
be able to run natively.
This is the only tested way to get a numpy with a FULL blas/lapack (scipy does
not work because of C++).
MS compilers
------------
If you are familiar with MS tools, that's obviously the easiest path, and the
compilers are hopefully more mature (although in my experience, they are quite
fragile, and often segfault on invalid C code). The main drawback is that no
fortran compiler + MS compiler combination has been tested - mingw-w64 gfortran
+ MS compiler does not work at all (it is unclear whether it ever will).
For Python 2.5, you need VS 2005 (MS compiler version 14) targeting
AMD64, or the Platform SDK v6.0 or below (which provides command-line
versions of the 64-bit target compilers). The PSDK is free.
For Python 2.6, you need VS 2008. The freely available version does not
contain 64-bit compilers (you also need the PSDK, v6.1).
It is *crucial* to use the right version: Python 2.5 -> version 14; Python 2.6
-> version 15. You can check the compiler version with cl.exe /?. Note also
that for Python 2.5, the 64-bit and 32-bit builds use different compiler
versions.


@@ -1,30 +0,0 @@
Copyright (c) 2005-2009, NumPy Developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the NumPy Developers nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,26 +0,0 @@
#
# Use the .add_data_files and .add_data_dir methods in the appropriate
# setup.py files to include non-python files such as documentation and
# data files in the distribution. Avoid using MANIFEST.in for that.
#
include MANIFEST.in
include COMPATIBILITY
include *.txt
include setupscons.py
include setupsconsegg.py
include setupegg.py
include site.cfg.example
include tools/py3tool.py
# Adding scons build related files not found by distutils
recursive-include numpy/core/code_generators *.py *.txt
recursive-include numpy/core *.in *.h
recursive-include numpy SConstruct SConscript
# Add documentation: we don't use add_data_dir since we do not want to include
# this at installation, only for sdist-generated tarballs
include doc/Makefile doc/postprocess.py
recursive-include doc/release *
recursive-include doc/source *
recursive-include doc/sphinxext *
recursive-include doc/cython *
recursive-include doc/pyrex *
recursive-include doc/swig *


@@ -1,38 +0,0 @@
Metadata-Version: 1.0
Name: numpy
Version: 1.6.2
Summary: NumPy: array processing for numbers, strings, records, and objects.
Home-page: http://numpy.scipy.org
Author: NumPy Developers
Author-email: numpy-discussion@scipy.org
License: BSD
Download-URL: http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103
Description: NumPy is a general-purpose array-processing package designed to
efficiently manipulate large multi-dimensional arrays of arbitrary
records without sacrificing too much speed for small multi-dimensional
arrays. NumPy is built on the Numeric code base and adds features
introduced by numarray as well as an extended C-API and the ability to
create arrays of arbitrary type, which also makes NumPy suitable for
interfacing with general-purpose database applications.
There are also basic facilities for discrete Fourier transforms,
basic linear algebra and random number generation.
Platform: Windows
Platform: Linux
Platform: Solaris
Platform: Mac OS-X
Platform: Unix
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved
Classifier: Programming Language :: C
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development
Classifier: Topic :: Scientific/Engineering
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Operating System :: MacOS


@@ -1,30 +0,0 @@
NumPy is the fundamental package needed for scientific computing with Python.
This package contains:
* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* tools for integrating C/C++ and Fortran code
* useful linear algebra, Fourier transform, and random number capabilities.
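To make the first and last bullets concrete, here is a small added sketch (not part of the original README) of broadcasting and the linear algebra / FFT subpackages:

```python
import numpy as np

# Broadcasting: a (3, 1) column and a (4,)-element row combine elementwise
# into a (3, 4) grid without any explicit loops or copies.
col = np.arange(3).reshape(3, 1)
row = np.arange(4)
grid = col * 10 + row
print(grid.shape)        # (3, 4)
print(int(grid[2, 3]))   # 23

# The linear algebra, FFT and random number facilities live in subpackages.
print(np.linalg.det(np.eye(3)))   # 1.0
print(np.fft.fft(np.ones(4))[0])  # (4+0j)
```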
It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
More information can be found at the website:
http://scipy.org/NumPy
After installation, tests can be run with:
python -c 'import numpy; numpy.test()'
When installing a new version of numpy for the first time or before upgrading
to a newer version, it is recommended to turn on deprecation warnings when
running the tests:
python -Wd -c 'import numpy; numpy.test()'
The most current development version is always available from our
git repository:
http://github.com/numpy/numpy


@@ -1,64 +0,0 @@
Travis Oliphant for the NumPy core, the NumPy guide, various
bug-fixes and code contributions.
Paul Dubois, who implemented the original Masked Arrays.
Pearu Peterson for f2py, numpy.distutils and help with code
organization.
Robert Kern for mtrand, bug fixes, help with distutils, code
organization, strided tricks and much more.
Eric Jones for planning and code contributions.
Fernando Perez for code snippets, ideas, bugfixes, and testing.
Ed Schofield for matrix.py patches, bugfixes, testing, and docstrings.
Robert Cimrman for array set operations and numpy.distutils help.
John Hunter for code snippets from matplotlib.
Chris Hanley for help with records.py, testing, and bug fixes.
Travis Vaught for administration, community coordination and
marketing.
Joe Cooper, Jeff Strunk for administration.
Eric Firing for bugfixes.
Arnd Baecker for 64-bit testing.
David Cooke for many code improvements including the auto-generated C-API,
and optimizations.
Andrew Straw for help with the web-page, documentation, packaging and
testing.
Alexander Belopolsky (Sasha) for Masked array bug-fixes and tests,
rank-0 array improvements, scalar math help and other code additions.
Francesc Altet for unicode, work on nested record arrays, and bug-fixes.
Tim Hochberg for getting the build working on MSVC, optimization
improvements, and code review.
Charles (Chuck) Harris for the sorting code originally written for
Numarray and for improvements to polyfit, many bug fixes, delving
into the C code, release management, and documentation.
David Huard for histogram improvements including 2-D and d-D code and
other bug-fixes.
Stefan van der Walt for numerous bug-fixes, testing and documentation.
Albert Strasheim for documentation, bug-fixes, regression tests and
Valgrind expertise.
David Cournapeau for build support, doc-and-bug fixes, and code
contributions including fast_clipping.
Jarrod Millman for release management, community coordination, and code
clean up.
Chris Burns for work on memory mapped arrays and bug-fixes.
Pauli Virtanen for documentation, bug-fixes, lookfor and the
documentation editor.
A.M. Archibald for no-copy-reshape code, strided array tricks,
documentation and bug-fixes.
Pierre Gerard-Marchant for rewriting masked array functionality.
Roberto de Almeida for the buffered array iterator.
Alan McIntyre for updating the NumPy test framework to use nose, improve
the test coverage, and enhancing the test system documentation.
Joe Harrington for administering the 2008 Documentation Sprint.
Mark Wiebe for the new NumPy iterator, the float16 data type, improved
low-level data type operations, and other NumPy core improvements.
NumPy is based on the Numeric (Jim Hugunin, Paul Dubois, Konrad
Hinsen, and David Ascher) and NumArray (Perry Greenfield, J Todd
Miller, Rick White and Paul Barrett) projects. We thank them for
paving the way ahead.
Institutions
------------
Enthought for providing resources and finances for development of NumPy.
UC Berkeley for providing travel money and hosting numerous sprints.
The University of Central Florida for funding the 2008 Documentation Marathon.
The University of Stellenbosch for hosting the buildbot.


@@ -1,42 +0,0 @@
from timeit import Timer

class Benchmark(dict):
    """Benchmark a feature in different modules."""
    def __init__(self,modules,title='',runs=3,reps=1000):
        self.module_test = dict((m,'') for m in modules)
        self.runs = runs
        self.reps = reps
        self.title = title

    def __setitem__(self,module,(test_str,setup_str)):
        """Set the test code for modules."""
        if module == 'all':
            modules = self.module_test.keys()
        else:
            modules = [module]
        for m in modules:
            setup_str = 'import %s; import %s as np; ' % (m,m) \
                        + setup_str
            self.module_test[m] = Timer(test_str, setup_str)

    def run(self):
        """Run the benchmark on the different modules."""
        module_column_len = max(len(mod) for mod in self.module_test)
        if self.title:
            print self.title
        print 'Doing %d runs, each with %d reps.' % (self.runs,self.reps)
        print '-'*79
        for mod in sorted(self.module_test):
            modname = mod.ljust(module_column_len)
            try:
                print "%s: %s" % (modname, \
                    self.module_test[mod].repeat(self.runs,self.reps))
            except Exception, e:
                print "%s: Failed to benchmark (%s)." % (modname,e)
        print '-'*79
        print


@@ -1,17 +0,0 @@
from benchmark import Benchmark

modules = ['numpy','Numeric','numarray']

b = Benchmark(modules,
              title='Casting a (10,10) integer array to float.',
              runs=3,reps=10000)

N = [10,10]
b['numpy'] = ('b = a.astype(int)',
              'a=numpy.zeros(shape=%s,dtype=float)' % N)
b['Numeric'] = ('b = a.astype("l")',
                'a=Numeric.zeros(shape=%s,typecode="d")' % N)
b['numarray'] = ("b = a.astype('l')",
                 "a=numarray.zeros(shape=%s,typecode='d')" % N)

b.run()


@@ -1,14 +0,0 @@
from benchmark import Benchmark

modules = ['numpy','Numeric','numarray']

N = [10,10]
b = Benchmark(modules,
              title='Creating %s zeros.' % N,
              runs=3,reps=10000)

b['numpy']    = ('a=np.zeros(shape,type)', 'shape=%s;type=float' % N)
b['Numeric']  = ('a=np.zeros(shape,type)', 'shape=%s;type=np.Float' % N)
b['numarray'] = ('a=np.zeros(shape,type)', "shape=%s;type=np.Float" % N)

b.run()


@@ -1,48 +0,0 @@
import timeit

# This is to show that NumPy is a poorer choice than nested Python lists
# if you are writing nested for loops.
# This is slower than Numeric was but Numeric was slower than Python lists were
# in the first place.

N = 30

code2 = r"""
for k in xrange(%d):
    for l in xrange(%d):
        res = a[k,l].item() + a[l,k].item()
""" % (N,N)
code3 = r"""
for k in xrange(%d):
    for l in xrange(%d):
        res = a[k][l] + a[l][k]
""" % (N,N)
code = r"""
for k in xrange(%d):
    for l in xrange(%d):
        res = a[k,l] + a[l,k]
""" % (N,N)
setup3 = r"""
import random
a = [[None for k in xrange(%d)] for l in xrange(%d)]
for k in xrange(%d):
    for l in xrange(%d):
        a[k][l] = random.random()
""" % (N,N,N,N)

numpy_timer1 = timeit.Timer(code, 'import numpy as np; a = np.random.rand(%d,%d)' % (N,N))
numeric_timer = timeit.Timer(code, 'import MLab as np; a=np.rand(%d,%d)' % (N,N))
numarray_timer = timeit.Timer(code, 'import numarray.mlab as np; a=np.rand(%d,%d)' % (N,N))
numpy_timer2 = timeit.Timer(code2, 'import numpy as np; a = np.random.rand(%d,%d)' % (N,N))
python_timer = timeit.Timer(code3, setup3)
numpy_timer3 = timeit.Timer("res = a + a.transpose()","import numpy as np; a=np.random.rand(%d,%d)" % (N,N))

print "shape = ", (N,N)
print "NumPy 1: ", numpy_timer1.repeat(3,100)
print "NumPy 2: ", numpy_timer2.repeat(3,100)
print "Numeric: ", numeric_timer.repeat(3,100)
print "Numarray: ", numarray_timer.repeat(3,100)
print "Python: ", python_timer.repeat(3,100)
print "Optimized: ", numpy_timer3.repeat(3,100)


@@ -1,25 +0,0 @@
from benchmark import Benchmark
modules = ['numpy','Numeric','numarray']
b = Benchmark(modules,runs=3,reps=100)
N = 10000
b.title = 'Sorting %d elements' % N
b['numarray'] = ('a=np.array(None,shape=%d,typecode="i");a.sort()'%N,'')
b['numpy'] = ('a=np.empty(shape=%d, dtype="i");a.sort()'%N,'')
b['Numeric'] = ('a=np.empty(shape=%d, typecode="i");np.sort(a)'%N,'')
b.run()
N1,N2 = 100,100
b.title = 'Sorting (%d,%d) elements, last axis' % (N1,N2)
b['numarray'] = ('a=np.array(None,shape=(%d,%d),typecode="i");a.sort()'%(N1,N2),'')
b['numpy'] = ('a=np.empty(shape=(%d,%d), dtype="i");a.sort()'%(N1,N2),'')
b['Numeric'] = ('a=np.empty(shape=(%d,%d),typecode="i");np.sort(a)'%(N1,N2),'')
b.run()
N1,N2 = 100,100
b.title = 'Sorting (%d,%d) elements, first axis' % (N1,N2)
b['numarray'] = ('a=np.array(None,shape=(%d,%d), typecode="i");a.sort(0)'%(N1,N2),'')
b['numpy'] = ('a=np.empty(shape=(%d,%d),dtype="i");np.sort(a,0)'%(N1,N2),'')
b['Numeric'] = ('a=np.empty(shape=(%d,%d),typecode="i");np.sort(a,0)'%(N1,N2),'')
b.run()


@@ -1,168 +0,0 @@
# Makefile for Sphinx documentation
#

PYVER =
PYTHON = python$(PYVER)

# You can set these variables from the command line.
SPHINXOPTS  =
SPHINXBUILD = LANG=C sphinx-build
PAPER       =
FILES       =

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source

.PHONY: help clean html web pickle htmlhelp latex changes linkcheck \
	dist dist-build gitwash-update

#------------------------------------------------------------------------------

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html            to make standalone HTML files"
	@echo "  pickle          to make pickle files (usable by e.g. sphinx-web)"
	@echo "  htmlhelp        to make HTML files and a HTML help project"
	@echo "  latex           to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes         to make an overview over all changed/added/deprecated items"
	@echo "  linkcheck       to check all external links for integrity"
	@echo "  dist PYVER=...  to make a distribution-ready tree"
	@echo "  upload USER=... to upload results to docs.scipy.org"
	@echo "  gitwash-update GITWASH=path/to/gitwash  update gitwash developer docs"

clean:
	-rm -rf build/* source/reference/generated

gitwash-update:
	rm -rf source/dev/gitwash
	install -d source/dev/gitwash
	python $(GITWASH)/gitwash_dumper.py source/dev NumPy \
	    --repo-name=numpy \
	    --github-user=numpy
	cat source/dev/gitwash_links.txt >> source/dev/gitwash/git_links.inc

#------------------------------------------------------------------------------
# Automated generation of all documents
#------------------------------------------------------------------------------

# Build the current numpy version, and extract docs from it.
# We have to be careful of some issues:
#
# - Everything must be done using the same Python version
# - We must use eggs (otherwise they might override PYTHONPATH on import).
# - Different versions of easy_install install to different directories (!)
#

INSTALL_DIR = $(CURDIR)/build/inst-dist/
INSTALL_PPH = $(INSTALL_DIR)/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/lib/python$(PYVER)/dist-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/dist-packages

DIST_VARS=SPHINXBUILD="LANG=C PYTHONPATH=$(INSTALL_PPH) python$(PYVER) `which sphinx-build`" PYTHON="PYTHONPATH=$(INSTALL_PPH) python$(PYVER)" SPHINXOPTS="$(SPHINXOPTS)"

UPLOAD_TARGET = $(USER)@docs.scipy.org:/home/docserver/www-root/doc/numpy/

upload:
	@test -e build/dist || { echo "make dist is required first"; exit 1; }
	@test output-is-fine -nt build/dist || { \
	echo "Review the output in build/dist, and do 'touch output-is-fine' before uploading."; exit 1; }
	rsync -r -z --delete-after -p \
	    $(if $(shell test -f build/dist/numpy-ref.pdf && echo "y"),, \
	    --exclude '**-ref.pdf' --exclude '**-user.pdf') \
	    $(if $(shell test -f build/dist/numpy-chm.zip && echo "y"),, \
	    --exclude '**-chm.zip') \
	    build/dist/ $(UPLOAD_TARGET)

dist:
	make $(DIST_VARS) real-dist

real-dist: dist-build html
	test -d build/latex || make latex
	make -C build/latex all-pdf
	-test -d build/htmlhelp || make htmlhelp-build
	-rm -rf build/dist
	cp -r build/html build/dist
	perl -pi -e 's#^\s*(<li><a href=".*?">NumPy.*?Manual.*?&raquo;</li>)#<li><a href="/">Numpy and Scipy Documentation</a> &raquo;</li>#;' build/dist/*.html build/dist/*/*.html build/dist/*/*/*.html
	cd build/html && zip -9r ../dist/numpy-html.zip .
	cp build/latex/numpy-*.pdf build/dist
	-zip build/dist/numpy-chm.zip build/htmlhelp/numpy.chm
	cd build/dist && tar czf ../dist.tar.gz *
	chmod ug=rwX,o=rX -R build/dist
	find build/dist -type d -print0 | xargs -0r chmod g+s

dist-build:
	rm -f ../dist/*.egg
	cd .. && $(PYTHON) setupegg.py bdist_egg
	install -d $(subst :, ,$(INSTALL_PPH))
	$(PYTHON) `which easy_install` --prefix=$(INSTALL_DIR) ../dist/*.egg

#------------------------------------------------------------------------------
# Basic Sphinx generation rules for different formats
#------------------------------------------------------------------------------

generate: build/generate-stamp
build/generate-stamp: $(wildcard source/reference/*.rst)
	mkdir -p build
	touch build/generate-stamp

html: generate
	mkdir -p build/html build/doctrees
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html $(FILES)
	$(PYTHON) postprocess.py html build/html/*.html
	@echo
	@echo "Build finished. The HTML pages are in build/html."

pickle: generate
	mkdir -p build/pickle build/doctrees
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle $(FILES)
	@echo
	@echo "Build finished; now you can process the pickle files or run"
	@echo "  sphinx-web build/pickle"
	@echo "to start the sphinx-web server."

web: pickle

htmlhelp: generate
	mkdir -p build/htmlhelp build/doctrees
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp $(FILES)
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in build/htmlhelp."

htmlhelp-build: htmlhelp build/htmlhelp/numpy.chm
%.chm: %.hhp
	-hhc.exe $^

qthelp: generate
	mkdir -p build/qthelp build/doctrees
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) build/qthelp $(FILES)

latex: generate
	mkdir -p build/latex build/doctrees
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex $(FILES)
	$(PYTHON) postprocess.py tex build/latex/*.tex
	perl -pi -e 's/\t(latex.*|pdflatex) (.*)/\t-$$1 -interaction batchmode $$2/' build/latex/Makefile
	@echo
	@echo "Build finished; the LaTeX files are in build/latex."
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."

coverage: build
	mkdir -p build/coverage build/doctrees
	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) build/coverage $(FILES)
	@echo "Coverage finished; see c.txt and python.txt in build/coverage"

changes: generate
	mkdir -p build/changes build/doctrees
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes $(FILES)
	@echo
	@echo "The overview file is in build/changes."

linkcheck: generate
	mkdir -p build/linkcheck build/doctrees
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck $(FILES)
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in build/linkcheck/output.txt."


@@ -1,2 +0,0 @@
numpyx.pyx
setup.py


@@ -1,37 +0,0 @@
# Simple makefile to quickly access handy build commands for Cython extension
# code generation.  Note that the actual code to produce the extension lives in
# the setup.py file, this Makefile is just meant as a command
# convenience/reminder while doing development.

help:
	@echo "Numpy/Cython tasks.  Available tasks:"
	@echo "ext  -> build the Cython extension module."
	@echo "html -> create annotated HTML from the .pyx sources"
	@echo "test -> run a simple test demo."
	@echo "all  -> Call ext, html and finally test."

all: ext html test

ext: numpyx.so

test: ext
	python run_test.py

html: numpyx.pyx.html

numpyx.so: numpyx.pyx numpyx.c
	python setup.py build_ext --inplace

numpyx.pyx.html: numpyx.pyx
	cython -a numpyx.pyx
	@echo "Annotated HTML of the C code generated in numpyx.html"

# Phony targets for cleanup and similar uses
.PHONY: clean
clean:
	rm -rf *~ *.so *.c *.o *.html build

# Suffix rules
%.c : %.pyx
	cython $<


@@ -1,20 +0,0 @@
==================
 NumPy and Cython
==================

This directory contains a small example of how to use NumPy and Cython
together.  While much work is planned for the Summer of 2008 as part of the
Google Summer of Code project to improve integration between the two, even
today Cython can be used effectively to write optimized code that accesses
NumPy arrays.

The example provided is just a stub showing how to build an extension and
access the array objects; improvements that show more sophisticated tasks
are welcome.

To run it locally, simply type::

    make help

which shows you the currently available targets (these are just handy
shorthands for common commands).


@@ -1,137 +0,0 @@
# :Author: Travis Oliphant

# API declaration section.  This basically exposes the NumPy C API to
# Pyrex/Cython programs.

cdef extern from "numpy/arrayobject.h":

    cdef enum NPY_TYPES:
        NPY_BOOL
        NPY_BYTE
        NPY_UBYTE
        NPY_SHORT
        NPY_USHORT
        NPY_INT
        NPY_UINT
        NPY_LONG
        NPY_ULONG
        NPY_LONGLONG
        NPY_ULONGLONG
        NPY_FLOAT
        NPY_DOUBLE
        NPY_LONGDOUBLE
        NPY_CFLOAT
        NPY_CDOUBLE
        NPY_CLONGDOUBLE
        NPY_OBJECT
        NPY_STRING
        NPY_UNICODE
        NPY_VOID
        NPY_NTYPES
        NPY_NOTYPE

    cdef enum requirements:
        NPY_CONTIGUOUS
        NPY_FORTRAN
        NPY_OWNDATA
        NPY_FORCECAST
        NPY_ENSURECOPY
        NPY_ENSUREARRAY
        NPY_ELEMENTSTRIDES
        NPY_ALIGNED
        NPY_NOTSWAPPED
        NPY_WRITEABLE
        NPY_UPDATEIFCOPY
        NPY_ARR_HAS_DESCR
        NPY_BEHAVED
        NPY_BEHAVED_NS
        NPY_CARRAY
        NPY_CARRAY_RO
        NPY_FARRAY
        NPY_FARRAY_RO
        NPY_DEFAULT
        NPY_IN_ARRAY
        NPY_OUT_ARRAY
        NPY_INOUT_ARRAY
        NPY_IN_FARRAY
        NPY_OUT_FARRAY
        NPY_INOUT_FARRAY
        NPY_UPDATE_ALL

    cdef enum defines:
        NPY_MAXDIMS

    ctypedef struct npy_cdouble:
        double real
        double imag

    ctypedef struct npy_cfloat:
        double real
        double imag

    ctypedef int npy_intp

    ctypedef extern class numpy.dtype [object PyArray_Descr]:
        cdef int type_num, elsize, alignment
        cdef char type, kind, byteorder
        cdef int flags
        cdef object fields, typeobj

    ctypedef extern class numpy.ndarray [object PyArrayObject]:
        cdef char *data
        cdef int nd
        cdef npy_intp *dimensions
        cdef npy_intp *strides
        cdef object base
        cdef dtype descr
        cdef int flags

    ctypedef extern class numpy.flatiter [object PyArrayIterObject]:
        cdef int nd_m1
        cdef npy_intp index, size
        cdef ndarray ao
        cdef char *dataptr

    ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]:
        cdef int numiter
        cdef npy_intp size, index
        cdef int nd
        cdef npy_intp *dimensions
        cdef void **iters

    object PyArray_ZEROS(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
    object PyArray_EMPTY(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
    dtype PyArray_DescrFromTypeNum(NPY_TYPES type_num)
    object PyArray_SimpleNew(int ndims, npy_intp* dims, NPY_TYPES type_num)
    int PyArray_Check(object obj)
    object PyArray_ContiguousFromAny(object obj, NPY_TYPES type,
                                     int mindim, int maxdim)
    object PyArray_ContiguousFromObject(object obj, NPY_TYPES type,
                                        int mindim, int maxdim)
    npy_intp PyArray_SIZE(ndarray arr)
    npy_intp PyArray_NBYTES(ndarray arr)
    void *PyArray_DATA(ndarray arr)
    object PyArray_FromAny(object obj, dtype newtype, int mindim, int maxdim,
                           int requirements, object context)
    object PyArray_FROMANY(object obj, NPY_TYPES type_num, int min,
                           int max, int requirements)
    object PyArray_NewFromDescr(object subtype, dtype newtype, int nd,
                                npy_intp* dims, npy_intp* strides, void* data,
                                int flags, object parent)
    object PyArray_FROM_OTF(object obj, NPY_TYPES type, int flags)
    object PyArray_EnsureArray(object)

    object PyArray_MultiIterNew(int n, ...)
    char *PyArray_MultiIter_DATA(broadcast multi, int i)
    void PyArray_MultiIter_NEXTi(broadcast multi, int i)
    void PyArray_MultiIter_NEXT(broadcast multi)

    object PyArray_IterNew(object arr)
    void PyArray_ITER_NEXT(flatiter it)

    void import_array()


@@ -1,62 +0,0 @@
# :Author: Robert Kern
# :Copyright: 2004, Enthought, Inc.
# :License: BSD Style

cdef extern from "Python.h":
    # Not part of the Python API, but we might as well define it here.
    # Note that the exact type doesn't actually matter for Pyrex.
    ctypedef int size_t

    # Some type declarations we need
    ctypedef int Py_intptr_t

    # String API
    char* PyString_AsString(object string)
    char* PyString_AS_STRING(object string)
    object PyString_FromString(char* c_string)
    object PyString_FromStringAndSize(char* c_string, int length)
    object PyString_InternFromString(char *v)

    # Float API
    object PyFloat_FromDouble(double v)
    double PyFloat_AsDouble(object ob)
    long PyInt_AsLong(object ob)

    # Memory API
    void* PyMem_Malloc(size_t n)
    void* PyMem_Realloc(void* buf, size_t n)
    void PyMem_Free(void* buf)

    void Py_DECREF(object obj)
    void Py_XDECREF(object obj)
    void Py_INCREF(object obj)
    void Py_XINCREF(object obj)

    # CObject API
    ctypedef void (*destructor1)(void* cobj)
    ctypedef void (*destructor2)(void* cobj, void* desc)
    int PyCObject_Check(object p)
    object PyCObject_FromVoidPtr(void* cobj, destructor1 destr)
    object PyCObject_FromVoidPtrAndDesc(void* cobj, void* desc,
                                        destructor2 destr)
    void* PyCObject_AsVoidPtr(object self)
    void* PyCObject_GetDesc(object self)
    int PyCObject_SetVoidPtr(object self, void* cobj)

    # TypeCheck API
    int PyFloat_Check(object obj)
    int PyInt_Check(object obj)

    # Error API
    int PyErr_Occurred()
    void PyErr_Clear()
    int PyErr_CheckSignals()

cdef extern from "string.h":
    void *memcpy(void *s1, void *s2, int n)

cdef extern from "math.h":
    double fabs(double x)

@@ -1,127 +0,0 @@
# -*- Mode: Python -*- Not really, but close enough
"""Cython access to Numpy arrays - simple example.
"""
#############################################################################
# Load C APIs declared in .pxd files via cimport
#
# A 'cimport' is similar to a Python 'import' statement, but it provides access
# to the C part of a library instead of its Python-visible API. See the
# Pyrex/Cython documentation for details.
cimport c_python as py
cimport c_numpy as cnp
# NOTE: numpy MUST be initialized before any other code is executed.
cnp.import_array()
#############################################################################
# Load Python modules via normal import statements
import numpy as np
#############################################################################
# Regular code section begins
# A 'def' function is visible in the Python-imported module
def print_array_info(cnp.ndarray arr):
"""Simple information printer about an array.
Code meant to illustrate Cython/NumPy integration only."""
cdef int i
print '-='*10
# Note: the double cast here (void * first, then py.Py_intptr_t) is needed
# in Cython but not in Pyrex, since the casting behavior of cython is
# slightly different (and generally safer) than that of Pyrex. In this
# case, we just want the memory address of the actual Array object, so we
# cast it to void before doing the py.Py_intptr_t cast:
print 'Printing array info for ndarray at 0x%0lx'% \
(<py.Py_intptr_t><void *>arr,)
print 'number of dimensions:',arr.nd
print 'address of strides: 0x%0lx'%(<py.Py_intptr_t>arr.strides,)
print 'strides:'
for i from 0<=i<arr.nd:
# print each stride
print ' stride %d:'%i,<py.Py_intptr_t>arr.strides[i]
print 'memory dump:'
print_elements( arr.data, arr.strides, arr.dimensions,
arr.nd, sizeof(double), arr.dtype )
print '-='*10
print
# A 'cdef' function is NOT visible to the python side, but it is accessible to
# the rest of this Cython module
cdef print_elements(char *data,
py.Py_intptr_t* strides,
py.Py_intptr_t* dimensions,
int nd,
int elsize,
object dtype):
cdef py.Py_intptr_t i,j
cdef void* elptr
if dtype not in [np.dtype(np.object_),
np.dtype(np.float64)]:
print ' print_elements() not (yet) implemented for dtype %s'%dtype.name
return
if nd == 0:
if dtype==np.dtype(np.object_):
elptr = (<void**>data)[0] #[0] dereferences pointer in Pyrex
print ' ',<object>elptr
elif dtype==np.dtype(np.float64):
print ' ',(<double*>data)[0]
elif nd == 1:
for i from 0<=i<dimensions[0]:
if dtype==np.dtype(np.object_):
elptr = (<void**>data)[0]
print ' ',<object>elptr
elif dtype==np.dtype(np.float64):
print ' ',(<double*>data)[0]
data = data + strides[0]
else:
for i from 0<=i<dimensions[0]:
print_elements(data, strides+1, dimensions+1, nd-1, elsize, dtype)
data = data + strides[0]
def test_methods(cnp.ndarray arr):
"""Test a few attribute accesses for an array.
This illustrates how the pyrex-visible object is in practice a strange
hybrid of the C PyArrayObject struct and the python object. Some
properties (like .nd) are visible here but not in python, while others
like flags behave very differently: in python flags appears as a separate
object, while here we see the raw int holding the bit pattern.
This makes sense when we think of how pyrex resolves arr.foo: if foo is
listed as a field in the ndarray struct description, it will be directly
accessed as a C variable without going through Python at all. This is why
for arr.flags, we see the actual int which holds all the flags as bit
fields. However, for any other attribute not listed in the struct, it
simply forwards the attribute lookup to python at runtime, just like python
would (which means that AttributeError can be raised for non-existent
attributes, for example)."""
print 'arr.any() :',arr.any()
print 'arr.nd :',arr.nd
print 'arr.flags :',arr.flags
def test():
"""this function is pure Python"""
arr1 = np.array(-1e-30,dtype=np.float64)
arr2 = np.array([1.0,2.0,3.0],dtype=np.float64)
arr3 = np.arange(9,dtype=np.float64)
arr3.shape = 3,3
four = 4
arr4 = np.array(['one','two',3,four],dtype=np.object_)
arr5 = np.array([1,2,3]) # int types not (yet) supported by print_elements
for arr in [arr1,arr2,arr3,arr4,arr5]:
print_array_info(arr)

@@ -1,3 +0,0 @@
#!/usr/bin/env python
from numpyx import test
test()

@@ -1,49 +0,0 @@
#!/usr/bin/env python
"""Install file for example on how to use Cython with Numpy.
Note: Cython is the successor project to Pyrex. For more information, see
http://cython.org.
"""
from distutils.core import setup
from distutils.extension import Extension
import numpy
# We detect whether Cython is available, so that below, we can eventually ship
# pre-generated C for users to compile the extension without having Cython
# installed on their systems.
try:
from Cython.Distutils import build_ext
has_cython = True
except ImportError:
has_cython = False
# Define a cython-based extension module, using the generated sources if cython
# is not available.
if has_cython:
pyx_sources = ['numpyx.pyx']
cmdclass = {'build_ext': build_ext}
else:
# In production work, you can ship the auto-generated C source yourself to
# your users. In this case, we do NOT ship the .c file as part of numpy,
# so you'll need to actually have Cython installed, at least the first
# time. Since this is really just an example of how to use
# *Cython*, it makes more sense NOT to ship the C sources so you can edit
# the pyx at will with fewer chances of source-update conflicts when you
# update numpy.
pyx_sources = ['numpyx.c']
cmdclass = {}
# Declare the extension object
pyx_ext = Extension('numpyx',
pyx_sources,
include_dirs = [numpy.get_include()])
# Call the routine which does the real work
setup(name = 'numpyx',
description = 'Small example on using Cython to write a Numpy extension',
ext_modules = [pyx_ext],
cmdclass = cmdclass,
)
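The Cython-detection logic in this setup.py is an instance of the general optional-dependency pattern; a standalone sketch, using json purely as a stand-in importable module so the example runs anywhere:

```python
# Try to import an optional accelerator; fall back to pre-generated
# sources when it is absent.  'json' stands in for Cython.Distutils
# here only so the sketch is runnable without Cython installed.
try:
    import json as accelerator
    has_accelerator = True
except ImportError:
    accelerator = None
    has_accelerator = False

# Mirror of the pyx_sources selection in the setup.py above.
sources = ['numpyx.pyx'] if has_accelerator else ['numpyx.c']
print(has_accelerator, sources)
```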

@@ -1,59 +0,0 @@
#!/usr/bin/env python
"""
%prog MODE FILES...
Post-processes HTML and Latex files output by Sphinx.
MODE is either 'html' or 'tex'.
"""
import re, optparse
def main():
p = optparse.OptionParser(__doc__)
options, args = p.parse_args()
if len(args) < 1:
p.error('no mode given')
mode = args.pop(0)
if mode not in ('html', 'tex'):
p.error('unknown mode %s' % mode)
for fn in args:
f = open(fn, 'r')
try:
if mode == 'html':
lines = process_html(fn, f.readlines())
elif mode == 'tex':
lines = process_tex(f.readlines())
finally:
f.close()
f = open(fn, 'w')
f.write("".join(lines))
f.close()
def process_html(fn, lines):
return lines
def process_tex(lines):
"""
Remove unnecessary section titles from the LaTeX file.
"""
new_lines = []
for line in lines:
if (line.startswith(r'\section{numpy.')
or line.startswith(r'\subsection{numpy.')
or line.startswith(r'\subsubsection{numpy.')
or line.startswith(r'\paragraph{numpy.')
or line.startswith(r'\subparagraph{numpy.')
):
pass # skip!
else:
new_lines.append(line)
return new_lines
if __name__ == "__main__":
main()
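The prefix test inside process_tex can be exercised on its own; the sketch below (strip_numpy_section_titles is a hypothetical standalone name, not part of the script) applies the same rule via str.startswith with a tuple of prefixes:

```python
# Hypothetical standalone version of the filter in process_tex: drop any
# LaTeX sectioning command whose title begins with "numpy.".
def strip_numpy_section_titles(lines):
    prefixes = tuple(cmd + '{numpy.' for cmd in
                     (r'\section', r'\subsection', r'\subsubsection',
                      r'\paragraph', r'\subparagraph'))
    return [line for line in lines if not line.startswith(prefixes)]

sample = [r'\section{numpy.ndarray}', r'\section{Indexing}', 'body text']
kept = strip_numpy_section_titles(sample)
print(kept)  # keeps the non-numpy section title and the body line
```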

@@ -1,2 +0,0 @@
numpyx.pyx
setup.py

@@ -1,9 +0,0 @@
all:
python setup.py build_ext --inplace
test: all
python run_test.py
.PHONY: clean
clean:
rm -rf *~ *.so *.c *.o build

@@ -1,3 +0,0 @@
WARNING: this code is deprecated and slated for removal soon. See the
doc/cython directory for the replacement, which uses Cython (the actively
maintained version of Pyrex).

@@ -1,126 +0,0 @@
# :Author: Travis Oliphant
cdef extern from "numpy/arrayobject.h":
cdef enum NPY_TYPES:
NPY_BOOL
NPY_BYTE
NPY_UBYTE
NPY_SHORT
NPY_USHORT
NPY_INT
NPY_UINT
NPY_LONG
NPY_ULONG
NPY_LONGLONG
NPY_ULONGLONG
NPY_FLOAT
NPY_DOUBLE
NPY_LONGDOUBLE
NPY_CFLOAT
NPY_CDOUBLE
NPY_CLONGDOUBLE
NPY_OBJECT
NPY_STRING
NPY_UNICODE
NPY_VOID
NPY_NTYPES
NPY_NOTYPE
cdef enum requirements:
NPY_CONTIGUOUS
NPY_FORTRAN
NPY_OWNDATA
NPY_FORCECAST
NPY_ENSURECOPY
NPY_ENSUREARRAY
NPY_ELEMENTSTRIDES
NPY_ALIGNED
NPY_NOTSWAPPED
NPY_WRITEABLE
NPY_UPDATEIFCOPY
NPY_ARR_HAS_DESCR
NPY_BEHAVED
NPY_BEHAVED_NS
NPY_CARRAY
NPY_CARRAY_RO
NPY_FARRAY
NPY_FARRAY_RO
NPY_DEFAULT
NPY_IN_ARRAY
NPY_OUT_ARRAY
NPY_INOUT_ARRAY
NPY_IN_FARRAY
NPY_OUT_FARRAY
NPY_INOUT_FARRAY
NPY_UPDATE_ALL
cdef enum defines:
# Note: as of Pyrex 0.9.5, enums are type-checked more strictly, so this
# can't be used as an integer.
NPY_MAXDIMS
ctypedef struct npy_cdouble:
double real
double imag
ctypedef struct npy_cfloat:
double real
double imag
ctypedef int npy_intp
ctypedef extern class numpy.dtype [object PyArray_Descr]:
cdef int type_num, elsize, alignment
cdef char type, kind, byteorder
cdef int flags
cdef object fields, typeobj
ctypedef extern class numpy.ndarray [object PyArrayObject]:
cdef char *data
cdef int nd
cdef npy_intp *dimensions
cdef npy_intp *strides
cdef object base
cdef dtype descr
cdef int flags
ctypedef extern class numpy.flatiter [object PyArrayIterObject]:
cdef int nd_m1
cdef npy_intp index, size
cdef ndarray ao
cdef char *dataptr
ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]:
cdef int numiter
cdef npy_intp size, index
cdef int nd
# These next two should be arrays of [NPY_MAXITER], but that is
# difficult to cleanly specify in Pyrex. Fortunately, it doesn't matter.
cdef npy_intp *dimensions
cdef void **iters
object PyArray_ZEROS(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
object PyArray_EMPTY(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
dtype PyArray_DescrFromTypeNum(NPY_TYPES type_num)
object PyArray_SimpleNew(int ndims, npy_intp* dims, NPY_TYPES type_num)
int PyArray_Check(object obj)
object PyArray_ContiguousFromAny(object obj, NPY_TYPES type,
int mindim, int maxdim)
npy_intp PyArray_SIZE(ndarray arr)
npy_intp PyArray_NBYTES(ndarray arr)
void *PyArray_DATA(ndarray arr)
object PyArray_FromAny(object obj, dtype newtype, int mindim, int maxdim,
int requirements, object context)
object PyArray_FROMANY(object obj, NPY_TYPES type_num, int min,
int max, int requirements)
object PyArray_NewFromDescr(object subtype, dtype newtype, int nd,
npy_intp* dims, npy_intp* strides, void* data,
int flags, object parent)
void PyArray_ITER_NEXT(flatiter it)
void import_array()

@@ -1,20 +0,0 @@
# -*- Mode: Python -*- Not really, but close enough
# Expose as much of the Python C API as we need here
cdef extern from "stdlib.h":
ctypedef int size_t
cdef extern from "Python.h":
ctypedef int Py_intptr_t
void* PyMem_Malloc(size_t)
void* PyMem_Realloc(void *p, size_t n)
void PyMem_Free(void *p)
char* PyString_AsString(object string)
object PyString_FromString(char *v)
object PyString_InternFromString(char *v)
int PyErr_CheckSignals()
object PyFloat_FromDouble(double v)
void Py_XINCREF(object o)
void Py_XDECREF(object o)
void Py_CLEAR(object o) # use instead of decref

@@ -1,3 +0,0 @@
- cimport with a .pxd file vs 'include foo.pxi'?
- the need to repeat: pyrex does NOT parse C headers.
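The second note ("pyrex does NOT parse C headers") has a close analogue in ctypes, where a C prototype must likewise be restated by hand; a sketch assuming libm (or the main process) provides fabs:

```python
import ctypes
import ctypes.util

# Like Pyrex, ctypes never reads math.h: the prototype for fabs is
# restated manually, mirroring the 'cdef extern from "math.h"' blocks
# in the .pxd files above.  find_library may return None, in which case
# CDLL(None) searches the main process (which links libm on Linux).
libm = ctypes.CDLL(ctypes.util.find_library('m'))
libm.fabs.restype = ctypes.c_double
libm.fabs.argtypes = [ctypes.c_double]
print(libm.fabs(-2.5))
```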

@@ -1,1037 +0,0 @@
/* Generated by Pyrex 0.9.5.1 on Wed Jan 31 11:57:10 2007 */
#include "Python.h"
#include "structmember.h"
#ifndef PY_LONG_LONG
#define PY_LONG_LONG LONG_LONG
#endif
#ifdef __cplusplus
#define __PYX_EXTERN_C extern "C"
#else
#define __PYX_EXTERN_C extern
#endif
__PYX_EXTERN_C double pow(double, double);
#include "stdlib.h"
#include "numpy/arrayobject.h"
typedef struct {PyObject **p; char *s;} __Pyx_InternTabEntry; /*proto*/
typedef struct {PyObject **p; char *s; long n;} __Pyx_StringTabEntry; /*proto*/
static PyObject *__pyx_m;
static PyObject *__pyx_b;
static int __pyx_lineno;
static char *__pyx_filename;
static char **__pyx_f;
static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, char *name); /*proto*/
static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list); /*proto*/
static int __Pyx_PrintItem(PyObject *); /*proto*/
static int __Pyx_PrintNewline(void); /*proto*/
static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name); /*proto*/
static int __Pyx_InternStrings(__Pyx_InternTabEntry *t); /*proto*/
static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/
static PyTypeObject *__Pyx_ImportType(char *module_name, char *class_name, long size); /*proto*/
static void __Pyx_AddTraceback(char *funcname); /*proto*/
/* Declarations from c_python */
/* Declarations from c_numpy */
static PyTypeObject *__pyx_ptype_7c_numpy_dtype = 0;
static PyTypeObject *__pyx_ptype_7c_numpy_ndarray = 0;
static PyTypeObject *__pyx_ptype_7c_numpy_flatiter = 0;
static PyTypeObject *__pyx_ptype_7c_numpy_broadcast = 0;
/* Declarations from numpyx */
static PyObject *(__pyx_f_6numpyx_print_elements(char (*),Py_intptr_t (*),Py_intptr_t (*),int ,int ,PyObject *)); /*proto*/
/* Implementation of numpyx */
static PyObject *__pyx_n_c_python;
static PyObject *__pyx_n_c_numpy;
static PyObject *__pyx_n_numpy;
static PyObject *__pyx_n_print_array_info;
static PyObject *__pyx_n_test_methods;
static PyObject *__pyx_n_test;
static PyObject *__pyx_n_dtype;
static PyObject *__pyx_k2p;
static PyObject *__pyx_k3p;
static PyObject *__pyx_k4p;
static PyObject *__pyx_k5p;
static PyObject *__pyx_k6p;
static PyObject *__pyx_k7p;
static PyObject *__pyx_k8p;
static PyObject *__pyx_k9p;
static char (__pyx_k2[]) = "-=";
static char (__pyx_k3[]) = "printing array info for ndarray at 0x%0lx";
static char (__pyx_k4[]) = "print number of dimensions:";
static char (__pyx_k5[]) = "address of strides: 0x%0lx";
static char (__pyx_k6[]) = "strides:";
static char (__pyx_k7[]) = " stride %d:";
static char (__pyx_k8[]) = "memory dump:";
static char (__pyx_k9[]) = "-=";
static PyObject *__pyx_f_6numpyx_print_array_info(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
static PyObject *__pyx_f_6numpyx_print_array_info(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
PyArrayObject *__pyx_v_arr = 0;
int __pyx_v_i;
PyObject *__pyx_r;
PyObject *__pyx_1 = 0;
PyObject *__pyx_2 = 0;
int __pyx_3;
static char *__pyx_argnames[] = {"arr",0};
if (!PyArg_ParseTupleAndKeywords(__pyx_args, __pyx_kwds, "O", __pyx_argnames, &__pyx_v_arr)) return 0;
Py_INCREF(__pyx_v_arr);
if (!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_arr), __pyx_ptype_7c_numpy_ndarray, 1, "arr")) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 10; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":13 */
__pyx_1 = PyInt_FromLong(10); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 13; goto __pyx_L1;}
__pyx_2 = PyNumber_Multiply(__pyx_k2p, __pyx_1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 13; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (__Pyx_PrintItem(__pyx_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 13; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 13; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":14 */
__pyx_1 = PyInt_FromLong(((int )__pyx_v_arr)); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 14; goto __pyx_L1;}
__pyx_2 = PyTuple_New(1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 14; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_2, 0, __pyx_1);
__pyx_1 = 0;
__pyx_1 = PyNumber_Remainder(__pyx_k3p, __pyx_2); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 14; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintItem(__pyx_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 14; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 14; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":15 */
if (__Pyx_PrintItem(__pyx_k4p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 15; goto __pyx_L1;}
__pyx_2 = PyInt_FromLong(__pyx_v_arr->nd); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 15; goto __pyx_L1;}
if (__Pyx_PrintItem(__pyx_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 15; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 15; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":16 */
__pyx_1 = PyInt_FromLong(((int )__pyx_v_arr->strides)); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 16; goto __pyx_L1;}
__pyx_2 = PyTuple_New(1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 16; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_2, 0, __pyx_1);
__pyx_1 = 0;
__pyx_1 = PyNumber_Remainder(__pyx_k5p, __pyx_2); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 16; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintItem(__pyx_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 16; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 16; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":17 */
if (__Pyx_PrintItem(__pyx_k6p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; goto __pyx_L1;}
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":18 */
__pyx_3 = __pyx_v_arr->nd;
for (__pyx_v_i = 0; __pyx_v_i < __pyx_3; ++__pyx_v_i) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":20 */
__pyx_2 = PyInt_FromLong(__pyx_v_i); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 20; goto __pyx_L1;}
__pyx_1 = PyNumber_Remainder(__pyx_k7p, __pyx_2); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 20; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintItem(__pyx_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 20; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_2 = PyInt_FromLong(((int )(__pyx_v_arr->strides[__pyx_v_i]))); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 20; goto __pyx_L1;}
if (__Pyx_PrintItem(__pyx_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 20; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 20; goto __pyx_L1;}
}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":21 */
if (__Pyx_PrintItem(__pyx_k8p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 21; goto __pyx_L1;}
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 21; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":22 */
__pyx_1 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_dtype); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 23; goto __pyx_L1;}
__pyx_2 = __pyx_f_6numpyx_print_elements(__pyx_v_arr->data,__pyx_v_arr->strides,__pyx_v_arr->dimensions,__pyx_v_arr->nd,(sizeof(double )),__pyx_1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 22; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
Py_DECREF(__pyx_2); __pyx_2 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":24 */
__pyx_1 = PyInt_FromLong(10); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; goto __pyx_L1;}
__pyx_2 = PyNumber_Multiply(__pyx_k9p, __pyx_1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (__Pyx_PrintItem(__pyx_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":25 */
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 25; goto __pyx_L1;}
__pyx_r = Py_None; Py_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1:;
Py_XDECREF(__pyx_1);
Py_XDECREF(__pyx_2);
__Pyx_AddTraceback("numpyx.print_array_info");
__pyx_r = 0;
__pyx_L0:;
Py_DECREF(__pyx_v_arr);
return __pyx_r;
}
static PyObject *__pyx_n_object_;
static PyObject *__pyx_n_float64;
static PyObject *__pyx_n_name;
static PyObject *__pyx_k10p;
static PyObject *__pyx_k11p;
static PyObject *__pyx_k12p;
static PyObject *__pyx_k13p;
static PyObject *__pyx_k14p;
static char (__pyx_k10[]) = " print_elements() not (yet) implemented for dtype %s";
static char (__pyx_k11[]) = " ";
static char (__pyx_k12[]) = " ";
static char (__pyx_k13[]) = " ";
static char (__pyx_k14[]) = " ";
static PyObject *__pyx_f_6numpyx_print_elements(char (*__pyx_v_data),Py_intptr_t (*__pyx_v_strides),Py_intptr_t (*__pyx_v_dimensions),int __pyx_v_nd,int __pyx_v_elsize,PyObject *__pyx_v_dtype) {
Py_intptr_t __pyx_v_i;
void (*__pyx_v_elptr);
PyObject *__pyx_r;
PyObject *__pyx_1 = 0;
PyObject *__pyx_2 = 0;
PyObject *__pyx_3 = 0;
PyObject *__pyx_4 = 0;
int __pyx_5;
Py_intptr_t __pyx_6;
Py_INCREF(__pyx_v_dtype);
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":36 */
__pyx_1 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
__pyx_2 = PyObject_GetAttr(__pyx_1, __pyx_n_dtype); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_1 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_1, __pyx_n_object_); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_1 = PyTuple_New(1); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_1, 0, __pyx_3);
__pyx_3 = 0;
__pyx_3 = PyObject_CallObject(__pyx_2, __pyx_1); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_2 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 37; goto __pyx_L1;}
__pyx_1 = PyObject_GetAttr(__pyx_2, __pyx_n_dtype); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 37; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
__pyx_2 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 37; goto __pyx_L1;}
__pyx_4 = PyObject_GetAttr(__pyx_2, __pyx_n_float64); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 37; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
__pyx_2 = PyTuple_New(1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 37; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_2, 0, __pyx_4);
__pyx_4 = 0;
__pyx_4 = PyObject_CallObject(__pyx_1, __pyx_2); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 37; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
Py_DECREF(__pyx_2); __pyx_2 = 0;
__pyx_1 = PyList_New(2); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
PyList_SET_ITEM(__pyx_1, 0, __pyx_3);
PyList_SET_ITEM(__pyx_1, 1, __pyx_4);
__pyx_3 = 0;
__pyx_4 = 0;
__pyx_5 = PySequence_Contains(__pyx_1, __pyx_v_dtype); if (__pyx_5 < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 36; goto __pyx_L1;}
__pyx_5 = !__pyx_5;
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (__pyx_5) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":38 */
__pyx_2 = PyObject_GetAttr(__pyx_v_dtype, __pyx_n_name); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 38; goto __pyx_L1;}
__pyx_3 = PyNumber_Remainder(__pyx_k10p, __pyx_2); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 38; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintItem(__pyx_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 38; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 38; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":39 */
__pyx_r = Py_None; Py_INCREF(Py_None);
goto __pyx_L0;
goto __pyx_L2;
}
__pyx_L2:;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":41 */
__pyx_5 = (__pyx_v_nd == 0);
if (__pyx_5) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":42 */
__pyx_4 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 42; goto __pyx_L1;}
__pyx_1 = PyObject_GetAttr(__pyx_4, __pyx_n_dtype); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 42; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
__pyx_2 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 42; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_2, __pyx_n_object_); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 42; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
__pyx_4 = PyTuple_New(1); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 42; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_4, 0, __pyx_3);
__pyx_3 = 0;
__pyx_2 = PyObject_CallObject(__pyx_1, __pyx_4); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 42; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
Py_DECREF(__pyx_4); __pyx_4 = 0;
if (PyObject_Cmp(__pyx_v_dtype, __pyx_2, &__pyx_5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 42; goto __pyx_L1;}
__pyx_5 = __pyx_5 == 0;
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__pyx_5) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":43 */
__pyx_v_elptr = (((void (*(*)))__pyx_v_data)[0]);
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":44 */
if (__Pyx_PrintItem(__pyx_k11p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 44; goto __pyx_L1;}
__pyx_3 = (PyObject *)__pyx_v_elptr;
Py_INCREF(__pyx_3);
if (__Pyx_PrintItem(__pyx_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 44; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 44; goto __pyx_L1;}
goto __pyx_L4;
}
__pyx_1 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 45; goto __pyx_L1;}
__pyx_4 = PyObject_GetAttr(__pyx_1, __pyx_n_dtype); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 45; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_2 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 45; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_2, __pyx_n_float64); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 45; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
__pyx_1 = PyTuple_New(1); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 45; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_1, 0, __pyx_3);
__pyx_3 = 0;
__pyx_2 = PyObject_CallObject(__pyx_4, __pyx_1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 45; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (PyObject_Cmp(__pyx_v_dtype, __pyx_2, &__pyx_5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 45; goto __pyx_L1;}
__pyx_5 = __pyx_5 == 0;
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__pyx_5) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":46 */
if (__Pyx_PrintItem(__pyx_k12p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 46; goto __pyx_L1;}
__pyx_3 = PyFloat_FromDouble((((double (*))__pyx_v_data)[0])); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 46; goto __pyx_L1;}
if (__Pyx_PrintItem(__pyx_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 46; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 46; goto __pyx_L1;}
goto __pyx_L4;
}
__pyx_L4:;
goto __pyx_L3;
}
__pyx_5 = (__pyx_v_nd == 1);
if (__pyx_5) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":48 */
__pyx_6 = (__pyx_v_dimensions[0]);
for (__pyx_v_i = 0; __pyx_v_i < __pyx_6; ++__pyx_v_i) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":49 */
__pyx_4 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; goto __pyx_L1;}
__pyx_1 = PyObject_GetAttr(__pyx_4, __pyx_n_dtype); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
__pyx_2 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_2, __pyx_n_object_); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
__pyx_4 = PyTuple_New(1); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_4, 0, __pyx_3);
__pyx_3 = 0;
__pyx_2 = PyObject_CallObject(__pyx_1, __pyx_4); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
Py_DECREF(__pyx_4); __pyx_4 = 0;
if (PyObject_Cmp(__pyx_v_dtype, __pyx_2, &__pyx_5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; goto __pyx_L1;}
__pyx_5 = __pyx_5 == 0;
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__pyx_5) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":50 */
__pyx_v_elptr = (((void (*(*)))__pyx_v_data)[0]);
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":51 */
if (__Pyx_PrintItem(__pyx_k13p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 51; goto __pyx_L1;}
__pyx_3 = (PyObject *)__pyx_v_elptr;
Py_INCREF(__pyx_3);
if (__Pyx_PrintItem(__pyx_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 51; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 51; goto __pyx_L1;}
goto __pyx_L7;
}
__pyx_1 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; goto __pyx_L1;}
__pyx_4 = PyObject_GetAttr(__pyx_1, __pyx_n_dtype); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_2 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_2, __pyx_n_float64); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
__pyx_1 = PyTuple_New(1); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_1, 0, __pyx_3);
__pyx_3 = 0;
__pyx_2 = PyObject_CallObject(__pyx_4, __pyx_1); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (PyObject_Cmp(__pyx_v_dtype, __pyx_2, &__pyx_5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; goto __pyx_L1;}
__pyx_5 = __pyx_5 == 0;
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__pyx_5) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":53 */
if (__Pyx_PrintItem(__pyx_k14p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; goto __pyx_L1;}
__pyx_3 = PyFloat_FromDouble((((double (*))__pyx_v_data)[0])); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; goto __pyx_L1;}
if (__Pyx_PrintItem(__pyx_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; goto __pyx_L1;}
goto __pyx_L7;
}
__pyx_L7:;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":54 */
__pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0]));
}
goto __pyx_L3;
}
/*else*/ {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":56 */
__pyx_6 = (__pyx_v_dimensions[0]);
for (__pyx_v_i = 0; __pyx_v_i < __pyx_6; ++__pyx_v_i) {
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":57 */
__pyx_4 = __pyx_f_6numpyx_print_elements(__pyx_v_data,(__pyx_v_strides + 1),(__pyx_v_dimensions + 1),(__pyx_v_nd - 1),__pyx_v_elsize,__pyx_v_dtype); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 57; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":58 */
__pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0]));
}
}
__pyx_L3:;
__pyx_r = Py_None; Py_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1:;
Py_XDECREF(__pyx_1);
Py_XDECREF(__pyx_2);
Py_XDECREF(__pyx_3);
Py_XDECREF(__pyx_4);
__Pyx_AddTraceback("numpyx.print_elements");
__pyx_r = 0;
__pyx_L0:;
Py_DECREF(__pyx_v_dtype);
return __pyx_r;
}
static PyObject *__pyx_n_any;
static PyObject *__pyx_k15p;
static PyObject *__pyx_k16p;
static PyObject *__pyx_k17p;
static char (__pyx_k15[]) = "arr.any() :";
static char (__pyx_k16[]) = "arr.nd :";
static char (__pyx_k17[]) = "arr.flags :";
static PyObject *__pyx_f_6numpyx_test_methods(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
static char __pyx_doc_6numpyx_test_methods[] = "Test a few attribute accesses for an array.\n \n This illustrates how the pyrex-visible object is in practice a strange\n hybrid of the C PyArrayObject struct and the python object. Some\n properties (like .nd) are visible here but not in python, while others\n like flags behave very differently: in python flags appears as a separate,\n object while here we see the raw int holding the bit pattern.\n\n This makes sense when we think of how pyrex resolves arr.foo: if foo is\n listed as a field in the c_numpy.ndarray struct description, it will be\n directly accessed as a C variable without going through Python at all.\n This is why for arr.flags, we see the actual int which holds all the flags\n as bit fields. However, for any other attribute not listed in the struct,\n it simply forwards the attribute lookup to python at runtime, just like\n python would (which means that AttributeError can be raised for\n non-existent attributes, for example).";
static PyObject *__pyx_f_6numpyx_test_methods(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
PyArrayObject *__pyx_v_arr = 0;
PyObject *__pyx_r;
PyObject *__pyx_1 = 0;
PyObject *__pyx_2 = 0;
static char *__pyx_argnames[] = {"arr",0};
if (!PyArg_ParseTupleAndKeywords(__pyx_args, __pyx_kwds, "O", __pyx_argnames, &__pyx_v_arr)) return 0;
Py_INCREF(__pyx_v_arr);
if (!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_arr), __pyx_ptype_7c_numpy_ndarray, 1, "arr")) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 60; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":78 */
if (__Pyx_PrintItem(__pyx_k15p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; goto __pyx_L1;}
__pyx_1 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_any); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; goto __pyx_L1;}
__pyx_2 = PyObject_CallObject(__pyx_1, 0); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (__Pyx_PrintItem(__pyx_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":79 */
if (__Pyx_PrintItem(__pyx_k16p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 79; goto __pyx_L1;}
__pyx_1 = PyInt_FromLong(__pyx_v_arr->nd); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 79; goto __pyx_L1;}
if (__Pyx_PrintItem(__pyx_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 79; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 79; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":80 */
if (__Pyx_PrintItem(__pyx_k17p) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 80; goto __pyx_L1;}
__pyx_2 = PyInt_FromLong(__pyx_v_arr->flags); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 80; goto __pyx_L1;}
if (__Pyx_PrintItem(__pyx_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 80; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (__Pyx_PrintNewline() < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 80; goto __pyx_L1;}
__pyx_r = Py_None; Py_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1:;
Py_XDECREF(__pyx_1);
Py_XDECREF(__pyx_2);
__Pyx_AddTraceback("numpyx.test_methods");
__pyx_r = 0;
__pyx_L0:;
Py_DECREF(__pyx_v_arr);
return __pyx_r;
}
static PyObject *__pyx_n_array;
static PyObject *__pyx_n_arange;
static PyObject *__pyx_n_shape;
static PyObject *__pyx_n_one;
static PyObject *__pyx_n_two;
static PyObject *__pyx_f_6numpyx_test(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
static char __pyx_doc_6numpyx_test[] = "this function is pure Python";
static PyObject *__pyx_f_6numpyx_test(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
PyObject *__pyx_v_arr1;
PyObject *__pyx_v_arr2;
PyObject *__pyx_v_arr3;
PyObject *__pyx_v_four;
PyObject *__pyx_v_arr4;
PyObject *__pyx_v_arr5;
PyObject *__pyx_v_arr;
PyObject *__pyx_r;
PyObject *__pyx_1 = 0;
PyObject *__pyx_2 = 0;
PyObject *__pyx_3 = 0;
PyObject *__pyx_4 = 0;
PyObject *__pyx_5 = 0;
static char *__pyx_argnames[] = {0};
if (!PyArg_ParseTupleAndKeywords(__pyx_args, __pyx_kwds, "", __pyx_argnames)) return 0;
__pyx_v_arr1 = Py_None; Py_INCREF(Py_None);
__pyx_v_arr2 = Py_None; Py_INCREF(Py_None);
__pyx_v_arr3 = Py_None; Py_INCREF(Py_None);
__pyx_v_four = Py_None; Py_INCREF(Py_None);
__pyx_v_arr4 = Py_None; Py_INCREF(Py_None);
__pyx_v_arr5 = Py_None; Py_INCREF(Py_None);
__pyx_v_arr = Py_None; Py_INCREF(Py_None);
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":84 */
__pyx_1 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
__pyx_2 = PyObject_GetAttr(__pyx_1, __pyx_n_array); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_1 = PyFloat_FromDouble((-1e-30)); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
__pyx_3 = PyTuple_New(1); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_3, 0, __pyx_1);
__pyx_1 = 0;
__pyx_1 = PyDict_New(); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
__pyx_4 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
__pyx_5 = PyObject_GetAttr(__pyx_4, __pyx_n_float64); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
if (PyDict_SetItem(__pyx_1, __pyx_n_dtype, __pyx_5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
Py_DECREF(__pyx_5); __pyx_5 = 0;
__pyx_4 = PyEval_CallObjectWithKeywords(__pyx_2, __pyx_3, __pyx_1); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
Py_DECREF(__pyx_3); __pyx_3 = 0;
Py_DECREF(__pyx_1); __pyx_1 = 0;
Py_DECREF(__pyx_v_arr1);
__pyx_v_arr1 = __pyx_4;
__pyx_4 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":85 */
__pyx_5 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
__pyx_2 = PyObject_GetAttr(__pyx_5, __pyx_n_array); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
Py_DECREF(__pyx_5); __pyx_5 = 0;
__pyx_3 = PyFloat_FromDouble(1.0); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
__pyx_1 = PyFloat_FromDouble(2.0); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
__pyx_4 = PyFloat_FromDouble(3.0); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
__pyx_5 = PyList_New(3); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
PyList_SET_ITEM(__pyx_5, 0, __pyx_3);
PyList_SET_ITEM(__pyx_5, 1, __pyx_1);
PyList_SET_ITEM(__pyx_5, 2, __pyx_4);
__pyx_3 = 0;
__pyx_1 = 0;
__pyx_4 = 0;
__pyx_3 = PyTuple_New(1); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_3, 0, __pyx_5);
__pyx_5 = 0;
__pyx_1 = PyDict_New(); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
__pyx_4 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
__pyx_5 = PyObject_GetAttr(__pyx_4, __pyx_n_float64); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
if (PyDict_SetItem(__pyx_1, __pyx_n_dtype, __pyx_5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
Py_DECREF(__pyx_5); __pyx_5 = 0;
__pyx_4 = PyEval_CallObjectWithKeywords(__pyx_2, __pyx_3, __pyx_1); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 85; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
Py_DECREF(__pyx_3); __pyx_3 = 0;
Py_DECREF(__pyx_1); __pyx_1 = 0;
Py_DECREF(__pyx_v_arr2);
__pyx_v_arr2 = __pyx_4;
__pyx_4 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":87 */
__pyx_5 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
__pyx_2 = PyObject_GetAttr(__pyx_5, __pyx_n_arange); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
Py_DECREF(__pyx_5); __pyx_5 = 0;
__pyx_3 = PyInt_FromLong(9); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
__pyx_1 = PyTuple_New(1); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_1, 0, __pyx_3);
__pyx_3 = 0;
__pyx_4 = PyDict_New(); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
__pyx_5 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_5, __pyx_n_float64); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
Py_DECREF(__pyx_5); __pyx_5 = 0;
if (PyDict_SetItem(__pyx_4, __pyx_n_dtype, __pyx_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
__pyx_5 = PyEval_CallObjectWithKeywords(__pyx_2, __pyx_1, __pyx_4); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
Py_DECREF(__pyx_1); __pyx_1 = 0;
Py_DECREF(__pyx_4); __pyx_4 = 0;
Py_DECREF(__pyx_v_arr3);
__pyx_v_arr3 = __pyx_5;
__pyx_5 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":88 */
__pyx_3 = PyInt_FromLong(3); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 88; goto __pyx_L1;}
__pyx_2 = PyInt_FromLong(3); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 88; goto __pyx_L1;}
__pyx_1 = PyTuple_New(2); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 88; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_1, 0, __pyx_3);
PyTuple_SET_ITEM(__pyx_1, 1, __pyx_2);
__pyx_3 = 0;
__pyx_2 = 0;
if (PyObject_SetAttr(__pyx_v_arr3, __pyx_n_shape, __pyx_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 88; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":90 */
__pyx_4 = PyInt_FromLong(4); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 90; goto __pyx_L1;}
Py_DECREF(__pyx_v_four);
__pyx_v_four = __pyx_4;
__pyx_4 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":91 */
__pyx_5 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_5, __pyx_n_array); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
Py_DECREF(__pyx_5); __pyx_5 = 0;
__pyx_2 = PyInt_FromLong(3); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
__pyx_1 = PyList_New(4); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
Py_INCREF(__pyx_n_one);
PyList_SET_ITEM(__pyx_1, 0, __pyx_n_one);
Py_INCREF(__pyx_n_two);
PyList_SET_ITEM(__pyx_1, 1, __pyx_n_two);
PyList_SET_ITEM(__pyx_1, 2, __pyx_2);
Py_INCREF(__pyx_v_four);
PyList_SET_ITEM(__pyx_1, 3, __pyx_v_four);
__pyx_2 = 0;
__pyx_4 = PyTuple_New(1); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_4, 0, __pyx_1);
__pyx_1 = 0;
__pyx_5 = PyDict_New(); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
__pyx_2 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
__pyx_1 = PyObject_GetAttr(__pyx_2, __pyx_n_object_); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
if (PyDict_SetItem(__pyx_5, __pyx_n_dtype, __pyx_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_2 = PyEval_CallObjectWithKeywords(__pyx_3, __pyx_4, __pyx_5); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 91; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
Py_DECREF(__pyx_4); __pyx_4 = 0;
Py_DECREF(__pyx_5); __pyx_5 = 0;
Py_DECREF(__pyx_v_arr4);
__pyx_v_arr4 = __pyx_2;
__pyx_2 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":93 */
__pyx_1 = __Pyx_GetName(__pyx_m, __pyx_n_numpy); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
__pyx_3 = PyObject_GetAttr(__pyx_1, __pyx_n_array); if (!__pyx_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_4 = PyInt_FromLong(1); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
__pyx_5 = PyInt_FromLong(2); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
__pyx_2 = PyInt_FromLong(3); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
__pyx_1 = PyList_New(3); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
PyList_SET_ITEM(__pyx_1, 0, __pyx_4);
PyList_SET_ITEM(__pyx_1, 1, __pyx_5);
PyList_SET_ITEM(__pyx_1, 2, __pyx_2);
__pyx_4 = 0;
__pyx_5 = 0;
__pyx_2 = 0;
__pyx_4 = PyTuple_New(1); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
PyTuple_SET_ITEM(__pyx_4, 0, __pyx_1);
__pyx_1 = 0;
__pyx_5 = PyObject_CallObject(__pyx_3, __pyx_4); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; goto __pyx_L1;}
Py_DECREF(__pyx_3); __pyx_3 = 0;
Py_DECREF(__pyx_4); __pyx_4 = 0;
Py_DECREF(__pyx_v_arr5);
__pyx_v_arr5 = __pyx_5;
__pyx_5 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":95 */
__pyx_2 = PyList_New(5); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 95; goto __pyx_L1;}
Py_INCREF(__pyx_v_arr1);
PyList_SET_ITEM(__pyx_2, 0, __pyx_v_arr1);
Py_INCREF(__pyx_v_arr2);
PyList_SET_ITEM(__pyx_2, 1, __pyx_v_arr2);
Py_INCREF(__pyx_v_arr3);
PyList_SET_ITEM(__pyx_2, 2, __pyx_v_arr3);
Py_INCREF(__pyx_v_arr4);
PyList_SET_ITEM(__pyx_2, 3, __pyx_v_arr4);
Py_INCREF(__pyx_v_arr5);
PyList_SET_ITEM(__pyx_2, 4, __pyx_v_arr5);
__pyx_1 = PyObject_GetIter(__pyx_2); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 95; goto __pyx_L1;}
Py_DECREF(__pyx_2); __pyx_2 = 0;
for (;;) {
__pyx_3 = PyIter_Next(__pyx_1);
if (!__pyx_3) {
if (PyErr_Occurred()) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 95; goto __pyx_L1;}
break;
}
Py_DECREF(__pyx_v_arr);
__pyx_v_arr = __pyx_3;
__pyx_3 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":96 */
__pyx_4 = __Pyx_GetName(__pyx_m, __pyx_n_print_array_info); if (!__pyx_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 96; goto __pyx_L1;}
__pyx_5 = PyTuple_New(1); if (!__pyx_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 96; goto __pyx_L1;}
Py_INCREF(__pyx_v_arr);
PyTuple_SET_ITEM(__pyx_5, 0, __pyx_v_arr);
__pyx_2 = PyObject_CallObject(__pyx_4, __pyx_5); if (!__pyx_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 96; goto __pyx_L1;}
Py_DECREF(__pyx_4); __pyx_4 = 0;
Py_DECREF(__pyx_5); __pyx_5 = 0;
Py_DECREF(__pyx_2); __pyx_2 = 0;
}
Py_DECREF(__pyx_1); __pyx_1 = 0;
__pyx_r = Py_None; Py_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1:;
Py_XDECREF(__pyx_1);
Py_XDECREF(__pyx_2);
Py_XDECREF(__pyx_3);
Py_XDECREF(__pyx_4);
Py_XDECREF(__pyx_5);
__Pyx_AddTraceback("numpyx.test");
__pyx_r = 0;
__pyx_L0:;
Py_DECREF(__pyx_v_arr1);
Py_DECREF(__pyx_v_arr2);
Py_DECREF(__pyx_v_arr3);
Py_DECREF(__pyx_v_four);
Py_DECREF(__pyx_v_arr4);
Py_DECREF(__pyx_v_arr5);
Py_DECREF(__pyx_v_arr);
return __pyx_r;
}
static __Pyx_InternTabEntry __pyx_intern_tab[] = {
{&__pyx_n_any, "any"},
{&__pyx_n_arange, "arange"},
{&__pyx_n_array, "array"},
{&__pyx_n_c_numpy, "c_numpy"},
{&__pyx_n_c_python, "c_python"},
{&__pyx_n_dtype, "dtype"},
{&__pyx_n_float64, "float64"},
{&__pyx_n_name, "name"},
{&__pyx_n_numpy, "numpy"},
{&__pyx_n_object_, "object_"},
{&__pyx_n_one, "one"},
{&__pyx_n_print_array_info, "print_array_info"},
{&__pyx_n_shape, "shape"},
{&__pyx_n_test, "test"},
{&__pyx_n_test_methods, "test_methods"},
{&__pyx_n_two, "two"},
{0, 0}
};
static __Pyx_StringTabEntry __pyx_string_tab[] = {
{&__pyx_k2p, __pyx_k2, sizeof(__pyx_k2)},
{&__pyx_k3p, __pyx_k3, sizeof(__pyx_k3)},
{&__pyx_k4p, __pyx_k4, sizeof(__pyx_k4)},
{&__pyx_k5p, __pyx_k5, sizeof(__pyx_k5)},
{&__pyx_k6p, __pyx_k6, sizeof(__pyx_k6)},
{&__pyx_k7p, __pyx_k7, sizeof(__pyx_k7)},
{&__pyx_k8p, __pyx_k8, sizeof(__pyx_k8)},
{&__pyx_k9p, __pyx_k9, sizeof(__pyx_k9)},
{&__pyx_k10p, __pyx_k10, sizeof(__pyx_k10)},
{&__pyx_k11p, __pyx_k11, sizeof(__pyx_k11)},
{&__pyx_k12p, __pyx_k12, sizeof(__pyx_k12)},
{&__pyx_k13p, __pyx_k13, sizeof(__pyx_k13)},
{&__pyx_k14p, __pyx_k14, sizeof(__pyx_k14)},
{&__pyx_k15p, __pyx_k15, sizeof(__pyx_k15)},
{&__pyx_k16p, __pyx_k16, sizeof(__pyx_k16)},
{&__pyx_k17p, __pyx_k17, sizeof(__pyx_k17)},
{0, 0, 0}
};
static struct PyMethodDef __pyx_methods[] = {
{"print_array_info", (PyCFunction)__pyx_f_6numpyx_print_array_info, METH_VARARGS|METH_KEYWORDS, 0},
{"test_methods", (PyCFunction)__pyx_f_6numpyx_test_methods, METH_VARARGS|METH_KEYWORDS, __pyx_doc_6numpyx_test_methods},
{"test", (PyCFunction)__pyx_f_6numpyx_test, METH_VARARGS|METH_KEYWORDS, __pyx_doc_6numpyx_test},
{0, 0, 0, 0}
};
static void __pyx_init_filenames(void); /*proto*/
PyMODINIT_FUNC initnumpyx(void); /*proto*/
PyMODINIT_FUNC initnumpyx(void) {
PyObject *__pyx_1 = 0;
__pyx_init_filenames();
__pyx_m = Py_InitModule4("numpyx", __pyx_methods, 0, 0, PYTHON_API_VERSION);
if (!__pyx_m) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3; goto __pyx_L1;};
__pyx_b = PyImport_AddModule("__builtin__");
if (!__pyx_b) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3; goto __pyx_L1;};
if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3; goto __pyx_L1;};
if (__Pyx_InternStrings(__pyx_intern_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3; goto __pyx_L1;};
if (__Pyx_InitStrings(__pyx_string_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3; goto __pyx_L1;};
__pyx_ptype_7c_numpy_dtype = __Pyx_ImportType("numpy", "dtype", sizeof(PyArray_Descr)); if (!__pyx_ptype_7c_numpy_dtype) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 76; goto __pyx_L1;}
__pyx_ptype_7c_numpy_ndarray = __Pyx_ImportType("numpy", "ndarray", sizeof(PyArrayObject)); if (!__pyx_ptype_7c_numpy_ndarray) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 81; goto __pyx_L1;}
__pyx_ptype_7c_numpy_flatiter = __Pyx_ImportType("numpy", "flatiter", sizeof(PyArrayIterObject)); if (!__pyx_ptype_7c_numpy_flatiter) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 90; goto __pyx_L1;}
__pyx_ptype_7c_numpy_broadcast = __Pyx_ImportType("numpy", "broadcast", sizeof(PyArrayMultiIterObject)); if (!__pyx_ptype_7c_numpy_broadcast) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 96; goto __pyx_L1;}
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":5 */
__pyx_1 = __Pyx_Import(__pyx_n_numpy, 0); if (!__pyx_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 5; goto __pyx_L1;}
if (PyObject_SetAttr(__pyx_m, __pyx_n_numpy, __pyx_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 5; goto __pyx_L1;}
Py_DECREF(__pyx_1); __pyx_1 = 0;
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":8 */
import_array();
/* "/Users/rkern/svn/numpy/numpy/doc/pyrex/numpyx.pyx":82 */
return;
__pyx_L1:;
Py_XDECREF(__pyx_1);
__Pyx_AddTraceback("numpyx");
}
static char *__pyx_filenames[] = {
"numpyx.pyx",
"c_numpy.pxd",
};
/* Runtime support code */
static void __pyx_init_filenames(void) {
__pyx_f = __pyx_filenames;
}
static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, char *name) {
if (!type) {
PyErr_Format(PyExc_SystemError, "Missing type object");
return 0;
}
if ((none_allowed && obj == Py_None) || PyObject_TypeCheck(obj, type))
return 1;
PyErr_Format(PyExc_TypeError,
"Argument '%s' has incorrect type (expected %s, got %s)",
name, type->tp_name, obj->ob_type->tp_name);
return 0;
}
static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list) {
PyObject *__import__ = 0;
PyObject *empty_list = 0;
PyObject *module = 0;
PyObject *global_dict = 0;
PyObject *empty_dict = 0;
PyObject *list;
__import__ = PyObject_GetAttrString(__pyx_b, "__import__");
if (!__import__)
goto bad;
if (from_list)
list = from_list;
else {
empty_list = PyList_New(0);
if (!empty_list)
goto bad;
list = empty_list;
}
global_dict = PyModule_GetDict(__pyx_m);
if (!global_dict)
goto bad;
empty_dict = PyDict_New();
if (!empty_dict)
goto bad;
module = PyObject_CallFunction(__import__, "OOOO",
name, global_dict, empty_dict, list);
bad:
Py_XDECREF(empty_list);
Py_XDECREF(__import__);
Py_XDECREF(empty_dict);
return module;
}
static PyObject *__Pyx_GetStdout(void) {
PyObject *f = PySys_GetObject("stdout");
if (!f) {
PyErr_SetString(PyExc_RuntimeError, "lost sys.stdout");
}
return f;
}
static int __Pyx_PrintItem(PyObject *v) {
PyObject *f;
if (!(f = __Pyx_GetStdout()))
return -1;
if (PyFile_SoftSpace(f, 1)) {
if (PyFile_WriteString(" ", f) < 0)
return -1;
}
if (PyFile_WriteObject(v, f, Py_PRINT_RAW) < 0)
return -1;
if (PyString_Check(v)) {
char *s = PyString_AsString(v);
int len = PyString_Size(v);
if (len > 0 &&
isspace(Py_CHARMASK(s[len-1])) &&
s[len-1] != ' ')
PyFile_SoftSpace(f, 0);
}
return 0;
}
static int __Pyx_PrintNewline(void) {
PyObject *f;
if (!(f = __Pyx_GetStdout()))
return -1;
if (PyFile_WriteString("\n", f) < 0)
return -1;
PyFile_SoftSpace(f, 0);
return 0;
}
static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name) {
PyObject *result;
result = PyObject_GetAttr(dict, name);
if (!result)
PyErr_SetObject(PyExc_NameError, name);
return result;
}
static int __Pyx_InternStrings(__Pyx_InternTabEntry *t) {
while (t->p) {
*t->p = PyString_InternFromString(t->s);
if (!*t->p)
return -1;
++t;
}
return 0;
}
static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {
while (t->p) {
*t->p = PyString_FromStringAndSize(t->s, t->n - 1);
if (!*t->p)
return -1;
++t;
}
return 0;
}
static PyTypeObject *__Pyx_ImportType(char *module_name, char *class_name,
long size)
{
PyObject *py_module_name = 0;
PyObject *py_class_name = 0;
PyObject *py_name_list = 0;
PyObject *py_module = 0;
PyObject *result = 0;
py_module_name = PyString_FromString(module_name);
if (!py_module_name)
goto bad;
py_class_name = PyString_FromString(class_name);
if (!py_class_name)
goto bad;
py_name_list = PyList_New(1);
if (!py_name_list)
goto bad;
Py_INCREF(py_class_name);
if (PyList_SetItem(py_name_list, 0, py_class_name) < 0)
goto bad;
py_module = __Pyx_Import(py_module_name, py_name_list);
if (!py_module)
goto bad;
result = PyObject_GetAttr(py_module, py_class_name);
if (!result)
goto bad;
if (!PyType_Check(result)) {
PyErr_Format(PyExc_TypeError,
"%s.%s is not a type object",
module_name, class_name);
goto bad;
}
if (((PyTypeObject *)result)->tp_basicsize != size) {
PyErr_Format(PyExc_ValueError,
"%s.%s does not appear to be the correct type object",
module_name, class_name);
goto bad;
}
goto done;
bad:
Py_XDECREF(result);
result = 0;
done:
Py_XDECREF(py_module_name);
Py_XDECREF(py_class_name);
Py_XDECREF(py_name_list);
return (PyTypeObject *)result;
}
#include "compile.h"
#include "frameobject.h"
#include "traceback.h"
static void __Pyx_AddTraceback(char *funcname) {
PyObject *py_srcfile = 0;
PyObject *py_funcname = 0;
PyObject *py_globals = 0;
PyObject *empty_tuple = 0;
PyObject *empty_string = 0;
PyCodeObject *py_code = 0;
PyFrameObject *py_frame = 0;
py_srcfile = PyString_FromString(__pyx_filename);
if (!py_srcfile) goto bad;
py_funcname = PyString_FromString(funcname);
if (!py_funcname) goto bad;
py_globals = PyModule_GetDict(__pyx_m);
if (!py_globals) goto bad;
empty_tuple = PyTuple_New(0);
if (!empty_tuple) goto bad;
empty_string = PyString_FromString("");
if (!empty_string) goto bad;
py_code = PyCode_New(
0, /*int argcount,*/
0, /*int nlocals,*/
0, /*int stacksize,*/
0, /*int flags,*/
empty_string, /*PyObject *code,*/
empty_tuple, /*PyObject *consts,*/
empty_tuple, /*PyObject *names,*/
empty_tuple, /*PyObject *varnames,*/
empty_tuple, /*PyObject *freevars,*/
empty_tuple, /*PyObject *cellvars,*/
py_srcfile, /*PyObject *filename,*/
py_funcname, /*PyObject *name,*/
__pyx_lineno, /*int firstlineno,*/
empty_string /*PyObject *lnotab*/
);
if (!py_code) goto bad;
py_frame = PyFrame_New(
PyThreadState_Get(), /*PyThreadState *tstate,*/
py_code, /*PyCodeObject *code,*/
py_globals, /*PyObject *globals,*/
0 /*PyObject *locals*/
);
if (!py_frame) goto bad;
py_frame->f_lineno = __pyx_lineno;
PyTraceBack_Here(py_frame);
bad:
Py_XDECREF(py_srcfile);
Py_XDECREF(py_funcname);
Py_XDECREF(empty_tuple);
Py_XDECREF(empty_string);
Py_XDECREF(py_code);
Py_XDECREF(py_frame);
}


@@ -1,101 +0,0 @@
# -*- Mode: Python -*-  Not really, but close enough

"""WARNING: this code is deprecated and slated for removal soon.  See the
doc/cython directory for the replacement, which uses Cython (the actively
maintained version of Pyrex).
"""

cimport c_python
cimport c_numpy
import numpy

# Numpy must be initialized
c_numpy.import_array()

def print_array_info(c_numpy.ndarray arr):
    cdef int i

    print '-='*10
    print 'printing array info for ndarray at 0x%0lx'%(<c_python.Py_intptr_t>arr,)
    print 'print number of dimensions:',arr.nd
    print 'address of strides: 0x%0lx'%(<c_python.Py_intptr_t>arr.strides,)
    print 'strides:'
    for i from 0<=i<arr.nd:
        # print each stride
        print ' stride %d:'%i,<c_python.Py_intptr_t>arr.strides[i]
    print 'memory dump:'
    print_elements( arr.data, arr.strides, arr.dimensions,
                    arr.nd, sizeof(double), arr.dtype )
    print '-='*10
    print

cdef print_elements(char *data,
                    c_python.Py_intptr_t* strides,
                    c_python.Py_intptr_t* dimensions,
                    int nd,
                    int elsize,
                    object dtype):
    cdef c_python.Py_intptr_t i,j
    cdef void* elptr

    if dtype not in [numpy.dtype(numpy.object_),
                     numpy.dtype(numpy.float64)]:
        print ' print_elements() not (yet) implemented for dtype %s'%dtype.name
        return

    if nd == 0:
        if dtype==numpy.dtype(numpy.object_):
            elptr = (<void**>data)[0] #[0] dereferences pointer in Pyrex
            print ' ',<object>elptr
        elif dtype==numpy.dtype(numpy.float64):
            print ' ',(<double*>data)[0]
    elif nd == 1:
        for i from 0<=i<dimensions[0]:
            if dtype==numpy.dtype(numpy.object_):
                elptr = (<void**>data)[0]
                print ' ',<object>elptr
            elif dtype==numpy.dtype(numpy.float64):
                print ' ',(<double*>data)[0]
            data = data + strides[0]
    else:
        for i from 0<=i<dimensions[0]:
            print_elements(data, strides+1, dimensions+1, nd-1, elsize, dtype)
            data = data + strides[0]

def test_methods(c_numpy.ndarray arr):
    """Test a few attribute accesses for an array.

    This illustrates how the pyrex-visible object is in practice a strange
    hybrid of the C PyArrayObject struct and the python object.  Some
    properties (like .nd) are visible here but not in python, while others
    like flags behave very differently: in python flags appears as a separate
    object, while here we see the raw int holding the bit pattern.

    This makes sense when we think of how pyrex resolves arr.foo: if foo is
    listed as a field in the c_numpy.ndarray struct description, it will be
    directly accessed as a C variable without going through Python at all.
    This is why for arr.flags, we see the actual int which holds all the flags
    as bit fields.  However, for any other attribute not listed in the struct,
    it simply forwards the attribute lookup to python at runtime, just like
    python would (which means that AttributeError can be raised for
    non-existent attributes, for example)."""
    print 'arr.any() :',arr.any()
    print 'arr.nd :',arr.nd
    print 'arr.flags :',arr.flags

def test():
    """this function is pure Python"""
    arr1 = numpy.array(-1e-30,dtype=numpy.float64)
    arr2 = numpy.array([1.0,2.0,3.0],dtype=numpy.float64)
    arr3 = numpy.arange(9,dtype=numpy.float64)
    arr3.shape = 3,3
    four = 4
    arr4 = numpy.array(['one','two',3,four],dtype=numpy.object_)
    arr5 = numpy.array([1,2,3]) # int types not (yet) supported by print_elements

    for arr in [arr1,arr2,arr3,arr4,arr5]:
        print_array_info(arr)


@@ -1,3 +0,0 @@
#!/usr/bin/env python
from numpyx import test
test()


@@ -1,48 +0,0 @@
#!/usr/bin/env python
"""
WARNING: this code is deprecated and slated for removal soon.  See the
doc/cython directory for the replacement, which uses Cython (the actively
maintained version of Pyrex).

Install file for example on how to use Pyrex with Numpy.

For more details, see:

http://www.scipy.org/Cookbook/Pyrex_and_NumPy
http://www.scipy.org/Cookbook/ArrayStruct_and_Pyrex
"""

from distutils.core import setup
from distutils.extension import Extension

# Make this usable by people who don't have pyrex installed (I've committed
# the generated C sources to SVN).
try:
    from Pyrex.Distutils import build_ext
    has_pyrex = True
except ImportError:
    has_pyrex = False

import numpy

# Define a pyrex-based extension module, using the generated sources if pyrex
# is not available.
if has_pyrex:
    pyx_sources = ['numpyx.pyx']
    cmdclass = {'build_ext': build_ext}
else:
    pyx_sources = ['numpyx.c']
    cmdclass = {}

pyx_ext = Extension('numpyx',
                    pyx_sources,
                    include_dirs = [numpy.get_include()])

# Call the routine which does the real work
setup(name = 'numpyx',
      description = 'Small example on using Pyrex to write a Numpy extension',
      url = 'http://www.scipy.org/Cookbook/Pyrex_and_NumPy',
      ext_modules = [pyx_ext],
      cmdclass = cmdclass,
      )


@@ -1,278 +0,0 @@
=========================
NumPy 1.3.0 Release Notes
=========================
This minor release includes numerous bug fixes, official Python 2.6 support, and
several new features such as generalized ufuncs.
Highlights
==========
Python 2.6 support
~~~~~~~~~~~~~~~~~~
Python 2.6 is now supported on all previously supported platforms, including
Windows.
http://www.python.org/dev/peps/pep-0361/
Generalized ufuncs
~~~~~~~~~~~~~~~~~~
There is a general need for looping over not only functions on scalars but also
over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to
realize this concept by generalizing the universal functions (ufuncs), and
provide a C implementation that adds ~500 lines to the numpy code base. In
current (specialized) ufuncs, the elementary function is limited to
element-by-element operations, whereas the generalized version supports
"sub-array" by "sub-array" operations. The Perl vector library PDL provides a
similar functionality and its terms are re-used in the following.
Each generalized ufunc has information associated with it that states what the
"core" dimensionality of the inputs is, as well as the corresponding
dimensionality of the outputs (the element-wise ufuncs have zero core
dimensions). The list of the core dimensions for all arguments is called the
"signature" of a ufunc. For example, the ufunc numpy.add has signature
"(),()->()" defining two scalar inputs and one scalar output.
Another example is (see the GeneralLoopingFunctions page) the function
inner1d(a,b) with a signature of "(i),(i)->()". This applies the inner product
along the last axis of each input, but keeps the remaining indices intact. For
example, where a is of shape (3,5,N) and b is of shape (5,N), this will return
an output of shape (3,5). The underlying elementary function is called 3*5
times. In the signature, we specify one core dimension "(i)" for each input and
zero core dimensions "()" for the output, since it takes two 1-d arrays and
returns a scalar. By using the same name "i", we specify that the two
corresponding dimensions should be of the same size (or one of them is of size
1 and will be broadcasted).
The dimensions beyond the core dimensions are called "loop" dimensions. In the
above example, this corresponds to (3,5).
The usual numpy "broadcasting" rules apply, where the signature determines how
the dimensions of each input/output object are split into core and loop
dimensions:
While an input array has a smaller dimensionality than the corresponding number
of core dimensions, 1's are pre-pended to its shape. The core dimensions are
removed from all inputs and the remaining dimensions are broadcasted; defining
the loop dimensions. The output is given by the loop dimensions plus the
output core dimensions.
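As a concrete illustration of the core/loop split, consider ``np.matmul`` (a generalized ufunc from later NumPy releases, with signature "(n,k),(k,m)->(n,m)"):

```python
import numpy as np

a = np.ones((2, 3, 4))   # core dims (3, 4), loop dim (2,)
b = np.ones((4, 5))      # core dims (4, 5), no loop dims
c = np.matmul(a, b)      # gufunc signature "(n,k),(k,m)->(n,m)"
print(c.shape)           # (2, 3, 5): loop dims plus output core dims
```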
Experimental Windows 64 bits support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Numpy can now be built on windows 64 bits (amd64 only, not IA64), with both MS
compilers and mingw-w64 compilers:
This is *highly experimental*: DO NOT USE FOR PRODUCTION USE. See INSTALL.txt,
Windows 64 bits section for more information on limitations and how to build it
by yourself.
New features
============
Formatting issues
~~~~~~~~~~~~~~~~~
Float formatting is now handled by numpy instead of the C runtime: this enables
locale independent formatting, more robust fromstring and related methods.
Special values (inf and nan) are also more consistent across platforms (nan vs
IND/NaN, etc...), and more consistent with recent python formatting work (in
2.6 and later).
Nan handling in max/min
~~~~~~~~~~~~~~~~~~~~~~~
The maximum/minimum ufuncs now reliably propagate nans. If one of the
arguments is a nan, then nan is returned. This affects np.min/np.max, amin/amax
and the array methods max/min. New ufuncs fmax and fmin have been added to deal
with non-propagating nans.
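A short sketch of the difference between the propagating and non-propagating variants:

```python
import numpy as np

a = np.array([1.0, np.nan, np.nan])
b = np.array([2.0, 3.0, np.nan])

print(np.maximum(a, b))  # -> 2, nan, nan: nan propagates
print(np.fmax(a, b))     # -> 2, 3, nan: nan is ignored when one side is valid
```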
Nan handling in sign
~~~~~~~~~~~~~~~~~~~~
The ufunc sign now returns nan for the sign of a nan.
New ufuncs
~~~~~~~~~~
#. fmax - same as maximum for integer types and non-nan floats. Returns the
non-nan argument if one argument is nan and returns nan if both arguments
are nan.
#. fmin - same as minimum for integer types and non-nan floats. Returns the
non-nan argument if one argument is nan and returns nan if both arguments
are nan.
#. deg2rad - converts degrees to radians, same as the radians ufunc.
#. rad2deg - converts radians to degrees, same as the degrees ufunc.
#. log2 - base 2 logarithm.
#. exp2 - base 2 exponential.
#. trunc - truncate floats to nearest integer towards zero.
#. logaddexp - add numbers stored as logarithms and return the logarithm
of the result.
#. logaddexp2 - add numbers stored as base 2 logarithms and return the base 2
logarithm of the result.
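A few of the new ufuncs in action (behavior as in current NumPy; a sketch, not an exhaustive demo):

```python
import numpy as np

print(np.log2(8.0))            # 3.0
print(np.exp2(3.0))            # 8.0
print(np.trunc([-1.7, 1.7]))   # truncates toward zero: -1.0 and 1.0
# logaddexp computes log(exp(a) + exp(b)) without overflowing the
# intermediate exponentials:
print(np.logaddexp(np.log(2.0), np.log(3.0)))   # log(5)
```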
Masked arrays
~~~~~~~~~~~~~
Several new features and bug fixes, including:
* structured arrays should now be fully supported by MaskedArray
(r6463, r6324, r6305, r6300, r6294...)
* Minor bug fixes (r6356, r6352, r6335, r6299, r6298)
* Improved support for __iter__ (r6326)
* made baseclass, sharedmask and hardmask accessible to the user (but
read-only)
* doc update
gfortran support on windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gfortran can now be used as a fortran compiler for numpy on windows, even when
the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work).
Gfortran + Visual studio does not work on windows 64 bits (but gcc + gfortran
does). It is unclear whether it will be possible to use gfortran and visual
studio at all on x64.
Arch option for windows binary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Automatic arch detection can now be bypassed from the command line for the superpack installer:
numpy-1.3.0-superpack-win32.exe /arch=nosse
will install a numpy which works on any x86, even if the running computer
supports the SSE instruction set.
Deprecated features
===================
Histogram
~~~~~~~~~
The semantics of histogram has been modified to fix long-standing issues
with outliers handling. The main changes concern
#. the definition of the bin edges, now including the rightmost edge, and
#. the handling of upper outliers, now ignored rather than tallied in the
rightmost bin.
The previous behavior is still accessible using `new=False`, but this is
deprecated, and will be removed entirely in 1.4.0.
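Under the new semantics (the default in later releases), the rightmost edge is inclusive and upper outliers are ignored:

```python
import numpy as np

# Bins are [0, 2) and [2, 4]; the value 4 lands in the last bin because
# the rightmost edge is now inclusive, while 5 is ignored as an outlier.
counts, edges = np.histogram([1, 2, 3, 4, 5], bins=[0, 2, 4])
print(counts)   # [1 3]
```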
Documentation changes
=====================
A lot of documentation has been added. Both user guide and references can be
built from sphinx.
New C API
=========
Multiarray API
~~~~~~~~~~~~~~
The following functions have been added to the multiarray C API:
* PyArray_GetEndianness: to get runtime endianness
Ufunc API
~~~~~~~~~
The following functions have been added to the ufunc API:
* PyUFunc_FromFuncAndDataAndSignature: to declare a more general ufunc
(generalized ufunc).
New defines
~~~~~~~~~~~
New public C defines are available for ARCH specific code through numpy/npy_cpu.h:
* NPY_CPU_X86: x86 arch (32 bits)
* NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium)
* NPY_CPU_PPC: 32 bits ppc
* NPY_CPU_PPC64: 64 bits ppc
* NPY_CPU_SPARC: 32 bits sparc
* NPY_CPU_SPARC64: 64 bits sparc
* NPY_CPU_S390: S390
* NPY_CPU_IA64: ia64
* NPY_CPU_PARISC: PARISC
New macros for CPU endianness have been added as well (see internal changes
below for details):
* NPY_BYTE_ORDER: integer
* NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines
Those provide portable alternatives to glibc endian.h macros for platforms
without it.
Portable NAN, INFINITY, etc...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
npy_math.h now makes available several portable macros to get NAN, INFINITY:
* NPY_NAN: equivalent to NAN, which is a GNU extension
* NPY_INFINITY: equivalent to C99 INFINITY
* NPY_PZERO, NPY_NZERO: positive and negative zero respectively
Corresponding single and extended precision macros are available as well. All
references to NAN, or home-grown computation of NAN on the fly have been
removed for consistency.
Internal changes
================
numpy.core math configuration revamp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This should make the porting to new platforms easier, and more robust. In
particular, the configuration stage does not need to execute any code on the
target platform, which is a first step toward cross-compilation.
http://projects.scipy.org/numpy/browser/trunk/doc/neps/math_config_clean.txt
umath refactor
~~~~~~~~~~~~~~
A lot of code cleanup for umath/ufunc code (charris).
Improvements to build warnings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Numpy can now build with -W -Wall without warnings
http://projects.scipy.org/numpy/browser/trunk/doc/neps/warnfix.txt
Separate core math library
~~~~~~~~~~~~~~~~~~~~~~~~~~
The core math functions (sin, cos, etc... for basic C types) have been put into
a separate library; it acts as a compatibility layer, to support most C99 maths
functions (real only for now). The library includes platform-specific fixes for
various maths functions, so using those versions should be more robust
than using your platform's functions directly. The API for existing functions is
exactly the same as the C99 math functions API; the only difference is the npy
prefix (npy_cos vs cos).
The core library will be made available to any extension in 1.4.0.
CPU arch detection
~~~~~~~~~~~~~~~~~~
npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc...
Those are portable across OS and toolchains, and set up when the header is
parsed, so that they can be safely used even in the case of cross-compilation
(the values are not set when numpy is built), or for multi-arch binaries (e.g.
fat binaries on Mac OS X).
npy_endian.h defines numpy specific endianness defines, modeled on the glibc
endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of
NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are set
when the header is parsed by the compiler, and as such can be used for
cross-compilation and multi-arch binaries.
=========================
NumPy 1.4.0 Release Notes
=========================
This minor release includes numerous bug fixes, as well as a few new features. It
is backward compatible with 1.3.0 release.
Highlights
==========
* New datetime dtype support to deal with dates in arrays
* Faster import time
* Extended array wrapping mechanism for ufuncs
* New Neighborhood iterator (C-level only)
* C99-like complex functions in npymath
New features
============
Extended array wrapping mechanism for ufuncs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An __array_prepare__ method has been added to ndarray to provide subclasses
greater flexibility to interact with ufuncs and ufunc-like functions. ndarray
already provided __array_wrap__, which allowed subclasses to set the array type
for the result and populate metadata on the way out of the ufunc (as seen in
the implementation of MaskedArray). For some applications it is necessary to
provide checks and populate metadata *on the way in*. __array_prepare__ is
therefore called just after the ufunc has initialized the output array but
before computing the results and populating it. This way, checks can be made
and errors raised before operations which may modify data in place.
Automatic detection of forward incompatibilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously, if an extension was built against a version N of NumPy, and used on
a system with NumPy M < N, the import_array was successful, which could cause
crashes because version M does not have a function added in N. Starting from
NumPy 1.4.0, this will cause a failure in import_array, so the error will be
caught early on.
New iterators
~~~~~~~~~~~~~
A new neighborhood iterator has been added to the C API. It can be used to
iterate over the items in a neighborhood of an array, and can handle boundary
conditions automatically. Zero and one padding are available, as well as
arbitrary constant value, mirror and circular padding.
New polynomial support
~~~~~~~~~~~~~~~~~~~~~~
New modules chebyshev and polynomial have been added. The new polynomial module
is not compatible with the current polynomial support in numpy, but is much
like the new chebyshev module. The most noticeable difference to most will
be that coefficients are specified from low to high power, that the low
level functions do *not* work with the Chebyshev and Polynomial classes as
arguments, and that the Chebyshev and Polynomial classes include a domain.
Mapping between domains is a linear substitution and the two classes can be
converted one to the other, allowing, for instance, a Chebyshev series in
one domain to be expanded as a polynomial in another domain. The new classes
should generally be used instead of the low level functions, the latter are
provided for those who wish to build their own classes.
The new modules are not automatically imported into the numpy namespace,
they must be explicitly brought in with an "import numpy.polynomial"
statement.
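A brief sketch of the new classes, using the current class and method names (which may differ slightly from the 1.4.0 originals):

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

# Coefficients run from low to high power: 1 + 2*x + 3*x**2
p = Polynomial([1, 2, 3])
print(p(2.0))   # 1 + 4 + 12 = 17.0

# A Chebyshev series can be converted to an ordinary polynomial:
# T_2(x) = 2*x**2 - 1
c = Chebyshev([0, 0, 1])
print(c.convert(kind=Polynomial).coef)   # [-1.  0.  2.]
```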
New C API
~~~~~~~~~
The following C functions have been added to the C API:
#. PyArray_GetNDArrayCFeatureVersion: return the *API* version of the
loaded numpy.
#. PyArray_Correlate2 - like PyArray_Correlate, but implements the usual
definition of correlation. Inputs are not swapped, and conjugate is
taken for complex arrays.
#. PyArray_NeighborhoodIterNew - a new iterator to iterate over a
neighborhood of a point, with automatic boundaries handling. It is
documented in the iterators section of the C-API reference, and you can
find some examples in the multiarray_test.c.src file in numpy.core.
New ufuncs
~~~~~~~~~~
The following ufuncs have been added to the C API:
#. copysign - return the value of the first argument with the sign copied
from the second argument.
#. nextafter - return the next representable floating point value of the
first argument toward the second argument.
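Both ufuncs are easy to demonstrate:

```python
import numpy as np

print(np.copysign(3.0, -1.0))   # -3.0
print(np.copysign(-3.0, 1.0))   #  3.0
# nextafter returns the adjacent representable float toward the second argument
print(np.nextafter(1.0, 2.0) > 1.0)   # True
print(np.nextafter(1.0, 0.0) < 1.0)   # True
```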
New defines
~~~~~~~~~~~
The alpha processor is now defined and available in numpy/npy_cpu.h. The
failed detection of the PARISC processor has been fixed. The defines are:
#. NPY_CPU_HPPA: PARISC
#. NPY_CPU_ALPHA: Alpha
Testing
~~~~~~~
#. deprecated decorator: this decorator may be used to avoid cluttering
testing output while testing DeprecationWarning is effectively raised by
the decorated test.
#. assert_array_almost_equal_nulps: new method to compare two arrays of
floating point values. With this function, two values are considered
close if there are not many representable floating point values in
between, thus being more robust than assert_array_almost_equal when the
values fluctuate a lot.
#. assert_array_max_ulp: raise an assertion if there are more than N
representable numbers between two floating point values.
#. assert_warns: raise an AssertionError if a callable does not generate a
warning of the appropriate class, without altering the warning state.
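A sketch of the last two helpers, with signatures as in current numpy.testing:

```python
import warnings
import numpy as np
from numpy.testing import assert_array_max_ulp, assert_warns

# Passes: the two values are exactly one representable float apart.
assert_array_max_ulp(np.float64(1.0), np.nextafter(1.0, 2.0), maxulp=1)

# Passes: the callable emits the expected warning class.
def noisy():
    warnings.warn("old API", DeprecationWarning)

assert_warns(DeprecationWarning, noisy)
print("ok")
```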
Reusing npymath
~~~~~~~~~~~~~~~
In 1.3.0, we started putting portable C math routines in npymath library, so
that people can use those to write portable extensions. Unfortunately, it was
not possible to easily link against this library: in 1.4.0, support has been
added to numpy.distutils so that 3rd party can reuse this library. See coremath
documentation for more information.
Improved set operations
~~~~~~~~~~~~~~~~~~~~~~~
In previous versions of NumPy some set functions (intersect1d,
setxor1d, setdiff1d and setmember1d) could return incorrect results if
the input arrays contained duplicate items. These now work correctly
for input arrays with duplicates. setmember1d has been renamed to
in1d, as with the change to accept arrays with duplicates it is
no longer a set operation, and is conceptually similar to an
elementwise version of the Python operator 'in'. All of these
functions now accept the boolean keyword assume_unique. This is False
by default, but can be set True if the input arrays are known not
to contain duplicates, which can increase the functions' execution
speed.
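For example, with duplicate items in the inputs:

```python
import numpy as np

a = np.array([1, 1, 2, 3, 3])
b = np.array([2, 3, 3, 4])

print(np.intersect1d(a, b))   # [2 3], correct despite the duplicates
print(np.setdiff1d(a, b))     # [1]
# assume_unique=True skips the internal deduplication for known-unique inputs
print(np.intersect1d([1, 2, 3], [2, 3, 4], assume_unique=True))   # [2 3]
```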
Improvements
============
#. numpy import is noticeably faster (20 to 30% depending on the
platform and computer)
#. The sort functions now sort nans to the end.
* Real sort order is [R, nan]
* Complex sort order is [R + Rj, R + nanj, nan + Rj, nan + nanj]
Complex numbers with the same nan placements are sorted according to
the non-nan part if it exists.
#. The type comparison functions have been made consistent with the new
sort order of nans. Searchsorted now works with sorted arrays
containing nan values.
#. Complex division has been made more resistant to overflow.
#. Complex floor division has been made more resistant to overflow.
Deprecations
============
The following functions are deprecated:
#. correlate: it takes a new keyword argument old_behavior. When True (the
default), it returns the same result as before. When False, compute the
conventional correlation, and take the conjugate for complex arrays. The
old behavior will be removed in NumPy 1.5, and raises a
DeprecationWarning in 1.4.
#. unique1d: use unique instead. unique1d raises a deprecation
warning in 1.4, and will be removed in 1.5.
#. intersect1d_nu: use intersect1d instead. intersect1d_nu raises
a deprecation warning in 1.4, and will be removed in 1.5.
#. setmember1d: use in1d instead. setmember1d raises a deprecation
warning in 1.4, and will be removed in 1.5.
The following raise errors:
#. When operating on 0-d arrays, ``numpy.max`` and other functions accept
only ``axis=0``, ``axis=-1`` and ``axis=None``. Using an out-of-bounds
axis is an indication of a bug, so Numpy raises an error for these cases
now.
#. Specifying ``axis > MAX_DIMS`` is no longer allowed; Numpy now raises an
error instead of behaving as for ``axis=None``.
Internal changes
================
Use C99 complex functions when available
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The numpy complex types are now guaranteed to be ABI compatible with the C99
complex type, if available on the platform. Moreover, the complex ufuncs now use
the platform C99 functions instead of our own.
split multiarray and umath source code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The source code of multiarray and umath has been split into separate logical
compilation units. This should make the source code more approachable for
newcomers.
Separate compilation
~~~~~~~~~~~~~~~~~~~~
By default, every file of multiarray (and umath) is merged into one for
compilation as was the case before, but if NPY_SEPARATE_COMPILATION env
variable is set to a non-negative value, experimental individual compilation of
each file is enabled. This makes the compile/debug cycle much faster when
working on core numpy.
Separate core math library
~~~~~~~~~~~~~~~~~~~~~~~~~~
New functions which have been added:
* npy_copysign
* npy_nextafter
* npy_cpack
* npy_creal
* npy_cimag
* npy_cabs
* npy_cexp
* npy_clog
* npy_cpow
* npy_csqrt
* npy_ccos
* npy_csin
=========================
NumPy 1.5.0 Release Notes
=========================
Highlights
==========
Python 3 compatibility
----------------------
This is the first NumPy release which is compatible with Python 3. Support for
Python 3 and Python 2 is done from a single code base. Extensive notes on
changes can be found at
`<http://projects.scipy.org/numpy/browser/trunk/doc/Py3K.txt>`_.
Note that the Numpy testing framework relies on nose, which does not have a
Python 3 compatible release yet. A working Python 3 branch of nose can be found
at `<http://bitbucket.org/jpellerin/nose3/>`_ however.
Porting of SciPy to Python 3 is expected to be completed soon.
:pep:`3118` compatibility
-------------------------
The new buffer protocol described by PEP 3118 is fully supported in this
version of Numpy. On Python versions >= 2.6 Numpy arrays expose the buffer
interface, and array(), asarray() and other functions accept new-style buffers
as input.
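For example (on Python >= 2.6, where memoryview is available):

```python
import numpy as np

# NumPy arrays expose the PEP 3118 buffer interface...
a = np.arange(3, dtype=np.int32)
m = memoryview(a)
print(m.shape)      # (3,)

# ...and array() accepts new-style buffers as input.
b = np.array(m)
print(b.tolist())   # [0, 1, 2]
```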
New features
============
Warning on casting complex to real
----------------------------------
Numpy now emits a `numpy.ComplexWarning` when a complex number is cast
into a real number. For example:
>>> x = np.array([1,2,3])
>>> x[:2] = np.array([1+2j, 1-2j])
ComplexWarning: Casting complex values to real discards the imaginary part
The cast indeed discards the imaginary part, and this may not be the
intended behavior in all cases, hence the warning. This warning can be
turned off in the standard way:
>>> import warnings
>>> warnings.simplefilter("ignore", np.ComplexWarning)
Dot method for ndarrays
-----------------------
Ndarrays now have the dot product also as a method, which allows writing
chains of matrix products as
>>> a.dot(b).dot(c)
instead of the longer alternative
>>> np.dot(a, np.dot(b, c))
linalg.slogdet function
-----------------------
The slogdet function returns the sign and logarithm of the determinant
of a matrix. Because the determinant may involve the product of many
small/large values, the result is often more accurate than that obtained
by simple multiplication.
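For instance, for a matrix whose determinant overflows a float64:

```python
import numpy as np

a = np.diag(np.full(400, 10.0))       # determinant is 10**400
sign, logdet = np.linalg.slogdet(a)
print(sign)                            # 1.0
print(np.isinf(np.linalg.det(a)))      # True: plain det overflows to inf
print(logdet / np.log(10.0))           # ~400: the magnitude stays representable
```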
new header
----------
The new header file ndarraytypes.h contains the symbols from
ndarrayobject.h that do not depend on the PY_ARRAY_UNIQUE_SYMBOL and
NO_IMPORT/_ARRAY macros. Broadly, these symbols are types, typedefs,
and enumerations; the array function calls are left in
ndarrayobject.h. This allows users to include array-related types and
enumerations without needing to concern themselves with the macro
expansions and their side-effects.
Changes
=======
polynomial.polynomial
---------------------
* The polyint and polyder functions now check that the specified number of
integrations or derivations is a non-negative integer. The number 0 is
a valid value for both functions.
* A degree method has been added to the Polynomial class.
* A trimdeg method has been added to the Polynomial class. It operates like
truncate except that the argument is the desired degree of the result,
not the number of coefficients.
* Polynomial.fit now uses None as the default domain for the fit. The default
Polynomial domain can be specified by using [] as the domain value.
* Weights can be used in both polyfit and Polynomial.fit
* A linspace method has been added to the Polynomial class to ease plotting.
* The polymulx function was added.
polynomial.chebyshev
--------------------
* The chebint and chebder functions now check that the specified number of
integrations or derivations is a non-negative integer. The number 0 is
a valid value for both functions.
* A degree method has been added to the Chebyshev class.
* A trimdeg method has been added to the Chebyshev class. It operates like
truncate except that the argument is the desired degree of the result,
not the number of coefficients.
* Chebyshev.fit now uses None as the default domain for the fit. The default
Chebyshev domain can be specified by using [] as the domain value.
* Weights can be used in both chebfit and Chebyshev.fit
* A linspace method has been added to the Chebyshev class to ease plotting.
* The chebmulx function was added.
* Added functions for the Chebyshev points of the first and second kind.
histogram
---------
After a two years transition period, the old behavior of the histogram function
has been phased out, and the "new" keyword has been removed.
correlate
---------
The old behavior of correlate was deprecated in 1.4.0, the new behavior (the
usual definition for cross-correlation) is now the default.
=========================
NumPy 1.6.0 Release Notes
=========================
This release includes several new features as well as numerous bug fixes and
improved documentation. It is backward compatible with the 1.5.0 release, and
supports Python 2.4 - 2.7 and 3.1 - 3.2.
Highlights
==========
* Re-introduction of datetime dtype support to deal with dates in arrays.
* A new 16-bit floating point type.
* A new iterator, which improves performance of many functions.
New features
============
New 16-bit floating point type
------------------------------
This release adds support for the IEEE 754-2008 binary16 format, available as
the data type ``numpy.half``. Within Python, the type behaves similarly to
`float` or `double`, and C extensions can add support for it with the exposed
half-float API.
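A quick look at the new type:

```python
import numpy as np

h = np.array([1.0, 65504.0], dtype=np.half)
print(h.dtype.itemsize)             # 2 bytes per element
print(np.finfo(np.half).max)        # 65504.0, the largest finite binary16 value
print(np.half(2.0) + np.half(3.0))  # arithmetic behaves like float: 5.0
```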
New iterator
------------
A new iterator has been added, replacing the functionality of the
existing iterator and multi-iterator with a single object and API.
This iterator works well with general memory layouts different from
C or Fortran contiguous, and handles both standard NumPy and
customized broadcasting. The buffering, automatic data type
conversion, and optional output parameters, offered by
ufuncs but difficult to replicate elsewhere, are now exposed by this
iterator.
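A minimal sketch of the Python binding, ``np.nditer``, broadcasting two inputs and allocating the output on demand:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.array([10, 20, 30])

# None asks the iterator to allocate the output operand; op_flags mark
# each operand's access pattern.
it = np.nditer([a, b, None],
               op_flags=[["readonly"], ["readonly"],
                         ["writeonly", "allocate"]])
for x, y, z in it:
    z[...] = x + y
out = it.operands[2]
print(out.tolist())   # [[10, 21, 32], [13, 24, 35]]
```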
Legendre, Laguerre, Hermite, HermiteE polynomials in ``numpy.polynomial``
-------------------------------------------------------------------------
Extend the number of polynomials available in the polynomial package. In
addition, a new ``window`` attribute has been added to the classes in
order to specify the range the ``domain`` maps to. This is mostly useful
for the Laguerre, Hermite, and HermiteE polynomials whose natural domains
are infinite and provides a more intuitive way to get the correct mapping
of values without playing unnatural tricks with the domain.
Fortran assumed shape array and size function support in ``numpy.f2py``
-----------------------------------------------------------------------
F2py now supports wrapping Fortran 90 routines that use assumed shape
arrays. Previously, such routines could be called from Python, but the
corresponding Fortran routines received assumed shape arrays as zero
length arrays, which caused unpredictable results. Thanks to Lorenz
Hüdepohl for pointing out the correct way to interface routines with
assumed shape arrays.
In addition, f2py supports now automatic wrapping of Fortran routines
that use two argument ``size`` function in dimension specifications.
Other new functions
-------------------
``numpy.ravel_multi_index`` : Converts a multi-index tuple into
an array of flat indices, applying boundary modes to the indices.
``numpy.einsum`` : Evaluate the Einstein summation convention. Using the
Einstein summation convention, many common multi-dimensional array operations
can be represented in a simple fashion. This function provides a way to compute
such summations.
``numpy.count_nonzero`` : Counts the number of non-zero elements in an array.
``numpy.result_type`` and ``numpy.min_scalar_type`` : These functions expose
the underlying type promotion used by the ufuncs and other operations to
determine the types of outputs. These improve upon the ``numpy.common_type``
and ``numpy.mintypecode`` which provide similar functionality but do
not match the ufunc implementation.
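Short examples of each:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
print(np.einsum("ij->i", a))      # row sums: 3 and 12
print(np.ravel_multi_index(([0, 1], [2, 0]), (2, 3)))   # flat indices 2 and 3
print(np.count_nonzero([0, 1, 2, 0]))        # 2
print(np.result_type(np.int8, np.float32))   # float32
print(np.min_scalar_type(255))               # uint8
```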
Changes
=======
``default error handling``
--------------------------
The default error handling has been changed from ``print`` to ``warn`` for
all except ``underflow``, which remains ``ignore``.
``numpy.distutils``
-------------------
Several new compilers are supported for building Numpy: the Portland Group
Fortran compiler on OS X, the PathScale compiler suite and the 64-bit Intel C
compiler on Linux.
``numpy.testing``
-----------------
The testing framework gained ``numpy.testing.assert_allclose``, which provides
a more convenient way to compare floating point arrays than
`assert_almost_equal`, `assert_approx_equal` and `assert_array_almost_equal`.
``C API``
---------
In addition to the APIs for the new iterator and half data type, a number
of other additions have been made to the C API. The type promotion
mechanism used by ufuncs is exposed via ``PyArray_PromoteTypes``,
``PyArray_ResultType``, and ``PyArray_MinScalarType``. A new enumeration
``NPY_CASTING`` has been added which controls what types of casts are
permitted. This is used by the new functions ``PyArray_CanCastArrayTo``
and ``PyArray_CanCastTypeTo``. A more flexible way to handle
conversion of arbitrary python objects into arrays is exposed by
``PyArray_GetArrayParamsFromObject``.
Deprecated features
===================
The "normed" keyword in ``numpy.histogram`` is deprecated. Its functionality
will be replaced by the new "density" keyword.
Removed features
================
``numpy.fft``
-------------
The functions `refft`, `refft2`, `refftn`, `irefft`, `irefft2`, `irefftn`,
which were aliases for the same functions without the 'e' in the name, were
removed.
``numpy.memmap``
----------------
The `sync()` and `close()` methods of memmap were removed. Use `flush()` and
"del memmap" instead.
``numpy.lib``
-------------
The deprecated functions ``numpy.unique1d``, ``numpy.setmember1d``,
``numpy.intersect1d_nu`` and ``numpy.lib.ufunclike.log2`` were removed.
``numpy.ma``
------------
Several deprecated items were removed from the ``numpy.ma`` module::
* ``numpy.ma.MaskedArray`` "raw_data" method
* ``numpy.ma.MaskedArray`` constructor "flag" keyword
* ``numpy.ma.make_mask`` "flag" keyword
* ``numpy.ma.allclose`` "fill_value" keyword
``numpy.distutils``
-------------------
The ``numpy.get_numpy_include`` function was removed, use ``numpy.get_include``
instead.
=========================
NumPy 1.6.1 Release Notes
=========================
This is a bugfix only release in the 1.6.x series.
Issues fixed
------------
#1834 einsum fails for specific shapes
#1837 einsum throws nan or freezes python for specific array shapes
#1838 object <-> structured type arrays regression
#1851 regression for SWIG based code in 1.6.0
#1863 Buggy results when operating on array copied with astype()
#1870 Fix corner case of object array assignment
#1843 Py3k: fix error with recarray
#1885 nditer: Error in detecting double reduction loop
#1874 f2py: fix --include_paths bug
#1749 Fix ctypes.load_library()
#1895/1896 iter: writeonly operands weren't always being buffered correctly
=========================
NumPy 1.6.2 Release Notes
=========================
This is a bugfix release in the 1.6.x series. Due to the delay of the NumPy
1.7.0 release, this release contains far more fixes than a regular NumPy bugfix
release. It also includes a number of documentation and build improvements.
``numpy.core`` issues fixed
---------------------------
#2063 make unique() return consistent index
#1138 allow creating arrays from empty buffers or empty slices
#1446 correct note about correspondence vstack and concatenate
#1149 make argmin() work for datetime
#1672 fix allclose() to work for scalar inf
#1747 make np.median() work for 0-D arrays
#1776 make complex division by zero to yield inf properly
#1675 add scalar support for the format() function
#1905 explicitly check for NaNs in allclose()
#1952 allow floating ddof in std() and var()
#1948 fix regression for indexing chararrays with empty list
#2017 fix type hashing
#2046 deleting array attributes causes segfault
#2033 a**2.0 has incorrect type
#2045 make attribute/iterator_element deletions not segfault
#2021 fix segfault in searchsorted()
#2073 fix float16 __array_interface__ bug
``numpy.lib`` issues fixed
--------------------------
#2048 break reference cycle in NpzFile
#1573 savetxt() now handles complex arrays
#1387 allow bincount() to accept empty arrays
#1899 fixed histogramdd() bug with empty inputs
#1793 fix failing npyio test under py3k
#1936 fix extra nesting for subarray dtypes
#1848 make tril/triu return the same dtype as the original array
#1918 use Py_TYPE to access ob_type, so it works also on Py3
``numpy.f2py`` changes
----------------------
ENH: Introduce new options extra_f77_compiler_args and extra_f90_compiler_args
BLD: Improve reporting of fcompiler value
BUG: Fix f2py test_kind.py test
``numpy.poly`` changes
----------------------
ENH: Add some tests for polynomial printing
ENH: Add companion matrix functions
DOC: Rearrange the polynomial documents
BUG: Fix up links to classes
DOC: Add version added to some of the polynomial package modules
DOC: Document xxxfit functions in the polynomial package modules
BUG: The polynomial convenience classes let different types interact
DOC: Document the use of the polynomial convenience classes
DOC: Improve numpy reference documentation of polynomial classes
ENH: Improve the computation of polynomials from roots
STY: Code cleanup in polynomial [*]fromroots functions
DOC: Remove references to cast and NA, which were added in 1.7
``numpy.distutils`` issues fixed
--------------------------------
#1261 change compile flag on AIX from -O5 to -O3
#1377 update HP compiler flags
#1383 provide better support for C++ code on HPUX
#1857 fix build for py3k + pip
BLD: raise a clearer warning in case of building without cleaning up first
BLD: follow build_ext coding convention in build_clib
BLD: fix up detection of Intel CPU on OS X in system_info.py
BLD: add support for the new X11 directory structure on Ubuntu & co.
BLD: add ufsparse to the libraries search path.
BLD: add 'pgfortran' as a valid compiler in the Portland Group
BLD: update version match regexp for IBM AIX Fortran compilers.
``numpy.random`` issues fixed
-----------------------------
BUG: Use npy_intp instead of long in mtrand
=========================
NumPy 2.0.0 Release Notes
=========================
Highlights
==========
New features
============
Changes
=======
.. vim:syntax=rst
Introduction
============
This document proposes some enhancements for numpy and scipy releases.
Successive numpy and scipy releases are too far apart in time - some people
on the numpy release team feel that the process cannot improve without a
somewhat more formal release process. The main proposal is to follow a
time-based release schedule, with expected dates for code freeze, beta and rc.
The goal is twofold: make releases more predictable, and move the code forward.
Rationale
=========
Right now, the release process of numpy is relatively organic. When some
features are there, we may decide to make a new release. Because there is no
fixed schedule, people don't really know when new features and bug fixes will
go into a release. More significantly, having an expected release schedule
helps to *coordinate* efforts: at the beginning of a cycle, everybody can jump
in and add new code, even break things if needed. But after some point, only
bug fixes are accepted: this makes beta and RC releases much easier; calming
things down toward the release date helps focus on bugs and regressions.
Proposal
========
Time schedule
-------------
The proposed schedule is to release numpy every 9 weeks - the exact period can
be tweaked if it ends up not working as expected. There will be several stages
for the cycle:
* Development: anything can happen (by anything, we mean as currently
done). The focus is on new features, refactoring, etc.
* Beta: no new features, and no bug fixes that require heavy changes;
only regression fixes which appear on supported platforms and were not
caught earlier.
* Polish/RC: only docstring changes and fixes for blocker regressions are allowed.
The schedule would be as follows:
+------+-----------------+-----------------+------------------+
| Week | 1.3.0 | 1.4.0 | Release time |
+======+=================+=================+==================+
| 1 | Development | | |
+------+-----------------+-----------------+------------------+
| 2 | Development | | |
+------+-----------------+-----------------+------------------+
| 3 | Development | | |
+------+-----------------+-----------------+------------------+
| 4 | Development | | |
+------+-----------------+-----------------+------------------+
| 5 | Development | | |
+------+-----------------+-----------------+------------------+
| 6 | Development | | |
+------+-----------------+-----------------+------------------+
| 7 | Beta | | |
+------+-----------------+-----------------+------------------+
| 8 | Beta | | |
+------+-----------------+-----------------+------------------+
| 9 | Beta | | 1.3.0 released |
+------+-----------------+-----------------+------------------+
| 10 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 11 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 12 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 13 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 14 | | Development | |
+------+-----------------+-----------------+------------------+
| 15 | | Development | |
+------+-----------------+-----------------+------------------+
| 16 | | Beta | |
+------+-----------------+-----------------+------------------+
| 17 | | Beta | |
+------+-----------------+-----------------+------------------+
| 18 | | Beta | 1.4.0 released |
+------+-----------------+-----------------+------------------+
Each stage can be defined as follows:
+------------------+-------------+----------------+----------------+
| | Development | Beta | Polish |
+==================+=============+================+================+
| Python Frozen | | slushy | Y |
+------------------+-------------+----------------+----------------+
| Docstring Frozen | | slushy | thicker slush |
+------------------+-------------+----------------+----------------+
| C code Frozen | | thicker slush | thicker slush |
+------------------+-------------+----------------+----------------+
Terminology:
* slushy: you can change it if you beg the release team and it's really
important and you coordinate with docs/translations; no "big"
changes.
* thicker slush: you can change it if it's an open bug marked
showstopper for the Polish release, you beg the release team, the
change is very very small yet very very important, and you feel
extremely guilty about your transgressions.
The different frozen states are intended to be gradations. Their exact meaning
is decided by the release manager, who has the last word on what goes in and
what doesn't. The proposed schedule means that there would be at most 12 weeks
between putting code into the source repository and seeing it released.
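The schedule above can also be expressed as concrete dates. The sketch below is a hypothetical helper (not part of any NumPy tooling) that maps the three stages of a single cycle onto calendar windows, using the week counts from the table:

```python
from datetime import date, timedelta

# Week counts taken from the schedule table: 6 weeks of development,
# 3 weeks of beta (release at the end of beta), then 4 weeks of polish.
STAGES = [("development", 6), ("beta", 3), ("polish", 4)]

def cycle_calendar(start):
    """Return {stage: (first_day, last_day)} for a cycle starting at `start`."""
    calendar, week = {}, 0
    for stage, length in STAGES:
        first = start + timedelta(weeks=week)
        last = start + timedelta(weeks=week + length, days=-1)
        calendar[stage] = (first, last)
        week += length
    return calendar

for stage, (first, last) in cycle_calendar(date(2009, 1, 5)).items():
    print(stage, first, last)
```

Note the overlap shown in the table: while one release is in polish, the next cycle's development has already started, so successive cycle start dates are 9 weeks apart, not 13.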
Release team
------------
For every release, there would be at least one release manager. We propose to
rotate the release manager: rotation means it is not always the same person
doing the dirty job, and it should also keep the release manager honest.
References
==========
* Proposed schedule for Gnome from Havoc Pennington (one of the core
GTK and Gnome managers):
http://mail.gnome.org/archives/gnome-hackers/2002-June/msg00041.html
The proposed schedule is heavily based on this email.
* http://live.gnome.org/ReleasePlanning/Freezes
@@ -1,183 +0,0 @@
@import "default.css";
/**
* Spacing fixes
*/
div.body p, div.body dd, div.body li {
line-height: 125%;
}
ul.simple {
margin-top: 0;
margin-bottom: 0;
padding-top: 0;
padding-bottom: 0;
}
/* spacing around blockquoted fields in parameters/attributes/returns */
td.field-body > blockquote {
margin-top: 0.1em;
margin-bottom: 0.5em;
}
/* spacing around example code */
div.highlight > pre {
padding: 2px 5px 2px 5px;
}
/* spacing in see also definition lists */
dl.last > dd {
margin-top: 1px;
margin-bottom: 5px;
margin-left: 30px;
}
/**
* Hide dummy toctrees
*/
ul {
padding-top: 0;
padding-bottom: 0;
margin-top: 0;
margin-bottom: 0;
}
ul li {
padding-top: 0;
padding-bottom: 0;
margin-top: 0;
margin-bottom: 0;
}
ul li a.reference {
padding-top: 0;
padding-bottom: 0;
margin-top: 0;
margin-bottom: 0;
}
/**
* Make high-level subsections easier to distinguish from top-level ones
*/
div.body h3 {
background-color: transparent;
}
div.body h4 {
border: none;
background-color: transparent;
}
/**
* Scipy colors
*/
body {
background-color: rgb(100,135,220);
}
div.document {
background-color: rgb(230,230,230);
}
div.sphinxsidebar {
background-color: rgb(230,230,230);
}
div.related {
background-color: rgb(100,135,220);
}
div.sphinxsidebar h3 {
color: rgb(0,102,204);
}
div.sphinxsidebar h3 a {
color: rgb(0,102,204);
}
div.sphinxsidebar h4 {
color: rgb(0,82,194);
}
div.sphinxsidebar p {
color: black;
}
div.sphinxsidebar a {
color: #355f7c;
}
div.sphinxsidebar ul.want-points {
list-style: disc;
}
.field-list th {
color: rgb(0,102,204);
}
/**
* Extra admonitions
*/
div.tip {
background-color: #ffffe4;
border: 1px solid #ee6;
}
div.plot-output {
clear: both;
}
div.plot-output .figure {
float: left;
text-align: center;
margin-bottom: 0;
padding-bottom: 0;
}
div.plot-output .caption {
margin-top: 2px;
padding-top: 0;
}
div.plot-output p.admonition-title {
display: none;
}
div.plot-output:after {
content: "";
display: block;
height: 0;
clear: both;
}
/*
div.admonition-example {
background-color: #e4ffe4;
border: 1px solid #ccc;
}*/
/**
* Styling for field lists
*/
table.field-list th {
border-left: 1px solid #aaa !important;
padding-left: 5px;
}
table.field-list {
border-collapse: separate;
border-spacing: 10px;
}
/**
* Styling for footnotes
*/
table.footnote td, table.footnote th {
border: none;
}
@@ -1,23 +0,0 @@
{% extends "!autosummary/class.rst" %}
{% block methods %}
{% if methods %}
.. HACK
.. autosummary::
:toctree:
{% for item in methods %}
{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block attributes %}
{% if attributes %}
.. HACK
.. autosummary::
:toctree:
{% for item in attributes %}
{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
@@ -1,58 +0,0 @@
{% extends "defindex.html" %}
{% block tables %}
<p><strong>Parts of the documentation:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("user/index") }}">Numpy User Guide</a><br/>
<span class="linkdescr">start here</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("reference/index") }}">Numpy Reference</a><br/>
<span class="linkdescr">reference documentation</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("dev/index") }}">Numpy Developer Guide</a><br/>
<span class="linkdescr">contributing to NumPy</span></p>
</td></tr>
</table>
<p><strong>Indices and tables:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("modindex") }}">Module Index</a><br/>
<span class="linkdescr">quick access to all modules</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("genindex") }}">General Index</a><br/>
<span class="linkdescr">all functions, classes, terms</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("glossary") }}">Glossary</a><br/>
<span class="linkdescr">the most important terms explained</span></p>
</td><td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("search") }}">Search page</a><br/>
<span class="linkdescr">search this documentation</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("contents") }}">Complete Table of Contents</a><br/>
<span class="linkdescr">lists all sections and subsections</span></p>
</td></tr>
</table>
<p><strong>Meta information:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("bugs") }}">Reporting bugs</a></p>
<p class="biglink"><a class="biglink" href="{{ pathto("about") }}">About NumPy</a></p>
</td><td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("release") }}">Release Notes</a></p>
<p class="biglink"><a class="biglink" href="{{ pathto("license") }}">License of Numpy</a></p>
</td></tr>
</table>
<h2>Acknowledgements</h2>
<p>
Large parts of this manual originate from Travis E. Oliphant's book
<a href="http://www.tramy.us/">"Guide to Numpy"</a> (which was generously
released into the public domain in August 2008). The reference documentation
for many of the functions was written by numerous contributors and developers of
Numpy, both prior to and during the
<a href="http://scipy.org/Developer_Zone/DocMarathon2008">Numpy Documentation Marathon</a>.
</p>
<p>
The Documentation Marathon is still ongoing. Please help us write
better documentation for Numpy by joining it! Instructions on how to
join and what to do can be found
<a href="http://scipy.org/Developer_Zone/DocMarathon2008">on the scipy.org website</a>.
</p>
{% endblock %}
@@ -1,5 +0,0 @@
<h3>Resources</h3>
<ul>
<li><a href="http://scipy.org/">Scipy.org website</a></li>
<li>&nbsp;</li>
</ul>
@@ -1,17 +0,0 @@
{% extends "!layout.html" %}
{% block rootrellink %}
<li><a href="{{ pathto('index') }}">{{ shorttitle }}</a>{{ reldelim1 }}</li>
{% endblock %}
{% block sidebarsearch %}
{%- if sourcename %}
<ul class="this-page-menu">
{%- if 'reference/generated' in sourcename %}
<li><a href="/numpy/docs/{{ sourcename.replace('reference/generated/', '').replace('.txt', '') |e }}">{{_('Edit page')}}</a></li>
{%- else %}
<li><a href="/numpy/docs/numpy-docs/{{ sourcename.replace('.txt', '.rst') |e }}">{{_('Edit page')}}</a></li>
{%- endif %}
</ul>
{%- endif %}
{{ super() }}
{% endblock %}
@@ -1,65 +0,0 @@
About NumPy
===========
`NumPy <http://www.scipy.org/NumPy/>`__ is the fundamental package
needed for scientific computing with Python. This package contains:
- a powerful N-dimensional :ref:`array object <arrays>`
- sophisticated :ref:`(broadcasting) functions <ufuncs>`
- basic :ref:`linear algebra functions <routines.linalg>`
- basic :ref:`Fourier transforms <routines.fft>`
- sophisticated :ref:`random number capabilities <routines.random>`
- tools for integrating Fortran code
- tools for integrating C/C++ code
Besides its obvious scientific uses, *NumPy* can also be used as an
efficient multi-dimensional container of generic data. Arbitrary
data types can be defined. This allows *NumPy* to seamlessly and
speedily integrate with a wide variety of databases.
NumPy is the successor to two earlier scientific Python libraries:
it derives from the old *Numeric* code base and can be used
as a replacement for *Numeric*. It also adds the features introduced
by *Numarray* and can be used to replace *Numarray*.
NumPy community
---------------
Numpy is a distributed, volunteer, open-source project. *You* can help
us make it better; if you believe something should be improved either
in functionality or in documentation, don't hesitate to contact us --- or
even better, contact us and participate in fixing the problem.
Our main means of communication are:
- `scipy.org website <http://scipy.org/>`__
- `Mailing lists <http://scipy.org/Mailing_Lists>`__
- `Numpy Trac <http://projects.scipy.org/numpy>`__ (bug "tickets" go here)
More information about the development of Numpy can be found at
http://scipy.org/Developer_Zone
If you want to fix issues in this documentation, the easiest way
is to participate in `our ongoing documentation marathon
<http://scipy.org/Developer_Zone/DocMarathon2008>`__.
About this documentation
========================
Conventions
-----------
Names of classes, objects, constants, etc. are given in **boldface** font.
Often they are also links to a more detailed documentation of the
referred object.
This manual contains many examples of use, usually prefixed with the
Python prompt ``>>>`` (which is not a part of the example code). The
examples assume that you have first entered::
>>> import numpy as np
before running the examples.
@@ -1,23 +0,0 @@
**************
Reporting bugs
**************
File bug reports or feature requests, and make contributions
(e.g. code patches), by submitting a "ticket" on the Trac pages:
- Numpy Trac: http://scipy.org/scipy/numpy
Because of spam abuse, you must create an account on our Trac in order
to submit a ticket, then click on the "New Ticket" tab that only
appears when you have logged in. Please give as much information as
you can in the ticket. It is extremely useful if you can supply a
small self-contained code snippet that reproduces the problem. Also
specify the component, the version you are referring to and the
milestone.
Report bugs to the appropriate Trac instance (there is one for NumPy
and a different one for SciPy). There are also read-only mailing lists
for tracking the status of your bug ticket.
More information can be found on the http://scipy.org/Developer_Zone
website.
@@ -1,269 +0,0 @@
# -*- coding: utf-8 -*-
import sys, os, re
# Check Sphinx version
import sphinx
if sphinx.__version__ < "1.0.1":
raise RuntimeError("Sphinx 1.0.1 or newer required")
needs_sphinx = '1.0'
# -----------------------------------------------------------------------------
# General configuration
# -----------------------------------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
sys.path.insert(0, os.path.abspath('../sphinxext'))
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc',
'sphinx.ext.intersphinx', 'sphinx.ext.coverage',
'sphinx.ext.doctest', 'sphinx.ext.autosummary',
'plot_directive']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
#master_doc = 'index'
# General substitutions.
project = 'NumPy'
copyright = '2008-2009, The Scipy community'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
import numpy
# The short X.Y version (including .devXXXX, rcX, b1 suffixes if present)
version = re.sub(r'(\d+\.\d+)\.\d+(.*)', r'\1\2', numpy.__version__)
version = re.sub(r'(\.dev\d+).*?$', r'\1', version)
# The full version, including alpha/beta/rc tags.
release = numpy.__version__
print version, release
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# The reST default role (used for this markup: `text`) to use for all documents.
default_role = "autolink"
# List of directories, relative to source directories, that shouldn't be searched
# for source files.
exclude_dirs = []
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = False
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -----------------------------------------------------------------------------
# HTML output
# -----------------------------------------------------------------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'scipy.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = "%s v%s Manual (DRAFT)" % (project, version)
# The name of an image file (within the static path) to place at the top of
# the sidebar.
html_logo = 'scipyshiny_small.png'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'index': 'indexsidebar.html'
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
html_additional_pages = {
'index': 'indexcontent.html',
}
# If false, no module index is generated.
html_use_modindex = True
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".html").
#html_file_suffix = '.html'
# Output file base name for HTML help builder.
htmlhelp_basename = 'numpy'
# Pngmath should try to align formulas properly
pngmath_use_preview = True
# -----------------------------------------------------------------------------
# LaTeX output
# -----------------------------------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
_stdauthor = 'Written by the NumPy community'
latex_documents = [
('reference/index', 'numpy-ref.tex', 'NumPy Reference',
_stdauthor, 'manual'),
('user/index', 'numpy-user.tex', 'NumPy User Guide',
_stdauthor, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
latex_preamble = r'''
\usepackage{amsmath}
\DeclareUnicodeCharacter{00A0}{\nobreakspace}
% In the parameters section, place a newline after the Parameters
% header
\usepackage{expdlist}
\let\latexdescription=\description
\def\description{\latexdescription{}{} \breaklabel}
% Make Examples/etc section headers smaller and more compact
\makeatletter
\titleformat{\paragraph}{\normalsize\py@HeaderFamily}%
{\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
\titlespacing*{\paragraph}{0pt}{1ex}{0pt}
\makeatother
% Fix footer/header
\renewcommand{\chaptermark}[1]{\markboth{\MakeUppercase{\thechapter.\ #1}}{}}
\renewcommand{\sectionmark}[1]{\markright{\MakeUppercase{\thesection.\ #1}}}
'''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
latex_use_modindex = False
# -----------------------------------------------------------------------------
# Intersphinx configuration
# -----------------------------------------------------------------------------
intersphinx_mapping = {'http://docs.python.org/dev': None}
# -----------------------------------------------------------------------------
# Numpy extensions
# -----------------------------------------------------------------------------
# If we want to do a phantom import from an XML file for all autodocs
phantom_import_file = 'dump.xml'
# Make numpydoc to generate plots for example sections
numpydoc_use_plots = True
# -----------------------------------------------------------------------------
# Autosummary
# -----------------------------------------------------------------------------
import glob
autosummary_generate = glob.glob("reference/*.rst")
# -----------------------------------------------------------------------------
# Coverage checker
# -----------------------------------------------------------------------------
coverage_ignore_modules = r"""
""".split()
coverage_ignore_functions = r"""
test($|_) (some|all)true bitwise_not cumproduct pkgload
generic\.
""".split()
coverage_ignore_classes = r"""
""".split()
coverage_c_path = []
coverage_c_regexes = {}
coverage_ignore_c_items = {}
# -----------------------------------------------------------------------------
# Plots
# -----------------------------------------------------------------------------
plot_pre_code = """
import numpy as np
np.random.seed(0)
"""
plot_include_source = True
plot_formats = [('png', 100), 'pdf']
import math
phi = (math.sqrt(5) + 1)/2
import matplotlib
matplotlib.rcParams.update({
'font.size': 8,
'axes.titlesize': 8,
'axes.labelsize': 8,
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'legend.fontsize': 8,
'figure.figsize': (3*phi, 3),
'figure.subplot.bottom': 0.2,
'figure.subplot.left': 0.2,
'figure.subplot.right': 0.9,
'figure.subplot.top': 0.85,
'figure.subplot.wspace': 0.4,
'text.usetex': False,
})
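The two ``re.sub`` calls near the top of this file, which derive the short ``version`` string from ``numpy.__version__``, can be checked in isolation. The helper below simply reproduces them on a few sample version strings (the function name is invented here for illustration):

```python
import re

def short_version(full):
    """Reproduce conf.py's |version| shortening."""
    # Drop the micro version (the second ".NNN"), keeping any suffix...
    v = re.sub(r'(\d+\.\d+)\.\d+(.*)', r'\1\2', full)
    # ...then truncate anything trailing a ".devNNNN" tag.
    v = re.sub(r'(\.dev\d+).*?$', r'\1', v)
    return v

print(short_version('1.4.0'))          # -> 1.4
print(short_version('1.4.0rc1'))       # -> 1.4rc1
print(short_version('1.4.0.dev5678'))  # -> 1.4.dev5678
```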
@@ -1,14 +0,0 @@
#####################
Numpy manual contents
#####################
.. toctree::
user/index
reference/index
dev/index
release
about
bugs
license
glossary
Binary file not shown (13 KiB image).
Binary file not shown (10 KiB image).
@@ -1,123 +0,0 @@
.. _configure-git:
=================
Git configuration
=================
.. _git-config-basic:
Overview
========
Your personal git_ configurations are saved in the ``.gitconfig`` file in
your home directory.
Here is an example ``.gitconfig`` file::
[user]
name = Your Name
email = you@yourdomain.example.com
[alias]
ci = commit -a
co = checkout
st = status -a
stat = status -a
br = branch
wdiff = diff --color-words
[core]
editor = vim
[merge]
summary = true
You can edit this file directly or you can use the ``git config --global``
command::
git config --global user.name "Your Name"
git config --global user.email you@yourdomain.example.com
git config --global alias.ci "commit -a"
git config --global alias.co checkout
git config --global alias.st "status -a"
git config --global alias.stat "status -a"
git config --global alias.br branch
git config --global alias.wdiff "diff --color-words"
git config --global core.editor vim
git config --global merge.summary true
To set up on another computer, you can copy your ``~/.gitconfig`` file,
or run the commands above.
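The correspondence between the ``.gitconfig`` file and the ``git config --global`` commands is mechanical, so it can be scripted. The sketch below is a hypothetical convenience (not a git feature) that emits one alias command per entry, quoting multi-word expansions as in the examples above:

```python
def alias_commands(aliases):
    """Emit one `git config --global` command per alias in `aliases`."""
    cmds = []
    for name, expansion in aliases.items():
        # Multi-word expansions need quoting on the command line.
        value = '"%s"' % expansion if ' ' in expansion else expansion
        cmds.append('git config --global alias.%s %s' % (name, value))
    return cmds

for cmd in alias_commands({'co': 'checkout', 'wdiff': 'diff --color-words'}):
    print(cmd)
```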
In detail
=========
user.name and user.email
------------------------
It is good practice to tell git_ who you are, for labeling any changes
you make to the code. The simplest way to do this is from the command
line::
git config --global user.name "Your Name"
git config --global user.email you@yourdomain.example.com
This will write the settings into your git configuration file, which
should now contain a user section with your name and email::
[user]
name = Your Name
email = you@yourdomain.example.com
Of course you'll need to replace ``Your Name`` and ``you@yourdomain.example.com``
with your actual name and email address.
Aliases
-------
You might well benefit from some aliases to common commands.
For example, you might well want to be able to shorten ``git checkout``
to ``git co``. Or you may want to alias ``git diff --color-words``
(which gives a nicely formatted output of the diff) to ``git wdiff``
The following ``git config --global`` commands::
git config --global alias.ci "commit -a"
git config --global alias.co checkout
git config --global alias.st "status -a"
git config --global alias.stat "status -a"
git config --global alias.br branch
git config --global alias.wdiff "diff --color-words"
will create an ``alias`` section in your ``.gitconfig`` file with contents
like this::
[alias]
ci = commit -a
co = checkout
st = status -a
stat = status -a
br = branch
wdiff = diff --color-words
Editor
------
You may also want to make sure that your editor of choice is used ::
git config --global core.editor vim
Merging
-------
To enforce summaries when doing merges (``~/.gitconfig`` file again)::
[merge]
log = true
Or from the command line::
git config --global merge.log true
.. include:: git_links.inc
@@ -1,113 +0,0 @@
====================================
Getting started with Git development
====================================
Basic Git setup
###############
* :ref:`install-git`.
* Introduce yourself to Git::
git config --global user.email you@yourdomain.example.com
git config --global user.name "Your Name Comes Here"
.. _forking:
Making your own copy (fork) of NumPy
####################################
You need to do this only once. The instructions here are very similar
to the instructions at http://help.github.com/forking/ - please see that
page for more detail. We're repeating some of it here just to give the
specifics for the NumPy_ project, and to suggest some default names.
Set up and configure a github_ account
======================================
If you don't have a github_ account, go to the github_ page, and make one.
You then need to configure your account to allow write access - see the
``Generating SSH keys`` help on `github help`_.
Create your own forked copy of NumPy_
=========================================
#. Log into your github_ account.
#. Go to the NumPy_ github home at `NumPy github`_.
#. Click on the *fork* button:
.. image:: forking_button.png
Now, after a short pause and some 'Hardcore forking action', you
should find yourself at the home page for your own forked copy of NumPy_.
.. include:: git_links.inc
.. _set-up-fork:
Set up your fork
################
First you follow the instructions for :ref:`forking`.
Overview
========
::
git clone git@github.com:your-user-name/numpy.git
cd numpy
git remote add upstream git://github.com/numpy/numpy.git
In detail
=========
Clone your fork
---------------
#. Clone your fork to the local computer with ``git clone
git@github.com:your-user-name/numpy.git``
#. Investigate. Change directory to your new repo: ``cd numpy``. Then
``git branch -a`` to show you all branches. You'll get something
like::
* master
remotes/origin/master
This tells you that you are currently on the ``master`` branch, and
that you also have a ``remote`` connection to ``origin/master``.
What remote repository is ``remotes/origin``? Try ``git remote -v`` to
see the URLs for the remote. They will point to your github_ fork.
Now you want to connect to the upstream `NumPy github`_ repository, so
you can merge in changes from trunk.
.. _linking-to-upstream:
Linking your repository to the upstream repo
--------------------------------------------
::
cd numpy
git remote add upstream git://github.com/numpy/numpy.git
``upstream`` here is just the arbitrary name we're using to refer to the
main NumPy_ repository at `NumPy github`_.
Note that we've used ``git://`` for the URL rather than ``git@``. The
``git://`` URL is read only. This means that we can't accidentally
(or deliberately) write to the upstream repo, and we are only going to
use it to merge into our own code.
Just for your own satisfaction, show yourself that you now have a new
'remote', with ``git remote -v show``, giving you something like::
upstream git://github.com/numpy/numpy.git (fetch)
upstream git://github.com/numpy/numpy.git (push)
origin git@github.com:your-user-name/numpy.git (fetch)
origin git@github.com:your-user-name/numpy.git (push)
.. include:: git_links.inc
@@ -1,461 +0,0 @@
.. _development-workflow:
====================
Development workflow
====================
You already have your own forked copy of the NumPy_ repository from
following :ref:`forking` and :ref:`set-up-fork`, you have configured git_
by following :ref:`configure-git`, and have linked the upstream
repository as explained in :ref:`linking-to-upstream`.
What is described below is a recommended workflow with Git.
Basic workflow
##############
In short:
1. Start a new *feature branch* for each set of edits that you do.
See :ref:`below <making-a-new-feature-branch>`.
Avoid putting new commits in your ``master`` branch.
2. Hack away! See :ref:`below <editing-workflow>`
3. Avoid merging other branches into your feature branch while you are
working.
You can optionally rebase if really needed,
see :ref:`below <rebasing-on-master>`.
4. When finished:
- *Contributors*: push your feature branch to your own Github repo, and
:ref:`ask for code review or make a pull request <asking-for-merging>`.
- *Core developers* (if you want to push changes without
further review)::
# First, either (i) rebase on upstream -- if you have only few commits
git fetch upstream
git rebase upstream/master
# or, (ii) merge to upstream -- if you have many related commits
git fetch upstream
git merge --no-ff upstream/master
# Recheck that what is there is sensible
git log --oneline --graph
git log -p upstream/master..
# Finally, push branch to upstream master
git push upstream my-new-feature:master
See :ref:`below <pushing-to-main>`.
This way of working helps to keep work well organized and the history
as clear as possible.
.. note::
Do not use ``git pull`` --- this avoids common mistakes if you are
new to Git. Instead, always do ``git fetch`` followed by ``git
rebase``, ``git merge --ff-only`` or ``git merge --no-ff``,
depending on what you intend.
.. seealso::
See discussions on `linux git workflow`_,
and `ipython git workflow <http://mail.scipy.org/pipermail/ipython-dev/2010-October/006746.html>`__.
.. _making-a-new-feature-branch:
Making a new feature branch
===========================
::
git branch my-new-feature
git checkout my-new-feature
or just::
git checkout -b my-new-feature upstream/master
Generally, you will want to keep this also on your public github_ fork
of NumPy_. To do this, you `git push`_ this new branch up to your github_
repo. Generally (if you followed the instructions in these pages, and
by default), git will have a link to your github_ repo, called
``origin``. You push up to your own repo on github_ with::
git push origin my-new-feature
In git >= 1.7 you can ensure that the link is correctly set by using the
``--set-upstream`` option::
git push --set-upstream origin my-new-feature
From now on git_ will know that ``my-new-feature`` is related to the
``my-new-feature`` branch in your own github_ repo.
.. _editing-workflow:
The editing workflow
====================
Overview
--------
::
# hack hack
git add my_new_file
git commit -am 'ENH: some message'
# push the branch to your own Github repo
git push
In more detail
--------------
#. Make some changes
#. See which files have changed with ``git status`` (see `git status`_).
You'll see a listing like this one::
# On branch my-new-feature
# Changed but not updated:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: README
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# INSTALL
no changes added to commit (use "git add" and/or "git commit -a")
#. Check what the actual changes are with ``git diff`` (`git diff`_).
#. Add any new files to version control ``git add new_file_name`` (see
`git add`_).
#. To commit all modified files into the local copy of your repo, do
``git commit -am 'A commit message'``. Note the ``-am`` options to
``commit``. The ``m`` flag just signals that you're going to type a
message on the command line. The ``a`` flag - you can just take on
faith - or see `why the -a flag?`_ - and the helpful use-case description in
the `tangled working copy problem`_. The `git commit`_ manual
page might also be useful.
#. To push the changes up to your forked repo on github_, do a ``git
push`` (see `git push`_).
.. _rebasing-on-master:
Rebasing on master
==================
This updates your feature branch with changes from the upstream `NumPy
github`_ repo. If you do not absolutely need to do this, try to avoid
doing it, except perhaps when you are finished.
First, it can be useful to update your master branch::
# go to the master branch
git checkout master
# pull changes from github
git fetch upstream
# update the master branch
git rebase upstream/master
# push it to your Github repo
git push
Then, the feature branch::
# go to the feature branch
git checkout my-new-feature
# make a backup in case you mess up
git branch tmp my-new-feature
# rebase on master
git rebase master
If you have made changes to files that have also changed upstream,
this may generate merge conflicts that you need to resolve. Once the
rebase has succeeded, you can delete the backup branch::
# delete backup branch
git branch -D tmp
.. _recovering-from-mess-up:
Recovering from mess-ups
------------------------
Sometimes, you mess up merges or rebases. Luckily, in Git it is
relatively straightforward to recover from such mistakes.
If you mess up during a rebase::
git rebase --abort
If you notice you messed up after the rebase::
# reset branch back to the saved point
git reset --hard tmp
If you forgot to make a backup branch::
# look at the reflog of the branch
git reflog show my-feature-branch
8630830 my-feature-branch@{0}: commit: BUG: io: close file handles immediately
278dd2a my-feature-branch@{1}: rebase finished: refs/heads/my-feature-branch onto 11ee694744f2552d
26aa21a my-feature-branch@{2}: commit: BUG: lib: make seek_gzip_factory not leak gzip obj
...
# reset the branch to where it was before the botched rebase
git reset --hard my-feature-branch@{2}
.. _asking-for-merging:
Asking for your changes to be merged with the main repo
=======================================================
When you feel your work is finished, you can ask for code review, or
directly file a pull request.
Asking for code review
----------------------
#. Go to your repo URL - e.g. ``http://github.com/your-user-name/numpy``.
#. Click on the *Branch list* button:
.. image:: branch_list.png
#. Click on the *Compare* button for your feature branch - here ``my-new-feature``:
.. image:: branch_list_compare.png
#. If asked, select the *base* and *comparison* branch names you want to
compare. Usually these will be ``master`` and ``my-new-feature``
(where that is your feature branch name).
#. At this point you should get a nice summary of the changes. Copy the
URL for this, and post it to the `NumPy mailing list`_, asking for
review. The URL will look something like:
``http://github.com/your-user-name/numpy/compare/master...my-new-feature``.
There's an example at
http://github.com/matthew-brett/nipy/compare/master...find-install-data
See: http://github.com/blog/612-introducing-github-compare-view for
more detail.
The generated comparison is between your feature branch
``my-new-feature``, and the place in ``master`` from which you branched
``my-new-feature``. In other words, you can keep updating ``master``
without interfering with the output from the comparison. More detail?
Note the three dots in the URL above (``master...my-new-feature``) and
see :ref:`dot2-dot3`.
Filing a pull request
---------------------
When you are ready to ask for the merge of your code:
#. Go to the URL of your forked repo, say
``http://github.com/your-user-name/numpy.git``.
#. Click on the 'Pull request' button:
.. image:: pull_button.png
Enter a message; we suggest you select only ``NumPy`` as the
recipient. The message will go to the NumPy core developers. Please
feel free to add others from the list as you like.
.. _pushing-to-main:
Pushing changes to the main repo
================================
When you have a set of "ready" changes in a feature branch ready for
Numpy's ``master`` or ``maintenance/1.5.x`` branches, you can push
them to ``upstream`` as follows:
1. First, merge or rebase on the target branch.
a) Only a few commits: prefer rebasing::
git fetch upstream
git rebase upstream/master
See :ref:`above <rebasing-on-master>`.
b) Many related commits: consider creating a merge commit::
git fetch upstream
git merge --no-ff upstream/master
2. Check that what you are going to push looks sensible::
git log -p upstream/master..
git log --oneline --graph
3. Push to upstream::
git push upstream my-feature-branch:master
.. note::
Avoid using ``git pull`` here.
Additional things you might want to do
######################################
.. _rewriting-commit-history:
Rewriting commit history
========================
.. note::
Do this only for your own feature branches.
There's an embarrassing typo in a commit you made? Or perhaps you
made several false starts that you would like posterity not to see.
This can be done via *interactive rebasing*.
Suppose that the commit history looks like this::
git log --oneline
eadc391 Fix some remaining bugs
a815645 Modify it so that it works
2dec1ac Fix a few bugs + disable
13d7934 First implementation
6ad92e5 * masked is now an instance of a new object, MaskedConstant
29001ed Add pre-nep for a copule of structured_array_extensions.
...
and ``6ad92e5`` is the last commit in the ``master`` branch. Suppose we
want to make the following changes:
* Rewrite the commit message for ``13d7934`` to something more sensible.
* Combine the commits ``2dec1ac``, ``a815645``, ``eadc391`` into a single one.
We do as follows::
# make a backup of the current state
git branch tmp HEAD
# interactive rebase
git rebase -i 6ad92e5
This will open an editor with the following text in it::
pick 13d7934 First implementation
pick 2dec1ac Fix a few bugs + disable
pick a815645 Modify it so that it works
pick eadc391 Fix some remaining bugs
# Rebase 6ad92e5..eadc391 onto 6ad92e5
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#
To achieve what we want, we will make the following changes to it::
r 13d7934 First implementation
pick 2dec1ac Fix a few bugs + disable
f a815645 Modify it so that it works
f eadc391 Fix some remaining bugs
This means that (i) we want to edit the commit message for
``13d7934``, and (ii) collapse the last three commits into one. Now we
save and quit the editor.
Git will then immediately bring up an editor for editing the commit
message. After revising it, we get the output::
[detached HEAD 721fc64] FOO: First implementation
2 files changed, 199 insertions(+), 66 deletions(-)
[detached HEAD 0f22701] Fix a few bugs + disable
1 files changed, 79 insertions(+), 61 deletions(-)
Successfully rebased and updated refs/heads/my-feature-branch.
and the history looks now like this::
0f22701 Fix a few bugs + disable
721fc64 ENH: Sophisticated feature
6ad92e5 * masked is now an instance of a new object, MaskedConstant
If it went wrong, recovery is again possible as explained :ref:`above
<recovering-from-mess-up>`.
Deleting a branch on github_
============================
::
git checkout master
# delete branch locally
git branch -D my-unwanted-branch
# delete branch on github
git push origin :my-unwanted-branch
(Note the colon ``:`` before ``my-unwanted-branch``.) See also:
http://github.com/guides/remove-a-remote-branch
Several people sharing a single repository
==========================================
If you want to work on some stuff with other people, where you are all
committing into the same repository, or even the same branch, then just
share it via github_.
First fork NumPy into your account, as from :ref:`forking`.
Then, go to your forked repository github page, say
``http://github.com/your-user-name/numpy``
Click on the 'Admin' button, and add anyone else to the repo as a
collaborator:
.. image:: pull_button.png
Now all those people can do::
git clone git@github.com:your-user-name/numpy.git
Remember that links starting with ``git@`` use the ssh protocol and are
read-write; links starting with ``git://`` are read-only.
Your collaborators can then commit directly into that repo with the
usual::
git commit -am 'ENH - much better code'
git push origin master # pushes directly into your repo
Exploring your repository
=========================
To see a graphical representation of the repository branches and
commits::
gitk --all
To see a linear list of commits for this branch::
git log
You can also look at the `network graph visualizer`_ for your github_
repo.
.. include:: git_links.inc

.. _dot2-dot3:
========================================
Two and three dots in difference specs
========================================
Thanks to Yarik Halchenko for this explanation.
Imagine a series of commits A, B, C, D... Imagine that there are two
branches, *topic* and *master*. You branched *topic* off *master* when
*master* was at commit 'E'. The graph of the commits looks like this::
A---B---C topic
/
D---E---F---G master
Then::
git diff master..topic
will output the difference from G to C (i.e. with effects of F and G),
while::
git diff master...topic
would output just differences in the topic branch (i.e. only A, B, and
C).

.. _following-latest:
=============================
Following the latest source
=============================
These are the instructions if you just want to follow the latest
*NumPy* source, but you don't need to do any development for now.
The steps are:
* :ref:`install-git`
* get local copy of the git repository from github_
* update local copy from time to time
Get the local copy of the code
==============================
From the command line::
git clone git://github.com/numpy/numpy.git
You now have a copy of the code tree in the new ``numpy`` directory.
Updating the code
=================
From time to time you may want to pull down the latest code. Do this with::
cd numpy
git fetch
git merge --ff-only
The tree in ``numpy`` will now have the latest changes from the initial
repository.
.. include:: git_links.inc

.. _git-development:
=====================
Git for development
=====================
Contents:
.. toctree::
:maxdepth: 2
development_setup
configure_git
development_workflow

============
Introduction
============
These pages describe a git_ and github_ workflow for the NumPy_
project.
There are several different workflows here, for different ways of
working with *NumPy*.
This is not a comprehensive git_ reference, it's just a workflow for our
own project. It's tailored to the github_ hosting service. You may well
find better or quicker ways of getting stuff done with git_, but these
should get you started.
For general resources for learning git_ see :ref:`git-resources`.
.. _install-git:
Install git
===========
Overview
--------
================ =============
Debian / Ubuntu ``sudo apt-get install git-core``
Fedora ``sudo yum install git-core``
Windows Download and install msysGit_
OS X Use the git-osx-installer_
================ =============
In detail
---------
See the git_ page for the most recent information.
Have a look at the github_ install help pages available from `github help`_
There are good instructions here: http://book.git-scm.com/2_installing_git.html
.. include:: git_links.inc

.. This (-*- rst -*-) format file contains commonly used link targets
and name substitutions. It may be included in many files,
therefore it should only contain link targets and name
substitutions. Try grepping for "^\.\. _" to find plausible
candidates for this list.
.. NOTE: reST targets are
__not_case_sensitive__, so only one target definition is needed for
nipy, NIPY, Nipy, etc...
.. PROJECTNAME placeholders
.. _PROJECTNAME: http://neuroimaging.scipy.org
.. _`PROJECTNAME github`: http://github.com/nipy
.. _`PROJECTNAME mailing list`: http://projects.scipy.org/mailman/listinfo/nipy-devel
.. nipy
.. _nipy: http://nipy.org/nipy
.. _`nipy github`: http://github.com/nipy/nipy
.. _`nipy mailing list`: http://mail.scipy.org/mailman/listinfo/nipy-devel
.. ipython
.. _ipython: http://ipython.scipy.org
.. _`ipython github`: http://github.com/ipython/ipython
.. _`ipython mailing list`: http://mail.scipy.org/mailman/listinfo/IPython-dev
.. dipy
.. _dipy: http://nipy.org/dipy
.. _`dipy github`: http://github.com/Garyfallidis/dipy
.. _`dipy mailing list`: http://mail.scipy.org/mailman/listinfo/nipy-devel
.. nibabel
.. _nibabel: http://nipy.org/nibabel
.. _`nibabel github`: http://github.com/nipy/nibabel
.. _`nibabel mailing list`: http://mail.scipy.org/mailman/listinfo/nipy-devel
.. marsbar
.. _marsbar: http://marsbar.sourceforge.net
.. _`marsbar github`: http://github.com/matthew-brett/marsbar
.. _`MarsBaR mailing list`: https://lists.sourceforge.net/lists/listinfo/marsbar-users
.. git stuff
.. _git: http://git-scm.com/
.. _github: http://github.com
.. _github help: http://help.github.com
.. _msysgit: http://code.google.com/p/msysgit/downloads/list
.. _git-osx-installer: http://code.google.com/p/git-osx-installer/downloads/list
.. _subversion: http://subversion.tigris.org/
.. _git cheat sheet: http://github.com/guides/git-cheat-sheet
.. _pro git book: http://progit.org/
.. _git svn crash course: http://git-scm.com/course/svn.html
.. _learn.github: http://learn.github.com/
.. _network graph visualizer: http://github.com/blog/39-say-hello-to-the-network-graph-visualizer
.. _git user manual: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
.. _git tutorial: http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html
.. _git community book: http://book.git-scm.com/
.. _git ready: http://www.gitready.com/
.. _git casts: http://www.gitcasts.com/
.. _Fernando's git page: http://www.fperez.org/py4science/git.html
.. _git magic: http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html
.. _git concepts: http://www.eecs.harvard.edu/~cduan/technical/git/
.. _git clone: http://www.kernel.org/pub/software/scm/git/docs/git-clone.html
.. _git checkout: http://www.kernel.org/pub/software/scm/git/docs/git-checkout.html
.. _git commit: http://www.kernel.org/pub/software/scm/git/docs/git-commit.html
.. _git push: http://www.kernel.org/pub/software/scm/git/docs/git-push.html
.. _git pull: http://www.kernel.org/pub/software/scm/git/docs/git-pull.html
.. _git add: http://www.kernel.org/pub/software/scm/git/docs/git-add.html
.. _git status: http://www.kernel.org/pub/software/scm/git/docs/git-status.html
.. _git diff: http://www.kernel.org/pub/software/scm/git/docs/git-diff.html
.. _git log: http://www.kernel.org/pub/software/scm/git/docs/git-log.html
.. _git branch: http://www.kernel.org/pub/software/scm/git/docs/git-branch.html
.. _git remote: http://www.kernel.org/pub/software/scm/git/docs/git-remote.html
.. _git config: http://www.kernel.org/pub/software/scm/git/docs/git-config.html
.. _why the -a flag?: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html
.. _git staging area: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html
.. _tangled working copy problem: http://tomayko.com/writings/the-thing-about-git
.. _git management: http://kerneltrap.org/Linux/Git_Management
.. _linux git workflow: http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html
.. _git parable: http://tom.preston-werner.com/2009/05/19/the-git-parable.html
.. _git foundation: http://matthew-brett.github.com/pydagogue/foundation.html
.. other stuff
.. _python: http://www.python.org
.. _NumPy: http://numpy.scipy.org
.. _`NumPy github`: http://github.com/numpy/numpy
.. _`NumPy mailing list`: http://scipy.org/Mailing_Lists

.. _git-resources:
================
git_ resources
================
Tutorials and summaries
=======================
* `github help`_ has an excellent series of how-to guides.
* `learn.github`_ has an excellent series of tutorials
* The `pro git book`_ is a good in-depth book on git.
* A `git cheat sheet`_ is a page giving summaries of common commands.
* The `git user manual`_
* The `git tutorial`_
* The `git community book`_
* `git ready`_ - a nice series of tutorials
* `git casts`_ - video snippets giving git how-tos.
* `git magic`_ - extended introduction with intermediate detail
* The `git parable`_ is an easy read explaining the concepts behind git.
* Our own `git foundation`_ expands on the `git parable`_.
* Fernando Perez' git page - `Fernando's git page`_ - many links and tips
* A good but technical page on `git concepts`_
* `git svn crash course`_: git_ for those of us used to subversion_
Advanced git workflow
=====================
There are many ways of working with git_; here are some posts on the
rules of thumb that other projects have come up with:
* Linus Torvalds on `git management`_
* Linus Torvalds on `linux git workflow`_ . Summary; use the git tools
to make the history of your edits as clean as possible; merge from
upstream edits as little as possible in branches where you are doing
active development.
Manual pages online
===================
You can get these on your own machine with (e.g) ``git help push`` or
(same thing) ``git push --help``, but, for convenience, here are the
online manual pages for some common commands:
* `git add`_
* `git branch`_
* `git checkout`_
* `git clone`_
* `git commit`_
* `git config`_
* `git diff`_
* `git log`_
* `git pull`_
* `git push`_
* `git remote`_
* `git status`_
.. include:: git_links.inc

.. _using-git:
Working with *NumPy* source code
======================================
Contents:
.. toctree::
:maxdepth: 2
git_intro
following_latest
patching
git_development
git_resources

================
Making a patch
================
You've discovered a bug or something else you want to change in
NumPy_ - excellent!
You've worked out a way to fix it - even better!
You want to tell us about it - best of all!
The easiest way is to make a *patch* or set of patches. Here we explain
how. Making a patch is the simplest and quickest, but if you're going
to be doing anything more than simple quick things, please consider
following the :ref:`git-development` model instead.
.. _making-patches:
Making patches
==============
Overview
--------
::
# tell git who you are
git config --global user.email you@yourdomain.example.com
git config --global user.name "Your Name Comes Here"
# get the repository if you don't have it
git clone git://github.com/numpy/numpy.git
# make a branch for your patching
cd numpy
git branch the-fix-im-thinking-of
git checkout the-fix-im-thinking-of
# hack, hack, hack
# Tell git about any new files you've made
git add somewhere/tests/test_my_bug.py
# commit work in progress as you go
git commit -am 'BF - added tests for Funny bug'
# hack hack, hack
git commit -am 'BF - added fix for Funny bug'
# make the patch files
git format-patch -M -C master
Then, create a ticket in the `Numpy Trac <http://projects.scipy.org/numpy/>`__,
attach the generated patch files there, and notify the `NumPy mailing list`_
about your contribution.
In detail
---------
#. Tell git_ who you are so it can label the commits you've made::
git config --global user.email you@yourdomain.example.com
git config --global user.name "Your Name Comes Here"
#. If you don't already have one, clone a copy of the NumPy_ repository::
git clone git://github.com/numpy/numpy.git
cd numpy
#. Make a 'feature branch'. This will be where you work on your bug
fix. It's nice and safe and leaves you with access to an unmodified
copy of the code in the main branch::
git branch the-fix-im-thinking-of
git checkout the-fix-im-thinking-of
#. Do some edits, and commit them as you go::
# hack, hack, hack
# Tell git about any new files you've made
git add somewhere/tests/test_my_bug.py
# commit work in progress as you go
git commit -am 'BF - added tests for Funny bug'
# hack hack, hack
git commit -am 'BF - added fix for Funny bug'
Note the ``-am`` options to ``commit``. The ``m`` flag just signals
that you're going to type a message on the command line. The ``a``
flag - you can just take on faith - or see `why the -a flag?`_.
#. When you have finished, check you have committed all your changes::
git status
#. Finally, make your commits into patches. You want all the commits
since you branched from the ``master`` branch::
git format-patch -M -C master
You will now have several files named for the commits::
0001-BF-added-tests-for-Funny-bug.patch
0002-BF-added-fix-for-Funny-bug.patch
The recommended way to proceed is either to attach these files to
an enhancement ticket in the `Numpy Trac <http://projects.scipy.org/numpy/>`__
and send a mail about the enhancement to the `NumPy mailing list`_.
You can also consider sending your changes via Github, see below and in
:ref:`asking-for-merging`.
When you are done, to switch back to the main copy of the code, just
return to the ``master`` branch::
git checkout master
Moving from patching to development
===================================
If you find you have done some patches, and you have one or more feature
branches, you will probably want to switch to development mode. You can
do this with the repository you have.
Fork the NumPy_ repository on github_ - :ref:`forking`. Then::
# checkout and refresh master branch from main repo
git checkout master
git fetch origin
git merge --ff-only origin/master
# rename pointer to main repository to 'upstream'
git remote rename origin upstream
# point your repo to default read / write to your fork on github
git remote add origin git@github.com:your-user-name/numpy.git
# push up any branches you've made and want to keep
git push origin the-fix-im-thinking-of
Then you can, if you want, follow the :ref:`development-workflow`.
.. include:: git_links.inc

.. _NumPy: http://numpy.scipy.org
.. _`NumPy github`: http://github.com/numpy/numpy
.. _`NumPy mailing list`: http://scipy.org/Mailing_Lists

#####################
Contributing to Numpy
#####################
.. toctree::
:maxdepth: 3
gitwash/index
For core developers: see :ref:`development-workflow`.

********
Glossary
********
.. toctree::
.. glossary::
.. automodule:: numpy.doc.glossary
Jargon
------
.. automodule:: numpy.doc.jargon

*************
Numpy License
*************
Copyright (c) 2005, NumPy Developers
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the NumPy Developers nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#########################
Standard array subclasses
#########################
.. currentmodule:: numpy
The :class:`ndarray` in NumPy is a "new-style" Python
built-in type. It can therefore be inherited from (in Python or in C)
if desired, and it can form a foundation for many useful
the core array component as an internal part of a new class is a
difficult decision, and can be simply a matter of choice. NumPy has
several tools for simplifying how your new object interacts with other
array objects, and so the choice may not be significant in the
end. One way to simplify the question is by asking yourself whether
the object you are interested in can be represented by a single array
or whether it really requires two or more arrays at its core.
Note that :func:`asarray` always returns the base-class ndarray. If
you are confident that your use of the array object can handle any
subclass of an ndarray, then :func:`asanyarray` can be used to allow
subclasses to propagate more cleanly through your subroutine. In
principle a subclass could redefine any aspect of the array and
therefore, under strict guidelines, :func:`asanyarray` would rarely be
useful. However, most subclasses of the array object will not
redefine certain aspects of the array object such as the buffer
interface, or the attributes of the array. One important example,
however, of why your subroutine may not be able to handle an arbitrary
subclass of an array is that matrices redefine the "*" operator to be
matrix-multiplication, rather than element-by-element multiplication.
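To make the distinction concrete, here is a minimal sketch (using the
``matrix`` subclass) showing that :func:`asarray` strips the subclass
while :func:`asanyarray` preserves it:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])

a = np.asarray(m)     # always returns the base-class ndarray
b = np.asanyarray(m)  # lets the matrix subclass pass through

print(type(a).__name__)  # ndarray
print(type(b).__name__)  # matrix
```

Because ``b`` is still a matrix, operators like ``*`` retain their
subclass meaning after ``asanyarray``, which is exactly the behaviour a
subroutine must be prepared for.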
Special attributes and methods
==============================
.. seealso:: :ref:`Subclassing ndarray <basics.subclassing>`
Numpy provides several hooks that subclasses of :class:`ndarray` can
customize:
.. function:: __array_finalize__(self, obj)
This method is called whenever the system internally allocates a
new array from *obj*, where *obj* is a subclass (subtype) of the
:class:`ndarray`. It can be used to change attributes of *self*
after construction (so as to ensure a 2-d matrix for example), or
to update meta-information from the "parent." Subclasses inherit
a default implementation of this method that does nothing.
.. function:: __array_prepare__(array, context=None)
At the beginning of every :ref:`ufunc <ufuncs.output-type>`, this
method is called on the input object with the highest array
priority, or the output object if one was specified. The output
array is passed in and whatever is returned is passed to the ufunc.
Subclasses inherit a default implementation of this method which
simply returns the output array unmodified. Subclasses may opt to
use this method to transform the output array into an instance of
the subclass and update metadata before returning the array to the
ufunc for computation.
.. function:: __array_wrap__(array, context=None)
At the end of every :ref:`ufunc <ufuncs.output-type>`, this method
is called on the input object with the highest array priority, or
the output object if one was specified. The ufunc-computed array
is passed in and whatever is returned is passed to the user.
Subclasses inherit a default implementation of this method, which
transforms the array into a new instance of the object's class.
Subclasses may opt to use this method to transform the output array
into an instance of the subclass and update metadata before
returning the array to the user.
.. data:: __array_priority__
The value of this attribute is used to determine what type of
object to return in situations where there is more than one
possibility for the Python type of the returned object. Subclasses
inherit a default value of 1.0 for this attribute.
.. function:: __array__([dtype])
If a class having the :obj:`__array__` method is used as the output
object of a :ref:`ufunc <ufuncs.output-type>`, results will be
written to the object returned by :obj:`__array__`.
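As an illustration of the ``__array_finalize__`` hook, here is a
minimal subclass sketch (the ``InfoArray`` name and its ``info``
attribute are hypothetical, chosen for this example) that propagates a
piece of metadata through slicing and view casting:

```python
import numpy as np

class InfoArray(np.ndarray):
    """ndarray subclass that carries a piece of metadata."""

    def __new__(cls, input_array, info=None):
        # view casting turns a plain ndarray into our subclass
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # called whenever a new InfoArray is allocated internally,
        # e.g. by slicing; copy metadata from the "parent" if present
        if obj is None:
            return
        self.info = getattr(obj, 'info', None)

x = InfoArray([1.0, 2.0, 3.0], info='metres')
y = x[1:]          # slicing goes through __array_finalize__
print(y.info)      # metres
```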
Matrix objects
==============
.. index::
single: matrix
:class:`matrix` objects inherit from the ndarray and therefore, they
have the same attributes and methods of ndarrays. There are six
important differences of matrix objects, however, that may lead to
unexpected results when you use matrices but expect them to act like
arrays:
1. Matrix objects can be created using a string notation to allow
Matlab-style syntax where spaces separate columns and semicolons
(';') separate rows.
2. Matrix objects are always two-dimensional. This has far-reaching
implications, in that m.ravel() is still two-dimensional (with a 1
in the first dimension) and item selection returns two-dimensional
objects so that sequence behavior is fundamentally different than
arrays.
3. Matrix objects over-ride multiplication to be
matrix-multiplication. **Make sure you understand this for
functions that you may want to receive matrices. Especially in
light of the fact that asanyarray(m) returns a matrix when m is
a matrix.**
4. Matrix objects over-ride power to be matrix raised to a power. The
same warning about using power inside a function that uses
asanyarray(...) to get an array object holds for this fact.
5. The default __array_priority\__ of matrix objects is 10.0, and
therefore mixed operations with ndarrays always produce matrices.
6. Matrices have special attributes which make calculations easier.
These are
.. autosummary::
:toctree: generated/
matrix.T
matrix.H
matrix.I
matrix.A
.. warning::
Matrix objects over-ride multiplication, '*', and power, '**', to
be matrix-multiplication and matrix power, respectively. If your
subroutine can accept sub-classes and you do not convert to base-
class arrays, then you must use the ufuncs multiply and power to
be sure that you are performing the correct operation for all
inputs.
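For instance, with a small matrix ``m``, the operators and the
corresponding ufuncs give different results:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])

print(m * m)              # matrix multiplication
print(np.multiply(m, m))  # element-by-element product
print(m ** 2)             # matrix power (same as m * m)
print(np.power(m, 2))     # element-by-element squaring
```

``np.multiply`` and ``np.power`` behave the same whether given a
matrix or a plain array, which is why the warning above recommends
them inside subroutines that accept sub-classes.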
The matrix class is a Python subclass of the ndarray and can be used
as a reference for how to construct your own subclass of the ndarray.
Matrices can be created from other matrices, strings, and anything
else that can be converted to an ``ndarray``. The name "mat" is an
alias for "matrix" in NumPy.
.. autosummary::
:toctree: generated/
matrix
asmatrix
bmat
Example 1: Matrix creation from a string
>>> a=mat('1 2 3; 4 5 3')
>>> print (a*a.T).I
[[ 0.2924 -0.1345]
[-0.1345 0.0819]]
Example 2: Matrix creation from nested sequence
>>> mat([[1,5,10],[1.0,3,4j]])
matrix([[ 1.+0.j, 5.+0.j, 10.+0.j],
[ 1.+0.j, 3.+0.j, 0.+4.j]])
Example 3: Matrix creation from an array
>>> mat(random.rand(3,3)).T
matrix([[ 0.7699, 0.7922, 0.3294],
[ 0.2792, 0.0101, 0.9219],
[ 0.3398, 0.7571, 0.8197]])
Memory-mapped file arrays
=========================
.. index::
single: memory maps
.. currentmodule:: numpy
Memory-mapped files are useful for reading and/or modifying small
segments of a large file with regular layout, without reading the
entire file into memory. A simple subclass of the ndarray uses a
memory-mapped file for the data buffer of the array. For small files,
the overhead of reading the entire file into memory is typically not
significant; however, for large files, using memory mapping can save
considerable resources.
Memory-mapped-file arrays have one additional method (besides those
they inherit from the ndarray): :meth:`.flush() <memmap.flush>` which
must be called manually by the user to ensure that any changes to the
array actually get written to disk.
.. note::
Memory-mapped arrays use the Python memory-map object which
(prior to Python 2.5) does not allow files to be larger than a
certain size depending on the platform. This size is always
< 2GB even on 64-bit systems.
.. autosummary::
:toctree: generated/
memmap
memmap.flush
Example:
>>> a = memmap('newfile.dat', dtype=float, mode='w+', shape=1000)
>>> a[10] = 10.0
>>> a[30] = 30.0
>>> del a
>>> b = fromfile('newfile.dat', dtype=float)
>>> print b[10], b[30]
10.0 30.0
>>> a = memmap('newfile.dat', dtype=float)
>>> print a[10], a[30]
10.0 30.0
Character arrays (:mod:`numpy.char`)
====================================
.. seealso:: :ref:`routines.array-creation.char`
.. index::
single: character arrays
.. note::
The `chararray` class exists for backwards compatibility with
Numarray; it is not recommended for new development. Starting from numpy
1.4, if one needs arrays of strings, it is recommended to use arrays of
`dtype` `object_`, `string_` or `unicode_`, and use the free functions
in the `numpy.char` module for fast vectorized string operations.
These are enhanced arrays of either :class:`string_` type or
:class:`unicode_` type. These arrays inherit from the
:class:`ndarray`, but specially-define the operations ``+``, ``*``,
and ``%`` on a (broadcasting) element-by-element basis. These
operations are not available on the standard :class:`ndarray` of
character type. In addition, the :class:`chararray` has all of the
standard :class:`string <str>` (and :class:`unicode`) methods,
executing them on an element-by-element basis. Perhaps the easiest
way to create a chararray is to use :meth:`self.view(chararray)
<ndarray.view>` where *self* is an ndarray of str or unicode
data-type. However, a chararray can also be created using the
:meth:`numpy.chararray` constructor, or via the
:func:`numpy.char.array <core.defchararray.array>` function:
.. autosummary::
:toctree: generated/
chararray
core.defchararray.array
Another difference with the standard ndarray of str data-type is
that the chararray inherits the feature introduced by Numarray that
white-space at the end of any element in the array will be ignored
on item retrieval and comparison operations.
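A short sketch of this whitespace behavior, using a bytes-string array
viewed as a chararray:

```python
import numpy as np

# View an ordinary string ndarray as a chararray.
a = np.array([b'abc  ', b'de '], dtype='S5').view(np.chararray)

# Trailing whitespace is stripped on item retrieval...
print(a[0])                               # b'abc', not b'abc  '

# ...and ignored in comparison operations.
print(a == np.array([b'abc', b'de'], dtype='S5'))
```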
.. _arrays.classes.rec:
Record arrays (:mod:`numpy.rec`)
================================
.. seealso:: :ref:`routines.array-creation.rec`, :ref:`routines.dtype`,
:ref:`arrays.dtypes`.
Numpy provides the :class:`recarray` class which allows accessing the
fields of a record/structured array as attributes, and a corresponding
scalar data type object :class:`record`.
.. currentmodule:: numpy
.. autosummary::
:toctree: generated/
recarray
record
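For example, attribute-style field access works like this (a minimal
sketch with a hypothetical two-field dtype):

```python
import numpy as np

# Create a record array with fields 'x' (int32) and 'y' (float64).
r = np.rec.array([(1, 2.0), (3, 4.0)], dtype=[('x', 'i4'), ('y', 'f8')])

print(r.x)       # the whole 'x' column as an array
print(r[0].y)    # one record's field as a scalar: 2.0
print(r['y'])    # dictionary-style field access also works
```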
Masked arrays (:mod:`numpy.ma`)
===============================
.. seealso:: :ref:`maskedarray`
Standard container class
========================
.. currentmodule:: numpy
For backward compatibility and as a standard "container" class, the
UserArray from Numeric has been brought over to NumPy and named
:class:`numpy.lib.user_array.container`. The container class is a
Python class whose self.array attribute is an ndarray. Multiple
inheritance is probably easier with numpy.lib.user_array.container
than with the ndarray itself and so it is included by default. It is
not documented here beyond mentioning its existence because you are
encouraged to use the ndarray class directly if you can.
.. autosummary::
:toctree: generated/
numpy.lib.user_array.container
.. index::
single: user_array
single: container class
Array Iterators
===============
.. currentmodule:: numpy
.. index::
single: array iterator
Iterators are a powerful concept for array processing. Essentially,
iterators implement a generalized for-loop. If *myiter* is an iterator
object, then the Python code::
for val in myiter:
...
some code involving val
...
calls ``val = myiter.next()`` repeatedly until :exc:`StopIteration` is
raised by the iterator. There are several ways to iterate over an
array that may be useful: default iteration, flat iteration, and
:math:`N`-dimensional enumeration.
Default iteration
-----------------
The default iterator of an ndarray object is the default Python
iterator of a sequence type; thus, the array object itself can be
used as an iterator. The default behavior is equivalent to::
for i in xrange(arr.shape[0]):
val = arr[i]
This default iterator selects a sub-array of dimension :math:`N-1`
from the array. This can be a useful construct for defining recursive
algorithms. To loop over the entire array requires :math:`N` for-loops.
>>> a = arange(24).reshape(3,2,4)+10
>>> for val in a:
... print 'item:', val
item: [[10 11 12 13]
[14 15 16 17]]
item: [[18 19 20 21]
[22 23 24 25]]
item: [[26 27 28 29]
[30 31 32 33]]
Flat iteration
--------------
.. autosummary::
:toctree: generated/
ndarray.flat
As mentioned previously, the flat attribute of ndarray objects returns
an iterator that will cycle over the entire array in C-style
contiguous order.
>>> for i, val in enumerate(a.flat):
... if i%5 == 0: print i, val
0 10
5 15
10 20
15 25
20 30
Here, I've used the built-in enumerate iterator to return the iterator
index as well as the value.
N-dimensional enumeration
-------------------------
.. autosummary::
:toctree: generated/
ndenumerate
Sometimes it may be useful to get the N-dimensional index while
iterating. The ndenumerate iterator can achieve this.
>>> for i, val in ndenumerate(a):
... if sum(i)%5 == 0: print i, val
(0, 0, 0) 10
(1, 1, 3) 25
(2, 0, 3) 29
(2, 1, 2) 32
Iterator for broadcasting
-------------------------
.. autosummary::
:toctree: generated/
broadcast
The general concept of broadcasting is also available from Python
using the :class:`broadcast` iterator. This object takes :math:`N`
objects as inputs and returns an iterator that returns tuples
providing each of the input sequence elements in the broadcasted
result.
>>> for val in broadcast([[1,0],[2,3]],[0,1]):
... print val
(1, 0)
(0, 1)
(2, 0)
(3, 1)
.. currentmodule:: numpy
.. _arrays.dtypes:
**********************************
Data type objects (:class:`dtype`)
**********************************
A data type object (an instance of :class:`numpy.dtype` class)
describes how the bytes in the fixed-size block of memory
corresponding to an array item should be interpreted. It describes the
following aspects of the data:
1. Type of the data (integer, float, Python object, etc.)
2. Size of the data (how many bytes are in *e.g.* the integer)
3. Byte order of the data (:term:`little-endian` or :term:`big-endian`)
4. If the data type is a :term:`record`, an aggregate of other
data types, (*e.g.*, describing an array item consisting of
an integer and a float),
1. what are the names of the ":term:`fields <field>`" of the record,
by which they can be :ref:`accessed <arrays.indexing.rec>`,
2. what is the data-type of each :term:`field`, and
3. which part of the memory block each field takes.
5. If the data is a sub-array, what is its shape and data type.
.. index::
pair: dtype; scalar
To describe the type of scalar data, there are several :ref:`built-in
scalar types <arrays.scalars.built-in>` in Numpy for various precision
of integers, floating-point numbers, *etc*. An item extracted from an
array, *e.g.*, by indexing, will be a Python object whose type is the
scalar type associated with the data type of the array.
Note that the scalar types are not :class:`dtype` objects, even though
they can be used in place of one whenever a data type specification is
needed in Numpy.
.. index::
pair: dtype; field
pair: dtype; record
Record data types are formed by creating a data type whose
:term:`fields` contain other data types. Each field has a name by
which it can be :ref:`accessed <arrays.indexing.rec>`. The parent data
type should be of sufficient size to contain all its fields; the
parent can for example be based on the :class:`void` type which allows
an arbitrary item size. Record data types may also contain other record
types and fixed-size sub-array data types in their fields.
.. index::
pair: dtype; sub-array
Finally, a data type can describe items that are themselves arrays of
items of another data type. These sub-arrays must, however, be of a
fixed size. If an array is created using a data-type describing a
sub-array, the dimensions of the sub-array are appended to the shape
of the array when the array is created. Sub-arrays in a field of a
record behave differently, see :ref:`arrays.indexing.rec`.
.. admonition:: Example
A simple data type containing a 32-bit big-endian integer:
(see :ref:`arrays.dtypes.constructing` for details on construction)
>>> dt = np.dtype('>i4')
>>> dt.byteorder
'>'
>>> dt.itemsize
4
>>> dt.name
'int32'
>>> dt.type is np.int32
True
The corresponding array scalar type is :class:`int32`.
.. admonition:: Example
A record data type containing a 16-character string (in field 'name')
and a sub-array of two 64-bit floating-point number (in field 'grades'):
>>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))])
>>> dt['name']
dtype('|S16')
>>> dt['grades']
dtype(('float64',(2,)))
Items of an array of this data type are wrapped in an :ref:`array
scalar <arrays.scalars>` type that also has two fields:
>>> x = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt)
>>> x[1]
('John', [6.0, 7.0])
>>> x[1]['grades']
array([ 6., 7.])
>>> type(x[1])
<type 'numpy.void'>
>>> type(x[1]['grades'])
<type 'numpy.ndarray'>
.. _arrays.dtypes.constructing:
Specifying and constructing data types
======================================
Whenever a data-type is required in a NumPy function or method, either
a :class:`dtype` object or something that can be converted to one can
be supplied. Such conversions are done by the :class:`dtype`
constructor:
.. autosummary::
:toctree: generated/
dtype
What can be converted to a data-type object is described below:
:class:`dtype` object
.. index::
triple: dtype; construction; from dtype
Used as-is.
:const:`None`
.. index::
triple: dtype; construction; from None
The default data type: :class:`float_`.
.. index::
triple: dtype; construction; from type
Array-scalar types
The 24 built-in :ref:`array scalar type objects
<arrays.scalars.built-in>` all convert to an associated data-type object.
This is true for their sub-classes as well.
Note that not all data-type information can be supplied with a
type-object: for example, :term:`flexible` data-types have
a default *itemsize* of 0, and require an explicitly given size
to be useful.
.. admonition:: Example
>>> dt = np.dtype(np.int32) # 32-bit integer
>>> dt = np.dtype(np.complex128) # 128-bit complex floating-point number
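The flexible-type caveat above can be seen directly with :class:`void`:

```python
import numpy as np

# A bare flexible type has itemsize 0 and cannot describe real data...
print(np.dtype(np.void).itemsize)        # 0

# ...until an explicit size is supplied:
print(np.dtype((np.void, 10)).itemsize)  # 10
```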
Generic types
The generic hierarchical type objects convert to corresponding
type objects according to the associations:
===================================================== ===============
:class:`number`, :class:`inexact`, :class:`floating` :class:`float`
:class:`complexfloating` :class:`cfloat`
:class:`integer`, :class:`signedinteger` :class:`int\_`
:class:`unsignedinteger` :class:`uint`
:class:`character` :class:`string`
:class:`generic`, :class:`flexible` :class:`void`
===================================================== ===============
Built-in Python types
Several python types are equivalent to a corresponding
array scalar when used to generate a :class:`dtype` object:
================ ===============
:class:`int` :class:`int\_`
:class:`bool` :class:`bool\_`
:class:`float` :class:`float\_`
:class:`complex` :class:`cfloat`
:class:`str` :class:`string`
:class:`unicode` :class:`unicode\_`
:class:`buffer` :class:`void`
(all others) :class:`object_`
================ ===============
.. admonition:: Example
>>> dt = np.dtype(float) # Python-compatible floating-point number
>>> dt = np.dtype(int) # Python-compatible integer
>>> dt = np.dtype(object) # Python object
Types with ``.dtype``
Any type object with a ``dtype`` attribute: The attribute will be
accessed and used directly. The attribute must return something
that is convertible into a dtype object.
.. index::
triple: dtype; construction; from string
Several kinds of strings can be converted. Recognized strings can be
prepended with ``'>'`` (:term:`big-endian`), ``'<'``
(:term:`little-endian`), or ``'='`` (hardware-native, the default), to
specify the byte order.
One-character strings
Each built-in data-type has a character code
(the updated Numeric typecodes) that uniquely identifies it.
.. admonition:: Example
>>> dt = np.dtype('b') # byte, native byte order
>>> dt = np.dtype('>H') # big-endian unsigned short
>>> dt = np.dtype('<f') # little-endian single-precision float
>>> dt = np.dtype('d') # double-precision floating-point number
Array-protocol type strings (see :ref:`arrays.interface`)
The first character specifies the kind of data and the remaining
characters specify how many bytes of data. The supported kinds are
================ ========================
``'b'`` Boolean
``'i'`` (signed) integer
``'u'`` unsigned integer
``'f'`` floating-point
``'c'`` complex-floating point
``'S'``, ``'a'`` string
``'U'`` unicode
``'V'`` anything (:class:`void`)
================ ========================
.. admonition:: Example
>>> dt = np.dtype('i4') # 32-bit signed integer
>>> dt = np.dtype('f8') # 64-bit floating-point number
>>> dt = np.dtype('c16') # 128-bit complex floating-point number
>>> dt = np.dtype('a25') # 25-character string
String with comma-separated fields
Numarray introduced a short-hand notation for specifying the format
of a record as a comma-separated string of basic formats.
A basic format in this context is an optional shape specifier
followed by an array-protocol type string. Parentheses are required
on the shape if it is greater than 1-d. NumPy allows a modification
on the format in that any string that can uniquely identify the
type can be used to specify the data-type in a field.
The generated data-type fields are named ``'f0'``, ``'f1'``, ...,
``'f<N-1>'`` where N (>1) is the number of comma-separated basic
formats in the string. If the optional shape specifier is provided,
then the data-type for the corresponding field describes a sub-array.
.. admonition:: Example
- field named ``f0`` containing a 32-bit integer
- field named ``f1`` containing a 2 x 3 sub-array
of 64-bit floating-point numbers
- field named ``f2`` containing a 32-bit floating-point number
>>> dt = np.dtype("i4, (2,3)f8, f4")
- field named ``f0`` containing a 3-character string
- field named ``f1`` containing a sub-array of shape (3,)
containing 64-bit unsigned integers
- field named ``f2`` containing a 3 x 4 sub-array
containing 10-character strings
>>> dt = np.dtype("a3, 3u8, (3,4)a10")
Type strings
Any string in :obj:`numpy.sctypeDict`.keys():
.. admonition:: Example
>>> dt = np.dtype('uint32') # 32-bit unsigned integer
>>> dt = np.dtype('Float64') # 64-bit floating-point number
.. index::
triple: dtype; construction; from tuple
``(flexible_dtype, itemsize)``
The first argument must be an object that is converted to a
flexible data-type object (one whose element size is 0), the
second argument is an integer providing the desired itemsize.
.. admonition:: Example
>>> dt = np.dtype((void, 10)) # 10-byte wide data block
>>> dt = np.dtype((str, 35)) # 35-character string
>>> dt = np.dtype(('U', 10)) # 10-character unicode string
``(fixed_dtype, shape)``
.. index::
pair: dtype; sub-array
The first argument is any object that can be converted into a
fixed-size data-type object. The second argument is the desired
shape of this type. If the shape parameter is 1, then the
data-type object is equivalent to the fixed dtype. If *shape* is a
tuple, then the new dtype defines a sub-array of the given shape.
.. admonition:: Example
>>> dt = np.dtype((np.int32, (2,2))) # 2 x 2 integer sub-array
>>> dt = np.dtype(('S10', 1)) # 10-character string
>>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3))) # 2 x 3 record sub-array
``(base_dtype, new_dtype)``
Both arguments must be convertible to data-type objects in this
case. The *base_dtype* is the data-type object that the new
data-type builds on. This is how you could assign named fields to
any built-in data-type object.
.. admonition:: Example
32-bit integer, whose first two bytes are interpreted as an integer
via field ``real``, and the following two bytes via field ``imag``.
>>> dt = np.dtype((np.int32,{'real':(np.int16, 0),'imag':(np.int16, 2)}))
32-bit integer, which is interpreted as consisting of a sub-array
of shape ``(4,)`` containing 8-bit integers:
>>> dt = np.dtype((np.int32, (np.int8, 4)))
32-bit integer, containing fields ``r``, ``g``, ``b``, ``a`` that
interpret the 4 bytes in the integer as four unsigned integers:
>>> dt = np.dtype(('i4', [('r','u1'),('g','u1'),('b','u1'),('a','u1')]))
.. index::
triple: dtype; construction; from list
``[(field_name, field_dtype, field_shape), ...]``
*obj* should be a list of fields where each field is described by a
tuple of length 2 or 3. (Equivalent to the ``descr`` item in the
:obj:`__array_interface__` attribute.)
The first element, *field_name*, is the field name (if this is
``''`` then a standard field name, ``'f#'``, is assigned). The
field name may also be a 2-tuple of strings where the first string
is either a "title" (which may be any string or unicode string) or
meta-data for the field which can be any object, and the second
string is the "name" which must be a valid Python identifier.
The second element, *field_dtype*, can be anything that can be
interpreted as a data-type.
The optional third element *field_shape* contains the shape if this
field represents an array of the data-type in the second
element. Note that a 3-tuple with a third argument equal to 1 is
equivalent to a 2-tuple.
This style does not accept *align* in the :class:`dtype`
constructor as it is assumed that all of the memory is accounted
for by the array interface description.
.. admonition:: Example
Data-type with fields ``big`` (big-endian 32-bit integer) and
``little`` (little-endian 32-bit integer):
>>> dt = np.dtype([('big', '>i4'), ('little', '<i4')])
Data-type with fields ``R``, ``G``, ``B``, ``A``, each being an
unsigned 8-bit integer:
>>> dt = np.dtype([('R','u1'), ('G','u1'), ('B','u1'), ('A','u1')])
.. index::
triple: dtype; construction; from dict
``{'names': ..., 'formats': ..., 'offsets': ..., 'titles': ...}``
This style has two required and two optional keys. The *names*
and *formats* keys are required. Their respective values are
equal-length lists with the field names and the field formats.
The field names must be strings and the field formats can be any
object accepted by :class:`dtype` constructor.
The optional keys in the dictionary are *offsets* and *titles* and
their values must each be lists of the same length as the *names*
and *formats* lists. The *offsets* value is a list of byte offsets
(integers) for each field, while the *titles* value is a list of
titles for each field (:const:`None` can be used if no title is
desired for that field). The *titles* can be any :class:`string`
or :class:`unicode` object and will add another entry to the
fields dictionary keyed by the title and referencing the same
field tuple which will contain the title as an additional tuple
member.
.. admonition:: Example
Data type with fields ``r``, ``g``, ``b``, ``a``, each being
a 8-bit unsigned integer:
>>> dt = np.dtype({'names': ['r','g','b','a'],
... 'formats': [uint8, uint8, uint8, uint8]})
Data type with fields ``r`` and ``b`` (with the given titles),
both being 8-bit unsigned integers, the first at byte position
0 from the start of the field and the second at position 2:
>>> dt = np.dtype({'names': ['r','b'], 'formats': ['u1', 'u1'],
... 'offsets': [0, 2],
... 'titles': ['Red pixel', 'Blue pixel']})
``{'field1': ..., 'field2': ..., ...}``
This style allows passing in the :attr:`fields <dtype.fields>`
attribute of a data-type object.
*obj* should contain string or unicode keys that refer to
``(data-type, offset)`` or ``(data-type, offset, title)`` tuples.
.. admonition:: Example
Data type containing field ``col1`` (10-character string at
byte position 0), ``col2`` (32-bit float at byte position 10),
and ``col3`` (integers at byte position 14):
>>> dt = np.dtype({'col1': ('S10', 0), 'col2': (float32, 10),
'col3': (int, 14)})
:class:`dtype`
==============
Numpy data type descriptions are instances of the :class:`dtype` class.
Attributes
----------
The type of the data is described by the following :class:`dtype` attributes:
.. autosummary::
:toctree: generated/
dtype.type
dtype.kind
dtype.char
dtype.num
dtype.str
Size of the data is in turn described by:
.. autosummary::
:toctree: generated/
dtype.name
dtype.itemsize
Endianness of this data:
.. autosummary::
:toctree: generated/
dtype.byteorder
Information about sub-data-types in a :term:`record`:
.. autosummary::
:toctree: generated/
dtype.fields
dtype.names
For data types that describe sub-arrays:
.. autosummary::
:toctree: generated/
dtype.subdtype
dtype.shape
Attributes providing additional information:
.. autosummary::
:toctree: generated/
dtype.hasobject
dtype.flags
dtype.isbuiltin
dtype.isnative
dtype.descr
dtype.alignment
Methods
-------
Data types have the following method for changing the byte order:
.. autosummary::
:toctree: generated/
dtype.newbyteorder
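A quick sketch of what changing the byte order means for the underlying
bytes:

```python
import numpy as np

dt = np.dtype('<i4')          # explicitly little-endian 32-bit integer
swapped = dt.newbyteorder()   # the same type with the opposite byte order

x = np.array([1], dtype=dt)
# Re-reading the same four bytes (01 00 00 00) in big-endian order
# gives a different value:
y = x.view(swapped)
print(int(y[0]))              # 16777216 == 0x01000000
```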
The following methods implement the pickle protocol:
.. autosummary::
:toctree: generated/
dtype.__reduce__
dtype.__setstate__
.. _arrays.indexing:
Indexing
========
.. sectionauthor:: adapted from "Guide to Numpy" by Travis E. Oliphant
.. currentmodule:: numpy
.. index:: indexing, slicing
:class:`ndarrays <ndarray>` can be indexed using the standard Python
``x[obj]`` syntax, where *x* is the array and *obj* the selection.
There are three kinds of indexing available: record access, basic
slicing, advanced indexing. Which one occurs depends on *obj*.
.. note::
In Python, ``x[(exp1, exp2, ..., expN)]`` is equivalent to
``x[exp1, exp2, ..., expN]``; the latter is just syntactic sugar
for the former.
Basic Slicing
-------------
Basic slicing extends Python's basic concept of slicing to N
dimensions. Basic slicing occurs when *obj* is a :class:`slice` object
(constructed by ``start:stop:step`` notation inside of brackets), an
integer, or a tuple of slice objects and integers. :const:`Ellipsis`
and :const:`newaxis` objects can be interspersed with these as
well. In order to remain backward compatible with a common usage in
Numeric, basic slicing is also initiated if the selection object is
any sequence (such as a :class:`list`) containing :class:`slice`
objects, the :const:`Ellipsis` object, or the :const:`newaxis` object,
but no integer arrays or other embedded sequences.
.. index::
triple: ndarray; special methods; getslice
triple: ndarray; special methods; setslice
single: ellipsis
single: newaxis
The simplest case of indexing with *N* integers returns an :ref:`array
scalar <arrays.scalars>` representing the corresponding item. As in
Python, all indices are zero-based: for the *i*-th index :math:`n_i`,
the valid range is :math:`0 \le n_i < d_i` where :math:`d_i` is the
*i*-th element of the shape of the array. Negative indices are
interpreted as counting from the end of the array (*i.e.*, if
:math:`n_i < 0`, it means :math:`n_i + d_i`).
All arrays generated by basic slicing are always :term:`views <view>`
of the original array.
The standard rules of sequence slicing apply to basic slicing on a
per-dimension basis (including using a step index). Some useful
concepts to remember include:
- The basic slice syntax is ``i:j:k`` where *i* is the starting index,
*j* is the stopping index, and *k* is the step (:math:`k\neq0`).
This selects the *m* elements (in the corresponding dimension) with
index values *i*, *i + k*, ..., *i + (m - 1) k* where
:math:`m = q + (r\neq0)` and *q* and *r* are the quotient and remainder
obtained by dividing *j - i* by *k*: *j - i = q k + r*, so that
*i + (m - 1) k < j*.
.. admonition:: Example
>>> x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> x[1:7:2]
array([1, 3, 5])
- Negative *i* and *j* are interpreted as *n + i* and *n + j* where
*n* is the number of elements in the corresponding dimension.
Negative *k* makes stepping go towards smaller indices.
.. admonition:: Example
>>> x[-2:10]
array([8, 9])
>>> x[-3:3:-1]
array([7, 6, 5, 4])
- Assume *n* is the number of elements in the dimension being
sliced. Then, if *i* is not given it defaults to 0 for *k > 0* and
*n* for *k < 0* . If *j* is not given it defaults to *n* for *k > 0*
and -1 for *k < 0* . If *k* is not given it defaults to 1. Note that
``::`` is the same as ``:`` and means select all indices along this
axis.
.. admonition:: Example
>>> x[5:]
array([5, 6, 7, 8, 9])
- If the number of objects in the selection tuple is less than
*N* , then ``:`` is assumed for any subsequent dimensions.
.. admonition:: Example
>>> x = np.array([[[1],[2],[3]], [[4],[5],[6]]])
>>> x.shape
(2, 3, 1)
>>> x[1:2]
array([[[4],
[5],
[6]]])
- :const:`Ellipsis` expands to the number of ``:`` objects needed to
make a selection tuple of the same length as ``x.ndim``. Only the
first ellipsis is expanded; any others are interpreted as ``:``.
.. admonition:: Example
>>> x[...,0]
array([[1, 2, 3],
[4, 5, 6]])
- Each :const:`newaxis` object in the selection tuple serves to expand
the dimensions of the resulting selection by one unit-length
dimension. The added dimension is the position of the :const:`newaxis`
object in the selection tuple.
.. admonition:: Example
>>> x[:,np.newaxis,:,:].shape
(2, 1, 3, 1)
- An integer, *i*, returns the same values as ``i:i+1``
**except** the dimensionality of the returned object is reduced by
1. In particular, a selection tuple with the *p*-th
element an integer (and all other entries ``:``) returns the
corresponding sub-array with dimension *N - 1*. If *N = 1*
then the returned object is an array scalar. These objects are
explained in :ref:`arrays.scalars`.
- If the selection tuple has all entries ``:`` except the
*p*-th entry which is a slice object ``i:j:k``,
then the returned array has dimension *N* formed by
concatenating the sub-arrays returned by integer indexing of
elements *i*, *i+k*, ..., *i + (m - 1) k < j*.
- Basic slicing with more than one non-``:`` entry in the slicing
tuple, acts like repeated application of slicing using a single
non-``:`` entry, where the non-``:`` entries are successively taken
(with all other non-``:`` entries replaced by ``:``). Thus,
``x[ind1,...,ind2,:]`` acts like ``x[ind1][...,ind2,:]`` under basic
slicing.
.. warning:: The above is **not** true for advanced slicing.
- You may use slicing to set values in the array, but (unlike lists) you
can never grow the array. The size of the value to be set in
``x[obj] = value`` must be (broadcastable) to the same shape as
``x[obj]``.
.. index::
pair: ndarray; view
.. note::
Remember that a slicing tuple can always be constructed as *obj*
and used in the ``x[obj]`` notation. Slice objects can be used in
the construction in place of the ``[start:stop:step]``
notation. For example, ``x[1:10:5,::-1]`` can also be implemented
as ``obj = (slice(1,10,5), slice(None,None,-1)); x[obj]`` . This
can be useful for constructing generic code that works on arrays
of arbitrary dimension.
.. data:: newaxis
The :const:`newaxis` object can be used in the basic slicing syntax
discussed above. :const:`None` can also be used instead of
:const:`newaxis`.
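For example:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

# newaxis inserts a unit-length dimension at its position in the tuple.
print(x[:, np.newaxis, :].shape)  # (2, 1, 3)

# None is an exact synonym:
print(x[None, :, :].shape)        # (1, 2, 3)
```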
Advanced indexing
-----------------
Advanced indexing is triggered when the selection object, *obj*, is a
non-tuple sequence object, an :class:`ndarray` (of data type integer or bool),
or a tuple with at least one sequence object or ndarray (of data type
integer or bool). There are two types of advanced indexing: integer
and Boolean.
Advanced indexing always returns a *copy* of the data (contrast with
basic slicing that returns a :term:`view`).
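The copy-versus-view distinction is easy to check by writing through the
result:

```python
import numpy as np

x = np.arange(10)

s = x[2:5]         # basic slicing: s is a view of x
s[0] = 99
print(x[2])        # 99 -- the write is visible in x

f = x[[5, 6, 7]]   # advanced indexing: f is a copy
f[0] = -1
print(x[5])        # 5 -- x is unaffected
```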
Integer
^^^^^^^
Integer indexing allows selection of arbitrary items in the array
based on their *N*-dimensional index. This kind of selection occurs
when advanced indexing is triggered and the selection object is not
an array of data type bool. For the discussion below, when the
selection object is not a tuple, it will be referred to as if it had
been promoted to a 1-tuple, which will be called the selection
tuple. The rules of advanced integer-style indexing are:
- If the length of the selection tuple is larger than *N* an error is raised.
- All sequences and scalars in the selection tuple are converted to
:class:`intp` indexing arrays.
- All selection tuple objects must be convertible to :class:`intp`
arrays, :class:`slice` objects, or the :const:`Ellipsis` object.
- The first :const:`Ellipsis` object will be expanded, and any other
:const:`Ellipsis` objects will be treated as full slice (``:``)
objects. The expanded :const:`Ellipsis` object is replaced with as
many full slice (``:``) objects as needed to make the length of the
selection tuple :math:`N`.
- If the selection tuple is smaller than *N*, then as many ``:``
objects as needed are added to the end of the selection tuple so
that the modified selection tuple has length *N*.
- All the integer indexing arrays must be :ref:`broadcastable
<arrays.broadcasting.broadcastable>` to the same shape.
- The shape of the output (or the needed shape of the object to be used
for setting) is the broadcasted shape.
- After expanding any ellipses and filling out any missing ``:``
objects in the selection tuple, then let :math:`N_t` be the number
of indexing arrays, and let :math:`N_s = N - N_t` be the number of
slice objects. Note that :math:`N_t > 0` (or we wouldn't be doing
advanced integer indexing).
- If :math:`N_s = 0` then the *M*-dimensional result is constructed by
varying the index tuple ``(i_1, ..., i_M)`` over the range
of the result shape and for each value of the index tuple
``(ind_1, ..., ind_M)``::
result[i_1, ..., i_M] == x[ind_1[i_1, ..., i_M], ind_2[i_1, ..., i_M],
..., ind_N[i_1, ..., i_M]]
.. admonition:: Example
Suppose the shape of the broadcasted indexing arrays is 3-dimensional
and *N* is 2. Then the result is found by letting *i, j, k* run over
the shape found by broadcasting ``ind_1`` and ``ind_2``, and each
*i, j, k* yields::
result[i,j,k] = x[ind_1[i,j,k], ind_2[i,j,k]]
- If :math:`N_s > 0`, then partial indexing is done. This can be
somewhat mind-boggling to understand, but if you think in terms of
the shapes of the arrays involved, it can be easier to grasp what
happens. In simple cases (*i.e.* one indexing array and *N - 1* slice
objects) it does exactly what you would expect (concatenation of
repeated application of basic slicing). The rule for partial
indexing is that the shape of the result (or the interpreted shape
of the object to be used in setting) is the shape of *x* with the
indexed subspace replaced with the broadcasted indexing subspace. If
the index subspaces are right next to each other, then the
broadcasted indexing space directly replaces all of the indexed
subspaces in *x*. If the indexing subspaces are separated (by slice
objects), then the broadcasted indexing space is first, followed by
the sliced subspace of *x*.
.. admonition:: Example
Suppose ``x.shape`` is (10,20,30) and ``ind`` is a (2,3,4)-shaped
indexing :class:`intp` array, then ``result = x[...,ind,:]`` has
shape (10,2,3,4,30) because the (20,)-shaped subspace has been
replaced with a (2,3,4)-shaped broadcasted indexing subspace. If
we let *i, j, k* loop over the (2,3,4)-shaped subspace then
``result[...,i,j,k,:] = x[...,ind[i,j,k],:]``. This example
produces the same result as :meth:`x.take(ind, axis=-2) <ndarray.take>`.
.. admonition:: Example
Now let ``x.shape`` be (10,20,30,40,50) and suppose ``ind_1``
and ``ind_2`` are broadcastable to the shape (2,3,4). Then
``x[:,ind_1,ind_2]`` has shape (10,2,3,4,40,50) because the
(20,30)-shaped subspace from X has been replaced with the
(2,3,4) subspace from the indices. However,
``x[:,ind_1,:,ind_2]`` has shape (2,3,4,10,30,50) because there
is no unambiguous place to drop in the indexing subspace, thus
it is tacked on to the beginning. It is always possible to use
:meth:`.transpose() <ndarray.transpose>` to move the subspace
anywhere desired. (Note that this example cannot be replicated
using :func:`take`.)
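The subspace-placement rules above can be checked directly by looking at result shapes (a quick sketch; the arrays hold arbitrary zeros, since only the shapes matter here):

```python
import numpy as np

x = np.zeros((10, 20, 30, 40, 50), dtype=np.int8)
ind_1 = np.zeros((2, 3, 4), dtype=np.intp)
ind_2 = np.zeros((2, 3, 4), dtype=np.intp)

# Adjacent index arrays: the (2,3,4) subspace replaces (20,30) in place.
print(x[:, ind_1, ind_2].shape)     # (10, 2, 3, 4, 40, 50)

# Separated index arrays: the broadcasted subspace moves to the front.
print(x[:, ind_1, :, ind_2].shape)  # (2, 3, 4, 10, 30, 50)

# .transpose() can move the subspace back where it is wanted.
y = x[:, ind_1, :, ind_2].transpose(3, 0, 1, 2, 4, 5)
print(y.shape)                      # (10, 2, 3, 4, 30, 50)
```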
Boolean
^^^^^^^
This advanced indexing occurs when obj is an array object of Boolean
type (such as may be returned from comparison operators). It is always
equivalent to (but faster than) ``x[obj.nonzero()]`` where, as
described above, :meth:`obj.nonzero() <ndarray.nonzero>` returns a
tuple (of length :attr:`obj.ndim <ndarray.ndim>`) of integer index
arrays showing the :const:`True` elements of *obj*.
The special case when ``obj.ndim == x.ndim`` is worth mentioning. In
this case ``x[obj]`` returns a 1-dimensional array filled with the
elements of *x* corresponding to the :const:`True` values of *obj*.
The search order will be C-style (last index varies the fastest). If
*obj* has :const:`True` values at entries that are outside of the
bounds of *x*, then an index error will be raised.
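A short sketch of Boolean indexing and its ``nonzero`` equivalence:

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])
mask = x > 3                    # Boolean array with the same shape as x

# x[mask] is equivalent to (but faster than) x[mask.nonzero()]:
print(x[mask])                  # [4 5 6]
print(x[mask.nonzero()])        # [4 5 6]

# The result is always 1-dimensional, read out in C order:
print(x[mask].shape)            # (3,)
```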
You can also use Boolean arrays as elements of the selection tuple. In
such instances, they will always be interpreted as :meth:`nonzero(obj)
<ndarray.nonzero>` and the equivalent integer indexing will be
done.
.. warning::
The definition of advanced indexing means that ``x[(1,2,3),]`` is
fundamentally different from ``x[(1,2,3)]``. The latter is
equivalent to ``x[1,2,3]``, which will trigger basic selection, while
the former will trigger advanced indexing. Be sure to understand
why this occurs.
Also recognize that ``x[[1,2,3]]`` will trigger advanced indexing,
whereas ``x[[1,2,slice(None)]]`` will trigger basic slicing.
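The tuple-versus-sequence distinction is easiest to see by comparing result shapes (a sketch):

```python
import numpy as np

x = np.arange(60).reshape(5, 3, 4)

# A bare tuple is unpacked into one index per dimension (basic selection):
print(x[(1, 2, 3)])           # identical to x[1, 2, 3], namely 23

# A one-element tuple holding a sequence triggers advanced indexing:
print(x[(1, 2, 3),].shape)    # (3, 3, 4) -- rows 1, 2 and 3 of x

# A plain list likewise triggers advanced indexing:
print(x[[1, 2, 3]].shape)     # (3, 3, 4)
```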
.. _arrays.indexing.rec:
Record Access
-------------
.. seealso:: :ref:`arrays.dtypes`, :ref:`arrays.scalars`
If the :class:`ndarray` object is a record array, *i.e.* its data type
is a :term:`record` data type, the :term:`fields <field>` of the array
can be accessed by indexing the array with strings, dictionary-like.
Indexing ``x['field-name']`` returns a new :term:`view` to the array,
which is of the same shape as *x* (except when the field is a
sub-array) but of data type ``x.dtype['field-name']`` and contains
only the part of the data in the specified field. Also record array
scalars can be "indexed" this way.
If the accessed field is a sub-array, the dimensions of the sub-array
are appended to the shape of the result.
.. admonition:: Example
>>> x = np.zeros((2,2), dtype=[('a', np.int32), ('b', np.float64, (3,3))])
>>> x['a'].shape
(2, 2)
>>> x['a'].dtype
dtype('int32')
>>> x['b'].shape
(2, 2, 3, 3)
>>> x['b'].dtype
dtype('float64')
Flat Iterator indexing
----------------------
:attr:`x.flat <ndarray.flat>` returns an iterator that will iterate
over the entire array (in C-contiguous style with the last index
varying the fastest). This iterator object can also be indexed using
basic slicing or advanced indexing as long as the selection object is
not a tuple. This should be clear from the fact that :attr:`x.flat
<ndarray.flat>` is a 1-dimensional view. It can be used for integer
indexing with 1-dimensional C-style-flat indices. The shape of any
returned array is therefore the shape of the integer indexing object.
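For instance (a small sketch):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

# x.flat iterates in C order (last index varies the fastest):
print(list(x.flat))           # [0, 1, 2, 3, 4, 5]
print(x.flat[4])              # 4  (the element at row 1, column 1)

# Indexing x.flat with an array returns an array shaped like the index:
ind = np.array([[0, 5], [2, 3]])
print(x.flat[ind])            # [[0 5]
                              #  [2 3]]

# x.flat refers back to x, so assignment through it modifies x:
x.flat[0] = 10
print(x[0, 0])                # 10
```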
.. index::
single: indexing
single: ndarray
.. index::
pair: array; interface
pair: array; protocol
.. _arrays.interface:
*******************
The Array Interface
*******************
.. note::
This page describes the numpy-specific API for accessing the contents of
a numpy array from other C extensions. :pep:`3118` --
:cfunc:`The Revised Buffer Protocol <PyObject_GetBuffer>` introduces
similar, standardized API to Python 2.6 and 3.0 for any extension
module to use. Cython__'s buffer array support
uses the :pep:`3118` API; see the `Cython numpy
tutorial`__. Cython provides a way to write code that supports the buffer
protocol with Python versions older than 2.6 because it has a
backward-compatible implementation utilizing the legacy array interface
described here.
__ http://cython.org/
__ http://wiki.cython.org/tutorials/numpy
:version: 3
The array interface (sometimes called array protocol) was created in
2005 as a means for array-like Python objects to re-use each other's
data buffers intelligently whenever possible. The homogeneous
N-dimensional array interface is a default mechanism for objects to
share N-dimensional array memory and information. The interface
consists of a Python-side and a C-side using two attributes. Objects
wishing to be considered an N-dimensional array in application code
should support at least one of these attributes. Objects wishing to
support an N-dimensional array in application code should look for at
least one of these attributes and use the information provided
appropriately.
This interface describes homogeneous arrays in the sense that each
item of the array has the same "type". This type can be very simple
or it can be a quite arbitrary and complicated C-like structure.
There are two ways to use the interface: A Python side and a C-side.
Both are separate attributes.
Python side
===========
This approach to the interface consists of the object having an
:data:`__array_interface__` attribute.
.. data:: __array_interface__
A dictionary of items (3 required and 5 optional). The optional
keys in the dictionary have implied defaults if they are not
provided.
The keys are:
**shape** (required)
Tuple whose elements are the array size in each dimension. Each
entry is an integer (a Python int or long). Note that these
integers could be larger than the platform "int" or "long"
could hold (a Python int is a C long). It is up to the code
using this attribute to handle this appropriately; either by
raising an error when overflow is possible, or by using
:cdata:`Py_LONG_LONG` as the C type for the shapes.
**typestr** (required)
A string providing the basic type of the homogeneous array. The
basic string format consists of 3 parts: a character describing
the byteorder of the data (``<``: little-endian, ``>``:
big-endian, ``|``: not-relevant), a character code giving the
basic type of the array, and an integer providing the number of
bytes the type uses.
The basic type character codes are:
===== ================================================================
``t`` Bit field (following integer gives the number of
bits in the bit field).
``b`` Boolean (integer type where all values are only True or False)
``i`` Integer
``u`` Unsigned integer
``f`` Floating point
``c`` Complex floating point
``O`` Object (i.e. the memory contains a pointer to :ctype:`PyObject`)
``S`` String (fixed-length sequence of char)
``U`` Unicode (fixed-length sequence of :ctype:`Py_UNICODE`)
``V`` Other (void \* -- each item is a fixed-size chunk of memory)
===== ================================================================
**descr** (optional)
A list of tuples providing a more detailed description of the
memory layout for each item in the homogeneous array. Each
tuple in the list has two or three elements. Normally, this
attribute would be used when *typestr* is ``V[0-9]+``, but this is
not a requirement. The only requirement is that the number of
bytes represented in the *typestr* key is the same as the total
number of bytes represented here. The idea is to support
descriptions of C-like structs (records) that make up array
elements. The elements of each tuple in the list are
1. A string providing a name associated with this portion of
the record. This could also be a tuple of ``('full name',
'basic_name')`` where basic name would be a valid Python
variable name representing the full name of the field.
2. Either a basic-type description string as in *typestr* or
another list (for nested records)
3. An optional shape tuple providing how many times this part
of the record should be repeated. No repeats are assumed
if this is not given. Very complicated structures can be
described using this generic interface. Notice, however,
that each element of the array is still of the same
data-type. Some examples of using this interface are given
below.
**Default**: ``[('', typestr)]``
**data** (optional)
A 2-tuple whose first argument is an integer (a long integer
if necessary) that points to the data-area storing the array
contents. This pointer must point to the first element of
data (in other words any offset is always ignored in this
case). The second entry in the tuple is a read-only flag (true
means the data area is read-only).
This attribute can also be an object exposing the
:cfunc:`buffer interface <PyObject_AsCharBuffer>` which
will be used to share the data. If this key is not present (or
returns :class:`None`), then memory sharing will be done
through the buffer interface of the object itself. In this
case, the offset key can be used to indicate the start of the
buffer. A reference to the object exposing the array interface
must be stored by the new object if the memory area is to be
secured.
**Default**: :const:`None`
**strides** (optional)
Either :const:`None` to indicate a C-style contiguous array or
a Tuple of strides which provides the number of bytes needed
to jump to the next array element in the corresponding
dimension. Each entry must be an integer (a Python
:const:`int` or :const:`long`). As with shape, the values may
be larger than can be represented by a C "int" or "long"; the
calling code should handle this appropriately, either by
raising an error, or by using :ctype:`Py_LONG_LONG` in C. The
default is :const:`None` which implies a C-style contiguous
memory buffer. In this model, the last dimension of the array
varies the fastest. For example, the default strides tuple
for an object whose array entries are 8 bytes long and whose
shape is (10,20,30) would be (4800, 240, 8).
**Default**: :const:`None` (C-style contiguous)
**mask** (optional)
:const:`None` or an object exposing the array interface. All
elements of the mask array should be interpreted only as true
or not true indicating which elements of this array are valid.
The shape of this object should be :ref:`"broadcastable"
<arrays.broadcasting.broadcastable>` to the shape of the
original array.
**Default**: :const:`None` (All array values are valid)
**offset** (optional)
An integer offset into the array data region. This can only be
used when data is :const:`None` or returns a :class:`buffer`
object.
**Default**: 0.
**version** (required)
An integer showing the version of the interface (i.e. 3 for
this version). Be careful not to use this to invalidate
objects exposing future versions of the interface.
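The Python side above can be sketched with a minimal producer class. The class name and the trick of borrowing the data buffer from an existing ndarray are illustrative assumptions, not part of the protocol itself:

```python
import numpy as np

class Exposer:
    """A minimal object exposing __array_interface__ (version 3)."""
    def __init__(self, arr):
        self._arr = arr  # keep a reference so the memory stays alive
        self.__array_interface__ = {
            'shape':   arr.shape,
            'typestr': arr.dtype.str,             # e.g. '<f8'
            'data':    (arr.ctypes.data, False),  # (pointer, read-only flag)
            'strides': None,                      # C-style contiguous
            'version': 3,
        }

a = np.arange(6, dtype=np.float64).reshape(2, 3)
b = np.asarray(Exposer(a))   # consumes the interface without copying
b[0, 0] = 99.0
print(a[0, 0])               # 99.0 -- both arrays share one buffer
```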
C-struct access
===============
This approach to the array interface allows for faster access to an
array using only one attribute lookup and a well-defined C-structure.
.. cvar:: __array_struct__
A :ctype:`PyCObject` whose :cdata:`voidptr` member contains a
pointer to a filled :ctype:`PyArrayInterface` structure. Memory
for the structure is dynamically created and the :ctype:`PyCObject`
is also created with an appropriate destructor so the retriever of
this attribute simply has to apply :cfunc:`Py_DECREF()` to the
object returned by this attribute when it is finished. Also,
either the data needs to be copied out, or a reference to the
object exposing this attribute must be held to ensure the data is
not freed. Objects exposing the :obj:`__array_struct__` interface
must also not reallocate their memory if other objects are
referencing them.
The PyArrayInterface structure is defined in ``numpy/ndarrayobject.h``
as::
typedef struct {
int two; /* contains the integer 2 -- simple sanity check */
int nd; /* number of dimensions */
char typekind; /* kind in array --- character code of typestr */
int itemsize; /* size of each element */
int flags; /* flags indicating how the data should be interpreted */
/* must set ARR_HAS_DESCR bit to validate descr */
Py_intptr_t *shape; /* A length-nd array of shape information */
Py_intptr_t *strides; /* A length-nd array of stride information */
void *data; /* A pointer to the first element of the array */
PyObject *descr; /* NULL or data-description (same as descr key
of __array_interface__) -- must set ARR_HAS_DESCR
flag or this will be ignored. */
} PyArrayInterface;
The flags member may consist of 5 bits showing how the data should be
interpreted and one bit showing how the Interface should be
interpreted. The data-bits are :const:`CONTIGUOUS` (0x1),
:const:`FORTRAN` (0x2), :const:`ALIGNED` (0x100), :const:`NOTSWAPPED`
(0x200), and :const:`WRITEABLE` (0x400). A final flag
:const:`ARR_HAS_DESCR` (0x800) indicates whether or not this structure
has the arrdescr field. The field should not be accessed unless this
flag is present.
.. admonition:: New since June 16, 2006:
In the past most implementations used the "desc" member of the
:ctype:`PyCObject` itself (do not confuse this with the "descr" member of
the :ctype:`PyArrayInterface` structure above --- they are two separate
things) to hold the pointer to the object exposing the interface.
This is now an explicit part of the interface. Be sure to own a
reference to the object when the :ctype:`PyCObject` is created using
:ctype:`PyCObject_FromVoidPtrAndDesc`.
Type description examples
=========================
For clarity it is useful to provide some examples of the type
description and corresponding :data:`__array_interface__` 'descr'
entries. Thanks to Scott Gilbert for these examples:
In every case, the 'descr' key is optional, but of course provides
more information which may be important for various applications::
* Float data
typestr == '>f4'
descr == [('','>f4')]
* Complex double
typestr == '>c8'
descr == [('real','>f4'), ('imag','>f4')]
* RGB Pixel data
typestr == '|V3'
descr == [('r','|u1'), ('g','|u1'), ('b','|u1')]
* Mixed endian (weird but could happen).
typestr == '|V8' (or '>u8')
descr == [('big','>i4'), ('little','<i4')]
* Nested structure
struct {
int ival;
struct {
unsigned short sval;
unsigned char bval;
unsigned char cval;
} sub;
}
typestr == '|V8' (or '<u8' if you want)
descr == [('ival','<i4'), ('sub', [('sval','<u2'), ('bval','|u1'), ('cval','|u1') ]) ]
* Nested array
struct {
int ival;
double data[16*4];
}
typestr == '|V516'
descr == [('ival','>i4'), ('data','>f8',(16,4))]
* Padded structure
struct {
int ival;
double dval;
}
typestr == '|V16'
descr == [('ival','>i4'),('','|V4'),('dval','>f8')]
It should be clear that any record type could be described using this interface.
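These 'descr' lists map directly onto NumPy data-type objects, which makes the byte counts easy to verify (a sketch; the padded case uses the offsets form of the :class:`dtype` constructor):

```python
import numpy as np

# Nested structure: '|V8'
nested = np.dtype([('ival', '<i4'),
                   ('sub', [('sval', '<u2'), ('bval', '|u1'), ('cval', '|u1')])])
print(nested.itemsize)   # 8

# Nested array: '|V516'
arr = np.dtype([('ival', '>i4'), ('data', '>f8', (16, 4))])
print(arr.itemsize)      # 516 = 4 + 8*16*4

# Padded structure: '|V16', with 4 unnamed pad bytes after ival
padded = np.dtype({'names':   ['ival', 'dval'],
                   'formats': ['>i4', '>f8'],
                   'offsets': [0, 8],
                   'itemsize': 16})
print(padded.itemsize)   # 16
```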
Differences with Array interface (Version 2)
============================================
The version 2 interface was very similar. The differences were
largely aesthetic. In particular:
1. The PyArrayInterface structure had no descr member at the end
(and therefore no flag ARR_HAS_DESCR)
2. The desc member of the PyCObject returned from __array_struct__ was
not specified. Usually, it was the object exposing the array (so
that a reference to it could be kept and destroyed when the
C-object was destroyed). Now it must be a tuple whose first
element is a string with "PyArrayInterface Version #" and whose
second element is the object exposing the array.
3. The tuple returned from __array_interface__['data'] used to be a
hex-string (now it is an integer or a long integer).
4. There was no __array_interface__ attribute instead all of the keys
(except for version) in the __array_interface__ dictionary were
their own attribute: Thus to obtain the Python-side information you
had to access separately the attributes:
* __array_data__
* __array_shape__
* __array_strides__
* __array_typestr__
* __array_descr__
* __array_offset__
* __array_mask__
.. _arrays.ndarray:
******************************************
The N-dimensional array (:class:`ndarray`)
******************************************
.. currentmodule:: numpy
An :class:`ndarray` is a (usually fixed-size) multidimensional
container of items of the same type and size. The number of dimensions
and items in an array is defined by its :attr:`shape <ndarray.shape>`,
which is a :class:`tuple` of *N* positive integers that specify the
sizes of each dimension. The type of items in the array is specified by
a separate :ref:`data-type object (dtype) <arrays.dtypes>`, one of which
is associated with each ndarray.
As with other container objects in Python, the contents of an
:class:`ndarray` can be accessed and modified by :ref:`indexing or
slicing <arrays.indexing>` the array (using, for example, *N* integers),
and via the methods and attributes of the :class:`ndarray`.
.. index:: view, base
Different :class:`ndarrays <ndarray>` can share the same data, so that
changes made in one :class:`ndarray` may be visible in another. That
is, an ndarray can be a *"view"* to another ndarray, and the data it
is referring to is taken care of by the *"base"* ndarray. ndarrays can
also be views to memory owned by Python :class:`strings <str>` or
objects implementing the :class:`buffer` or :ref:`array
<arrays.interface>` interfaces.
.. admonition:: Example
A 2-dimensional array of size 2 x 3, composed of 4-byte integer
elements:
>>> x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
>>> type(x)
<type 'numpy.ndarray'>
>>> x.shape
(2, 3)
>>> x.dtype
dtype('int32')
The array can be indexed using Python container-like syntax:
>>> # The element of x in the *second* row, *third* column, namely, 6.
>>> x[1, 2]
6
For example :ref:`slicing <arrays.indexing>` can produce views of
the array:
>>> y = x[:,1]
>>> y
array([2, 5])
>>> y[0] = 9 # this also changes the corresponding element in x
>>> y
array([9, 5])
>>> x
array([[1, 9, 3],
[4, 5, 6]])
Constructing arrays
===================
New arrays can be constructed using the routines detailed in
:ref:`routines.array-creation`, and also by using the low-level
:class:`ndarray` constructor:
.. autosummary::
:toctree: generated/
ndarray
.. _arrays.ndarray.indexing:
Indexing arrays
===============
Arrays can be indexed using an extended Python slicing syntax,
``array[selection]``. Similar syntax is also used for accessing
fields in a :ref:`record array <arrays.dtypes>`.
.. seealso:: :ref:`Array Indexing <arrays.indexing>`.
Internal memory layout of an ndarray
====================================
An instance of class :class:`ndarray` consists of a contiguous
one-dimensional segment of computer memory (owned by the array, or by
some other object), combined with an indexing scheme that maps *N*
integers into the location of an item in the block. The ranges in
which the indices can vary are specified by the :obj:`shape
<ndarray.shape>` of the array. How many bytes each item takes and how
the bytes are interpreted is defined by the :ref:`data-type object
<arrays.dtypes>` associated with the array.
.. index:: C-order, Fortran-order, row-major, column-major, stride,
offset
A segment of memory is inherently 1-dimensional, and there are many
different schemes for arranging the items of an *N*-dimensional array
in a 1-dimensional block. Numpy is flexible, and :class:`ndarray`
objects can accommodate any *strided indexing scheme*. In a strided
scheme, the N-dimensional index :math:`(n_0, n_1, ..., n_{N-1})`
corresponds to the offset (in bytes):
.. math:: n_{\mathrm{offset}} = \sum_{k=0}^{N-1} s_k n_k
from the beginning of the memory block associated with the
array. Here, :math:`s_k` are integers which specify the :obj:`strides
<ndarray.strides>` of the array. The :term:`column-major` order (used,
for example, in the Fortran language and in *Matlab*) and
:term:`row-major` order (used in C) schemes are just specific kinds of
strided scheme, and correspond to the strides:
.. math::
   s_k^{\mathrm{column}} = \mathrm{itemsize} \prod_{j=0}^{k-1} d_j ,
   \quad s_k^{\mathrm{row}} = \mathrm{itemsize} \prod_{j=k+1}^{N-1} d_j .
.. index:: single-segment, contiguous, non-contiguous
where :math:`d_j` `= self.shape[j]`.
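These formulas can be checked against the strides NumPy itself reports, here for an 8-byte data type and shape (10, 20, 30):

```python
import numpy as np

c = np.zeros((10, 20, 30), dtype=np.float64)             # row-major (C)
f = np.zeros((10, 20, 30), dtype=np.float64, order='F')  # column-major

print(c.strides)   # (4800, 240, 8)
print(f.strides)   # (8, 80, 1600)
```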
Both the C and Fortran orders are :term:`contiguous`, *i.e.,*
:term:`single-segment`, memory layouts, in which every part of the
memory block can be accessed by some combination of the indices.
Data in new :class:`ndarrays <ndarray>` is in the :term:`row-major`
(C) order, unless otherwise specified, but, for example, :ref:`basic
array slicing <arrays.indexing>` often produces :term:`views <view>`
in a different scheme.
.. seealso:: :ref:`Indexing <arrays.ndarray.indexing>`
.. note::
Several algorithms in NumPy work on arbitrarily strided arrays.
However, some algorithms require single-segment arrays. When an
irregularly strided array is passed in to such algorithms, a copy
is automatically made.
.. _arrays.ndarray.attributes:
Array attributes
================
Array attributes reflect information that is intrinsic to the array
itself. Generally, accessing an array through its attributes allows
you to get and sometimes set intrinsic properties of the array without
creating a new array. The exposed attributes are the core parts of an
array and only some of them can be reset meaningfully without creating
a new array. Information on each attribute is given below.
Memory layout
-------------
The following attributes contain information about the memory layout
of the array:
.. autosummary::
:toctree: generated/
ndarray.flags
ndarray.shape
ndarray.strides
ndarray.ndim
ndarray.data
ndarray.size
ndarray.itemsize
ndarray.nbytes
ndarray.base
Data type
---------
.. seealso:: :ref:`Data type objects <arrays.dtypes>`
The data type object associated with the array can be found in the
:attr:`dtype <ndarray.dtype>` attribute:
.. autosummary::
:toctree: generated/
ndarray.dtype
Other attributes
----------------
.. autosummary::
:toctree: generated/
ndarray.T
ndarray.real
ndarray.imag
ndarray.flat
ndarray.ctypes
__array_priority__
.. _arrays.ndarray.array-interface:
Array interface
---------------
.. seealso:: :ref:`arrays.interface`.
========================== ===================================
:obj:`__array_interface__` Python-side of the array interface
:obj:`__array_struct__` C-side of the array interface
========================== ===================================
:mod:`ctypes` foreign function interface
----------------------------------------
.. autosummary::
:toctree: generated/
ndarray.ctypes
.. _array.ndarray.methods:
Array methods
=============
An :class:`ndarray` object has many methods which operate on or with
the array in some fashion, typically returning an array result. These
methods are briefly explained below. (Each method's docstring has a
more complete description.)
For the following methods there are also corresponding functions in
:mod:`numpy`: :func:`all`, :func:`any`, :func:`argmax`,
:func:`argmin`, :func:`argsort`, :func:`choose`, :func:`clip`,
:func:`compress`, :func:`copy`, :func:`cumprod`, :func:`cumsum`,
:func:`diagonal`, :func:`imag`, :func:`max <amax>`, :func:`mean`,
:func:`min <amin>`, :func:`nonzero`, :func:`prod`, :func:`ptp`,
:func:`put`, :func:`ravel`, :func:`real`, :func:`repeat`,
:func:`reshape`, :func:`round <around>`, :func:`searchsorted`,
:func:`sort`, :func:`squeeze`, :func:`std`, :func:`sum`,
:func:`swapaxes`, :func:`take`, :func:`trace`, :func:`transpose`,
:func:`var`.
Array conversion
----------------
.. autosummary::
:toctree: generated/
ndarray.item
ndarray.tolist
ndarray.itemset
ndarray.setasflat
ndarray.tostring
ndarray.tofile
ndarray.dump
ndarray.dumps
ndarray.astype
ndarray.byteswap
ndarray.copy
ndarray.view
ndarray.getfield
ndarray.setflags
ndarray.fill
Shape manipulation
------------------
For reshape, resize, and transpose, the single tuple argument may be
replaced with ``n`` integers which will be interpreted as an n-tuple.
.. autosummary::
:toctree: generated/
ndarray.reshape
ndarray.resize
ndarray.transpose
ndarray.swapaxes
ndarray.flatten
ndarray.ravel
ndarray.squeeze
Item selection and manipulation
-------------------------------
For array methods that take an *axis* keyword, it defaults to
:const:`None`. If axis is *None*, then the array is treated as a 1-D
array. Any other value for *axis* represents the dimension along which
the operation should proceed.
.. autosummary::
:toctree: generated/
ndarray.take
ndarray.put
ndarray.repeat
ndarray.choose
ndarray.sort
ndarray.argsort
ndarray.searchsorted
ndarray.nonzero
ndarray.compress
ndarray.diagonal
Calculation
-----------
.. index:: axis
Many of these methods take an argument named *axis*. In such cases,
- If *axis* is *None* (the default), the array is treated as a 1-D
array and the operation is performed over the entire array. This
behavior is also the default if self is a 0-dimensional array or
array scalar. (An array scalar is an instance of the types/classes
float32, float64, etc., whereas a 0-dimensional array is an ndarray
instance containing precisely one array scalar.)
- If *axis* is an integer, then the operation is done over the given
axis (for each 1-D subarray that can be created along the given axis).
.. admonition:: Example of the *axis* argument
A 3-dimensional array of size 3 x 3 x 3, summed over each of its
three axes
>>> x
array([[[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8]],
[[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]],
[[18, 19, 20],
[21, 22, 23],
[24, 25, 26]]])
>>> x.sum(axis=0)
array([[27, 30, 33],
[36, 39, 42],
[45, 48, 51]])
>>> # for sum, axis is the first keyword, so we may omit it,
>>> # specifying only its value
>>> x.sum(0), x.sum(1), x.sum(2)
(array([[27, 30, 33],
[36, 39, 42],
[45, 48, 51]]),
array([[ 9, 12, 15],
[36, 39, 42],
[63, 66, 69]]),
array([[ 3, 12, 21],
[30, 39, 48],
[57, 66, 75]]))
The parameter *dtype* specifies the data type over which a reduction
operation (like summing) should take place. The default reduce data
type is the same as the data type of *self*. To avoid overflow, it can
be useful to perform the reduction using a larger data type.
For several methods, an optional *out* argument can also be provided
and the result will be placed into the output array given. The *out*
argument must be an :class:`ndarray` and have the same number of
elements. It can have a different data type in which case casting will
be performed.
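A small sketch of the *dtype* and *out* arguments to a reduction:

```python
import numpy as np

a = np.arange(1, 7, dtype=np.float32).reshape(2, 3)

# dtype selects the accumulator type for the reduction:
print(a.sum(dtype=np.float64))   # 21.0, accumulated in double precision

# out places the result into an existing array, casting if needed:
out = np.empty(3, dtype=np.float64)
a.sum(axis=0, out=out)
print(out)                       # [5. 7. 9.]
```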
.. autosummary::
:toctree: generated/
ndarray.argmax
ndarray.min
ndarray.argmin
ndarray.ptp
ndarray.clip
ndarray.conj
ndarray.round
ndarray.trace
ndarray.sum
ndarray.cumsum
ndarray.mean
ndarray.var
ndarray.std
ndarray.prod
ndarray.cumprod
ndarray.all
ndarray.any
Arithmetic and comparison operations
====================================
.. index:: comparison, arithmetic, operation, operator
Arithmetic and comparison operations on :class:`ndarrays <ndarray>`
are defined as element-wise operations, and generally yield
:class:`ndarray` objects as results.
Each of the arithmetic operations (``+``, ``-``, ``*``, ``/``, ``//``,
``%``, ``divmod()``, ``**`` or ``pow()``, ``<<``, ``>>``, ``&``,
``^``, ``|``, ``~``) and the comparisons (``==``, ``<``, ``>``,
``<=``, ``>=``, ``!=``) is equivalent to the corresponding
:term:`universal function` (or :term:`ufunc` for short) in Numpy. For
more information, see the section on :ref:`Universal Functions
<ufuncs>`.
Comparison operators:
.. autosummary::
:toctree: generated/
ndarray.__lt__
ndarray.__le__
ndarray.__gt__
ndarray.__ge__
ndarray.__eq__
ndarray.__ne__
Truth value of an array (:func:`bool()`):
.. autosummary::
:toctree: generated/
ndarray.__nonzero__
.. note::
Truth-value testing of an array invokes
:meth:`ndarray.__nonzero__`, which raises an error if the number of
elements in the array is larger than 1, because the truth value
of such arrays is ambiguous. Use :meth:`.any() <ndarray.any>` and
:meth:`.all() <ndarray.all>` instead to be clear about what is meant
in such cases. (If the number of elements is 0, the array evaluates
to ``False``.)
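For example (under Python 3 the special method is named ``__bool__``, but the behaviour is the same):

```python
import numpy as np

a = np.array([1, 0, 2])

try:
    bool(a)          # more than one element: ambiguous
except ValueError as err:
    print(err)

print(a.any())       # True  -- at least one element is nonzero
print(a.all())       # False -- not every element is nonzero

print(bool(np.array([7])))   # single-element arrays are unambiguous: True
```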
Unary operations:
.. autosummary::
:toctree: generated/
ndarray.__neg__
ndarray.__pos__
ndarray.__abs__
ndarray.__invert__
Arithmetic:
.. autosummary::
:toctree: generated/
ndarray.__add__
ndarray.__sub__
ndarray.__mul__
ndarray.__div__
ndarray.__truediv__
ndarray.__floordiv__
ndarray.__mod__
ndarray.__divmod__
ndarray.__pow__
ndarray.__lshift__
ndarray.__rshift__
ndarray.__and__
ndarray.__or__
ndarray.__xor__
.. note::
- Any third argument to :func:`pow()` is silently ignored,
as the underlying :func:`ufunc <power>` takes only two arguments.
- The three division operators are all defined; :obj:`div` is active
by default, :obj:`truediv` is active when
:obj:`__future__` division is in effect.
- Because :class:`ndarray` is a built-in type (written in C), the
``__r{op}__`` special methods are not directly defined.
- The functions called to implement many arithmetic special methods
for arrays can be modified using :func:`set_numeric_ops`.
Arithmetic, in-place:
.. autosummary::
:toctree: generated/
ndarray.__iadd__
ndarray.__isub__
ndarray.__imul__
ndarray.__idiv__
ndarray.__itruediv__
ndarray.__ifloordiv__
ndarray.__imod__
ndarray.__ipow__
ndarray.__ilshift__
ndarray.__irshift__
ndarray.__iand__
ndarray.__ior__
ndarray.__ixor__
.. warning::
In place operations will perform the calculation using the
precision decided by the data type of the two operands, but will
silently downcast the result (if necessary) so it can fit back into
the array. Therefore, for mixed precision calculations, ``A {op}=
B`` can be different from ``A = A {op} B``. For example, suppose
``a = ones((3,3))``. Then, ``a += 3j`` is different from ``a = a +
3j``: while they both perform the same computation, ``a += 3j``
casts the result to fit back in ``a``, whereas ``a = a + 3j``
re-binds the name ``a`` to the result.
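The dtype difference is easy to demonstrate with mixed-precision operands (a sketch):

```python
import numpy as np

a = np.ones(3, dtype=np.float32)
b = np.ones(3, dtype=np.float64)

a += b          # result silently cast back into a's float32 buffer
print(a.dtype)  # float32

c = np.ones(3, dtype=np.float32)
c = c + b       # re-binds c to a brand-new, promoted array
print(c.dtype)  # float64
```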
Special methods
===============
For standard library functions:
.. autosummary::
:toctree: generated/
ndarray.__copy__
ndarray.__deepcopy__
ndarray.__reduce__
ndarray.__setstate__
Basic customization:
.. autosummary::
:toctree: generated/
ndarray.__new__
ndarray.__array__
ndarray.__array_wrap__
Container customization: (see :ref:`Indexing <arrays.indexing>`)
.. autosummary::
:toctree: generated/
ndarray.__len__
ndarray.__getitem__
ndarray.__setitem__
ndarray.__getslice__
ndarray.__setslice__
ndarray.__contains__
Conversion; the operations :func:`complex()`, :func:`int()`,
:func:`long()`, :func:`float()`, :func:`oct()`, and
:func:`hex()`. They work only on arrays that have one element in them
and return the appropriate scalar.
.. autosummary::
:toctree: generated/
ndarray.__int__
ndarray.__long__
ndarray.__float__
ndarray.__oct__
ndarray.__hex__
String representations:
.. autosummary::
:toctree: generated/
ndarray.__str__
ndarray.__repr__
.. _arrays:
*************
Array objects
*************
.. currentmodule:: numpy
NumPy provides an N-dimensional array type, the :ref:`ndarray
<arrays.ndarray>`, which describes a collection of "items" of the same
type. The items can be :ref:`indexed <arrays.indexing>` using for
example N integers.
All ndarrays are :term:`homogeneous`: every item takes up the same size
block of memory, and all blocks are interpreted in exactly the same
way. How each item in the array is to be interpreted is specified by a
separate :ref:`data-type object <arrays.dtypes>`, one of which is associated
with every array. In addition to basic types (integers, floats,
*etc.*), the data type objects can also represent data structures.
An item extracted from an array, *e.g.*, by indexing, is represented
by a Python object whose type is one of the :ref:`array scalar types
<arrays.scalars>` built in Numpy. The array scalars allow easy
manipulation of more complicated arrangements of data as well.
.. figure:: figures/threefundamental.png
**Figure**
Conceptual diagram showing the relationship between the three
fundamental objects used to describe the data in an array: 1) the
ndarray itself, 2) the data-type object that describes the layout
of a single fixed-size element of the array, 3) the array-scalar
Python object that is returned when a single element of the array
is accessed.
.. toctree::
:maxdepth: 2
arrays.ndarray
arrays.scalars
arrays.dtypes
arrays.indexing
arrays.classes
maskedarray
arrays.interface
.. _arrays.scalars:
*******
Scalars
*******
.. currentmodule:: numpy
Python defines only one type of a particular data class (there is only
one integer type, one floating-point type, etc.). This can be
convenient in applications that don't need to be concerned with all
the ways data can be represented in a computer. For scientific
computing, however, more control is often needed.
In NumPy, there are 24 new fundamental Python types to describe
different types of scalars. These type descriptors are mostly based on
the types available in the C language that CPython is written in, with
several additional types compatible with Python's types.
Array scalars have the same attributes and methods as :class:`ndarrays
<ndarray>`. [#]_ This allows one to treat items of an array partly on
the same footing as arrays, smoothing out rough edges that result when
mixing scalar and array operations.
Array scalars live in a hierarchy (see the Figure below) of data
types. They can be detected using the hierarchy: For example,
``isinstance(val, np.generic)`` will return :const:`True` if *val* is
an array scalar object. Alternatively, what kind of array scalar is
present can be determined using other members of the data type
hierarchy. Thus, for example ``isinstance(val, np.complexfloating)``
will return :const:`True` if *val* is a complex-valued type, while
``isinstance(val, np.flexible)`` will return :const:`True` if *val* is one
of the flexible itemsize array types (:class:`string`,
:class:`unicode`, :class:`void`).
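The hierarchy checks described above can be sketched as follows:

```python
import numpy as np

val = np.float64(1.5)
print(isinstance(val, np.generic))                        # True: any array scalar
print(isinstance(np.complex128(1j), np.complexfloating))  # True
print(isinstance(np.str_("abc"), np.flexible))            # True: flexible itemsize
print(isinstance(1.5, np.generic))                        # False: plain Python float
```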
.. figure:: figures/dtype-hierarchy.png
**Figure:** Hierarchy of type objects representing the array data
types. Not shown are the two integer types :class:`intp` and
:class:`uintp` which just point to the integer type that holds a
pointer for the platform. All the number types can be obtained
using bit-width names as well.
.. [#] However, array scalars are immutable, so none of the array
scalar attributes are settable.
.. _arrays.scalars.character-codes:
.. _arrays.scalars.built-in:
Built-in scalar types
=====================
The built-in scalar types are shown below. Along with their (mostly)
C-derived names, the integer, float, and complex data-types are also
available using a bit-width convention so that an array of the right
size can always be ensured (e.g. :class:`int8`, :class:`float64`,
:class:`complex128`). Two aliases (:class:`intp` and :class:`uintp`)
pointing to the integer type that is sufficiently large to hold a C pointer
are also provided. The C-like names are associated with character codes,
which are shown in the table. Use of the character codes, however,
is discouraged.
Five of the scalar types are essentially equivalent to fundamental
Python types and therefore inherit from them as well as from the
generic array scalar type:
====================  ====================
Array scalar type     Related Python type
====================  ====================
:class:`int_`         :class:`IntType`
:class:`float_`       :class:`FloatType`
:class:`complex_`     :class:`ComplexType`
:class:`str_`         :class:`StringType`
:class:`unicode_`     :class:`UnicodeType`
====================  ====================
The :class:`bool_` data type is very similar to the Python
:class:`BooleanType` but does not inherit from it because Python's
:class:`BooleanType` does not allow itself to be inherited from, and
on the C-level the size of the actual bool data is not the same as a
Python Boolean scalar.
.. warning::
The :class:`bool_` type is not a subclass of the :class:`int_` type
(the :class:`bool_` is not even a number type). This is different
than Python's default implementation of :class:`bool` as a
sub-class of int.
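The difference from Python's own :class:`bool` can be checked directly (a sketch):

```python
import numpy as np

print(issubclass(bool, int))            # True: Python bool subclasses int
print(issubclass(np.bool_, np.int_))    # False
print(issubclass(np.bool_, np.number))  # False: not even a number type
```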
.. tip:: The default data type in Numpy is :class:`float_`.
In the tables below, ``platform?`` means that the type may not be
available on all platforms. Compatibility with different C or Python
types is indicated: two types are compatible if their data is of the
same size and interpreted in the same way.
Booleans:
===================  =============================  ===============
Type                 Remarks                        Character code
===================  =============================  ===============
:class:`bool_`       compatible: Python bool        ``'?'``
:class:`bool8`       8 bits
===================  =============================  ===============
Integers:
===================  =============================  ===============
:class:`byte`        compatible: C char             ``'b'``
:class:`short`       compatible: C short            ``'h'``
:class:`intc`        compatible: C int              ``'i'``
:class:`int_`        compatible: Python int         ``'l'``
:class:`longlong`    compatible: C long long        ``'q'``
:class:`intp`        large enough to fit a pointer  ``'p'``
:class:`int8`        8 bits
:class:`int16`       16 bits
:class:`int32`       32 bits
:class:`int64`       64 bits
===================  =============================  ===============
Unsigned integers:
===================  ================================  ===============
:class:`ubyte`       compatible: C unsigned char       ``'B'``
:class:`ushort`      compatible: C unsigned short      ``'H'``
:class:`uintc`       compatible: C unsigned int        ``'I'``
:class:`uint`        compatible: Python int            ``'L'``
:class:`ulonglong`   compatible: C unsigned long long  ``'Q'``
:class:`uintp`       large enough to fit a pointer     ``'P'``
:class:`uint8`       8 bits
:class:`uint16`      16 bits
:class:`uint32`      32 bits
:class:`uint64`      64 bits
===================  ================================  ===============
Floating-point numbers:
===================  =============================  ===============
:class:`half`                                       ``'e'``
:class:`single`      compatible: C float            ``'f'``
:class:`double`      compatible: C double
:class:`float_`      compatible: Python float       ``'d'``
:class:`longfloat`   compatible: C long double      ``'g'``
:class:`float16`     16 bits
:class:`float32`     32 bits
:class:`float64`     64 bits
:class:`float96`     96 bits, platform?
:class:`float128`    128 bits, platform?
===================  =============================  ===============
Complex floating-point numbers:
===================  =============================  ===============
:class:`csingle`                                    ``'F'``
:class:`complex_`    compatible: Python complex     ``'D'``
:class:`clongfloat`                                 ``'G'``
:class:`complex64`   two 32-bit floats
:class:`complex128`  two 64-bit floats
:class:`complex192`  two 96-bit floats,
                     platform?
:class:`complex256`  two 128-bit floats,
                     platform?
===================  =============================  ===============
Any Python object:
===================  =============================  ===============
:class:`object_`     any Python object              ``'O'``
===================  =============================  ===============
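To illustrate the bit-width naming and the (discouraged) character codes (a sketch; the exact platform mapping of codes such as ``'l'`` and ``'L'`` varies):

```python
import numpy as np

print(np.dtype(np.int16).itemsize)            # 2: the name fixes the size
print(np.dtype(np.float64).itemsize)          # 8
print(np.dtype('B') == np.dtype(np.uint8))    # True: 'B' is ubyte
print(np.dtype('d') == np.dtype(np.float64))  # True: 'd' is double
```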
.. note::
The data actually stored in :term:`object arrays <object array>`
(*i.e.*, arrays having dtype :class:`object_`) are references to
Python objects, not the objects themselves. Hence, object arrays
behave more like usual Python :class:`lists <list>`, in the sense
that their contents need not be of the same Python type.
The object type is also special because an array containing
:class:`object_` items does not return an :class:`object_` object
on item access, but instead returns the actual object that
the array item refers to.
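A short sketch of this list-like behavior of object arrays:

```python
import numpy as np

a = np.empty(3, dtype=object)
a[0] = 1
a[1] = "two"
a[2] = [3, 4]
print(type(a[1]))  # <class 'str'>: the actual object, not an object_ wrapper
print(type(a[2]))  # <class 'list'>: contents need not share a Python type
```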
The following data types are :term:`flexible`. They have no predefined
size: the data they describe can be of different length in different
arrays. (In the character codes ``#`` is an integer denoting how many
elements the data type consists of.)
===================  =============================  ========
:class:`str_`        compatible: Python str         ``'S#'``
:class:`unicode_`    compatible: Python unicode     ``'U#'``
:class:`void`                                       ``'V#'``
===================  =============================  ========
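For example, the itemsize of a flexible dtype is set per array by the count in the character code (a sketch):

```python
import numpy as np

print(np.dtype('S5').itemsize)   # 5: five one-byte characters
print(np.dtype('V12').itemsize)  # 12: a raw 12-byte void block
```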
.. warning::
Numeric Compatibility: If you used old typecode characters in your
Numeric code (which was never recommended), you will need to change
some of them to the new characters. In particular, the needed
changes are ``c -> S1``, ``b -> B``, ``1 -> b``, ``s -> h``, ``w ->
H``, and ``u -> I``. These changes make the type character
convention more consistent with other Python modules such as the
:mod:`struct` module.
Attributes
==========
The array scalar objects have an :obj:`array priority
<__array_priority__>` of :cdata:`NPY_SCALAR_PRIORITY`
(-1,000,000.0). They also do not (yet) have a :attr:`ctypes <ndarray.ctypes>`
attribute. Otherwise, they share the same attributes as arrays:
.. autosummary::
:toctree: generated/
generic.flags
generic.shape
generic.strides
generic.ndim
generic.data
generic.size
generic.itemsize
generic.base
generic.dtype
generic.real
generic.imag
generic.flat
generic.T
generic.__array_interface__
generic.__array_struct__
generic.__array_priority__
generic.__array_wrap__
Indexing
========
.. seealso:: :ref:`arrays.indexing`, :ref:`arrays.dtypes`
Array scalars can be indexed like 0-dimensional arrays: if *x* is an
array scalar,
- ``x[()]`` returns a 0-dimensional :class:`ndarray`
- ``x['field-name']`` returns the array scalar in the field *field-name*.
(*x* can have fields, for example, when it corresponds to a record data type.)
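A sketch of field access on a record-type array scalar (the dtype and values here are made up for illustration):

```python
import numpy as np

# Indexing a 0-d structured array with () yields a void array scalar.
rec = np.array((1, 2.5), dtype=[('a', '<i4'), ('b', '<f8')])[()]
print(type(rec))  # <class 'numpy.void'>: a flexible array scalar
print(rec['a'])   # 1
print(rec['b'])   # 2.5
```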
Methods
=======
Array scalars have exactly the same methods as arrays. The default
behavior of these methods is to internally convert the scalar to an
equivalent 0-dimensional array and to call the corresponding array
method. In addition, math operations on array scalars are defined so
that the same hardware flags are set and used to interpret the results
as for :ref:`ufunc <ufuncs>`, so that the error state used for ufuncs
also carries over to the math on array scalars.
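Because of this internal conversion, array attributes and methods work on scalars as well (a sketch):

```python
import numpy as np

s = np.int32(6)
print(s.itemsize)            # 4: same attribute as a 0-d int32 array
print(s.astype(np.float64))  # 6.0: method dispatched through a 0-d array
print(s.mean())              # 6.0
```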
The exceptions to the above rules are given below:
.. autosummary::
:toctree: generated/
generic
generic.__array__
generic.__array_wrap__
generic.squeeze
generic.byteswap
generic.__reduce__
generic.__setstate__
generic.setflags
Defining new types
==================
There are two ways to effectively define a new array scalar type
(apart from composing record :ref:`dtypes <arrays.dtypes>` from the built-in
scalar types): One way is to simply subclass the :class:`ndarray` and
overwrite the methods of interest. This will work to a degree, but
internally certain behaviors are fixed by the data type of the array.
To fully customize the data type of an array you need to define a new
data-type, and register it with NumPy. Such new types can only be
defined in C, using the :ref:`Numpy C-API <c-api>`.
@@ -1,3118 +0,0 @@
Array API
=========
.. sectionauthor:: Travis E. Oliphant
| The test of a first-rate intelligence is the ability to hold two
| opposed ideas in the mind at the same time, and still retain the
| ability to function.
| --- *F. Scott Fitzgerald*
| For a successful technology, reality must take precedence over public
| relations, for Nature cannot be fooled.
| --- *Richard P. Feynman*
.. index::
pair: ndarray; C-API
pair: C-API; array
Array structure and data access
-------------------------------
These macros all access the :ctype:`PyArrayObject` structure members. The input
argument, obj, can be any :ctype:`PyObject *` that is directly interpretable
as a :ctype:`PyArrayObject *` (any instance of the :cdata:`PyArray_Type` and its
sub-types).
.. cfunction:: void *PyArray_DATA(PyObject *obj)
.. cfunction:: char *PyArray_BYTES(PyObject *obj)
These two macros are similar and obtain the pointer to the
data-buffer for the array. The first macro can (and should be)
assigned to a particular pointer where the second is for generic
processing. If you have not guaranteed a contiguous and/or aligned
array then be sure you understand how to access the data in the
array to avoid memory and/or alignment problems.
.. cfunction:: npy_intp *PyArray_DIMS(PyObject *arr)
.. cfunction:: npy_intp *PyArray_STRIDES(PyObject* arr)
.. cfunction:: npy_intp PyArray_DIM(PyObject* arr, int n)
Return the size of the *n* :math:`^{\textrm{th}}` dimension.
.. cfunction:: npy_intp PyArray_STRIDE(PyObject* arr, int n)
Return the stride in the *n* :math:`^{\textrm{th}}` dimension.
.. cfunction:: PyObject *PyArray_BASE(PyObject* arr)
.. cfunction:: PyArray_Descr *PyArray_DESCR(PyObject* arr)
.. cfunction:: int PyArray_FLAGS(PyObject* arr)
.. cfunction:: int PyArray_ITEMSIZE(PyObject* arr)
Return the itemsize for the elements of this array.
.. cfunction:: int PyArray_TYPE(PyObject* arr)
Return the (builtin) typenumber for the elements of this array.
.. cfunction:: PyObject *PyArray_GETITEM(PyObject* arr, void* itemptr)
Get a Python object from the ndarray, *arr*, at the location
pointed to by itemptr. Return ``NULL`` on failure.
.. cfunction:: int PyArray_SETITEM(PyObject* arr, void* itemptr, PyObject* obj)
Convert obj and place it in the ndarray, *arr*, at the place
pointed to by itemptr. Return -1 if an error occurs or 0 on
success.
.. cfunction:: npy_intp PyArray_SIZE(PyObject* arr)
Returns the total size (in number of elements) of the array.
.. cfunction:: npy_intp PyArray_Size(PyObject* obj)
Returns 0 if *obj* is not a sub-class of bigndarray. Otherwise,
returns the total number of elements in the array. Safer version
of :cfunc:`PyArray_SIZE` (*obj*).
.. cfunction:: npy_intp PyArray_NBYTES(PyObject* arr)
Returns the total number of bytes consumed by the array.
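The Python-level attributes mirror these macros one-for-one, which can be handy when prototyping C extension logic (a sketch; the stride values assume the default C-contiguous layout):

```python
import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)
print(a.size)      # 12  -- PyArray_SIZE
print(a.nbytes)    # 96  -- PyArray_NBYTES
print(a.itemsize)  # 8   -- PyArray_ITEMSIZE
print(a.shape)     # (3, 4)  -- PyArray_DIMS
print(a.strides)   # (32, 8) -- PyArray_STRIDES
```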
Data access
^^^^^^^^^^^
These functions and macros provide easy access to elements of the
ndarray from C. These work for all arrays. You may need to take care
when accessing the data in the array, however, if it is not in machine
byte-order, misaligned, or not writeable. In other words, be sure to
respect the state of the flags unless you know what you are doing, or
have previously guaranteed an array that is writeable, aligned, and in
machine byte-order using :cfunc:`PyArray_FromAny`. If you wish to handle all
types of arrays, the copyswap function for each type is useful for
handling misbehaved arrays. Some platforms (e.g. Solaris) do not like
misaligned data and will crash if you de-reference a misaligned
pointer. Other platforms (e.g. x86 Linux) will just work more slowly
with misaligned data.
.. cfunction:: void* PyArray_GetPtr(PyArrayObject* aobj, npy_intp* ind)
Return a pointer to the data of the ndarray, *aobj*, at the
N-dimensional index given by the c-array, *ind*, (which must be
at least *aobj* ->nd in size). You may want to typecast the
returned pointer to the data type of the ndarray.
.. cfunction:: void* PyArray_GETPTR1(PyObject* obj, <npy_intp> i)
.. cfunction:: void* PyArray_GETPTR2(PyObject* obj, <npy_intp> i, <npy_intp> j)
.. cfunction:: void* PyArray_GETPTR3(PyObject* obj, <npy_intp> i, <npy_intp> j, <npy_intp> k)
.. cfunction:: void* PyArray_GETPTR4(PyObject* obj, <npy_intp> i, <npy_intp> j, <npy_intp> k, <npy_intp> l)
Quick, inline access to the element at the given coordinates in
the ndarray, *obj*, which must have respectively 1, 2, 3, or 4
dimensions (this is not checked). The corresponding *i*, *j*,
*k*, and *l* coordinates can be any integer but will be
interpreted as ``npy_intp``. You may want to typecast the
returned pointer to the data type of the ndarray.
Creating arrays
---------------
From scratch
^^^^^^^^^^^^
.. cfunction:: PyObject* PyArray_NewFromDescr(PyTypeObject* subtype, PyArray_Descr* descr, int nd, npy_intp* dims, npy_intp* strides, void* data, int flags, PyObject* obj)
This is the main array creation function. Most new arrays are
created with this flexible function. The returned object is an
object of Python-type *subtype*, which must be a subtype of
:cdata:`PyArray_Type`. The array has *nd* dimensions, described by
*dims*. The data-type descriptor of the new array is *descr*. If
*subtype* is not :cdata:`&PyArray_Type` (*e.g.* a Python subclass of
the ndarray), then *obj* is the object to pass to the
:obj:`__array_finalize__` method of the subclass. If *data* is
``NULL``, then new memory will be allocated and *flags* can be
non-zero to indicate a Fortran-style contiguous array. If *data*
is not ``NULL``, then it is assumed to point to the memory to be
used for the array and the *flags* argument is used as the new
flags for the array (except the state of :cdata:`NPY_OWNDATA` and
:cdata:`UPDATEIFCOPY` flags of the new array will be reset). In
addition, if *data* is non-NULL, then *strides* can also be
provided. If *strides* is ``NULL``, then the array strides are
computed as C-style contiguous (default) or Fortran-style
contiguous (*flags* is nonzero for *data* = ``NULL`` or *flags* &
:cdata:`NPY_F_CONTIGUOUS` is nonzero for non-NULL *data*). Any provided
*dims* and *strides* are copied into newly allocated dimension and
strides arrays for the new array object.
.. cfunction:: PyObject* PyArray_NewLikeArray(PyArrayObject* prototype, NPY_ORDER order, PyArray_Descr* descr, int subok)
.. versionadded:: 1.6
This function steals a reference to *descr* if it is not NULL.
This array creation routine allows for the convenient creation of
a new array matching an existing array's shapes and memory layout,
possibly changing the layout and/or data type.
When *order* is :cdata:`NPY_ANYORDER`, the result order is
:cdata:`NPY_FORTRANORDER` if *prototype* is a fortran array,
:cdata:`NPY_CORDER` otherwise. When *order* is
:cdata:`NPY_KEEPORDER`, the result order matches that of *prototype*, even
when the axes of *prototype* aren't in C or Fortran order.
If *descr* is NULL, the data type of *prototype* is used.
If *subok* is 1, the newly created array will use the sub-type of
*prototype* to create the new array, otherwise it will create a
base-class array.
.. cfunction:: PyObject* PyArray_New(PyTypeObject* subtype, int nd, npy_intp* dims, int type_num, npy_intp* strides, void* data, int itemsize, int flags, PyObject* obj)
This is similar to :cfunc:`PyArray_DescrNew` (...) except you
specify the data-type descriptor with *type_num* and *itemsize*,
where *type_num* corresponds to a builtin (or user-defined)
type. If the type always has the same number of bytes, then
itemsize is ignored. Otherwise, itemsize specifies the particular
size of this array.
.. warning::
If data is passed to :cfunc:`PyArray_NewFromDescr` or :cfunc:`PyArray_New`,
this memory must not be deallocated until the new array is
deleted. If this data came from another Python object, this can
be accomplished using :cfunc:`Py_INCREF` on that object and setting the
base member of the new array to point to that object. If strides
are passed in they must be consistent with the dimensions, the
itemsize, and the data of the array.
.. cfunction:: PyObject* PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)
Create a new uninitialized array of type *typenum*, whose size in
each of *nd* dimensions is given by the integer array, *dims*.
This function cannot be used to create a flexible-type array (no
itemsize given).
.. cfunction:: PyObject* PyArray_SimpleNewFromData(int nd, npy_intp* dims, int typenum, void* data)
Create an array wrapper around *data* pointed to by the given
pointer. The array flags will have a default that the data area is
well-behaved and C-style contiguous. The shape of the array is
given by the *dims* c-array of length *nd*. The data-type of the
array is indicated by *typenum*.
.. cfunction:: PyObject* PyArray_SimpleNewFromDescr(int nd, npy_intp* dims, PyArray_Descr* descr)
Create a new array with the provided data-type descriptor, *descr*
, of the shape determined by *nd* and *dims*.
.. cfunction:: PyArray_FILLWBYTE(PyObject* obj, int val)
Fill the array pointed to by *obj* ---which must be a (subclass
of) bigndarray---with the contents of *val* (evaluated as a byte).
.. cfunction:: PyObject* PyArray_Zeros(int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran)
Construct a new *nd* -dimensional array with shape given by *dims*
and data type given by *dtype*. If *fortran* is non-zero, then a
Fortran-order array is created, otherwise a C-order array is
created. Fill the memory with zeros (or the 0 object if *dtype*
corresponds to :ctype:`PyArray_OBJECT` ).
.. cfunction:: PyObject* PyArray_ZEROS(int nd, npy_intp* dims, int type_num, int fortran)
Macro form of :cfunc:`PyArray_Zeros` which takes a type-number instead
of a data-type object.
.. cfunction:: PyObject* PyArray_Empty(int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran)
Construct a new *nd* -dimensional array with shape given by *dims*
and data type given by *dtype*. If *fortran* is non-zero, then a
Fortran-order array is created, otherwise a C-order array is
created. The array is uninitialized unless the data type
corresponds to :ctype:`PyArray_OBJECT` in which case the array is
filled with :cdata:`Py_None`.
.. cfunction:: PyObject* PyArray_EMPTY(int nd, npy_intp* dims, int typenum, int fortran)
Macro form of :cfunc:`PyArray_Empty` which takes a type-number,
*typenum*, instead of a data-type object.
.. cfunction:: PyObject* PyArray_Arange(double start, double stop, double step, int typenum)
Construct a new 1-dimensional array of data-type, *typenum*, that
ranges from *start* to *stop* (exclusive) in increments of *step*
. Equivalent to **arange** (*start*, *stop*, *step*, dtype).
.. cfunction:: PyObject* PyArray_ArangeObj(PyObject* start, PyObject* stop, PyObject* step, PyArray_Descr* descr)
Construct a new 1-dimensional array of data-type determined by
``descr``, that ranges from ``start`` to ``stop`` (exclusive) in
increments of ``step``. Equivalent to arange( ``start``,
``stop``, ``step``, ``typenum`` ).
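These correspond to :func:`numpy.arange` at the Python level (a sketch):

```python
import numpy as np

# PyArray_Arange(0.0, 1.0, 0.25, NPY_DOUBLE) builds the same array as:
print(np.arange(0.0, 1.0, 0.25))  # [0.   0.25 0.5  0.75] -- stop is exclusive
```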
From other objects
^^^^^^^^^^^^^^^^^^
.. cfunction:: PyObject* PyArray_FromAny(PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, int requirements, PyObject* context)
This is the main function used to obtain an array from any nested
sequence, or object that exposes the array interface, *op*. The
parameters allow specification of the required *dtype*, the
minimum (*min_depth*) and maximum (*max_depth*) number of
dimensions acceptable, and other *requirements* for the array. The
*dtype* argument needs to be a :ctype:`PyArray_Descr` structure
indicating the desired data-type (including required
byteorder). The *dtype* argument may be NULL, indicating that any
data-type (and byteorder) is acceptable. Unless ``FORCECAST`` is
present in ``flags``, this call will generate an error if the data
type cannot be safely obtained from the object. If you want to use
``NULL`` for the *dtype* and ensure the array is not swapped, then
use :cfunc:`PyArray_CheckFromAny`. A value of 0 for either of the
depth parameters causes the parameter to be ignored. Any of the
following array flags can be added (*e.g.* using \|) to get the
*requirements* argument. If your code can handle general (*e.g.*
strided, byte-swapped, or unaligned arrays) then *requirements*
may be 0. Also, if *op* is not already an array (or does not
expose the array interface), then a new array will be created (and
filled from *op* using the sequence protocol). The new array will
have :cdata:`NPY_DEFAULT` as its flags member. The *context* argument
is passed to the :obj:`__array__` method of *op* and is only used if
the array is constructed that way. Almost always this
parameter is ``NULL``.
.. cvar:: NPY_C_CONTIGUOUS
Make sure the returned array is C-style contiguous
.. cvar:: NPY_F_CONTIGUOUS
Make sure the returned array is Fortran-style contiguous.
.. cvar:: NPY_ALIGNED
Make sure the returned array is aligned on proper boundaries for its
data type. An aligned array has the data pointer and every strides
factor as a multiple of the alignment factor for the data-type-
descriptor.
.. cvar:: NPY_WRITEABLE
Make sure the returned array can be written to.
.. cvar:: NPY_ENSURECOPY
Make sure a copy is made of *op*. If this flag is not
present, data is not copied if it can be avoided.
.. cvar:: NPY_ENSUREARRAY
Make sure the result is a base-class ndarray or bigndarray. By
default, if *op* is an instance of a subclass of the
bigndarray, an instance of that same subclass is returned. If
this flag is set, an ndarray object will be returned instead.
.. cvar:: NPY_FORCECAST
Force a cast to the output type even if it cannot be done
safely. Without this flag, a data cast will occur only if it
can be done safely, otherwise an error is raised.
.. cvar:: NPY_UPDATEIFCOPY
If *op* is already an array, but does not satisfy the
requirements, then a copy is made (which will satisfy the
requirements). If this flag is present and a copy (of an
object that is already an array) must be made, then the
corresponding :cdata:`NPY_UPDATEIFCOPY` flag is set in the returned
copy and *op* is made to be read-only. When the returned copy
is deleted (presumably after your calculations are complete),
its contents will be copied back into *op* and the *op* array
will be made writeable again. If *op* is not writeable to
begin with, then an error is raised. If *op* is not already an
array, then this flag has no effect.
.. cvar:: NPY_BEHAVED
:cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE`
.. cvar:: NPY_CARRAY
:cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_BEHAVED`
.. cvar:: NPY_CARRAY_RO
:cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_ALIGNED`
.. cvar:: NPY_FARRAY
:cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_BEHAVED`
.. cvar:: NPY_FARRAY_RO
:cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED`
.. cvar:: NPY_DEFAULT
:cdata:`NPY_CARRAY`
.. cvar:: NPY_IN_ARRAY
:cdata:`NPY_CONTIGUOUS` \| :cdata:`NPY_ALIGNED`
.. cvar:: NPY_IN_FARRAY
:cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED`
.. cvar:: NPY_OUT_ARRAY
:cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \|
:cdata:`NPY_ALIGNED`
.. cvar:: NPY_OUT_FARRAY
:cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \|
:cdata:`NPY_ALIGNED`
.. cvar:: NPY_INOUT_ARRAY
:cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \|
:cdata:`NPY_ALIGNED` \| :cdata:`NPY_UPDATEIFCOPY`
.. cvar:: NPY_INOUT_FARRAY
:cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_WRITEABLE` \|
:cdata:`NPY_ALIGNED` \| :cdata:`NPY_UPDATEIFCOPY`
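At the Python level, :func:`numpy.require` exposes the same requirement flags, which can help when experimenting before writing the C call (a sketch):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)[:, ::2]  # a non-contiguous view
b = np.require(a, dtype=np.float64, requirements=['C', 'A', 'W'])
print(b.flags['C_CONTIGUOUS'])  # True: a behaved copy was made
print(b.dtype)                  # float64
```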
.. cfunction:: int PyArray_GetArrayParamsFromObject(PyObject* op, PyArray_Descr* requested_dtype, npy_bool writeable, PyArray_Descr** out_dtype, int* out_ndim, npy_intp* out_dims, PyArrayObject** out_arr, PyObject* context)
.. versionadded:: 1.6
Retrieves the array parameters for viewing/converting an arbitrary
PyObject* to a NumPy array. This allows the "innate type and shape"
of Python list-of-lists to be discovered without
actually converting to an array. PyArray_FromAny calls this function
to analyze its input.
In some cases, such as structured arrays and the __array__ interface,
a data type needs to be used to make sense of the object. When
this is needed, provide a Descr for 'requested_dtype', otherwise
provide NULL. This reference is not stolen. Also, if the requested
dtype doesn't modify the interpretation of the input, out_dtype will
still get the "innate" dtype of the object, not the dtype passed
in 'requested_dtype'.
If writing to the value in 'op' is desired, set the boolean
'writeable' to 1. This raises an error when 'op' is a scalar, list
of lists, or other non-writeable 'op'. This differs from passing
NPY_WRITEABLE to PyArray_FromAny, where the writeable array may
be a copy of the input.
When success (0 return value) is returned, either out_arr
is filled with a non-NULL PyArrayObject and
the rest of the parameters are untouched, or out_arr is
filled with NULL, and the rest of the parameters are filled.
Typical usage:
.. code-block:: c
PyArrayObject *arr = NULL;
PyArray_Descr *dtype = NULL;
int ndim = 0;
npy_intp dims[NPY_MAXDIMS];
if (PyArray_GetArrayParamsFromObject(op, NULL, 1, &dtype,
&ndim, &dims, &arr, NULL) < 0) {
return NULL;
}
if (arr == NULL) {
... validate/change dtype, validate flags, ndim, etc ...
// Could make custom strides here too
arr = PyArray_NewFromDescr(&PyArray_Type, dtype, ndim,
dims, NULL,
fortran ? NPY_F_CONTIGUOUS : 0,
NULL);
if (arr == NULL) {
return NULL;
}
if (PyArray_CopyObject(arr, op) < 0) {
Py_DECREF(arr);
return NULL;
}
}
else {
... in this case the other parameters weren't filled, just
validate and possibly copy arr itself ...
}
... use arr ...
.. cfunction:: PyObject* PyArray_CheckFromAny(PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, int requirements, PyObject* context)
Nearly identical to :cfunc:`PyArray_FromAny` (...) except
*requirements* can contain :cdata:`NPY_NOTSWAPPED` (over-riding the
specification in *dtype*) and :cdata:`NPY_ELEMENTSTRIDES` which
indicates that the array should be aligned in the sense that the
strides are multiples of the element size.
.. cvar:: NPY_NOTSWAPPED
Make sure the returned array has a data-type descriptor that is in
machine byte-order, over-riding any specification in the *dtype*
argument. Normally, the byte-order requirement is determined by
the *dtype* argument. If this flag is set and the dtype argument
does not indicate a machine byte-order descriptor (or is NULL and
the object is already an array with a data-type descriptor that is
not in machine byte- order), then a new data-type descriptor is
created and used with its byte-order field set to native.
.. cvar:: NPY_BEHAVED_NS
:cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE` \| :cdata:`NPY_NOTSWAPPED`
.. cvar:: NPY_ELEMENTSTRIDES
Make sure the returned array has strides that are multiples of the
element size.
.. cfunction:: PyObject* PyArray_FromArray(PyArrayObject* op, PyArray_Descr* newtype, int requirements)
Special case of :cfunc:`PyArray_FromAny` for when *op* is already an
array but it needs to be of a specific *newtype* (including
byte-order) or has certain *requirements*.
.. cfunction:: PyObject* PyArray_FromStructInterface(PyObject* op)
Returns an ndarray object from a Python object that exposes the
:obj:`__array_struct__` attribute and follows the array interface
protocol. If the object does not contain this method then a
borrowed reference to :cdata:`Py_NotImplemented` is returned.
.. cfunction:: PyObject* PyArray_FromInterface(PyObject* op)
Returns an ndarray object from a Python object that exposes the
:obj:`__array_shape__` and :obj:`__array_typestr__`
methods following
the array interface protocol. If the object does not contain one
of these method then a borrowed reference to :cdata:`Py_NotImplemented`
is returned.
.. cfunction:: PyObject* PyArray_FromArrayAttr(PyObject* op, PyArray_Descr* dtype, PyObject* context)
Return an ndarray object from a Python object that exposes the
:obj:`__array__` method. The :obj:`__array__` method can take 0, 1, or 2
arguments ([dtype, context]) where *context* is used to pass
information about where the :obj:`__array__` method is being called
from (currently only used in ufuncs).
.. cfunction:: PyObject* PyArray_ContiguousFromAny(PyObject* op, int typenum, int min_depth, int max_depth)
This function returns a (C-style) contiguous and behaved
array from any nested sequence or array interface exporting
object, *op*, of (non-flexible) type given by the enumerated
*typenum*, of minimum depth *min_depth*, and of maximum depth
*max_depth*. Equivalent to a call to :cfunc:`PyArray_FromAny` with
requirements set to :cdata:`NPY_DEFAULT` and the type_num member of the
type argument set to *typenum*.
.. cfunction:: PyObject *PyArray_FromObject(PyObject *op, int typenum, int min_depth, int max_depth)
Return an aligned and in native-byteorder array from any nested
sequence or array-interface exporting object, op, of a type given by
the enumerated typenum. The minimum number of dimensions the array can
have is given by min_depth while the maximum is max_depth. This is
equivalent to a call to :cfunc:`PyArray_FromAny` with requirements set to
BEHAVED.
.. cfunction:: PyObject* PyArray_EnsureArray(PyObject* op)
This function **steals a reference** to ``op`` and makes sure that
``op`` is a base-class ndarray. It special cases array scalars,
but otherwise calls :cfunc:`PyArray_FromAny` ( ``op``, NULL, 0, 0,
:cdata:`NPY_ENSUREARRAY`).
.. cfunction:: PyObject* PyArray_FromString(char* string, npy_intp slen, PyArray_Descr* dtype, npy_intp num, char* sep)
Construct a one-dimensional ndarray of a single type from a binary
or (ASCII) text ``string`` of length ``slen``. The data-type of
the array to-be-created is given by ``dtype``. If num is -1, then
**copy** the entire string and return an appropriately sized
array, otherwise, ``num`` is the number of items to **copy** from
the string. If ``sep`` is NULL (or ""), then interpret the string
as bytes of binary data, otherwise convert the sub-strings
separated by ``sep`` to items of data-type ``dtype``. Some
data-types may not be readable in text mode and an error will be
raised if that occurs. All errors return NULL.
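For example, a comma-separated list of integers can be parsed directly. A
sketch (assuming, as with most of the ``FromXXX`` constructors, that the
*dtype* reference is consumed by the call, so it is not released separately
here; error handling abbreviated):

```c
/* Sketch: parse "1,2,3,4" into a one-dimensional int32 array. */
const char *text = "1,2,3,4";
PyArray_Descr *dtype = PyArray_DescrFromType(NPY_INT32);
PyObject *arr;

arr = PyArray_FromString((char *)text, (npy_intp)strlen(text),
                         dtype, -1, ",");
if (arr == NULL) {
    return NULL;    /* parse or allocation failure; exception is set */
}
```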
.. cfunction:: PyObject* PyArray_FromFile(FILE* fp, PyArray_Descr* dtype, npy_intp num, char* sep)
Construct a one-dimensional ndarray of a single type from a binary
or text file. The open file pointer is ``fp``, the data-type of
the array to be created is given by ``dtype``. This must match
the data in the file. If ``num`` is -1, then read until the end of
the file and return an appropriately sized array, otherwise,
``num`` is the number of items to read. If ``sep`` is NULL (or
""), then read from the file in binary mode, otherwise read from
the file in text mode with ``sep`` providing the item
separator. Some array types cannot be read in text mode in which
case an error is raised.
.. cfunction:: PyObject* PyArray_FromBuffer(PyObject* buf, PyArray_Descr* dtype, npy_intp count, npy_intp offset)
Construct a one-dimensional ndarray of a single type from an
object, ``buf``, that exports the (single-segment) buffer protocol
(or has an attribute __buffer\__ that returns an object that
exports the buffer protocol). A writeable buffer will be tried
first, followed by a read-only buffer. The :cdata:`NPY_WRITEABLE`
flag of the returned array will reflect which one was
successful. The data is assumed to start at ``offset`` bytes from
the start of the memory location for the object. The type of the
data in the buffer will be interpreted depending on the data-type
descriptor, ``dtype``. If ``count`` is negative then it will be
determined from the size of the buffer and the requested itemsize,
otherwise, ``count`` represents how many elements should be
converted from the buffer.
.. cfunction:: int PyArray_CopyInto(PyArrayObject* dest, PyArrayObject* src)
Copy from the source array, ``src``, into the destination array,
``dest``, performing a data-type conversion if necessary. If an
error occurs return -1 (otherwise 0). The shape of ``src`` must be
broadcastable to the shape of ``dest``. The data areas of dest
and src must not overlap.
.. cfunction:: int PyArray_MoveInto(PyArrayObject* dest, PyArrayObject* src)
Move data from the source array, ``src``, into the destination
array, ``dest``, performing a data-type conversion if
necessary. If an error occurs return -1 (otherwise 0). The shape
of ``src`` must be broadcastable to the shape of ``dest``. The
data areas of dest and src may overlap.
.. cfunction:: PyArrayObject* PyArray_GETCONTIGUOUS(PyObject* op)
If ``op`` is already (C-style) contiguous and well-behaved then
just return a reference, otherwise return a (contiguous and
well-behaved) copy of the array. The parameter op must be a
(sub-class of an) ndarray and no checking for that is done.
.. cfunction:: PyObject* PyArray_FROM_O(PyObject* obj)
Convert ``obj`` to an ndarray. The argument can be any nested
sequence or object that exports the array interface. This is a
macro form of :cfunc:`PyArray_FromAny` using ``NULL``, 0, 0, 0 for the
other arguments. Your code must be able to handle any data-type
descriptor and any combination of data-flags to use this macro.
.. cfunction:: PyObject* PyArray_FROM_OF(PyObject* obj, int requirements)
Similar to :cfunc:`PyArray_FROM_O` except it can take an argument
of *requirements* indicating properties the resulting array must
have. Available requirements that can be enforced are
:cdata:`NPY_CONTIGUOUS`, :cdata:`NPY_F_CONTIGUOUS`,
:cdata:`NPY_ALIGNED`, :cdata:`NPY_WRITEABLE`,
:cdata:`NPY_NOTSWAPPED`, :cdata:`NPY_ENSURECOPY`,
:cdata:`NPY_UPDATEIFCOPY`, :cdata:`NPY_FORCECAST`, and
:cdata:`NPY_ENSUREARRAY`. Standard combinations of flags (such as
:cdata:`NPY_CARRAY`) can also be used.
.. cfunction:: PyObject* PyArray_FROM_OT(PyObject* obj, int typenum)
Similar to :cfunc:`PyArray_FROM_O` except it can take an argument of
*typenum* specifying the type-number of the returned array.
.. cfunction:: PyObject* PyArray_FROM_OTF(PyObject* obj, int typenum, int requirements)
Combination of :cfunc:`PyArray_FROM_OF` and :cfunc:`PyArray_FROM_OT`
allowing both a *typenum* and a *flags* argument to be provided.
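The :cfunc:`PyArray_FROM_OTF` form is the usual workhorse for argument
conversion in extension code. A minimal sketch (``NPY_IN_ARRAY`` is assumed
here to be the standard flag combination :cdata:`NPY_CONTIGUOUS` \|
:cdata:`NPY_ALIGNED`):

```c
/* Sketch: coerce obj to an aligned, C-contiguous double array,
 * forcing the cast even when it may lose information. */
PyObject *arr = PyArray_FROM_OTF(obj, NPY_DOUBLE,
                                 NPY_IN_ARRAY | NPY_FORCECAST);
if (arr == NULL) {
    return NULL;
}
/* ... work with PyArray_DATA(arr) ... */
Py_DECREF(arr);
```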
.. cfunction:: PyObject* PyArray_FROMANY(PyObject* obj, int typenum, int min, int max, int requirements)
Similar to :cfunc:`PyArray_FromAny` except the data-type is
specified using a typenumber. :cfunc:`PyArray_DescrFromType`
(*typenum*) is passed directly to :cfunc:`PyArray_FromAny`. This
macro also adds :cdata:`NPY_DEFAULT` to requirements if
:cdata:`NPY_ENSURECOPY` is passed in as requirements.
.. cfunction:: PyObject *PyArray_CheckAxis(PyObject* obj, int* axis, int requirements)
Encapsulate the functionality of functions and methods that take
the axis= keyword and work properly with None as the axis
argument. The input array is ``obj``, while ``*axis`` is a
converted integer (so that >=MAXDIMS is the None value), and
``requirements`` gives the needed properties of ``obj``. The
output is a converted version of the input so that requirements
are met and if needed a flattening has occurred. On output
negative values of ``*axis`` are converted and the new value is
checked to ensure consistency with the shape of ``obj``.
Dealing with types
------------------
General check of Python Type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cfunction:: PyArray_Check(op)
Evaluates true if *op* is a Python object whose type is a sub-type
of :cdata:`PyArray_Type`.
.. cfunction:: PyArray_CheckExact(op)
Evaluates true if *op* is a Python object with type
:cdata:`PyArray_Type`.
.. cfunction:: PyArray_HasArrayInterface(op, out)
If ``op`` implements any part of the array interface, then ``out``
will contain a new reference to the newly created ndarray using
the interface or ``out`` will contain ``NULL`` if an error during
conversion occurs. Otherwise, out will contain a borrowed
reference to :cdata:`Py_NotImplemented` and no error condition is set.
.. cfunction:: PyArray_HasArrayInterfaceType(op, type, context, out)
If ``op`` implements any part of the array interface, then ``out``
will contain a new reference to the newly created ndarray using
the interface or ``out`` will contain ``NULL`` if an error during
conversion occurs. Otherwise, out will contain a borrowed
reference to Py_NotImplemented and no error condition is set.
This version allows setting of the type and context in the part of
the array interface that looks for the :obj:`__array__` attribute.
.. cfunction:: PyArray_IsZeroDim(op)
Evaluates true if *op* is an instance of (a subclass of)
:cdata:`PyArray_Type` and has 0 dimensions.
.. cfunction:: PyArray_IsScalar(op, cls)
Evaluates true if *op* is an instance of :cdata:`Py{cls}ArrType_Type`.
.. cfunction:: PyArray_CheckScalar(op)
Evaluates true if *op* is either an array scalar (an instance of a
sub-type of :cdata:`PyGenericArr_Type` ), or an instance of (a
sub-class of) :cdata:`PyArray_Type` whose dimensionality is 0.
.. cfunction:: PyArray_IsPythonScalar(op)
Evaluates true if *op* is a builtin Python "scalar" object (int,
float, complex, str, unicode, long, bool).
.. cfunction:: PyArray_IsAnyScalar(op)
Evaluates true if *op* is either a Python scalar or an array
scalar (an instance of a sub-type of :cdata:`PyGenericArr_Type` ).
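Together, these macros support a common dispatch pattern when an argument may
be a Python scalar, an array scalar, or a full array. A sketch:

```c
/* Sketch: classify an incoming object op before converting it. */
if (PyArray_IsPythonScalar(op)) {
    /* builtin Python scalar: int, float, complex, str, ... */
}
else if (PyArray_CheckScalar(op)) {
    /* NumPy array scalar, or a 0-dimensional array */
}
else if (PyArray_Check(op)) {
    /* an ndarray (or subclass) with one or more dimensions */
}
else {
    /* something else: fall back to PyArray_FromAny, etc. */
}
```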
Data-type checking
^^^^^^^^^^^^^^^^^^
For the typenum macros, the argument is an integer representing an
enumerated array data type. For the array type checking macros the
argument must be a :ctype:`PyObject *` that can be directly interpreted as a
:ctype:`PyArrayObject *`.
.. cfunction:: PyTypeNum_ISUNSIGNED(num)
.. cfunction:: PyDataType_ISUNSIGNED(descr)
.. cfunction:: PyArray_ISUNSIGNED(obj)
Type represents an unsigned integer.
.. cfunction:: PyTypeNum_ISSIGNED(num)
.. cfunction:: PyDataType_ISSIGNED(descr)
.. cfunction:: PyArray_ISSIGNED(obj)
Type represents a signed integer.
.. cfunction:: PyTypeNum_ISINTEGER(num)
.. cfunction:: PyDataType_ISINTEGER(descr)
.. cfunction:: PyArray_ISINTEGER(obj)
Type represents any integer.
.. cfunction:: PyTypeNum_ISFLOAT(num)
.. cfunction:: PyDataType_ISFLOAT(descr)
.. cfunction:: PyArray_ISFLOAT(obj)
Type represents any floating point number.
.. cfunction:: PyTypeNum_ISCOMPLEX(num)
.. cfunction:: PyDataType_ISCOMPLEX(descr)
.. cfunction:: PyArray_ISCOMPLEX(obj)
Type represents any complex floating point number.
.. cfunction:: PyTypeNum_ISNUMBER(num)
.. cfunction:: PyDataType_ISNUMBER(descr)
.. cfunction:: PyArray_ISNUMBER(obj)
Type represents any integer, floating point, or complex floating point
number.
.. cfunction:: PyTypeNum_ISSTRING(num)
.. cfunction:: PyDataType_ISSTRING(descr)
.. cfunction:: PyArray_ISSTRING(obj)
Type represents a string data type.
.. cfunction:: PyTypeNum_ISPYTHON(num)
.. cfunction:: PyDataType_ISPYTHON(descr)
.. cfunction:: PyArray_ISPYTHON(obj)
Type represents an enumerated type corresponding to one of the
standard Python scalar types (bool, int, float, or complex).
.. cfunction:: PyTypeNum_ISFLEXIBLE(num)
.. cfunction:: PyDataType_ISFLEXIBLE(descr)
.. cfunction:: PyArray_ISFLEXIBLE(obj)
Type represents one of the flexible array types ( :cdata:`NPY_STRING`,
:cdata:`NPY_UNICODE`, or :cdata:`NPY_VOID` ).
.. cfunction:: PyTypeNum_ISUSERDEF(num)
.. cfunction:: PyDataType_ISUSERDEF(descr)
.. cfunction:: PyArray_ISUSERDEF(obj)
Type represents a user-defined type.
.. cfunction:: PyTypeNum_ISEXTENDED(num)
.. cfunction:: PyDataType_ISEXTENDED(descr)
.. cfunction:: PyArray_ISEXTENDED(obj)
Type is either flexible or user-defined.
.. cfunction:: PyTypeNum_ISOBJECT(num)
.. cfunction:: PyDataType_ISOBJECT(descr)
.. cfunction:: PyArray_ISOBJECT(obj)
Type represents object data type.
.. cfunction:: PyTypeNum_ISBOOL(num)
.. cfunction:: PyDataType_ISBOOL(descr)
.. cfunction:: PyArray_ISBOOL(obj)
Type represents Boolean data type.
.. cfunction:: PyDataType_HASFIELDS(descr)
.. cfunction:: PyArray_HASFIELDS(obj)
Type has fields associated with it.
.. cfunction:: PyArray_ISNOTSWAPPED(m)
Evaluates true if the data area of the ndarray *m* is in machine
byte-order according to the array's data-type descriptor.
.. cfunction:: PyArray_ISBYTESWAPPED(m)
Evaluates true if the data area of the ndarray *m* is **not** in
machine byte-order according to the array's data-type descriptor.
.. cfunction:: Bool PyArray_EquivTypes(PyArray_Descr* type1, PyArray_Descr* type2)
Return :cdata:`NPY_TRUE` if *type1* and *type2* actually represent
equivalent types for this platform (the fortran member of each
type is ignored). For example, on 32-bit platforms,
:cdata:`NPY_LONG` and :cdata:`NPY_INT` are equivalent. Otherwise
return :cdata:`NPY_FALSE`.
.. cfunction:: Bool PyArray_EquivArrTypes(PyArrayObject* a1, PyArrayObject * a2)
Return :cdata:`NPY_TRUE` if *a1* and *a2* are arrays with equivalent
types for this platform.
.. cfunction:: Bool PyArray_EquivTypenums(int typenum1, int typenum2)
Special case of :cfunc:`PyArray_EquivTypes` (...) that does not accept
flexible data types but may be easier to call.
.. cfunction:: int PyArray_EquivByteorders({byteorder} b1, {byteorder} b2)
True if byteorder characters ( :cdata:`NPY_LITTLE`,
:cdata:`NPY_BIG`, :cdata:`NPY_NATIVE`, :cdata:`NPY_IGNORE` ) are
either equal or equivalent as to their specification of a native
byte order. Thus, on a little-endian machine :cdata:`NPY_LITTLE`
and :cdata:`NPY_NATIVE` are equivalent where they are not
equivalent on a big-endian machine.
Converting data types
^^^^^^^^^^^^^^^^^^^^^
.. cfunction:: PyObject* PyArray_Cast(PyArrayObject* arr, int typenum)
Mainly for backwards compatibility to the Numeric C-API and for
simple casts to non-flexible types. Return a new array object with
the elements of *arr* cast to the data-type *typenum* which must
be one of the enumerated types and not a flexible type.
.. cfunction:: PyObject* PyArray_CastToType(PyArrayObject* arr, PyArray_Descr* type, int fortran)
Return a new array of the *type* specified, casting the elements
of *arr* as appropriate. The fortran argument specifies the
ordering of the output array.
.. cfunction:: int PyArray_CastTo(PyArrayObject* out, PyArrayObject* in)
As of 1.6, this function simply calls :cfunc:`PyArray_CopyInto`,
which handles the casting.
Cast the elements of the array *in* into the array *out*. The
output array should be writeable, have an integer-multiple of the
number of elements in the input array (more than one copy can be
placed in out), and have a data type that is one of the builtin
types. Returns 0 on success and -1 if an error occurs.
.. cfunction:: PyArray_VectorUnaryFunc* PyArray_GetCastFunc(PyArray_Descr* from, int totype)
Return the low-level casting function to cast from the given
descriptor to the builtin type number. If no casting function
exists return ``NULL`` and set an error. Using this function
instead of direct access to *from* ->f->cast will allow support of
any user-defined casting functions added to a descriptors casting
dictionary.
.. cfunction:: int PyArray_CanCastSafely(int fromtype, int totype)
Returns non-zero if an array of data type *fromtype* can be cast
to an array of data type *totype* without losing information. An
exception is that 64-bit integers are allowed to be cast to 64-bit
floating point values even though this can lose precision on large
integers, so as not to proliferate the use of long doubles without
explicit requests. Flexible array types are not checked according
to their lengths with this function.
.. cfunction:: int PyArray_CanCastTo(PyArray_Descr* fromtype, PyArray_Descr* totype)
:cfunc:`PyArray_CanCastTypeTo` supersedes this function in
NumPy 1.6 and later.
Equivalent to PyArray_CanCastTypeTo(fromtype, totype, NPY_SAFE_CASTING).
.. cfunction:: int PyArray_CanCastTypeTo(PyArray_Descr* fromtype, PyArray_Descr* totype, NPY_CASTING casting)
.. versionadded:: 1.6
Returns non-zero if an array of data type *fromtype* (which can
include flexible types) can be cast safely to an array of data
type *totype* (which can include flexible types) according to
the casting rule *casting*. For simple types with :cdata:`NPY_SAFE_CASTING`,
this is basically a wrapper around :cfunc:`PyArray_CanCastSafely`, but
for flexible types such as strings or unicode, it produces results
taking into account their sizes.
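For instance, checking whether a safe cast exists between two built-in types
(a sketch; it assumes :cfunc:`PyArray_DescrFromType` returns new references
that must be released by the caller):

```c
/* Sketch: is int64 -> float32 a safe cast? (It is not: float32
 * cannot represent every int64 value.) */
PyArray_Descr *from = PyArray_DescrFromType(NPY_INT64);
PyArray_Descr *to = PyArray_DescrFromType(NPY_FLOAT32);
int ok = PyArray_CanCastTypeTo(from, to, NPY_SAFE_CASTING);

Py_DECREF(from);
Py_DECREF(to);
```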
.. cfunction:: int PyArray_CanCastArrayTo(PyArrayObject* arr, PyArray_Descr* totype, NPY_CASTING casting)
.. versionadded:: 1.6
Returns non-zero if *arr* can be cast to *totype* according
to the casting rule given in *casting*. If *arr* is an array
scalar, its value is taken into account, and non-zero is also
returned when the value will not overflow or be truncated to
an integer when converting to a smaller type.
This is almost the same as the result of
PyArray_CanCastTypeTo(PyArray_MinScalarType(arr), totype, casting),
but it also handles a special case arising because the set
of uint values is not a subset of the int values for types with the
same number of bits.
.. cfunction:: PyArray_Descr* PyArray_MinScalarType(PyArrayObject* arr)
.. versionadded:: 1.6
If *arr* is an array, returns its data type descriptor, but if
*arr* is an array scalar (has 0 dimensions), it finds the data type
of smallest size to which the value may be converted
without overflow or truncation to an integer.
This function will not demote complex to float or anything to
boolean, but will demote a signed integer to an unsigned integer
when the scalar value is positive.
.. cfunction:: PyArray_Descr* PyArray_PromoteTypes(PyArray_Descr* type1, PyArray_Descr* type2)
.. versionadded:: 1.6
Finds the data type of smallest size and kind to which *type1* and
*type2* may be safely converted. This function is symmetric and
associative.
.. cfunction:: PyArray_Descr* PyArray_ResultType(npy_intp narrs, PyArrayObject**arrs, npy_intp ndtypes, PyArray_Descr**dtypes)
.. versionadded:: 1.6
This applies type promotion to all the inputs,
using the NumPy rules for combining scalars and arrays, to
determine the output type of a set of operands. This is the
same result type that ufuncs produce. The specific algorithm
used is as follows.
Categories are determined by first checking which of boolean,
integer (int/uint), or floating point (float/complex) the maximum
kind of all the arrays and the scalars are.
If there are only scalars or the maximum category of the scalars
is higher than the maximum category of the arrays,
the data types are combined with :cfunc:`PyArray_PromoteTypes`
to produce the return value.
Otherwise, PyArray_MinScalarType is called on each array, and
the resulting data types are all combined with
:cfunc:`PyArray_PromoteTypes` to produce the return value.
The set of int values is not a subset of the uint values for types
with the same number of bits, something not reflected in
:cfunc:`PyArray_MinScalarType`, but handled as a special case in
PyArray_ResultType.
.. cfunction:: int PyArray_ObjectType(PyObject* op, int mintype)
This function is superseded by :cfunc:`PyArray_MinScalarType` and/or
:cfunc:`PyArray_ResultType`.
This function is useful for determining a common type that two or
more arrays can be converted to. It only works for non-flexible
array types as no itemsize information is passed. The *mintype*
argument represents the minimum type acceptable, and *op*
represents the object that will be converted to an array. The
return value is the enumerated typenumber that represents the
data-type that *op* should have.
.. cfunction:: void PyArray_ArrayType(PyObject* op, PyArray_Descr* mintype, PyArray_Descr* outtype)
This function is superseded by :cfunc:`PyArray_ResultType`.
This function works similarly to :cfunc:`PyArray_ObjectType` (...)
except it handles flexible arrays. The *mintype* argument can have
an itemsize member and the *outtype* argument will have an
itemsize member at least as big but perhaps bigger depending on
the object *op*.
.. cfunction:: PyArrayObject** PyArray_ConvertToCommonType(PyObject* op, int* n)
The functionality this provides is largely superseded by the iterator
:ctype:`NpyIter`, introduced in 1.6, with the flag
:cdata:`NPY_ITER_COMMON_DTYPE` or with the same dtype parameter for
all operands.
Convert a sequence of Python objects contained in *op* to an array
of ndarrays each having the same data type. The type is selected
based on the typenumber (larger type number is chosen over a
smaller one) ignoring objects that are only scalars. The length of
the sequence is returned in *n*, and an *n* -length array of
:ctype:`PyArrayObject` pointers is the return value (or ``NULL`` if an
error occurs). The returned array must be freed by the caller of
this routine (using :cfunc:`PyDataMem_FREE` ) and all the array objects
in it ``DECREF`` 'd or a memory-leak will occur. The example
template code below shows a typical usage:
.. code-block:: c
   mps = PyArray_ConvertToCommonType(obj, &n);
   if (mps == NULL) {
       return NULL;
   }
   /* ... code that uses the n converted arrays in mps ... */
   /* before returning: */
   for (i = 0; i < n; i++) {
       Py_DECREF(mps[i]);
   }
   PyDataMem_FREE(mps);
   /* ... return ... */
.. cfunction:: char* PyArray_Zero(PyArrayObject* arr)
A pointer to newly created memory of size *arr* ->itemsize that
holds the representation of 0 for that type. The returned pointer,
*ret*, **must be freed** using :cfunc:`PyDataMem_FREE` (ret) when it is
not needed anymore.
.. cfunction:: char* PyArray_One(PyArrayObject* arr)
A pointer to newly created memory of size *arr* ->itemsize that
holds the representation of 1 for that type. The returned pointer,
*ret*, **must be freed** using :cfunc:`PyDataMem_FREE` (ret) when it
is not needed anymore.
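These are useful for comparing or initializing elements of arbitrary
(non-object) type; the allocation must be paired with
:cfunc:`PyDataMem_FREE`. A sketch:

```c
/* Sketch: test whether the first element of arr is zero for its
 * type, by raw byte comparison (not meaningful for object dtypes). */
char *zero = PyArray_Zero(arr);
int first_is_zero;

if (zero == NULL) {
    return NULL;
}
first_is_zero = (memcmp(PyArray_DATA(arr), zero,
                        PyArray_ITEMSIZE(arr)) == 0);
PyDataMem_FREE(zero);
```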
.. cfunction:: int PyArray_ValidType(int typenum)
Returns :cdata:`NPY_TRUE` if *typenum* represents a valid type-number
(builtin or user-defined or character code). Otherwise, this
function returns :cdata:`NPY_FALSE`.
New data types
^^^^^^^^^^^^^^
.. cfunction:: void PyArray_InitArrFuncs(PyArray_ArrFuncs* f)
Initialize all function pointers and members to ``NULL``.
.. cfunction:: int PyArray_RegisterDataType(PyArray_Descr* dtype)
Register a data-type as a new user-defined data type for
arrays. The type must have most of its entries filled in. This is
not always checked and errors can produce segfaults. In
particular, the typeobj member of the ``dtype`` structure must be
filled with a Python type that has a fixed-size element-size that
corresponds to the elsize member of *dtype*. Also the ``f``
member must have the required functions: nonzero, copyswap,
copyswapn, getitem, setitem, and cast (some of the cast functions
may be ``NULL`` if no support is desired). To avoid confusion, you
should choose a unique character typecode but this is not enforced
and not relied on internally.
A user-defined type number is returned that uniquely identifies
the type. A pointer to the new structure can then be obtained from
:cfunc:`PyArray_DescrFromType` using the returned type number. A -1 is
returned if an error occurs. If this *dtype* has already been
registered (checked only by the address of the pointer), then
return the previously-assigned type-number.
.. cfunction:: int PyArray_RegisterCastFunc(PyArray_Descr* descr, int totype, PyArray_VectorUnaryFunc* castfunc)
Register a low-level casting function, *castfunc*, to convert
from the data-type, *descr*, to the given data-type number,
*totype*. Any old casting function is over-written. A ``0`` is
returned on success or a ``-1`` on failure.
.. cfunction:: int PyArray_RegisterCanCast(PyArray_Descr* descr, int totype, PyArray_SCALARKIND scalar)
Register the data-type number, *totype*, as castable from
data-type object, *descr*, of the given *scalar* kind. Use
*scalar* = :cdata:`NPY_NOSCALAR` to register that an array of data-type
*descr* can be cast safely to a data-type whose type_number is
*totype*.
Special functions for PyArray_OBJECT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cfunction:: int PyArray_INCREF(PyArrayObject* op)
Used for an array, *op*, that contains any Python objects. It
increments the reference count of every object in the array
according to the data-type of *op*. A -1 is returned if an error
occurs, otherwise 0 is returned.
.. cfunction:: void PyArray_Item_INCREF(char* ptr, PyArray_Descr* dtype)
A function to INCREF all the objects at the location *ptr*
according to the data-type *dtype*. If *ptr* is the start of a
record with an object at any offset, then this will (recursively)
increment the reference count of all object-like items in the
record.
.. cfunction:: int PyArray_XDECREF(PyArrayObject* op)
Used for an array, *op*, that contains any Python objects. It
decrements the reference count of every object in the array
according to the data-type of *op*. Normal return value is 0. A
-1 is returned if an error occurs.
.. cfunction:: void PyArray_Item_XDECREF(char* ptr, PyArray_Descr* dtype)
A function to XDECREF all the object-like items at the location
*ptr* as recorded in the data-type, *dtype*. This works
recursively so that if ``dtype`` itself has fields with data-types
that contain object-like items, all the object-like fields will be
XDECREF'd.
.. cfunction:: void PyArray_FillObjectArray(PyArrayObject* arr, PyObject* obj)
Fill a newly created array with a single value obj at all
locations in the structure with object data-types. No checking is
performed but *arr* must be of data-type :ctype:`PyArray_OBJECT` and be
single-segment and uninitialized (no previous objects in
position). Use :cfunc:`PyArray_XDECREF` (*arr*) if you need to
decrement all the items in the object array prior to calling this
function.
Array flags
-----------
The ``flags`` attribute of the ``PyArrayObject`` structure contains
important information about the memory used by the array (pointed to
by the data member). This flag information must be kept accurate, or
strange results and even segfaults may result.
There are 6 (binary) flags that describe the memory area used by the
data buffer. These constants are defined in ``arrayobject.h`` and
determine the bit-position of the flag. Python exposes a nice
attribute-based interface as well as a dictionary-like interface for
getting (and, if appropriate, setting) these flags.
Memory areas of all kinds can be pointed to by an ndarray,
necessitating these flags. If you get an arbitrary ``PyArrayObject``
in C-code, you need to be aware of the flags that are set. If you
need to guarantee a certain kind of array (like :cdata:`NPY_C_CONTIGUOUS` and
:cdata:`NPY_BEHAVED`), then pass these requirements into the
PyArray_FromAny function.
Basic Array Flags
^^^^^^^^^^^^^^^^^
An ndarray can have a data segment that is not a simple contiguous
chunk of well-behaved memory you can manipulate. It may not be aligned
with word boundaries (very important on some platforms). It might have
its data in a different byte-order than the machine recognizes. It
might not be writeable. It might be in Fortran-contiguous order. The
array flags are used to indicate what can be said about data
associated with an array.
.. cvar:: NPY_C_CONTIGUOUS
The data area is in C-style contiguous order (last index varies the
fastest).
.. cvar:: NPY_F_CONTIGUOUS
The data area is in Fortran-style contiguous order (first index varies
the fastest).
Notice that contiguous 1-d arrays are always both Fortran
contiguous and C contiguous. Both of these flags can be checked and
are convenience flags only as whether or not an array is
:cdata:`NPY_C_CONTIGUOUS` or :cdata:`NPY_F_CONTIGUOUS` can be determined by the
``strides``, ``dimensions``, and ``itemsize`` attributes.
.. cvar:: NPY_OWNDATA
The data area is owned by this array.
.. cvar:: NPY_ALIGNED
The data area is aligned appropriately (for all strides).
.. cvar:: NPY_WRITEABLE
The data area can be written to.
Notice that the above 3 flags are defined so that a new,
well-behaved array has these flags defined as true.
.. cvar:: NPY_UPDATEIFCOPY
The data area represents a (well-behaved) copy whose information
should be transferred back to the original when this array is deleted.
This is a special flag that is set if this array represents a copy
made because a user required certain flags in
:cfunc:`PyArray_FromAny` and a copy had to be made of some other
array (and the user asked for this flag to be set in such a
situation). The base attribute then points to the "misbehaved"
array (which is set read_only). When the array with this flag set
is deallocated, it will copy its contents back to the "misbehaved"
array (casting if necessary) and will reset the "misbehaved" array
to :cdata:`NPY_WRITEABLE`. If the "misbehaved" array was not
:cdata:`NPY_WRITEABLE` to begin with then :cfunc:`PyArray_FromAny`
would have returned an error because :cdata:`NPY_UPDATEIFCOPY`
would not have been possible.
:cfunc:`PyArray_UpdateFlags` (obj, flags) will update the
``obj->flags`` for ``flags`` which can be any of
:cdata:`NPY_C_CONTIGUOUS`, :cdata:`NPY_F_CONTIGUOUS`, :cdata:`NPY_ALIGNED`,
or :cdata:`NPY_WRITEABLE`.
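The write-back pattern this flag enables looks roughly like the following
sketch (the copy, and hence the write-back, only happens when ``obj`` is not
already well-behaved):

```c
/* Sketch: get a writeable, C-contiguous working array for obj; any
 * modifications are copied back to obj when work is released. */
PyObject *work = PyArray_FROM_OF(obj, NPY_CARRAY | NPY_UPDATEIFCOPY);
if (work == NULL) {
    return NULL;    /* e.g. obj was not writeable to begin with */
}
/* ... modify PyArray_DATA(work) in place ... */
Py_DECREF(work);    /* triggers the write-back if a copy was made */
```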
Combinations of array flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cvar:: NPY_BEHAVED
:cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE`
.. cvar:: NPY_CARRAY
:cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_BEHAVED`
.. cvar:: NPY_CARRAY_RO
:cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_ALIGNED`
.. cvar:: NPY_FARRAY
:cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_BEHAVED`
.. cvar:: NPY_FARRAY_RO
:cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED`
.. cvar:: NPY_DEFAULT
:cdata:`NPY_CARRAY`
.. cvar:: NPY_UPDATE_ALL
:cdata:`NPY_C_CONTIGUOUS` \| :cdata:`NPY_F_CONTIGUOUS` \| :cdata:`NPY_ALIGNED`
Flag-like constants
^^^^^^^^^^^^^^^^^^^
These constants are used in :cfunc:`PyArray_FromAny` (and its macro forms) to
specify desired properties of the new array.
.. cvar:: NPY_FORCECAST
Cast to the desired type, even if it can't be done without losing
information.
.. cvar:: NPY_ENSURECOPY
Make sure the resulting array is a copy of the original.
.. cvar:: NPY_ENSUREARRAY
Make sure the resulting object is an actual ndarray (or bigndarray),
and not a sub-class.
.. cvar:: NPY_NOTSWAPPED
Only used in :cfunc:`PyArray_CheckFromAny` to over-ride the byteorder
of the data-type object passed in.
.. cvar:: NPY_BEHAVED_NS
:cdata:`NPY_ALIGNED` \| :cdata:`NPY_WRITEABLE` \| :cdata:`NPY_NOTSWAPPED`
Flag checking
^^^^^^^^^^^^^
For all of these macros *arr* must be an instance of a (subclass of)
:cdata:`PyArray_Type`, but no checking is done.
.. cfunction:: PyArray_CHKFLAGS(arr, flags)
The first parameter, arr, must be an ndarray or subclass. The
parameter, *flags*, should be an integer consisting of bitwise
combinations of the possible flags an array can have:
:cdata:`NPY_C_CONTIGUOUS`, :cdata:`NPY_F_CONTIGUOUS`,
:cdata:`NPY_OWNDATA`, :cdata:`NPY_ALIGNED`,
:cdata:`NPY_WRITEABLE`, :cdata:`NPY_UPDATEIFCOPY`.
.. cfunction:: PyArray_ISCONTIGUOUS(arr)
Evaluates true if *arr* is C-style contiguous.
.. cfunction:: PyArray_ISFORTRAN(arr)
Evaluates true if *arr* is Fortran-style contiguous.
.. cfunction:: PyArray_ISWRITEABLE(arr)
Evaluates true if the data area of *arr* can be written to.
.. cfunction:: PyArray_ISALIGNED(arr)
Evaluates true if the data area of *arr* is properly aligned on
the machine.
.. cfunction:: PyArray_ISBEHAVED(arr)
Evaluates true if the data area of *arr* is aligned and writeable
and in machine byte-order according to its descriptor.
.. cfunction:: PyArray_ISBEHAVED_RO(arr)
Evaluates true if the data area of *arr* is aligned and in machine
byte-order.
.. cfunction:: PyArray_ISCARRAY(arr)
Evaluates true if the data area of *arr* is C-style contiguous,
and :cfunc:`PyArray_ISBEHAVED` (*arr*) is true.
.. cfunction:: PyArray_ISFARRAY(arr)
Evaluates true if the data area of *arr* is Fortran-style
contiguous and :cfunc:`PyArray_ISBEHAVED` (*arr*) is true.
.. cfunction:: PyArray_ISCARRAY_RO(arr)
Evaluates true if the data area of *arr* is C-style contiguous,
aligned, and in machine byte-order.
.. cfunction:: PyArray_ISFARRAY_RO(arr)
Evaluates true if the data area of *arr* is Fortran-style
contiguous, aligned, and in machine byte-order.
.. cfunction:: PyArray_ISONESEGMENT(arr)
Evaluates true if the data area of *arr* consists of a single
(C-style or Fortran-style) contiguous segment.
.. cfunction:: void PyArray_UpdateFlags(PyArrayObject* arr, int flagmask)
The :cdata:`NPY_C_CONTIGUOUS`, :cdata:`NPY_ALIGNED`, and
:cdata:`NPY_F_CONTIGUOUS` array flags can be "calculated" from the
array object itself. This routine updates one or more of these
flags of *arr* as specified in *flagmask* by performing the
required calculation.
.. warning::
It is important to keep the flags updated (using
:cfunc:`PyArray_UpdateFlags` can help) whenever a manipulation with an
array is performed that might cause them to change. Later
calculations in NumPy that rely on the state of these flags do not
repeat the calculation to update them.
Array method alternative API
----------------------------
Conversion
^^^^^^^^^^
.. cfunction:: PyObject* PyArray_GetField(PyArrayObject* self, PyArray_Descr* dtype, int offset)
Equivalent to :meth:`ndarray.getfield` (*self*, *dtype*, *offset*). Return
a new array of the given *dtype* using the data in the current
array at a specified *offset* in bytes. The *offset* plus the
itemsize of the new array type must be less than *self*
->descr->elsize or an error is raised. The same shape and strides
as the original array are used. Therefore, this function has the
effect of returning a field from a record array. But, it can also
be used to select specific bytes or groups of bytes from any array
type.
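For example, the leading bytes of each element can be viewed as a smaller
integer type. A sketch (it assumes the new type fits within the original
itemsize, and that the *dtype* reference is consumed by the call, as is
common for this API family):

```c
/* Sketch: view the first 4 bytes of each element of self as int32. */
PyArray_Descr *dt = PyArray_DescrFromType(NPY_INT32);
PyObject *field = PyArray_GetField(self, dt, 0);

if (field == NULL) {
    return NULL;    /* e.g. offset + itemsize exceeded the elsize */
}
```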
.. cfunction:: int PyArray_SetField(PyArrayObject* self, PyArray_Descr* dtype, int offset, PyObject* val)
Equivalent to :meth:`ndarray.setfield` (*self*, *val*, *dtype*, *offset*
). Set the field starting at *offset* in bytes and of the given
*dtype* to *val*. The *offset* plus *dtype* ->elsize must be less
than *self* ->descr->elsize or an error is raised. Otherwise, the
*val* argument is converted to an array and copied into the field
pointed to. If necessary, the elements of *val* are repeated to
fill the destination array, but the number of elements in the
destination must be an integer multiple of the number of elements
in *val*.
.. cfunction:: PyObject* PyArray_Byteswap(PyArrayObject* self, Bool inplace)
Equivalent to :meth:`ndarray.byteswap` (*self*, *inplace*). Return an array
whose data area is byteswapped. If *inplace* is non-zero, then do
the byteswap inplace and return a reference to self. Otherwise,
create a byteswapped copy and leave self unchanged.
.. cfunction:: PyObject* PyArray_NewCopy(PyArrayObject* old, NPY_ORDER order)
Equivalent to :meth:`ndarray.copy` (*self*, *fortran*). Make a copy of the
*old* array. The returned array is always aligned and writeable
with data interpreted the same as the old array. If *order* is
:cdata:`NPY_CORDER`, then a C-style contiguous array is returned. If
*order* is :cdata:`NPY_FORTRANORDER`, then a Fortran-style contiguous
array is returned. If *order* is :cdata:`NPY_ANYORDER`, then the array
returned is Fortran-style contiguous only if the old one is;
otherwise, it is C-style contiguous.
.. cfunction:: PyObject* PyArray_ToList(PyArrayObject* self)
Equivalent to :meth:`ndarray.tolist` (*self*). Return a nested Python list
from *self*.
.. cfunction:: PyObject* PyArray_ToString(PyArrayObject* self, NPY_ORDER order)
Equivalent to :meth:`ndarray.tostring` (*self*, *order*). Return the bytes
of this array in a Python string.
.. cfunction:: PyObject* PyArray_ToFile(PyArrayObject* self, FILE* fp, char* sep, char* format)
Write the contents of *self* to the file pointer *fp* in C-style
contiguous fashion. Write the data as binary bytes if *sep* is the
string ""or ``NULL``. Otherwise, write the contents of *self* as
text using the *sep* string as the item separator. Each item will
be printed to the file. If the *format* string is not ``NULL`` or
"", then it is a Python print statement format string showing how
the items are to be written.
.. cfunction:: int PyArray_Dump(PyObject* self, PyObject* file, int protocol)
Pickle the object in *self* to the given *file* (either a string
or a Python file object). If *file* is a Python string it is
considered to be the name of a file which is then opened in binary
mode. The given *protocol* is used (if *protocol* is negative, the
highest available is used). This is a simple wrapper around
cPickle.dump(*self*, *file*, *protocol*).
.. cfunction:: PyObject* PyArray_Dumps(PyObject* self, int protocol)
Pickle the object in *self* to a Python string and return it. Use
the Pickle *protocol* provided (or the highest available if
*protocol* is negative).
.. cfunction:: int PyArray_FillWithScalar(PyArrayObject* arr, PyObject* obj)
Fill the array, *arr*, with the given scalar object, *obj*. The
object is first converted to the data type of *arr*, and then
copied into every location. A -1 is returned if an error occurs,
otherwise 0 is returned.
.. cfunction:: PyObject* PyArray_View(PyArrayObject* self, PyArray_Descr* dtype)
Equivalent to :meth:`ndarray.view` (*self*, *dtype*). Return a new view of
the array *self* as possibly a different data-type, *dtype*. If
*dtype* is ``NULL``, then the returned array will have the same
data type as *self*. The new data-type must be consistent with
the size of *self*. Either the itemsizes must be identical, or
*self* must be single-segment and the total number of bytes must
be the same. In the latter case the dimensions of the returned
array will be altered in the last (or first for Fortran-style
contiguous arrays) dimension. The data area of the returned array
and self is exactly the same.
Shape Manipulation
^^^^^^^^^^^^^^^^^^
.. cfunction:: PyObject* PyArray_Newshape(PyArrayObject* self, PyArray_Dims* newshape)
Result will be a new array (pointing to the same memory location
as *self* if possible), but having a shape given by *newshape*
. If the new shape is not compatible with the strides of *self*,
then a copy of the array with the new specified shape will be
returned.
.. cfunction:: PyObject* PyArray_Reshape(PyArrayObject* self, PyObject* shape)
Equivalent to :meth:`ndarray.reshape` (*self*, *shape*) where *shape* is a
sequence. Converts *shape* to a :ctype:`PyArray_Dims` structure and
calls :cfunc:`PyArray_Newshape` internally.
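As a short sketch of how these calls fit together, the following program
reshapes a 1-d array to 2x3 (the embedded-interpreter setup is an assumption
made only to keep the example self-contained; inside an extension module the
``import_array`` boilerplate differs):

.. code-block:: c

   #include <Python.h>
   #include <numpy/arrayobject.h>
   #include <stdio.h>

   int main(void)
   {
       Py_Initialize();
       if (_import_array() < 0) { PyErr_Print(); return 1; }

       /* A 1-d array of six doubles: 0, 1, 2, 3, 4, 5 */
       npy_intp n = 6;
       PyObject *flat = PyArray_SimpleNew(1, &n, NPY_DOUBLE);
       double *d = (double *)PyArray_DATA((PyArrayObject *)flat);
       for (int i = 0; i < 6; i++) d[i] = i;

       /* PyArray_Reshape converts the tuple to a PyArray_Dims structure
          and calls PyArray_Newshape; no copy is needed here because
          flat is contiguous. */
       PyObject *shape = Py_BuildValue("(ii)", 2, 3);
       PyObject *mat = PyArray_Reshape((PyArrayObject *)flat, shape);

       printf("%d %ld %ld\n",
              PyArray_NDIM((PyArrayObject *)mat),
              (long)PyArray_DIM((PyArrayObject *)mat, 0),
              (long)PyArray_DIM((PyArrayObject *)mat, 1));  /* 2 2 3 */

       Py_DECREF(shape); Py_DECREF(mat); Py_DECREF(flat);
       Py_Finalize();
       return 0;
   }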
.. cfunction:: PyObject* PyArray_Squeeze(PyArrayObject* self)
Equivalent to :meth:`ndarray.squeeze` (*self*). Return a new view of *self*
with all of the dimensions of length 1 removed from the shape.
.. warning::
matrix objects are always 2-dimensional. Therefore,
:cfunc:`PyArray_Squeeze` has no effect on arrays of matrix sub-class.
.. cfunction:: PyObject* PyArray_SwapAxes(PyArrayObject* self, int a1, int a2)
Equivalent to :meth:`ndarray.swapaxes` (*self*, *a1*, *a2*). The returned
array is a new view of the data in *self* with the given axes,
*a1* and *a2*, swapped.
.. cfunction:: PyObject* PyArray_Resize(PyArrayObject* self, PyArray_Dims* newshape, int refcheck, NPY_ORDER fortran)
Equivalent to :meth:`ndarray.resize` (*self*, *newshape*,
refcheck= *refcheck*, order= *fortran*). This function only works on
single-segment arrays. It changes the shape of *self* inplace and
will reallocate the memory for *self* if *newshape* has a
different total number of elements than the old shape. If
reallocation is necessary, then *self* must own its data, have
*self* - ``>base==NULL``, have *self* - ``>weakrefs==NULL``, and
(unless refcheck is 0) not be referenced by any other array. A
reference to the new array is returned. The fortran argument can
be :cdata:`NPY_ANYORDER`, :cdata:`NPY_CORDER`, or
:cdata:`NPY_FORTRANORDER`. It currently has no effect. Eventually
it could be used to determine how the resize operation should view
the data when constructing a differently-dimensioned array.
.. cfunction:: PyObject* PyArray_Transpose(PyArrayObject* self, PyArray_Dims* permute)
Equivalent to :meth:`ndarray.transpose` (*self*, *permute*). Permute the
axes of the ndarray object *self* according to the data structure
*permute* and return the result. If *permute* is ``NULL``, then
the resulting array has its axes reversed. For example, if *self*
has shape :math:`10\times20\times30`, and *permute* ``.ptr`` is
(0,2,1), the shape of the result is :math:`10\times30\times20.` If
*permute* is ``NULL``, the shape of the result is
:math:`30\times20\times10.`
.. cfunction:: PyObject* PyArray_Flatten(PyArrayObject* self, NPY_ORDER order)
Equivalent to :meth:`ndarray.flatten` (*self*, *order*). Return a 1-d copy
of the array. If *order* is :cdata:`NPY_FORTRANORDER` the elements are
scanned out in Fortran order (first-dimension varies the
fastest). If *order* is :cdata:`NPY_CORDER`, the elements of ``self``
are scanned in C-order (last dimension varies the fastest). If
*order* :cdata:`NPY_ANYORDER`, then the result of
:cfunc:`PyArray_ISFORTRAN` (*self*) is used to determine which order
to flatten.
.. cfunction:: PyObject* PyArray_Ravel(PyArrayObject* self, NPY_ORDER order)
Equivalent to *self*.ravel(*order*). Same basic functionality
as :cfunc:`PyArray_Flatten` (*self*, *order*) except if *order* is 0
and *self* is C-style contiguous, the shape is altered but no copy
is performed.
Item selection and manipulation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cfunction:: PyObject* PyArray_TakeFrom(PyArrayObject* self, PyObject* indices, int axis, PyArrayObject* ret, NPY_CLIPMODE clipmode)
Equivalent to :meth:`ndarray.take` (*self*, *indices*, *axis*, *ret*,
*clipmode*) except *axis* =None in Python is obtained by setting
*axis* = :cdata:`NPY_MAXDIMS` in C. Extract the items from self
indicated by the integer-valued *indices* along the given *axis.*
The clipmode argument can be :cdata:`NPY_RAISE`, :cdata:`NPY_WRAP`, or
:cdata:`NPY_CLIP` to indicate what to do with out-of-bound indices. The
*ret* argument can specify an output array rather than having one
created internally.
.. cfunction:: PyObject* PyArray_PutTo(PyArrayObject* self, PyObject* values, PyObject* indices, NPY_CLIPMODE clipmode)
Equivalent to *self*.put(*values*, *indices*, *clipmode*
). Put *values* into *self* at the corresponding (flattened)
*indices*. If *values* is too small it will be repeated as
necessary.
.. cfunction:: PyObject* PyArray_PutMask(PyArrayObject* self, PyObject* values, PyObject* mask)
Place the *values* in *self* wherever corresponding positions
(using a flattened context) in *mask* are true. The *mask* and
*self* arrays must have the same total number of elements. If
*values* is too small, it will be repeated as necessary.
.. cfunction:: PyObject* PyArray_Repeat(PyArrayObject* self, PyObject* op, int axis)
Equivalent to :meth:`ndarray.repeat` (*self*, *op*, *axis*). Copy the
elements of *self*, *op* times along the given *axis*. Either
*op* is a scalar integer or a sequence of length *self*
->dimensions[ *axis* ] indicating how many times to repeat each
item along the axis.
.. cfunction:: PyObject* PyArray_Choose(PyArrayObject* self, PyObject* op, PyArrayObject* ret, NPY_CLIPMODE clipmode)
Equivalent to :meth:`ndarray.choose` (*self*, *op*, *ret*, *clipmode*).
Create a new array by selecting elements from the sequence of
arrays in *op* based on the integer values in *self*. The arrays
must all be broadcastable to the same shape and the entries in
*self* should be between 0 and len(*op*). The output is placed
in *ret* unless it is ``NULL`` in which case a new output is
created. The *clipmode* argument determines behavior for when
entries in *self* are not between 0 and len(*op*).
.. cvar:: NPY_RAISE
raise a ValueError;
.. cvar:: NPY_WRAP
wrap values < 0 by adding len(*op*) and values >=len(*op*)
by subtracting len(*op*) until they are in range;
.. cvar:: NPY_CLIP
all values are clipped to the region [0, len(*op*) ).
.. cfunction:: PyObject* PyArray_Sort(PyArrayObject* self, int axis)
Equivalent to :meth:`ndarray.sort` (*self*, *axis*). Return an array with
the items of *self* sorted along *axis*.
.. cfunction:: PyObject* PyArray_ArgSort(PyArrayObject* self, int axis)
Equivalent to :meth:`ndarray.argsort` (*self*, *axis*). Return an array of
indices such that selection of these indices along the given
``axis`` would return a sorted version of *self*. If *self*
->descr is a data-type with fields defined, then
self->descr->names is used to determine the sort order. A
comparison where the first field is equal will use the second
field and so on. To alter the sort order of a record array, create
a new data-type with a different order of names and construct a
view of the array with that new data-type.
.. cfunction:: PyObject* PyArray_LexSort(PyObject* sort_keys, int axis)
Given a sequence of arrays (*sort_keys*) of the same shape,
return an array of indices (similar to :cfunc:`PyArray_ArgSort` (...))
that would sort the arrays lexicographically. A lexicographic sort
specifies that when two keys are found to be equal, the order is
based on comparison of subsequent keys. A merge sort (which leaves
equal entries unmoved) is required to be defined for the
types. The sort is accomplished by sorting the indices first using
the first *sort_key* and then using the second *sort_key* and so
forth. This is equivalent to the lexsort(*sort_keys*, *axis*)
Python command. Because of the way the merge-sort works, be sure
to understand the order the *sort_keys* must be in (reversed from
the order you would use when comparing two elements).
If these arrays are all collected in a record array, then
:cfunc:`PyArray_Sort` (...) can also be used to sort the array
directly.
.. cfunction:: PyObject* PyArray_SearchSorted(PyArrayObject* self, PyObject* values)
Equivalent to :meth:`ndarray.searchsorted` (*self*, *values*). Assuming
*self* is a 1-d array in ascending order representing bin
boundaries then the output is an array the same shape as *values*
of bin numbers, giving the bin into which each item in *values*
would be placed. No checking is done on whether or not self is in
ascending order.
.. cfunction:: PyObject* PyArray_Diagonal(PyArrayObject* self, int offset, int axis1, int axis2)
Equivalent to :meth:`ndarray.diagonal` (*self*, *offset*, *axis1*, *axis2*
). Return the *offset* diagonals of the 2-d arrays defined by
*axis1* and *axis2*.
.. cfunction:: npy_intp PyArray_CountNonzero(PyArrayObject* self)
.. versionadded:: 1.6
Counts the number of non-zero elements in the array object *self*.
.. cfunction:: PyObject* PyArray_Nonzero(PyArrayObject* self)
Equivalent to :meth:`ndarray.nonzero` (*self*). Returns a tuple of index
arrays that select elements of *self* that are nonzero. If (nd=
:cfunc:`PyArray_NDIM` ( ``self`` ))==1, then a single index array is
returned. The index arrays have data type :cdata:`NPY_INTP`. If a
tuple is returned (nd :math:`\neq` 1), then its length is nd.
.. cfunction:: PyObject* PyArray_Compress(PyArrayObject* self, PyObject* condition, int axis, PyArrayObject* out)
Equivalent to :meth:`ndarray.compress` (*self*, *condition*, *axis*
). Return the elements along *axis* corresponding to elements of
*condition* that are true.
Calculation
^^^^^^^^^^^
.. tip::
Pass in :cdata:`NPY_MAXDIMS` for axis in order to achieve the same
effect that is obtained by passing in *axis* = :const:`None` in Python
(treating the array as a 1-d array).
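For example, a full reduction with :cfunc:`PyArray_Sum` can be sketched as
below (the embedded-interpreter setup is an assumption made only so the
example is self-contained):

.. code-block:: c

   #include <Python.h>
   #include <numpy/arrayobject.h>
   #include <stdio.h>

   int main(void)
   {
       Py_Initialize();
       if (_import_array() < 0) { PyErr_Print(); return 1; }

       /* A 2x3 array holding 1..6 */
       npy_intp dims[2] = {2, 3};
       PyObject *arr = PyArray_SimpleNew(2, dims, NPY_DOUBLE);
       double *d = (double *)PyArray_DATA((PyArrayObject *)arr);
       for (int i = 0; i < 6; i++) d[i] = i + 1;

       /* axis = NPY_MAXDIMS plays the role of axis=None in Python:
          the reduction runs over the flattened array.  NPY_NOTYPE
          requests the default rtype. */
       PyObject *total = PyArray_Sum((PyArrayObject *)arr, NPY_MAXDIMS,
                                     NPY_NOTYPE, NULL);
       printf("%g\n", PyFloat_AsDouble(total));  /* 21 */

       Py_DECREF(total); Py_DECREF(arr);
       Py_Finalize();
       return 0;
   }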
.. cfunction:: PyObject* PyArray_ArgMax(PyArrayObject* self, int axis)
Equivalent to :meth:`ndarray.argmax` (*self*, *axis*). Return the index of
the largest element of *self* along *axis*.
.. cfunction:: PyObject* PyArray_ArgMin(PyArrayObject* self, int axis)
Equivalent to :meth:`ndarray.argmin` (*self*, *axis*). Return the index of
the smallest element of *self* along *axis*.
.. cfunction:: PyObject* PyArray_Max(PyArrayObject* self, int axis, PyArrayObject* out)
Equivalent to :meth:`ndarray.max` (*self*, *axis*). Return the largest
element of *self* along the given *axis*.
.. cfunction:: PyObject* PyArray_Min(PyArrayObject* self, int axis, PyArrayObject* out)
Equivalent to :meth:`ndarray.min` (*self*, *axis*). Return the smallest
element of *self* along the given *axis*.
.. cfunction:: PyObject* PyArray_Ptp(PyArrayObject* self, int axis, PyArrayObject* out)
Equivalent to :meth:`ndarray.ptp` (*self*, *axis*). Return the difference
between the largest element of *self* along *axis* and the
smallest element of *self* along *axis*.
.. note::
The rtype argument specifies the data-type the reduction should
take place over. This is important if the data-type of the array
is not "large" enough to handle the output. By default, all
integer data-types are made at least as large as :cdata:`NPY_LONG`
for the "add" and "multiply" ufuncs (which form the basis for
mean, sum, cumsum, prod, and cumprod functions).
.. cfunction:: PyObject* PyArray_Mean(PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
Equivalent to :meth:`ndarray.mean` (*self*, *axis*, *rtype*). Returns the
mean of the elements along the given *axis*, using the enumerated
type *rtype* as the data type to sum in. Default sum behavior is
obtained using :cdata:`PyArray_NOTYPE` for *rtype*.
.. cfunction:: PyObject* PyArray_Trace(PyArrayObject* self, int offset, int axis1, int axis2, int rtype, PyArrayObject* out)
Equivalent to :meth:`ndarray.trace` (*self*, *offset*, *axis1*, *axis2*,
*rtype*). Return the sum (using *rtype* as the data type of
summation) over the *offset* diagonal elements of the 2-d arrays
defined by *axis1* and *axis2* variables. A positive offset
chooses diagonals above the main diagonal. A negative offset
selects diagonals below the main diagonal.
.. cfunction:: PyObject* PyArray_Clip(PyArrayObject* self, PyObject* min, PyObject* max)
Equivalent to :meth:`ndarray.clip` (*self*, *min*, *max*). Clip an array,
*self*, so that values larger than *max* are fixed to *max* and
values less than *min* are fixed to *min*.
.. cfunction:: PyObject* PyArray_Conjugate(PyArrayObject* self)
Equivalent to :meth:`ndarray.conjugate` (*self*).
Return the complex conjugate of *self*. If *self* is not of a
complex data type, then return *self* with a new reference.
.. cfunction:: PyObject* PyArray_Round(PyArrayObject* self, int decimals, PyArrayObject* out)
Equivalent to :meth:`ndarray.round` (*self*, *decimals*, *out*). Returns
the array with elements rounded to the nearest decimal place. The
decimal place is defined as the :math:`10^{-\textrm{decimals}}`
digit so that negative *decimals* cause rounding to the nearest 10's,
100's, etc. If *out* is ``NULL``, then the output array is created;
otherwise the output is placed in *out*, which must be of the correct
size and type.
.. cfunction:: PyObject* PyArray_Std(PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
Equivalent to :meth:`ndarray.std` (*self*, *axis*, *rtype*). Return the
standard deviation using data along *axis* converted to data type
*rtype*.
.. cfunction:: PyObject* PyArray_Sum(PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
Equivalent to :meth:`ndarray.sum` (*self*, *axis*, *rtype*). Return 1-d
vector sums of elements in *self* along *axis*. Perform the sum
after converting data to data type *rtype*.
.. cfunction:: PyObject* PyArray_CumSum(PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
Equivalent to :meth:`ndarray.cumsum` (*self*, *axis*, *rtype*). Return
cumulative 1-d sums of elements in *self* along *axis*. Perform
the sum after converting data to data type *rtype*.
.. cfunction:: PyObject* PyArray_Prod(PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
Equivalent to :meth:`ndarray.prod` (*self*, *axis*, *rtype*). Return 1-d
products of elements in *self* along *axis*. Perform the product
after converting data to data type *rtype*.
.. cfunction:: PyObject* PyArray_CumProd(PyArrayObject* self, int axis, int rtype, PyArrayObject* out)
Equivalent to :meth:`ndarray.cumprod` (*self*, *axis*, *rtype*). Return
1-d cumulative products of elements in ``self`` along ``axis``.
Perform the product after converting data to data type ``rtype``.
.. cfunction:: PyObject* PyArray_All(PyArrayObject* self, int axis, PyArrayObject* out)
Equivalent to :meth:`ndarray.all` (*self*, *axis*). Return an array with
True elements for every 1-d sub-array of ``self`` defined by
``axis`` in which all the elements are True.
.. cfunction:: PyObject* PyArray_Any(PyArrayObject* self, int axis, PyArrayObject* out)
Equivalent to :meth:`ndarray.any` (*self*, *axis*). Return an array with
True elements for every 1-d sub-array of *self* defined by *axis*
in which any of the elements are True.
Functions
---------
Array Functions
^^^^^^^^^^^^^^^
.. cfunction:: int PyArray_AsCArray(PyObject** op, void* ptr, npy_intp* dims, int nd, int typenum, int itemsize)
Sometimes it is useful to access a multidimensional array as a
C-style multi-dimensional array so that algorithms can be
implemented using C's a[i][j][k] syntax. This routine returns a
pointer, *ptr*, that simulates this kind of C-style array, for
1-, 2-, and 3-d ndarrays.
:param op:
The address to any Python object. This Python object will be replaced
with an equivalent well-behaved, C-style contiguous, ndarray of the
given data type specified by the last two arguments. Be sure that
stealing a reference in this way to the input object is justified.
:param ptr:
The address to a (ctype* for 1-d, ctype** for 2-d or ctype*** for 3-d)
variable where ctype is the equivalent C-type for the data type. On
return, *ptr* will be addressable as a 1-d, 2-d, or 3-d array.
:param dims:
An output array that contains the shape of the array object. This
array gives boundaries on any looping that will take place.
:param nd:
The dimensionality of the array (1, 2, or 3).
:param typenum:
The expected data type of the array.
:param itemsize:
This argument is only needed when *typenum* represents a
flexible array. Otherwise it should be 0.
.. note::
The simulation of a C-style array is not complete for 2-d and 3-d
arrays. For example, the simulated arrays of pointers cannot be passed
to subroutines expecting specific, statically-defined 2-d and 3-d
arrays. To pass to functions requiring those kind of inputs, you must
statically define the required array and copy data.
.. cfunction:: int PyArray_Free(PyObject* op, void* ptr)
Must be called with the same objects and memory locations returned
from :cfunc:`PyArray_AsCArray` (...). This function cleans up memory
that otherwise would get leaked.
.. cfunction:: PyObject* PyArray_Concatenate(PyObject* obj, int axis)
Join the sequence of objects in *obj* together along *axis* into a
single array. If the dimensions or types are not compatible an
error is raised.
.. cfunction:: PyObject* PyArray_InnerProduct(PyObject* obj1, PyObject* obj2)
Compute a product-sum over the last dimensions of *obj1* and
*obj2*. Neither array is conjugated.
.. cfunction:: PyObject* PyArray_MatrixProduct(PyObject* obj1, PyObject* obj)
Compute a product-sum over the last dimension of *obj1* and the
second-to-last dimension of *obj2*. For 2-d arrays this is a
matrix-product. Neither array is conjugated.
.. cfunction:: PyObject* PyArray_MatrixProduct2(PyObject* obj1, PyObject* obj, PyObject* out)
.. versionadded:: 1.6
Same as PyArray_MatrixProduct, but store the result in *out*. The
output array must have the correct shape, type, and be
C-contiguous, or an exception is raised.
.. cfunction:: PyObject* PyArray_EinsteinSum(char* subscripts, npy_intp nop, PyArrayObject** op_in, PyArray_Descr* dtype, NPY_ORDER order, NPY_CASTING casting, PyArrayObject* out)
.. versionadded:: 1.6
Applies the einstein summation convention to the array operands
provided, returning a new array or placing the result in *out*.
The string in *subscripts* is a comma separated list of index
letters. The number of operands is in *nop*, and *op_in* is an
array containing those operands. The data type of the output can
be forced with *dtype*, the output order can be forced with *order*
(:cdata:`NPY_KEEPORDER` is recommended), and when *dtype* is specified,
*casting* indicates how permissive the data conversion should be.
See the :func:`einsum` function for more details.
.. cfunction:: PyObject* PyArray_CopyAndTranspose(PyObject* op)
A specialized copy and transpose function that works only for 2-d
arrays. The returned array is a transposed copy of *op*.
.. cfunction:: PyObject* PyArray_Correlate(PyObject* op1, PyObject* op2, int mode)
Compute the 1-d correlation of the 1-d arrays *op1* and *op2*
. The correlation is computed at each output point by multiplying
*op1* by a shifted version of *op2* and summing the result. As a
result of the shift, needed values outside of the defined range of
*op1* and *op2* are interpreted as zero. The mode determines how
many shifts to return: 0 - return only shifts that did not need to
assume zero values; 1 - return an object that is the same size as
*op1*; 2 - return all possible shifts (any overlap at all is
accepted).
.. rubric:: Notes
This does not compute the usual correlation: if op2 is larger than op1, the
arguments are swapped, and the conjugate is never taken for complex arrays.
See PyArray_Correlate2 for the usual signal processing correlation.
.. cfunction:: PyObject* PyArray_Correlate2(PyObject* op1, PyObject* op2, int mode)
Updated version of PyArray_Correlate, which uses the usual definition of
correlation for 1d arrays. The correlation is computed at each output point
by multiplying *op1* by a shifted version of *op2* and summing the result.
As a result of the shift, needed values outside of the defined range of
*op1* and *op2* are interpreted as zero. The mode determines how many
shifts to return: 0 - return only shifts that did not need to assume zero
values; 1 - return an object that is the same size as *op1*; 2 - return all
possible shifts (any overlap at all is accepted).
.. rubric:: Notes
Compute z as follows::
z[k] = sum_n op1[n] * conj(op2[n+k])
.. cfunction:: PyObject* PyArray_Where(PyObject* condition, PyObject* x, PyObject* y)
If both ``x`` and ``y`` are ``NULL``, then return
:cfunc:`PyArray_Nonzero` (*condition*). Otherwise, both *x* and *y*
must be given and the object returned is shaped like *condition*
and has elements of *x* and *y* where *condition* is respectively
True or False.
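A small sketch of :cfunc:`PyArray_Where`; note that the inputs need not be
ndarrays already, since any array-like objects are converted internally
(the embedded-interpreter setup is an assumption for self-containment, and
reading the result through ``long *`` assumes a platform where the default
integer type is a C long):

.. code-block:: c

   #include <Python.h>
   #include <numpy/arrayobject.h>
   #include <stdio.h>

   int main(void)
   {
       Py_Initialize();
       if (_import_array() < 0) { PyErr_Print(); return 1; }

       PyObject *cond = Py_BuildValue("[iii]", 1, 0, 1);
       PyObject *x = Py_BuildValue("[iii]", 10, 20, 30);
       PyObject *y = Py_BuildValue("[iii]", -1, -2, -3);

       /* Selects from x where cond is true, from y elsewhere. */
       PyObject *res = PyArray_Where(cond, x, y);
       long *r = (long *)PyArray_DATA((PyArrayObject *)res);
       printf("%ld %ld %ld\n", r[0], r[1], r[2]);  /* 10 -2 30 */

       Py_DECREF(cond); Py_DECREF(x); Py_DECREF(y); Py_DECREF(res);
       Py_Finalize();
       return 0;
   }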
Other functions
^^^^^^^^^^^^^^^
.. cfunction:: Bool PyArray_CheckStrides(int elsize, int nd, npy_intp numbytes, npy_intp* dims, npy_intp* newstrides)
Determine if *newstrides* is a strides array consistent with the
memory of an *nd* -dimensional array with shape ``dims`` and
element-size, *elsize*. The *newstrides* array is checked to see
if jumping by the provided number of bytes in each direction will
ever mean jumping more than *numbytes* which is the assumed size
of the available memory segment. If *numbytes* is 0, then an
equivalent *numbytes* is computed assuming *nd*, *dims*, and
*elsize* refer to a single-segment array. Return :cdata:`NPY_TRUE` if
*newstrides* is acceptable, otherwise return :cdata:`NPY_FALSE`.
.. cfunction:: npy_intp PyArray_MultiplyList(npy_intp* seq, int n)
.. cfunction:: int PyArray_MultiplyIntList(int* seq, int n)
Both of these routines multiply an *n* -length array, *seq*, of
integers and return the result. No overflow checking is performed.
.. cfunction:: int PyArray_CompareLists(npy_intp* l1, npy_intp* l2, int n)
Given two *n* -length arrays of integers, *l1*, and *l2*, return
1 if the lists are identical; otherwise, return 0.
Array Iterators
---------------
As of NumPy 1.6, these array iterators are superseded by
the new array iterator, :ctype:`NpyIter`.
An array iterator is a simple way to access the elements of an
N-dimensional array quickly and efficiently. Section `2
<#sec-array-iterator>`__ provides more description and examples of
this useful approach to looping over an array.
.. cfunction:: PyObject* PyArray_IterNew(PyObject* arr)
Return an array iterator object from the array, *arr*. This is
equivalent to *arr*. **flat**. The array iterator object makes
it easy to loop over an N-dimensional non-contiguous array in
C-style contiguous fashion.
.. cfunction:: PyObject* PyArray_IterAllButAxis(PyObject* arr, int* axis)
Return an array iterator that will iterate over all axes but the
one provided in *\*axis*. The returned iterator cannot be used
with :cfunc:`PyArray_ITER_GOTO1D`. This iterator could be used to
write something similar to what ufuncs do wherein the loop over
the largest axis is done by a separate sub-routine. If *\*axis* is
negative then *\*axis* will be set to the axis having the smallest
stride and that axis will be used.
.. cfunction:: PyObject *PyArray_BroadcastToShape(PyObject* arr, npy_intp *dimensions, int nd)
Return an array iterator that is broadcast to iterate as an array
of the shape provided by *dimensions* and *nd*.
.. cfunction:: int PyArrayIter_Check(PyObject* op)
Evaluates true if *op* is an array iterator (or instance of a
subclass of the array iterator type).
.. cfunction:: void PyArray_ITER_RESET(PyObject* iterator)
Reset an *iterator* to the beginning of the array.
.. cfunction:: void PyArray_ITER_NEXT(PyObject* iterator)
Increment the index and the dataptr members of the *iterator* to
point to the next element of the array. If the array is not
(C-style) contiguous, also increment the N-dimensional coordinates
array.
.. cfunction:: void *PyArray_ITER_DATA(PyObject* iterator)
A pointer to the current element of the array.
.. cfunction:: void PyArray_ITER_GOTO(PyObject* iterator, npy_intp* destination)
Set the *iterator* index, dataptr, and coordinates members to the
location in the array indicated by the N-dimensional c-array,
*destination*, which must have size at least *iterator*
->nd_m1+1.
.. cfunction:: void PyArray_ITER_GOTO1D(PyObject* iterator, npy_intp index)
Set the *iterator* index and dataptr to the location in the array
indicated by the integer *index* which points to an element in the
C-styled flattened array.
.. cfunction:: int PyArray_ITER_NOTDONE(PyObject* iterator)
Evaluates TRUE as long as the iterator has not looped through all of
the elements, otherwise it evaluates FALSE.
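Putting the iterator macros together, a sketch of a complete loop that sums
every element of an array in C-contiguous order (the embedded-interpreter
setup is an assumption made only to keep the example self-contained):

.. code-block:: c

   #include <Python.h>
   #include <numpy/arrayobject.h>
   #include <stdio.h>

   int main(void)
   {
       Py_Initialize();
       if (_import_array() < 0) { PyErr_Print(); return 1; }

       /* A 2x3 array holding 0..5 */
       npy_intp dims[2] = {2, 3};
       PyObject *arr = PyArray_SimpleNew(2, dims, NPY_DOUBLE);
       double *d = (double *)PyArray_DATA((PyArrayObject *)arr);
       for (int i = 0; i < 6; i++) d[i] = i;

       /* The iterator visits elements in C order even for
          non-contiguous arrays. */
       PyObject *it = PyArray_IterNew(arr);
       double total = 0.0;
       while (PyArray_ITER_NOTDONE(it)) {
           total += *(double *)PyArray_ITER_DATA(it);
           PyArray_ITER_NEXT(it);
       }
       printf("%g\n", total);  /* 15 */

       Py_DECREF(it); Py_DECREF(arr);
       Py_Finalize();
       return 0;
   }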
Broadcasting (multi-iterators)
------------------------------
.. cfunction:: PyObject* PyArray_MultiIterNew(int num, ...)
A simplified interface to broadcasting. This function takes the
number of arrays to broadcast and then *num* extra ( :ctype:`PyObject *`
) arguments. These arguments are converted to arrays and iterators
are created. :cfunc:`PyArray_Broadcast` is then called on the resulting
multi-iterator object. The resulting, broadcasted multi-iterator
object is then returned. A broadcasted operation can then be
performed using a single loop and using
:cfunc:`PyArray_MultiIter_NEXT` (...).
.. cfunction:: void PyArray_MultiIter_RESET(PyObject* multi)
Reset all the iterators to the beginning in a multi-iterator
object, *multi*.
.. cfunction:: void PyArray_MultiIter_NEXT(PyObject* multi)
Advance each iterator in a multi-iterator object, *multi*, to its
next (broadcasted) element.
.. cfunction:: void *PyArray_MultiIter_DATA(PyObject* multi, int i)
Return the data-pointer of the *i* :math:`^{\textrm{th}}` iterator
in a multi-iterator object.
.. cfunction:: void PyArray_MultiIter_NEXTi(PyObject* multi, int i)
Advance the pointer of only the *i* :math:`^{\textrm{th}}` iterator.
.. cfunction:: void PyArray_MultiIter_GOTO(PyObject* multi, npy_intp* destination)
Advance each iterator in a multi-iterator object, *multi*, to the
given :math:`N` -dimensional *destination* where :math:`N` is the
number of dimensions in the broadcasted array.
.. cfunction:: void PyArray_MultiIter_GOTO1D(PyObject* multi, npy_intp index)
Advance each iterator in a multi-iterator object, *multi*, to the
corresponding location of the *index* into the flattened
broadcasted array.
.. cfunction:: int PyArray_MultiIter_NOTDONE(PyObject* multi)
Evaluates TRUE as long as the multi-iterator has not looped
through all of the elements (of the broadcasted result), otherwise
it evaluates FALSE.
.. cfunction:: int PyArray_Broadcast(PyArrayMultiIterObject* mit)
This function encapsulates the broadcasting rules. The *mit*
container should already contain iterators for all the arrays that
need to be broadcast. On return, these iterators will be adjusted
so that iteration over each simultaneously will accomplish the
broadcasting. A negative number is returned if an error occurs.
.. cfunction:: int PyArray_RemoveSmallest(PyArrayMultiIterObject* mit)
This function takes a multi-iterator object that has been
previously "broadcasted," finds the dimension with the smallest
"sum of strides" in the broadcasted result and adapts all the
iterators so as not to iterate over that dimension (by effectively
making them of length-1 in that dimension). The corresponding
dimension is returned unless *mit* ->nd is 0, then -1 is
returned. This function is useful for constructing ufunc-like
routines that broadcast their inputs correctly and then call a
strided 1-d version of the routine as the inner-loop. This 1-d
version is usually optimized for speed and for this reason the
loop should be performed over the axis that won't require large
stride jumps.
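A sketch of a broadcasted loop over a (2,1) array and a (3,) array using the
multi-iterator API; the pair broadcasts to shape (2,3), just as at the Python
level (the embedded-interpreter setup is an assumption for self-containment):

.. code-block:: c

   #include <Python.h>
   #include <numpy/arrayobject.h>
   #include <stdio.h>

   int main(void)
   {
       Py_Initialize();
       if (_import_array() < 0) { PyErr_Print(); return 1; }

       npy_intp d1[2] = {2, 1};
       npy_intp d2[1] = {3};
       PyObject *a = PyArray_SimpleNew(2, d1, NPY_DOUBLE);
       PyObject *b = PyArray_SimpleNew(1, d2, NPY_DOUBLE);
       double *pa = (double *)PyArray_DATA((PyArrayObject *)a);
       double *pb = (double *)PyArray_DATA((PyArrayObject *)b);
       pa[0] = 10; pa[1] = 20;
       pb[0] = 1; pb[1] = 2; pb[2] = 3;

       /* PyArray_MultiIterNew broadcasts its operands and returns a
          multi-iterator ready for a single flat loop. */
       PyObject *multi = PyArray_MultiIterNew(2, a, b);
       while (PyArray_MultiIter_NOTDONE(multi)) {
           double x = *(double *)PyArray_MultiIter_DATA(multi, 0);
           double y = *(double *)PyArray_MultiIter_DATA(multi, 1);
           printf("%g ", x + y);
           PyArray_MultiIter_NEXT(multi);
       }
       printf("\n");  /* 11 12 13 21 22 23 */

       Py_DECREF(multi); Py_DECREF(a); Py_DECREF(b);
       Py_Finalize();
       return 0;
   }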
Neighborhood iterator
---------------------
.. versionadded:: 1.4.0
Neighborhood iterators are subclasses of the iterator object, and can be used
to iterate over a neighborhood of a point. For example, you may want to
iterate over every voxel of a 3-d image, and for every such voxel, iterate
over a hypercube. Neighborhood iterators automatically handle boundaries,
thus making this kind of code much easier to write than manual boundary
handling, at the cost of a slight overhead.
.. cfunction:: PyObject* PyArray_NeighborhoodIterNew(PyArrayIterObject* iter, npy_intp bounds, int mode, PyArrayObject* fill_value)
This function creates a new neighborhood iterator from an existing
iterator. The neighborhood will be computed relatively to the position
currently pointed by *iter*, the bounds define the shape of the
neighborhood iterator, and the mode argument the boundaries handling mode.
The *bounds* argument is expected to be a (2 * iter->ao->nd) array, such
that the range bounds[2*i] to bounds[2*i+1] defines the range where to walk
for dimension i (both bounds are included in the walked coordinates). The
bounds should be ordered for each dimension (bounds[2*i] <= bounds[2*i+1]).
The mode should be one of:
* NPY_NEIGHBORHOOD_ITER_ZERO_PADDING: zero padding. Outside bounds values
will be 0.
* NPY_NEIGHBORHOOD_ITER_ONE_PADDING: one padding. Outside bounds values
will be 1.
* NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING: constant padding. Outside bounds
values will be the same as the first item in fill_value.
* NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING: mirror padding. Outside bounds
values will be as if the array items were mirrored. For example, for the
array [1, 2, 3, 4], x[-2] will be 2, x[-2] will be 1, x[4] will be 4,
x[5] will be 1, etc...
* NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING: circular padding. Outside bounds
values will be as if the array was repeated. For example, for the
array [1, 2, 3, 4], x[-2] will be 3, x[-2] will be 4, x[4] will be 1,
x[5] will be 2, etc...
If the mode is constant filling (NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING),
fill_value should point to an array object which holds the filling value
(the first item will be the filling value if the array contains more than
one item). For other cases, fill_value may be NULL.
- The iterator holds a reference to iter
- Return NULL on failure (in which case the reference count of iter is not
changed)
- iter itself can be a Neighborhood iterator: this can be useful for e.g.
automatic boundary handling
- the object returned by this function should be safe to use as a normal
iterator
- If the position of iter is changed, any subsequent call to
PyArrayNeighborhoodIter_Next is undefined behavior, and
PyArrayNeighborhoodIter_Reset must be called.
.. code-block:: c

   PyArrayIterObject *iter;
   PyArrayNeighborhoodIterObject *neigh_iter;
   npy_intp bounds[] = {-1, 1, -1, 1};

   iter = (PyArrayIterObject*)PyArray_IterNew(x);

   /* For a 3x3 kernel on a 2-d array */
   neigh_iter = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew(
           iter, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL);

   for (i = 0; i < iter->size; ++i) {
       for (j = 0; j < neigh_iter->size; ++j) {
           /* Walk around the item currently pointed to by iter->dataptr */
           PyArrayNeighborhoodIter_Next(neigh_iter);
       }

       /* Move to the next point of iter */
       PyArrayIter_Next(iter);
       PyArrayNeighborhoodIter_Reset(neigh_iter);
   }
.. cfunction:: int PyArrayNeighborhoodIter_Reset(PyArrayNeighborhoodIterObject* iter)
Reset the iterator position to the first point of the neighborhood. This
should be called whenever the position of the *iter* argument given to
:cfunc:`PyArray_NeighborhoodIterNew` is changed (see example)
.. cfunction:: int PyArrayNeighborhoodIter_Next(PyArrayNeighborhoodIterObject* iter)
After this call, iter->dataptr points to the next point of the
neighborhood. Calling this function after every point of the
neighborhood has been visited is undefined.
Array Scalars
-------------
.. cfunction:: PyObject* PyArray_Return(PyArrayObject* arr)
This function checks to see if *arr* is a 0-dimensional array and,
if so, returns the appropriate array scalar. It should be used
whenever 0-dimensional arrays could be returned to Python.
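As a minimal sketch (the function name ``my_sum`` and the result variable
are hypothetical), an extension function whose result may be 0-dimensional
would typically end like this:

.. code-block:: c

   static PyObject*
   my_sum(PyObject* self, PyObject* args)
   {
       PyArrayObject* ret;
       /* ... parse arguments and compute `ret`, which may be 0-d ... */
       /* Convert a 0-d result to an array scalar before returning it */
       return PyArray_Return(ret);
   }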
.. cfunction:: PyObject* PyArray_Scalar(void* data, PyArray_Descr* dtype, PyObject* itemsize)
Return an array scalar object of the given data-type *dtype* by
**copying** from memory pointed to by *data* (*itemsize* is only used
for flexible data-types). The data will be byteswapped if the data-type
requires it, because array scalars are always in correct machine-byte
order.
.. cfunction:: PyObject* PyArray_ToScalar(void* data, PyArrayObject* arr)
Return an array scalar object of the type and itemsize indicated
by the array object *arr* copied from the memory pointed to by
*data* and swapping if the data in *arr* is not in machine
byte-order.
.. cfunction:: PyObject* PyArray_FromScalar(PyObject* scalar, PyArray_Descr* outcode)
Return a 0-dimensional array of type determined by *outcode* from
*scalar* which should be an array-scalar object. If *outcode* is
NULL, then the type is determined from *scalar*.
.. cfunction:: void PyArray_ScalarAsCtype(PyObject* scalar, void* ctypeptr)
Return in *ctypeptr* a pointer to the actual value in an array
scalar. There is no error checking so *scalar* must be an
array-scalar object, and ctypeptr must have enough space to hold
the correct type. For flexible-sized types, a pointer to the data
is copied into the memory of *ctypeptr*, for all other types, the
actual data is copied into the address pointed to by *ctypeptr*.
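For example, a hedged sketch of extracting a C double from a scalar
(``scalar`` is assumed, by prior checking, to be of type :cdata:`NPY_DOUBLE`,
since the function itself performs no checking):

.. code-block:: c

   /* `scalar` is assumed to be an NPY_DOUBLE array scalar; verify the
      type beforehand, as PyArray_ScalarAsCtype does no error checking. */
   double value;
   PyArray_ScalarAsCtype(scalar, &value);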
.. cfunction:: void PyArray_CastScalarToCtype(PyObject* scalar, void* ctypeptr, PyArray_Descr* outcode)
Return the data (cast to the data type indicated by *outcode*)
from the array-scalar, *scalar*, into the memory pointed to by
*ctypeptr* (which must be large enough to handle the incoming
memory).
.. cfunction:: PyObject* PyArray_TypeObjectFromType(int type)
Returns a scalar type-object from a type-number, *type*. Equivalent to
:cfunc:`PyArray_DescrFromType` (*type*)->typeobj except for reference
counting and error-checking. Returns a new reference to the typeobject
on success or ``NULL`` on failure.
.. cfunction:: NPY_SCALARKIND PyArray_ScalarKind(int typenum, PyArrayObject** arr)
See the function :cfunc:`PyArray_MinScalarType` for an alternative
mechanism introduced in NumPy 1.6.0.
Return the kind of scalar represented by *typenum* and the array
in *\*arr* (if *arr* is not ``NULL`` ). The array is assumed to be
rank-0 and only used if *typenum* represents a signed integer. If
*arr* is not ``NULL`` and the first element is negative then
:cdata:`NPY_INTNEG_SCALAR` is returned, otherwise
:cdata:`NPY_INTPOS_SCALAR` is returned. The possible return values
are :cdata:`NPY_{kind}_SCALAR` where ``{kind}`` can be **INTPOS**,
**INTNEG**, **FLOAT**, **COMPLEX**, **BOOL**, or **OBJECT**.
:cdata:`NPY_NOSCALAR` is also an enumerated value
:ctype:`NPY_SCALARKIND` variables can take on.
.. cfunction:: int PyArray_CanCoerceScalar(char thistype, char neededtype, NPY_SCALARKIND scalar)
See the function :cfunc:`PyArray_ResultType` for details of
NumPy type promotion, updated in NumPy 1.6.0.
Implements the rules for scalar coercion. Scalars are only
silently coerced from thistype to neededtype if this function
returns nonzero. If scalar is :cdata:`NPY_NOSCALAR`, then this
function is equivalent to :cfunc:`PyArray_CanCastSafely`. The rule is
that scalars of the same KIND can be coerced into arrays of the
same KIND. This rule means that high-precision scalars will never
cause low-precision arrays of the same KIND to be upcast.
Data-type descriptors
---------------------
.. warning::
Data-type objects must be reference counted so be aware of the
action on the data-type reference of different C-API calls. The
standard rule is that when a data-type object is returned it is a
new reference. Functions that take :ctype:`PyArray_Descr *` objects and
return arrays steal references to the data-type of their inputs
unless otherwise noted. Therefore, you must own a reference to any
data-type object used as input to such a function.
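For example, a common hedged pattern when passing a descriptor to a
reference-stealing function such as :cfunc:`PyArray_FromAny` while keeping
your own reference (``obj`` is an assumed input object):

.. code-block:: c

   PyArray_Descr* descr = PyArray_DescrFromType(NPY_FLOAT64);  /* new ref */
   Py_INCREF(descr);             /* extra ref: PyArray_FromAny steals one */
   PyObject* arr = PyArray_FromAny(obj, descr, 0, 0, 0, NULL);
   /* ... descr is still safe to use here ... */
   Py_DECREF(descr);             /* release our extra reference */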
.. cfunction:: int PyArrayDescr_Check(PyObject* obj)
Evaluates as true if *obj* is a data-type object ( :ctype:`PyArray_Descr *` ).
.. cfunction:: PyArray_Descr* PyArray_DescrNew(PyArray_Descr* obj)
Return a new data-type object copied from *obj* (the fields
reference is just updated so that the new object points to the
same fields dictionary if any).
.. cfunction:: PyArray_Descr* PyArray_DescrNewFromType(int typenum)
Create a new data-type object from the built-in (or
user-registered) data-type indicated by *typenum*. All builtin
types should not have any of their fields changed. This creates a
new copy of the :ctype:`PyArray_Descr` structure so that you can fill
it in as appropriate. This function is especially needed for
flexible data-types which need to have a new elsize member in
order to be meaningful in array construction.
.. cfunction:: PyArray_Descr* PyArray_DescrNewByteorder(PyArray_Descr* obj, char newendian)
Create a new data-type object with the byteorder set according to
*newendian*. All referenced data-type objects (in subdescr and
fields members of the data-type object) are also changed
(recursively). If a byteorder of :cdata:`NPY_IGNORE` is encountered it
is left alone. If newendian is :cdata:`NPY_SWAP`, then all byte-orders
are swapped. Other valid newendian values are :cdata:`NPY_NATIVE`,
:cdata:`NPY_LITTLE`, and :cdata:`NPY_BIG` which all cause the returned
data-type descriptor (and all its referenced data-type descriptors)
to have the corresponding byte-order.
.. cfunction:: PyArray_Descr* PyArray_DescrFromObject(PyObject* op, PyArray_Descr* mintype)
Determine an appropriate data-type object from the object *op*
(which should be a "nested" sequence object) and the minimum
data-type descriptor mintype (which can be ``NULL`` ). Similar in
behavior to array(*op*).dtype. Don't confuse this function with
:cfunc:`PyArray_DescrConverter`. This function essentially looks at
all the objects in the (nested) sequence and determines the
data-type from the elements it finds.
.. cfunction:: PyArray_Descr* PyArray_DescrFromScalar(PyObject* scalar)
Return a data-type object from an array-scalar object. No checking
is done to be sure that *scalar* is an array scalar. If no
suitable data-type can be determined, then a data-type of
:cdata:`NPY_OBJECT` is returned by default.
.. cfunction:: PyArray_Descr* PyArray_DescrFromType(int typenum)
Returns a data-type object corresponding to *typenum*. The
*typenum* can be one of the enumerated types, a character code for
one of the enumerated types, or a user-defined type.
.. cfunction:: int PyArray_DescrConverter(PyObject* obj, PyArray_Descr** dtype)
Convert any compatible Python object, *obj*, to a data-type object
in *dtype*. A large number of Python objects can be converted to
data-type objects. See :ref:`arrays.dtypes` for a complete
description. This version of the converter converts None objects
to a :cdata:`NPY_DEFAULT_TYPE` data-type object. This function can
be used with the "O&" character code in :cfunc:`PyArg_ParseTuple`
processing.
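A minimal sketch of its use inside an extension function (``args`` being the
usual argument tuple passed to the function):

.. code-block:: c

   PyArray_Descr* dtype = NULL;
   if (!PyArg_ParseTuple(args, "O&", PyArray_DescrConverter, &dtype)) {
       return NULL;
   }
   /* `dtype` is a new reference: DECREF it when done, unless it is
      passed to a function that steals the reference. */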
.. cfunction:: int PyArray_DescrConverter2(PyObject* obj, PyArray_Descr** dtype)
Convert any compatible Python object, *obj*, to a data-type
object in *dtype*. This version of the converter converts None
objects so that the returned data-type is ``NULL``. This function
can also be used with the "O&" character in PyArg_ParseTuple
processing.
.. cfunction:: int PyArray_DescrAlignConverter(PyObject* obj, PyArray_Descr** dtype)
Like :cfunc:`PyArray_DescrConverter` except it aligns C-struct-like
objects on word-boundaries as the compiler would.
.. cfunction:: int PyArray_DescrAlignConverter2(PyObject* obj, PyArray_Descr** dtype)
Like :cfunc:`PyArray_DescrConverter2` except it aligns C-struct-like
objects on word-boundaries as the compiler would.
.. cfunction:: PyObject *PyArray_FieldNames(PyObject* dict)
Take the fields dictionary, *dict*, such as the one attached to a
data-type object and construct an ordered-list of field names such
as is stored in the names field of the :ctype:`PyArray_Descr` object.
Conversion Utilities
--------------------
For use with :cfunc:`PyArg_ParseTuple`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All of these functions can be used in :cfunc:`PyArg_ParseTuple` (...) with
the "O&" format specifier to automatically convert any Python object
to the required C-object. All of these functions return
:cdata:`NPY_SUCCEED` if successful and :cdata:`NPY_FAIL` if not. The first
argument to all of these function is a Python object. The second
argument is the **address** of the C-type to convert the Python object
to.
.. warning::
Be sure to understand what steps you should take to manage the
memory when using these conversion functions. These functions can
require freeing memory, and/or altering the reference counts of
specific objects based on your use.
.. cfunction:: int PyArray_Converter(PyObject* obj, PyObject** address)
Convert any Python object to a :ctype:`PyArrayObject`. If
:cfunc:`PyArray_Check` (*obj*) is TRUE then its reference count is
incremented and a reference placed in *address*. If *obj* is not
an array, then convert it to an array using :cfunc:`PyArray_FromAny`
. No matter what is returned, you must DECREF the object returned
by this routine in *address* when you are done with it.
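A hedged sketch of the required DECREF discipline (``args`` is the usual
argument tuple):

.. code-block:: c

   PyObject* arr = NULL;
   if (!PyArg_ParseTuple(args, "O&", PyArray_Converter, &arr)) {
       return NULL;
   }
   /* ... use arr ... */
   Py_DECREF(arr);  /* always required, whether or not a new array was made */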
.. cfunction:: int PyArray_OutputConverter(PyObject* obj, PyArrayObject** address)
This is a default converter for output arrays given to
functions. If *obj* is :cdata:`Py_None` or ``NULL``, then *\*address*
will be ``NULL`` but the call will succeed. If :cfunc:`PyArray_Check` (
*obj*) is TRUE then it is returned in *\*address* without
incrementing its reference count.
.. cfunction:: int PyArray_IntpConverter(PyObject* obj, PyArray_Dims* seq)
Convert any Python sequence, *obj*, smaller than :cdata:`NPY_MAXDIMS`
to a C-array of :ctype:`npy_intp`. The Python object could also be a
single number. The *seq* variable is a pointer to a structure with
members ptr and len. On successful return, *seq* ->ptr contains a
pointer to memory that must be freed to avoid a memory leak. The
restriction on memory size allows this converter to be
conveniently used for sequences intended to be interpreted as
array shapes.
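For example, a sketch of converting a shape argument and releasing the
memory afterwards:

.. code-block:: c

   PyArray_Dims shape = {NULL, 0};
   if (!PyArg_ParseTuple(args, "O&", PyArray_IntpConverter, &shape)) {
       return NULL;
   }
   /* shape.ptr points to shape.len npy_intp values */
   /* ... use the shape ... */
   PyDimMem_FREE(shape.ptr);   /* required to avoid a memory leak */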
.. cfunction:: int PyArray_BufferConverter(PyObject* obj, PyArray_Chunk* buf)
Convert any Python object, *obj*, with a (single-segment) buffer
interface to a variable with members that detail the object's use
of its chunk of memory. The *buf* variable is a pointer to a
structure with base, ptr, len, and flags members. The
:ctype:`PyArray_Chunk` structure is binary compatible with
Python's buffer object (through its len member on 32-bit platforms
and its ptr member on 64-bit platforms or in Python 2.5). On
return, the base member is set to *obj* (or its base if *obj* is
already a buffer object pointing to another object). If you need
to hold on to the memory be sure to INCREF the base member. The
chunk of memory is pointed to by *buf* ->ptr member and has length
*buf* ->len. The flags member of *buf* is :cdata:`NPY_BEHAVED_RO` with
the :cdata:`NPY_WRITEABLE` flag set if *obj* has a writeable buffer
interface.
.. cfunction:: int PyArray_AxisConverter(PyObject* obj, int* axis)
Convert a Python object, *obj*, representing an axis argument to
the proper value for passing to the functions that take an integer
axis. Specifically, if *obj* is None, *axis* is set to
:cdata:`NPY_MAXDIMS` which is interpreted correctly by the C-API
functions that take axis arguments.
.. cfunction:: int PyArray_BoolConverter(PyObject* obj, Bool* value)
Convert any Python object, *obj*, to :cdata:`NPY_TRUE` or
:cdata:`NPY_FALSE`, and place the result in *value*.
.. cfunction:: int PyArray_ByteorderConverter(PyObject* obj, char* endian)
Convert Python strings into the corresponding byte-order
character:
'>', '<', 's', '=', or '\|'.
.. cfunction:: int PyArray_SortkindConverter(PyObject* obj, NPY_SORTKIND* sort)
Convert Python strings into one of :cdata:`NPY_QUICKSORT` (starts
with 'q' or 'Q') , :cdata:`NPY_HEAPSORT` (starts with 'h' or 'H'),
or :cdata:`NPY_MERGESORT` (starts with 'm' or 'M').
.. cfunction:: int PyArray_SearchsideConverter(PyObject* obj, NPY_SEARCHSIDE* side)
Convert Python strings into one of :cdata:`NPY_SEARCHLEFT` (starts with 'l'
or 'L'), or :cdata:`NPY_SEARCHRIGHT` (starts with 'r' or 'R').
.. cfunction:: int PyArray_OrderConverter(PyObject* obj, NPY_ORDER* order)
Convert the Python strings 'C', 'F', 'A', and 'K' into the :ctype:`NPY_ORDER`
enumeration :cdata:`NPY_CORDER`, :cdata:`NPY_FORTRANORDER`,
:cdata:`NPY_ANYORDER`, and :cdata:`NPY_KEEPORDER`.
.. cfunction:: int PyArray_CastingConverter(PyObject* obj, NPY_CASTING* casting)
Convert the Python strings 'no', 'equiv', 'safe', 'same_kind', and
'unsafe' into the :ctype:`NPY_CASTING` enumeration :cdata:`NPY_NO_CASTING`,
:cdata:`NPY_EQUIV_CASTING`, :cdata:`NPY_SAFE_CASTING`,
:cdata:`NPY_SAME_KIND_CASTING`, and :cdata:`NPY_UNSAFE_CASTING`.
.. cfunction:: int PyArray_ClipmodeConverter(PyObject* object, NPY_CLIPMODE* val)
Convert the Python strings 'clip', 'wrap', and 'raise' into the
:ctype:`NPY_CLIPMODE` enumeration :cdata:`NPY_CLIP`, :cdata:`NPY_WRAP`,
and :cdata:`NPY_RAISE`.
.. cfunction:: int PyArray_ConvertClipmodeSequence(PyObject* object, NPY_CLIPMODE* modes, int n)
Converts either a sequence of clipmodes or a single clipmode into
a C array of :ctype:`NPY_CLIPMODE` values. The number of clipmodes *n*
must be known before calling this function. This function is provided
to help functions allow a different clipmode for each dimension.
Other conversions
^^^^^^^^^^^^^^^^^
.. cfunction:: int PyArray_PyIntAsInt(PyObject* op)
Convert all kinds of Python objects (including arrays and array
scalars) to a standard integer. On error, -1 is returned and an
exception set. You may find useful the macro:
.. code-block:: c
#define error_converting(x) (((x) == -1) && PyErr_Occurred())
.. cfunction:: npy_intp PyArray_PyIntAsIntp(PyObject* op)
Convert all kinds of Python objects (including arrays and array
scalars) to a (platform-pointer-sized) integer. On error, -1 is
returned and an exception set.
.. cfunction:: int PyArray_IntpFromSequence(PyObject* seq, npy_intp* vals, int maxvals)
Convert any Python sequence (or single Python number) passed in as
*seq* to (up to) *maxvals* pointer-sized integers and place them
in the *vals* array. The sequence can be smaller than *maxvals* as
the number of converted objects is returned.
.. cfunction:: int PyArray_TypestrConvert(int itemsize, int gentype)
Convert typestring characters (with *itemsize*) to basic
enumerated data types. The typestring character corresponding to
signed and unsigned integers, floating point numbers, and
complex-floating point numbers are recognized and converted. Other
values of *gentype* are returned unchanged. This function can be used to
convert, for example, the string 'f4' to :cdata:`NPY_FLOAT32`.
Miscellaneous
-------------
Importing the API
^^^^^^^^^^^^^^^^^
In order to make use of the C-API from another extension module, the
``import_array`` () command must be used. If the extension module is
self-contained in a single .c file, then that is all that needs to be
done. If, however, the extension module involves multiple files where
the C-API is needed then some additional steps must be taken.
.. cfunction:: void import_array(void)
This function must be called in the initialization section of a
module that will make use of the C-API. It imports the module
where the function-pointer table is stored and points the correct
variable to it.
.. cmacro:: PY_ARRAY_UNIQUE_SYMBOL
.. cmacro:: NO_IMPORT_ARRAY
Using these #defines you can use the C-API in multiple files for a
single extension module. In each file you must define
:cmacro:`PY_ARRAY_UNIQUE_SYMBOL` to some name that will hold the
C-API (*e.g.* myextension_ARRAY_API). This must be done **before**
including the numpy/arrayobject.h file. In the module
initialization routine you call ``import_array`` (). In addition,
in the files that do not have the module initialization
subroutine, define :cmacro:`NO_IMPORT_ARRAY` prior to including
numpy/arrayobject.h.
Suppose I have two files coolmodule.c and coolhelper.c which need
to be compiled and linked into a single extension module. Suppose
coolmodule.c contains the required initcool module initialization
function (with the import_array() function called). Then,
coolmodule.c would have at the top:
.. code-block:: c
#define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
#include <numpy/arrayobject.h>
On the other hand, coolhelper.c would contain at the top:
.. code-block:: c
#define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
#define NO_IMPORT_ARRAY
#include <numpy/arrayobject.h>
Checking the API Version
^^^^^^^^^^^^^^^^^^^^^^^^
Because python extensions are not used in the same way as usual libraries on
most platforms, some errors cannot be automatically detected at build time or
even runtime. For example, if you build an extension using a function available
only for numpy >= 1.3.0, and you import the extension later with numpy 1.2, you
will not get an import error (but almost certainly a segmentation fault when
calling the function). That's why several functions are provided to check for
numpy versions. The macros :cdata:`NPY_VERSION` and
:cdata:`NPY_FEATURE_VERSION` correspond to the numpy version used to build the
extension, whereas the versions returned by the functions
PyArray_GetNDArrayCVersion and PyArray_GetNDArrayCFeatureVersion correspond to
the numpy version in use at runtime.
The rules for ABI and API compatibilities can be summarized as follows:
* Whenever :cdata:`NPY_VERSION` != PyArray_GetNDArrayCVersion, the
extension has to be recompiled (ABI incompatibility).
* :cdata:`NPY_VERSION` == PyArray_GetNDArrayCVersion and
:cdata:`NPY_FEATURE_VERSION` <= PyArray_GetNDArrayCFeatureVersion means
backward compatible changes.
ABI incompatibility is automatically detected in every numpy version. API
incompatibility detection was added in numpy 1.4.0. If you want to support
many different numpy versions with one extension binary, you have to build
your extension with the lowest NPY_FEATURE_VERSION possible.
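As an illustrative sketch, an explicit feature check could be placed in the
module initialization function (``import_array`` already performs the ABI
check on its own):

.. code-block:: c

   /* Fail the import if the runtime numpy is older than the build-time one */
   if (NPY_FEATURE_VERSION > PyArray_GetNDArrayCFeatureVersion()) {
       PyErr_SetString(PyExc_ImportError,
               "extension built against a newer numpy C-API than available");
       return;
   }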
.. cfunction:: unsigned int PyArray_GetNDArrayCVersion(void)
This just returns the value :cdata:`NPY_VERSION`. :cdata:`NPY_VERSION`
changes whenever a backward incompatible change occurs at the ABI level. Because
it is in the C-API, however, comparing the output of this function from the
value defined in the current header gives a way to test if the C-API has
changed thus requiring a re-compilation of extension modules that use the
C-API. This is automatically checked in the function import_array.
.. cfunction:: unsigned int PyArray_GetNDArrayCFeatureVersion(void)
.. versionadded:: 1.4.0
This just returns the value :cdata:`NPY_FEATURE_VERSION`.
:cdata:`NPY_FEATURE_VERSION` changes whenever the API changes (e.g. a
function is added). A changed value does not always require a recompile.
Internal Flexibility
^^^^^^^^^^^^^^^^^^^^
.. cfunction:: int PyArray_SetNumericOps(PyObject* dict)
NumPy stores an internal table of Python callable objects that are
used to implement arithmetic operations for arrays as well as
certain array calculation methods. This function allows the user
to replace any or all of these Python objects with their own
versions. The keys of the dictionary, *dict*, are the named
functions to replace and the paired value is the Python callable
object to use. Care should be taken that the function used to
replace an internal array operation does not itself call back to
that internal array operation (unless you have designed the
function to handle that), or an unchecked infinite recursion can
result (possibly causing program crash). The key names that
represent operations that can be replaced are:
**add**, **subtract**, **multiply**, **divide**,
**remainder**, **power**, **square**, **reciprocal**,
**ones_like**, **sqrt**, **negative**, **absolute**,
**invert**, **left_shift**, **right_shift**,
**bitwise_and**, **bitwise_xor**, **bitwise_or**,
**less**, **less_equal**, **equal**, **not_equal**,
**greater**, **greater_equal**, **floor_divide**,
**true_divide**, **logical_or**, **logical_and**,
**floor**, **ceil**, **maximum**, **minimum**, **rint**.
These functions are included here because they are used at least once
in the array object's methods. The function returns -1 (without
setting a Python Error) if one of the objects being assigned is not
callable.
.. cfunction:: PyObject* PyArray_GetNumericOps(void)
Return a Python dictionary containing the callable Python objects
stored in the internal arithmetic operation table. The keys of
this dictionary are given in the explanation for :cfunc:`PyArray_SetNumericOps`.
.. cfunction:: void PyArray_SetStringFunction(PyObject* op, int repr)
This function allows you to alter the tp_str and tp_repr methods
of the array object to any Python function. Thus you can alter
what happens for all arrays when str(arr) or repr(arr) is called
from Python. The function to be called is passed in as *op*. If
*repr* is non-zero, then this function will be called in response
to repr(arr), otherwise the function will be called in response to
str(arr). No check on whether or not *op* is callable is
performed. The callable passed in to *op* should expect an array
argument and should return a string to be printed.
Memory management
^^^^^^^^^^^^^^^^^
.. cfunction:: char* PyDataMem_NEW(size_t nbytes)
.. cfunction:: PyDataMem_FREE(char* ptr)
.. cfunction:: char* PyDataMem_RENEW(void * ptr, size_t newbytes)
Macros to allocate, free, and reallocate memory. These macros are used
internally to create arrays.
.. cfunction:: npy_intp* PyDimMem_NEW(nd)
.. cfunction:: PyDimMem_FREE(npy_intp* ptr)
.. cfunction:: npy_intp* PyDimMem_RENEW(npy_intp* ptr, npy_intp newnd)
Macros to allocate, free, and reallocate dimension and strides memory.
.. cfunction:: PyArray_malloc(nbytes)
.. cfunction:: PyArray_free(ptr)
.. cfunction:: PyArray_realloc(ptr, nbytes)
These macros use different memory allocators, depending on the
constant :cdata:`NPY_USE_PYMEM`. The system malloc is used when
:cdata:`NPY_USE_PYMEM` is 0, if :cdata:`NPY_USE_PYMEM` is 1, then
the Python memory allocator is used.
Threading support
^^^^^^^^^^^^^^^^^
These macros are only meaningful if :cdata:`NPY_ALLOW_THREADS`
evaluates True during compilation of the extension module. Otherwise,
these macros are equivalent to whitespace. Python uses a single Global
Interpreter Lock (GIL) for each Python process so that only a single
thread may execute at a time (even on multi-cpu machines). When
calling out to a compiled function that may take time to compute (and
does not have side-effects for other threads like updated global
variables), the GIL should be released so that other Python threads
can run while the time-consuming calculations are performed. This can
be accomplished using two groups of macros. Typically, if one macro in
a group is used in a code block, all of them must be used in the same
code block. Currently, :cdata:`NPY_ALLOW_THREADS` is defined to the
python-defined :cdata:`WITH_THREADS` constant unless the environment
variable :cdata:`NPY_NOSMP` is set in which case
:cdata:`NPY_ALLOW_THREADS` is defined to be 0.
Group 1
"""""""
This group is used to call code that may take some time but does not
use any Python C-API calls. Thus, the GIL should be released during
its calculation.
.. cmacro:: NPY_BEGIN_ALLOW_THREADS
Equivalent to :cmacro:`Py_BEGIN_ALLOW_THREADS` except it uses
:cdata:`NPY_ALLOW_THREADS` to determine if the macro is
replaced with white-space or not.
.. cmacro:: NPY_END_ALLOW_THREADS
Equivalent to :cmacro:`Py_END_ALLOW_THREADS` except it uses
:cdata:`NPY_ALLOW_THREADS` to determine if the macro is
replaced with white-space or not.
.. cmacro:: NPY_BEGIN_THREADS_DEF
Place in the variable declaration area. This macro sets up the
variable needed for storing the Python state.
.. cmacro:: NPY_BEGIN_THREADS
Place right before code that does not need the Python
interpreter (no Python C-API calls). This macro saves the
Python state and releases the GIL.
.. cmacro:: NPY_END_THREADS
Place right after code that does not need the Python
interpreter. This macro acquires the GIL and restores the
Python state from the saved variable.
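A typical hedged use of this group looks like the following (note the
absence of trailing semicolons, per the tip below on this page):

.. code-block:: c

   NPY_BEGIN_THREADS_DEF

   /* ... set up C pointers while still holding the GIL ... */

   NPY_BEGIN_THREADS
   /* long-running computation with no Python C-API calls */
   NPY_END_THREADS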
.. cfunction:: NPY_BEGIN_THREADS_DESCR(PyArray_Descr *dtype)
Useful to release the GIL only if *dtype* does not contain
arbitrary Python objects which may need the Python interpreter
during execution of the loop. Equivalent to :cmacro:`NPY_BEGIN_THREADS`
when *dtype* is not an object data-type.
.. cfunction:: NPY_END_THREADS_DESCR(PyArray_Descr *dtype)
Useful to regain the GIL in situations where it was released
using the BEGIN form of this macro.
Group 2
"""""""
This group is used to re-acquire the Python GIL after it has been
released. For example, suppose the GIL has been released (using the
previous calls), and then some path in the code (perhaps in a
different subroutine) requires use of the Python C-API, then these
macros are useful to acquire the GIL. These macros accomplish
essentially the reverse of the previous three: they acquire the GIL
(saving its state) and then re-release it with the saved state.
.. cmacro:: NPY_ALLOW_C_API_DEF
Place in the variable declaration area to set up the necessary
variable.
.. cmacro:: NPY_ALLOW_C_API
Place before code that needs to call the Python C-API (when it is
known that the GIL has already been released).
.. cmacro:: NPY_DISABLE_C_API
Place after code that needs to call the Python C-API (to re-release
the GIL).
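A sketch of re-acquiring the GIL, e.g. to set an error from inside a
computation that had released it:

.. code-block:: c

   NPY_ALLOW_C_API_DEF

   /* The GIL was released earlier, e.g. by NPY_BEGIN_THREADS */
   NPY_ALLOW_C_API
   PyErr_SetString(PyExc_RuntimeError, "computation failed");
   NPY_DISABLE_C_API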
.. tip::
Never use semicolons after the threading support macros.
Priority
^^^^^^^^
.. cvar:: NPY_PRIORITY
Default priority for arrays.
.. cvar:: NPY_SUBTYPE_PRIORITY
Default subtype priority.
.. cvar:: NPY_SCALAR_PRIORITY
Default scalar priority (very small).
.. cfunction:: double PyArray_GetPriority(PyObject* obj, double def)
Return the :obj:`__array_priority__` attribute (converted to a
double) of *obj* or *def* if no attribute of that name
exists. Fast returns that avoid the attribute lookup are provided
for objects of type :cdata:`PyArray_Type`.
Default buffers
^^^^^^^^^^^^^^^
.. cvar:: NPY_BUFSIZE
Default size of the user-settable internal buffers.
.. cvar:: NPY_MIN_BUFSIZE
Smallest size of user-settable internal buffers.
.. cvar:: NPY_MAX_BUFSIZE
Largest size allowed for the user-settable buffers.
Other constants
^^^^^^^^^^^^^^^
.. cvar:: NPY_NUM_FLOATTYPE
The number of floating-point types
.. cvar:: NPY_MAXDIMS
The maximum number of dimensions allowed in arrays.
.. cvar:: NPY_VERSION
The current version of the ndarray object (check to see if this
variable is defined to guarantee the numpy/arrayobject.h header is
being used).
.. cvar:: NPY_FALSE
Defined as 0 for use with Bool.
.. cvar:: NPY_TRUE
Defined as 1 for use with Bool.
.. cvar:: NPY_FAIL
The return value of failed converter functions which are called using
the "O&" syntax in :cfunc:`PyArg_ParseTuple`-like functions.
.. cvar:: NPY_SUCCEED
The return value of successful converter functions which are called
using the "O&" syntax in :cfunc:`PyArg_ParseTuple`-like functions.
Miscellaneous Macros
^^^^^^^^^^^^^^^^^^^^
.. cfunction:: PyArray_SAMESHAPE(a1, a2)
Evaluates as True if arrays *a1* and *a2* have the same shape.
.. cfunction:: PyArray_MAX(a,b)
Returns the maximum of *a* and *b*. If (*a*) or (*b*) are
expressions they are evaluated twice.
.. cfunction:: PyArray_MIN(a,b)
Returns the minimum of *a* and *b*. If (*a*) or (*b*) are
expressions they are evaluated twice.
.. cfunction:: PyArray_CLT(a,b)
.. cfunction:: PyArray_CGT(a,b)
.. cfunction:: PyArray_CLE(a,b)
.. cfunction:: PyArray_CGE(a,b)
.. cfunction:: PyArray_CEQ(a,b)
.. cfunction:: PyArray_CNE(a,b)
Implements the complex comparisons between two complex numbers
(structures with a real and imag member) using NumPy's definition
of the ordering which is lexicographic: comparing the real parts
first and then the imaginary parts if the real parts are equal.
.. cfunction:: PyArray_REFCOUNT(PyObject* op)
Returns the reference count of any Python object.
.. cfunction:: PyArray_XDECREF_ERR(PyObject* obj)
DECREF's an array object which may have the :cdata:`NPY_UPDATEIFCOPY`
flag set without causing the contents to be copied back into the
original array. Resets the :cdata:`NPY_WRITEABLE` flag on the base
object. This is useful for recovering from an error condition when
:cdata:`NPY_UPDATEIFCOPY` is used.
Enumerated Types
^^^^^^^^^^^^^^^^
.. ctype:: NPY_SORTKIND
A special variable-type which can take on the values :cdata:`NPY_{KIND}`
where ``{KIND}`` is
**QUICKSORT**, **HEAPSORT**, **MERGESORT**
.. cvar:: NPY_NSORTS
Defined to be the number of sorts.
.. ctype:: NPY_SCALARKIND
A special variable type indicating the number of "kinds" of
scalars distinguished in determining scalar-coercion rules. This
variable can take on the values :cdata:`NPY_{KIND}` where ``{KIND}`` can be
**NOSCALAR**, **BOOL_SCALAR**, **INTPOS_SCALAR**,
**INTNEG_SCALAR**, **FLOAT_SCALAR**, **COMPLEX_SCALAR**,
**OBJECT_SCALAR**
.. cvar:: NPY_NSCALARKINDS
Defined to be the number of scalar kinds
(not including :cdata:`NPY_NOSCALAR`).
.. ctype:: NPY_ORDER
An enumeration type indicating the element order that an array should be
interpreted in. When a brand new array is created, generally
only **NPY_CORDER** and **NPY_FORTRANORDER** are used, whereas
when one or more inputs are provided, the order can be based on them.
.. cvar:: NPY_ANYORDER
Fortran order if all the inputs are Fortran, C otherwise.
.. cvar:: NPY_CORDER
C order.
.. cvar:: NPY_FORTRANORDER
Fortran order.
.. cvar:: NPY_KEEPORDER
An order as close to the order of the inputs as possible, even
if the input is in neither C nor Fortran order.
.. ctype:: NPY_CLIPMODE
A variable type indicating the kind of clipping that should be
applied in certain functions.
.. cvar:: NPY_RAISE
The default for most operations, raises an exception if an index
is out of bounds.
.. cvar:: NPY_CLIP
Clips an index to the valid range if it is out of bounds.
.. cvar:: NPY_WRAP
Wraps an index to the valid range if it is out of bounds.
.. ctype:: NPY_CASTING
.. versionadded:: 1.6
An enumeration type indicating how permissive data conversions should
be. This is used by the iterator added in NumPy 1.6, and is intended
to be used more broadly in a future version.
.. cvar:: NPY_NO_CASTING
Only allow identical types.
.. cvar:: NPY_EQUIV_CASTING
Allow identical and casts involving byte swapping.
.. cvar:: NPY_SAFE_CASTING
Only allow casts which will not cause values to be rounded,
truncated, or otherwise changed.
.. cvar:: NPY_SAME_KIND_CASTING
Allow any safe casts, and casts between types of the same kind.
For example, float64 -> float32 is permitted with this rule.
.. cvar:: NPY_UNSAFE_CASTING
Allow any cast, no matter what kind of data loss may occur.
.. index::
pair: ndarray; C-API

System configuration
====================
.. sectionauthor:: Travis E. Oliphant
When NumPy is built, information about system configuration is
recorded, and is made available for extension modules using Numpy's C
API. These are mostly defined in ``numpyconfig.h`` (included in
``ndarrayobject.h``). The public symbols are prefixed by ``NPY_*``.
Numpy also offers some functions for querying information about the
platform in use.
For private use, Numpy also constructs a ``config.h`` in the NumPy
include directory, which is not exported by Numpy (that is, a python
extension which uses the numpy C API will not see those symbols), to
avoid namespace pollution.
Data type sizes
---------------
The :cdata:`NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof
information is available to the pre-processor.
.. cvar:: NPY_SIZEOF_SHORT
sizeof(short)
.. cvar:: NPY_SIZEOF_INT
sizeof(int)
.. cvar:: NPY_SIZEOF_LONG
sizeof(long)
.. cvar:: NPY_SIZEOF_LONG_LONG
sizeof(longlong) where longlong is defined appropriately on the
platform (A macro defines **NPY_SIZEOF_LONGLONG** as well.)
.. cvar:: NPY_SIZEOF_PY_LONG_LONG
.. cvar:: NPY_SIZEOF_FLOAT
sizeof(float)
.. cvar:: NPY_SIZEOF_DOUBLE
sizeof(double)
.. cvar:: NPY_SIZEOF_LONG_DOUBLE
sizeof(longdouble) (A macro defines **NPY_SIZEOF_LONGDOUBLE** as well.)
.. cvar:: NPY_SIZEOF_PY_INTPTR_T
Size of a pointer on this platform (sizeof(void \*)) (A macro defines
NPY_SIZEOF_INTP as well.)
Platform information
--------------------
.. cvar:: NPY_CPU_X86
.. cvar:: NPY_CPU_AMD64
.. cvar:: NPY_CPU_IA64
.. cvar:: NPY_CPU_PPC
.. cvar:: NPY_CPU_PPC64
.. cvar:: NPY_CPU_SPARC
.. cvar:: NPY_CPU_SPARC64
.. cvar:: NPY_CPU_S390
.. cvar:: NPY_CPU_PARISC
.. versionadded:: 1.3.0
CPU architecture of the platform; only one of the above is
defined.
Defined in ``numpy/npy_cpu.h``
.. cvar:: NPY_LITTLE_ENDIAN
.. cvar:: NPY_BIG_ENDIAN
.. cvar:: NPY_BYTE_ORDER
.. versionadded:: 1.3.0
Portable alternatives to the ``endian.h`` macros of GNU Libc.
If big endian, :cdata:`NPY_BYTE_ORDER` == :cdata:`NPY_BIG_ENDIAN`, and
similarly for little endian architectures.
Defined in ``numpy/npy_endian.h``.
.. cfunction:: PyArray_GetEndianness()
.. versionadded:: 1.3.0
Returns the endianness of the current platform.
One of :cdata:`NPY_CPU_BIG`, :cdata:`NPY_CPU_LITTLE`,
or :cdata:`NPY_CPU_UNKNOWN_ENDIAN`.

Numpy core libraries
====================
.. sectionauthor:: David Cournapeau
.. versionadded:: 1.3.0
Starting from numpy 1.3.0, we are working on separating the pure C,
"computational" code from the python dependent code. The goal is twofold:
making the code cleaner and enabling code reuse by other extensions outside
numpy (scipy, etc.).
Numpy core math library
-----------------------
The numpy core math library ('npymath') is a first step in this direction. This
library contains most math-related C99 functionality, which can be used on
platforms where C99 is not well supported. The core math functions have the
same API as the C99 ones, except for the npy_* prefix.
The available functions are defined in <numpy/npy_math.h> - please refer to this header when
in doubt.
Floating point classification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. cvar:: NPY_NAN
This macro is defined to a NaN (Not a Number), and is guaranteed to have
the sign bit unset (a 'positive' NaN). The corresponding single and extended
precision macros are available with the suffixes F and L.
.. cvar:: NPY_INFINITY
This macro is defined to positive infinity. The corresponding single and
extended precision macros are available with the suffixes F and L.
.. cvar:: NPY_PZERO
This macro is defined to positive zero. The corresponding single and
extended precision macros are available with the suffixes F and L.
.. cvar:: NPY_NZERO
This macro is defined to negative zero (that is, with the sign bit set). The
corresponding single and extended precision macros are available with the
suffixes F and L.
.. cfunction:: int npy_isnan(x)
This is a macro, and is equivalent to C99 isnan: it works for single, double
and extended precision, and returns a non-zero value if x is a NaN.
.. cfunction:: int npy_isfinite(x)
This is a macro, and is equivalent to C99 isfinite: it works for single,
double and extended precision, and returns a non-zero value if x is neither a
NaN nor an infinity.
.. cfunction:: int npy_isinf(x)
This is a macro, and is equivalent to C99 isinf: it works for single, double
and extended precision, and returns a non-zero value if x is infinite
(positive or negative).
.. cfunction:: int npy_signbit(x)
This is a macro, and is equivalent to C99 signbit: it works for single, double
and extended precision, and returns a non-zero value if x has the sign bit set
(that is, the number is negative).
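Since these macros mirror their C99 counterparts, a self-contained classification helper can be written against the standard names; substituting the npy_* spellings is a one-for-one change (the helper below is an illustration, not a NumPy function):

```c
#include <math.h>

/* A minimal sketch using the C99 equivalents of the npy_* macros;
 * on platforms with C99 support npy_isnan(x) behaves like isnan(x),
 * npy_isinf(x) like isinf(x), and npy_signbit(x) like signbit(x). */
static const char *fp_class(double x)
{
    if (isnan(x))    return "nan";
    if (isinf(x))    return "inf";        /* either sign */
    if (signbit(x))  return "negative";   /* includes -0.0 */
    return "finite";
}
```

Note that the checks are ordered: a negative infinity is reported as "inf" before the sign bit is consulted.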
.. cfunction:: double npy_copysign(double x, double y)
This is a function equivalent to C99 copysign: returns x with the same sign
as y. It works for any value, including inf and nan. Single and extended
precisions are available with the suffixes f and l.
.. versionadded:: 1.4.0
Useful math constants
~~~~~~~~~~~~~~~~~~~~~
The following math constants are available in npy_math.h. Single and extended
precision are also available by adding the F and L suffixes respectively.
.. cvar:: NPY_E
Base of natural logarithm (:math:`e`)
.. cvar:: NPY_LOG2E
Logarithm to base 2 of the Euler constant (:math:`\frac{\ln(e)}{\ln(2)}`)
.. cvar:: NPY_LOG10E
Logarithm to base 10 of the Euler constant (:math:`\frac{\ln(e)}{\ln(10)}`)
.. cvar:: NPY_LOGE2
Natural logarithm of 2 (:math:`\ln(2)`)
.. cvar:: NPY_LOGE10
Natural logarithm of 10 (:math:`\ln(10)`)
.. cvar:: NPY_PI
Pi (:math:`\pi`)
.. cvar:: NPY_PI_2
Pi divided by 2 (:math:`\frac{\pi}{2}`)
.. cvar:: NPY_PI_4
Pi divided by 4 (:math:`\frac{\pi}{4}`)
.. cvar:: NPY_1_PI
Reciprocal of pi (:math:`\frac{1}{\pi}`)
.. cvar:: NPY_2_PI
Two times the reciprocal of pi (:math:`\frac{2}{\pi}`)
.. cvar:: NPY_EULER
The Euler constant
:math:`\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})`
Low-level floating point manipulation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Those can be useful for precise floating point comparison.
.. cfunction:: double npy_nextafter(double x, double y)
This is a function equivalent to C99 nextafter: returns the next representable
floating point value from x in the direction of y. Single and extended
precisions are available with the suffixes f and l.
.. versionadded:: 1.4.0
.. cfunction:: double npy_spacing(double x)
This is a function equivalent to the Fortran ``spacing`` intrinsic: it returns
the distance between x and the next representable floating point value from x,
e.g. spacing(1) == eps. The spacing of NaN and +/- inf returns NaN. Single and
extended precisions are available with the suffixes f and l.
.. versionadded:: 1.4.0
Complex functions
~~~~~~~~~~~~~~~~~
.. versionadded:: 1.4.0
C99-like complex functions have been added. Those can be used if you wish to
implement portable C extensions. Since we still support platforms without C99
complex type, you need to restrict to C90-compatible syntax, e.g.:
.. code-block:: c
/* a = 1 + 2i \*/
npy_complex a = npy_cpack(1, 2);
npy_complex b;
b = npy_log(a);
Linking against the core math library in an extension
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionadded:: 1.4.0
To use the core math library in your own extension, you need to add the npymath
compile and link options to your extension in your setup.py:
>>> from numpy.distutils.misc_util import get_info
>>> info = get_info('npymath')
>>> config.add_extension('foo', sources=['foo.c'], extra_info=info)
In other words, the usage of info is exactly the same as when using blas_info
and co.
Half-precision functions
~~~~~~~~~~~~~~~~~~~~~~~~
.. versionadded:: 2.0.0
The header file <numpy/halffloat.h> provides functions to work with
IEEE 754-2008 16-bit floating point values. While this format is
not typically used for numerical computations, it is useful for
storing values which require floating point but do not need much precision.
It can also be used as an educational tool to understand the nature
of floating point round-off error.
Like the other types, NumPy includes a typedef npy_half for the 16 bit
float. Unlike most of the other types, you cannot use this as a
normal type in C, since it is a typedef for npy_uint16. For example,
1.0 looks like 0x3c00 to C, and if you do an equality comparison
between the different signed zeros, you will get -0.0 != 0.0
(0x8000 != 0x0000), which is incorrect.
For these reasons, NumPy provides an API to work with npy_half values
accessible by including <numpy/halffloat.h> and linking to 'npymath'.
For functions that are not provided directly, such as the arithmetic
operations, the preferred method is to convert to float
or double and back again, as in the following example.
.. code-block:: c
npy_half sum(int n, npy_half *array) {
float ret = 0;
while(n--) {
ret += npy_half_to_float(*array++);
}
return npy_float_to_half(ret);
}
External Links:
* `754-2008 IEEE Standard for Floating-Point Arithmetic`__
* `Half-precision Float Wikipedia Article`__.
* `OpenGL Half Float Pixel Support`__
* `The OpenEXR image format`__.
__ http://ieeexplore.ieee.org/servlet/opac?punumber=4610933
__ http://en.wikipedia.org/wiki/Half_precision_floating-point_format
__ http://www.opengl.org/registry/specs/ARB/half_float_pixel.txt
__ http://www.openexr.com/about.html
.. cvar:: NPY_HALF_ZERO
This macro is defined to positive zero.
.. cvar:: NPY_HALF_PZERO
This macro is defined to positive zero.
.. cvar:: NPY_HALF_NZERO
This macro is defined to negative zero.
.. cvar:: NPY_HALF_ONE
This macro is defined to 1.0.
.. cvar:: NPY_HALF_NEGONE
This macro is defined to -1.0.
.. cvar:: NPY_HALF_PINF
This macro is defined to +inf.
.. cvar:: NPY_HALF_NINF
This macro is defined to -inf.
.. cvar:: NPY_HALF_NAN
This macro is defined to a NaN value, guaranteed to have its sign bit unset.
.. cfunction:: float npy_half_to_float(npy_half h)
Converts a half-precision float to a single-precision float.
.. cfunction:: double npy_half_to_double(npy_half h)
Converts a half-precision float to a double-precision float.
.. cfunction:: npy_half npy_float_to_half(float f)
Converts a single-precision float to a half-precision float. The
value is rounded to the nearest representable half, with ties going
to the nearest even. If the value is too small or too big, the
system's floating point underflow or overflow bit will be set.
.. cfunction:: npy_half npy_double_to_half(double d)
Converts a double-precision float to a half-precision float. The
value is rounded to the nearest representable half, with ties going
to the nearest even. If the value is too small or too big, the
system's floating point underflow or overflow bit will be set.
.. cfunction:: int npy_half_eq(npy_half h1, npy_half h2)
Compares two half-precision floats (h1 == h2).
.. cfunction:: int npy_half_ne(npy_half h1, npy_half h2)
Compares two half-precision floats (h1 != h2).
.. cfunction:: int npy_half_le(npy_half h1, npy_half h2)
Compares two half-precision floats (h1 <= h2).
.. cfunction:: int npy_half_lt(npy_half h1, npy_half h2)
Compares two half-precision floats (h1 < h2).
.. cfunction:: int npy_half_ge(npy_half h1, npy_half h2)
Compares two half-precision floats (h1 >= h2).
.. cfunction:: int npy_half_gt(npy_half h1, npy_half h2)
Compares two half-precision floats (h1 > h2).
.. cfunction:: int npy_half_eq_nonan(npy_half h1, npy_half h2)
Compares two half-precision floats that are known to not be NaN (h1 == h2). If
a value is NaN, the result is undefined.
.. cfunction:: int npy_half_lt_nonan(npy_half h1, npy_half h2)
Compares two half-precision floats that are known to not be NaN (h1 < h2). If
a value is NaN, the result is undefined.
.. cfunction:: int npy_half_le_nonan(npy_half h1, npy_half h2)
Compares two half-precision floats that are known to not be NaN (h1 <= h2). If
a value is NaN, the result is undefined.
.. cfunction:: int npy_half_iszero(npy_half h)
Tests whether the half-precision float has a value equal to zero. This may be
slightly faster than calling npy_half_eq(h, NPY_HALF_ZERO).
.. cfunction:: int npy_half_isnan(npy_half h)
Tests whether the half-precision float is a NaN.
.. cfunction:: int npy_half_isinf(npy_half h)
Tests whether the half-precision float is plus or minus Inf.
.. cfunction:: int npy_half_isfinite(npy_half h)
Tests whether the half-precision float is finite (not NaN or Inf).
.. cfunction:: int npy_half_signbit(npy_half h)
Returns 1 if h is negative, 0 otherwise.
.. cfunction:: npy_half npy_half_copysign(npy_half x, npy_half y)
Returns the value of x with the sign bit copied from y. Works for any value,
including Inf and NaN.
.. cfunction:: npy_half npy_half_spacing(npy_half h)
This is the same for half-precision float as npy_spacing and npy_spacingf
described in the low-level floating point section.
.. cfunction:: npy_half npy_half_nextafter(npy_half x, npy_half y)
This is the same for half-precision float as npy_nextafter and npy_nextafterf
described in the low-level floating point section.
.. cfunction:: npy_uint16 npy_floatbits_to_halfbits(npy_uint32 f)
Low-level function which converts a 32-bit single-precision float, stored
as a uint32, into a 16-bit half-precision float.
.. cfunction:: npy_uint16 npy_doublebits_to_halfbits(npy_uint64 d)
Low-level function which converts a 64-bit double-precision float, stored
as a uint64, into a 16-bit half-precision float.
.. cfunction:: npy_uint32 npy_halfbits_to_floatbits(npy_uint16 h)
Low-level function which converts a 16-bit half-precision float
into a 32-bit single-precision float, stored as a uint32.
.. cfunction:: npy_uint64 npy_halfbits_to_doublebits(npy_uint16 h)
Low-level function which converts a 16-bit half-precision float
into a 64-bit double-precision float, stored as a uint64.
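To illustrate what these bit-level conversions do, here is a self-contained re-derivation of the half-to-float expansion (our own sketch, not the NumPy source):

```c
#include <stdint.h>

/* Illustrative sketch of the half -> float bit expansion performed by
 * npy_halfbits_to_floatbits. Half layout: 1 sign bit, 5 exponent bits
 * (bias 15), 10 mantissa bits; float layout: 1/8/23 with bias 127. */
static uint32_t halfbits_to_floatbits_sketch(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant = h & 0x03FFu;

    if (exp == 0x1Fu)                   /* inf or NaN: max out the exponent */
        return sign | 0x7F800000u | (mant << 13);
    if (exp != 0)                       /* normal number: rebias 15 -> 127 */
        return sign | ((exp + 112u) << 23) | (mant << 13);
    if (mant == 0)                      /* signed zero */
        return sign;
    /* subnormal half: renormalize into a normal float */
    {
        int shift = 0;
        while (!(mant & 0x0400u)) {
            mant <<= 1;
            shift++;
        }
        mant &= 0x03FFu;                /* drop the now-implicit leading 1 */
        return sign | ((113u - (uint32_t)shift) << 23) | (mant << 13);
    }
}
```

Because every half value is exactly representable as a float, this direction of the conversion is lossless; the float-to-half direction must additionally round.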

Data Type API
=============
.. sectionauthor:: Travis E. Oliphant
The standard array can have 24 different data types (and has some
support for adding your own types). These data types all have an
enumerated type, an enumerated type-character, and a corresponding
array scalar Python type object (placed in a hierarchy). There are
also standard C typedefs to make it easier to manipulate elements of
the given data type. For the numeric types, there are also bit-width
equivalent C typedefs and named typenumbers that make it easier to
select the precision desired.
.. warning::
The names for the types in C code follow C naming conventions
more closely. The Python names for these types follow Python
conventions. Thus, :cdata:`NPY_FLOAT` picks up a 32-bit float in
C, but :class:`numpy.float_` in Python corresponds to a 64-bit
double. The bit-width names can be used in both Python and C for
clarity.
Enumerated Types
----------------
There is a list of enumerated types defined providing the basic 24
data types plus some useful generic names. Whenever the code requires
a type number, one of these enumerated types is requested. The types
are all called :cdata:`NPY_{NAME}` where ``{NAME}`` can be
**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**,
**UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**,
**HALF**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**,
**CDOUBLE**, **CLONGDOUBLE**, **DATETIME**, **TIMEDELTA**,
**OBJECT**, **STRING**, **UNICODE**, **VOID**
**NTYPES**, **NOTYPE**, **USERDEF**, **DEFAULT_TYPE**
The various character codes indicating certain types are also part of
an enumerated list. References to type characters (should they be
needed at all) should always use these enumerations. The form of them
is :cdata:`NPY_{NAME}LTR` where ``{NAME}`` can be
**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**,
**UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**,
**HALF**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**,
**CDOUBLE**, **CLONGDOUBLE**, **DATETIME**, **TIMEDELTA**,
**OBJECT**, **STRING**, **VOID**
**INTP**, **UINTP**
**GENBOOL**, **SIGNED**, **UNSIGNED**, **FLOATING**, **COMPLEX**
The latter group of ``{NAME}s`` corresponds to letters used in the array
interface typestring specification.
Defines
-------
Max and min values for integers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cvar:: NPY_MAX_INT{bits}
.. cvar:: NPY_MAX_UINT{bits}
.. cvar:: NPY_MIN_INT{bits}
These are defined for ``{bits}`` = 8, 16, 32, 64, 128, and 256 and provide
the maximum (minimum) value of the corresponding (unsigned) integer
type. Note: the actual integer type may not be available on all
platforms (i.e. 128-bit and 256-bit integers are rare).
.. cvar:: NPY_MIN_{type}
This is defined for ``{type}`` = **BYTE**, **SHORT**, **INT**,
**LONG**, **LONGLONG**, **INTP**
.. cvar:: NPY_MAX_{type}
This is defined for ``{type}`` = **BYTE**, **UBYTE**,
**SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**,
**LONGLONG**, **ULONGLONG**, **INTP**, **UINTP**
Number of bits in data types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All :cdata:`NPY_SIZEOF_{CTYPE}` constants have corresponding
:cdata:`NPY_BITSOF_{CTYPE}` constants defined. The :cdata:`NPY_BITSOF_{CTYPE}`
constants provide the number of bits in the data type. Specifically,
the available ``{CTYPE}s`` are
**BOOL**, **CHAR**, **SHORT**, **INT**, **LONG**,
**LONGLONG**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**
Bit-width references to enumerated typenums
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All of the numeric data types (integer, floating point, and complex)
have constants that are defined to be a specific enumerated type
number. Exactly which enumerated type a bit-width type refers to is
platform dependent. In particular, the constants available are
:cdata:`PyArray_{NAME}{BITS}` where ``{NAME}`` is **INT**, **UINT**,
**FLOAT**, **COMPLEX** and ``{BITS}`` can be 8, 16, 32, 64, 80, 96, 128,
160, 192, 256, and 512. Obviously not all bit-widths are available on
all platforms for all the kinds of numeric types. Commonly 8-, 16-,
32-, 64-bit integers; 32-, 64-bit floats; and 64-, 128-bit complex
types are available.
Integer that can hold a pointer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The constants **PyArray_INTP** and **PyArray_UINTP** refer to an
enumerated integer type that is large enough to hold a pointer on the
platform. Index arrays should always be converted to **PyArray_INTP**,
because the dimension of the array is of type npy_intp.
C-type names
------------
There are standard variable types for each of the numeric data types
and the bool data type. Some of these are already available in the
C-specification. You can create variables in extension code with these
types.
Boolean
^^^^^^^
.. ctype:: npy_bool
unsigned char; The constants :cdata:`NPY_FALSE` and
:cdata:`NPY_TRUE` are also defined.
(Un)Signed Integer
^^^^^^^^^^^^^^^^^^
Unsigned versions of the integers can be defined by pre-pending a 'u'
to the front of the integer name.
.. ctype:: npy_(u)byte
(unsigned) char
.. ctype:: npy_(u)short
(unsigned) short
.. ctype:: npy_(u)int
(unsigned) int
.. ctype:: npy_(u)long
(unsigned) long int
.. ctype:: npy_(u)longlong
(unsigned) long long int
.. ctype:: npy_(u)intp
(unsigned) Py_intptr_t (an integer that is the size of a pointer on
the platform).
(Complex) Floating point
^^^^^^^^^^^^^^^^^^^^^^^^
.. ctype:: npy_(c)float
float
.. ctype:: npy_(c)double
double
.. ctype:: npy_(c)longdouble
long double
complex types are structures with **.real** and **.imag** members (in
that order).
Bit-width names
^^^^^^^^^^^^^^^
There are also typedefs for signed integers, unsigned integers,
floating point, and complex floating point types of specific bit-
widths. The available type names are
:ctype:`npy_int{bits}`, :ctype:`npy_uint{bits}`, :ctype:`npy_float{bits}`,
and :ctype:`npy_complex{bits}`
where ``{bits}`` is the number of bits in the type and can be **8**,
**16**, **32**, **64**, 128, and 256 for integer types; 16, **32**,
**64**, 80, 96, 128, and 256 for floating-point types; and 32,
**64**, **128**, 160, 192, and 512 for complex-valued types. Which
bit-widths are available is platform dependent. The bolded bit-widths
are usually available on all platforms.
Printf Formatting
-----------------
For help in printing, the following strings are defined as the correct
format specifier in printf and related commands.
:cdata:`NPY_LONGLONG_FMT`, :cdata:`NPY_ULONGLONG_FMT`,
:cdata:`NPY_INTP_FMT`, :cdata:`NPY_UINTP_FMT`,
:cdata:`NPY_LONGDOUBLE_FMT`
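These format strings are used exactly like the PRI* macros from <inttypes.h>. The sketch below uses PRIdPTR as a stand-in for :cdata:`NPY_INTP_FMT` (npy_intp is a pointer-sized integer); with numpy headers you would splice in NPY_INTP_FMT the same way:

```c
#include <inttypes.h>
#include <stdio.h>

/* The NPY_*_FMT strings are pasted into the format literal just like
 * the PRI* macros; PRIdPTR here stands in for NPY_INTP_FMT, and this
 * helper is our own illustration, not a NumPy function. */
static void format_intp(char *buf, size_t bufsize, intptr_t n)
{
    snprintf(buf, bufsize, "%" PRIdPTR " elements", n);
}
```

With numpy headers the body would read ``snprintf(buf, bufsize, "%" NPY_INTP_FMT " elements", n)``.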

==================================
Generalized Universal Function API
==================================
There is a general need for looping over not only functions on scalars
but also over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose
to realize this concept by generalizing the universal functions
(ufuncs), and provide a C implementation that adds ~500 lines
to the numpy code base. In current (specialized) ufuncs, the elementary
function is limited to element-by-element operations, whereas the
generalized version supports "sub-array" by "sub-array" operations.
The Perl vector library PDL provides a similar functionality and its
terms are re-used in the following.
Each generalized ufunc has information associated with it that states
what the "core" dimensionality of the inputs is, as well as the
corresponding dimensionality of the outputs (the element-wise ufuncs
have zero core dimensions). The list of the core dimensions for all
arguments is called the "signature" of a ufunc. For example, the
ufunc numpy.add has signature ``(),()->()`` defining two scalar inputs
and one scalar output.
Another example is (see the GeneralLoopingFunctions page) the function
``inner1d(a,b)`` with a signature of ``(i),(i)->()``. This applies the
inner product along the last axis of each input, but keeps the
remaining indices intact. For example, where ``a`` is of shape ``(3,5,N)``
and ``b`` is of shape ``(5,N)``, this will return an output of shape ``(3,5)``.
The underlying elementary function is called 3*5 times. In the
signature, we specify one core dimension ``(i)`` for each input and zero core
dimensions ``()`` for the output, since it takes two 1-d arrays and
returns a scalar. By using the same name ``i``, we specify that the two
corresponding dimensions should be of the same size (or one of them is
of size 1 and will be broadcasted).
The dimensions beyond the core dimensions are called "loop" dimensions. In
the above example, this corresponds to ``(3,5)``.
The usual numpy "broadcasting" rules apply, where the signature
determines how the dimensions of each input/output object are split
into core and loop dimensions:
#. While an input array has a smaller dimensionality than the corresponding
number of core dimensions, 1's are pre-pended to its shape.
#. The core dimensions are removed from all inputs and the remaining
dimensions are broadcast against each other, defining the loop dimensions.
#. The output is given by the loop dimensions plus the output core dimensions.
Definitions
-----------
Elementary Function
Each ufunc consists of an elementary function that performs the
most basic operation on the smallest portion of array arguments
(e.g. adding two numbers is the most basic operation in adding two
arrays). The ufunc applies the elementary function multiple times
on different parts of the arrays. The input/output of elementary
functions can be vectors; e.g., the elementary function of inner1d
takes two vectors as input.
Signature
A signature is a string describing the input/output dimensions of
the elementary function of a ufunc. See section below for more
details.
Core Dimension
The dimensionality of each input/output of an elementary function
is defined by its core dimensions (zero core dimensions correspond
to a scalar input/output). The core dimensions are mapped to the
last dimensions of the input/output arrays.
Dimension Name
A dimension name represents a core dimension in the signature.
Different dimensions may share a name, indicating that they are of
the same size (or are broadcastable).
Dimension Index
A dimension index is an integer representing a dimension name. It
enumerates the dimension names according to the order of the first
occurrence of each name in the signature.
Details of Signature
--------------------
The signature defines "core" dimensionality of input and output
variables, and thereby also defines the contraction of the
dimensions. The signature is represented by a string of the
following format:
* Core dimensions of each input or output array are represented by a
list of dimension names in parentheses, ``(i_1,...,i_N)``; a scalar
input/output is denoted by ``()``. Instead of ``i_1``, ``i_2``,
etc, one can use any valid Python variable name.
* Dimension lists for different arguments are separated by ``","``.
Input/output arguments are separated by ``"->"``.
* If one uses the same dimension name in multiple locations, this
enforces the same size (or broadcastable size) of the corresponding
dimensions.
The formal syntax of signatures is as follows::
<Signature> ::= <Input arguments> "->" <Output arguments>
<Input arguments> ::= <Argument list>
<Output arguments> ::= <Argument list>
<Argument list> ::= nil | <Argument> | <Argument> "," <Argument list>
<Argument> ::= "(" <Core dimension list> ")"
<Core dimension list> ::= nil | <Dimension name> |
<Dimension name> "," <Core dimension list>
<Dimension name> ::= valid Python variable name
Notes:
#. All quotes are for clarity.
#. Core dimensions that share the same name must be broadcastable, as
the two ``i`` in our example above. Each dimension name typically
corresponds to one level of looping in the elementary function's
implementation.
#. White spaces are ignored.
Here are some examples of signatures:
+-------------+------------------------+-----------------------------------+
| add | ``(),()->()`` | |
+-------------+------------------------+-----------------------------------+
| inner1d | ``(i),(i)->()`` | |
+-------------+------------------------+-----------------------------------+
| sum1d | ``(i)->()`` | |
+-------------+------------------------+-----------------------------------+
| dot2d | ``(m,n),(n,p)->(m,p)`` | matrix multiplication |
+-------------+------------------------+-----------------------------------+
| outer_inner | ``(i,t),(j,t)->(i,j)`` | inner over the last dimension, |
| | | outer over the second to last, |
| | | and loop/broadcast over the rest. |
+-------------+------------------------+-----------------------------------+
C-API for implementing Elementary Functions
-------------------------------------------
The current interface remains unchanged, and ``PyUFunc_FromFuncAndData``
can still be used to implement (specialized) ufuncs, consisting of
scalar elementary functions.
One can use ``PyUFunc_FromFuncAndDataAndSignature`` to declare a more
general ufunc. The argument list is the same as
``PyUFunc_FromFuncAndData``, with an additional argument specifying the
signature as C string.
Furthermore, the callback function is of the same type as before,
``void (*foo)(char **args, intp *dimensions, intp *steps, void *func)``.
When invoked, ``args`` is a list of length ``nargs`` containing
the data of all input/output arguments. For a scalar elementary
function, ``steps`` is also of length ``nargs``, denoting the strides used
for the arguments. ``dimensions`` is a pointer to a single integer
defining the size of the axis to be looped over.
For a non-trivial signature, ``dimensions`` will also contain the sizes
of the core dimensions as well, starting at the second entry. Only
one size is provided for each unique dimension name and the sizes are
given according to the first occurrence of a dimension name in the
signature.
The first ``nargs`` elements of ``steps`` remain the same as for scalar
ufuncs. The following elements contain the strides of all core
dimensions for all arguments in order.
For example, consider a ufunc with signature ``(i,j),(i)->()``. In
this case, ``args`` will contain three pointers to the data of the
input/output arrays ``a``, ``b``, ``c``. Furthermore, ``dimensions`` will be
``[N, I, J]`` to define the size of ``N`` of the loop and the sizes ``I`` and ``J``
for the core dimensions ``i`` and ``j``. Finally, ``steps`` will be
``[a_N, b_N, c_N, a_i, a_j, b_i]``, containing all necessary strides.
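The layout just described can be exercised with a toy elementary function for inner1d, signature ``(i),(i)->()``. In this self-contained sketch ``intp`` is a plain ptrdiff_t stand-in for npy_intp, and the dimensions and strides are supplied by hand rather than by the ufunc machinery:

```c
#include <stddef.h>

typedef ptrdiff_t intp;   /* stand-in for npy_intp in this sketch */

/* Elementary function for inner1d with signature (i),(i)->() over doubles.
 * dimensions[0] holds the loop size N, dimensions[1] the core size I.
 * steps[0..2] are the outer strides of a, b and out; steps[3] and steps[4]
 * are the core strides of a and b along i (out has no core dimensions). */
static void inner1d_loop(char **args, intp *dimensions, intp *steps, void *data)
{
    intp N = dimensions[0], I = dimensions[1];
    char *a = args[0], *b = args[1], *out = args[2];
    intp n, i;
    (void)data;
    for (n = 0; n < N; n++) {
        double acc = 0.0;
        char *pa = a, *pb = b;
        for (i = 0; i < I; i++) {
            acc += *(double *)pa * *(double *)pb;
            pa += steps[3];
            pb += steps[4];
        }
        *(double *)out = acc;
        a += steps[0];
        b += steps[1];
        out += steps[2];
    }
}
```

For two contiguous double arrays of shape (2, 3) and a length-2 output, ``dimensions`` would be [2, 3] and ``steps`` would be [24, 24, 8, 8, 8] on a platform with 8-byte doubles.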

Array Iterator API
==================
.. sectionauthor:: Mark Wiebe
.. index::
pair: iterator; C-API
pair: C-API; iterator
.. versionadded:: 1.6
Array Iterator
--------------
The array iterator encapsulates many of the key features in ufuncs,
allowing user code to support features like output parameters,
preservation of memory layouts, and buffering of data with the wrong
alignment or type, without requiring difficult coding.
This page documents the API for the iterator.
The C-API naming convention chosen is based on the one in the numpy-refactor
branch, so will integrate naturally into the refactored code base.
The iterator is named ``NpyIter`` and functions are
named ``NpyIter_*``.
Converting from Previous NumPy Iterators
----------------------------------------
The existing iterator API includes functions like PyArrayIter_Check,
PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes
PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The
new iterator design replaces all of this functionality with a single object
and associated API. One goal of the new API is that all uses of the
existing iterator should be replaceable with the new iterator without
significant effort. In 1.6, the major exception to this is the neighborhood
iterator, which does not have corresponding features in this iterator.
Here is a conversion table for which functions to use with the new iterator:

===================================== =============================================
*Iterator Functions*
:cfunc:`PyArray_IterNew`              :cfunc:`NpyIter_New`
:cfunc:`PyArray_IterAllButAxis`       :cfunc:`NpyIter_New` + ``axes`` parameter
                                      **or** Iterator flag
                                      :cdata:`NPY_ITER_EXTERNAL_LOOP`
:cfunc:`PyArray_BroadcastToShape`     **NOT SUPPORTED** (Use the support for
                                      multiple operands instead.)
:cfunc:`PyArrayIter_Check`            Will need to add this in Python exposure
:cfunc:`PyArray_ITER_RESET`           :cfunc:`NpyIter_Reset`
:cfunc:`PyArray_ITER_NEXT`            Function pointer from
                                      :cfunc:`NpyIter_GetIterNext`
:cfunc:`PyArray_ITER_DATA`            :cfunc:`NpyIter_GetDataPtrArray`
:cfunc:`PyArray_ITER_GOTO`            :cfunc:`NpyIter_GotoMultiIndex`
:cfunc:`PyArray_ITER_GOTO1D`          :cfunc:`NpyIter_GotoIndex` or
                                      :cfunc:`NpyIter_GotoIterIndex`
:cfunc:`PyArray_ITER_NOTDONE`         Return value of ``iternext`` function pointer
*Multi-iterator Functions*
:cfunc:`PyArray_MultiIterNew`         :cfunc:`NpyIter_MultiNew`
:cfunc:`PyArray_MultiIter_RESET`      :cfunc:`NpyIter_Reset`
:cfunc:`PyArray_MultiIter_NEXT`       Function pointer from
                                      :cfunc:`NpyIter_GetIterNext`
:cfunc:`PyArray_MultiIter_DATA`       :cfunc:`NpyIter_GetDataPtrArray`
:cfunc:`PyArray_MultiIter_NEXTi`      **NOT SUPPORTED** (always lock-step iteration)
:cfunc:`PyArray_MultiIter_GOTO`       :cfunc:`NpyIter_GotoMultiIndex`
:cfunc:`PyArray_MultiIter_GOTO1D`     :cfunc:`NpyIter_GotoIndex` or
                                      :cfunc:`NpyIter_GotoIterIndex`
:cfunc:`PyArray_MultiIter_NOTDONE`    Return value of ``iternext`` function pointer
:cfunc:`PyArray_Broadcast`            Handled by :cfunc:`NpyIter_MultiNew`
:cfunc:`PyArray_RemoveSmallest`       Iterator flag :cdata:`NPY_ITER_EXTERNAL_LOOP`
*Other Functions*
:cfunc:`PyArray_ConvertToCommonType`  Iterator flag :cdata:`NPY_ITER_COMMON_DTYPE`
===================================== =============================================

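The same migration is visible at the Python level: ``numpy.nditer`` is the Python
exposure of ``NpyIter``, and element iteration that previously used the old flat
iterator maps directly onto it. A minimal sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# The default nditer visits elements in memory order (order='K'),
# which for a C-contiguous array is plain C order.
vals = [int(x) for x in np.nditer(a)]
print(vals)
```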
Simple Iteration Example
------------------------
The best way to become familiar with the iterator is to look at its
usage within the NumPy codebase itself. For example, here is a slightly
tweaked version of the code for :cfunc:`PyArray_CountNonzero`, which counts the
number of non-zero elements in an array.
.. code-block:: c
npy_intp PyArray_CountNonzero(PyArrayObject* self)
{
/* Nonzero boolean function */
PyArray_NonzeroFunc* nonzero = PyArray_DESCR(self)->f->nonzero;
NpyIter* iter;
NpyIter_IterNextFunc *iternext;
char** dataptr;
        npy_intp* strideptr,* innersizeptr;
        npy_intp nonzero_count = 0;
/* Handle zero-sized arrays specially */
if (PyArray_SIZE(self) == 0) {
return 0;
}
/*
* Create and use an iterator to count the nonzeros.
* flag NPY_ITER_READONLY
* - The array is never written to.
* flag NPY_ITER_EXTERNAL_LOOP
* - Inner loop is done outside the iterator for efficiency.
         * flag NPY_ITER_REFS_OK
* - Reference types are acceptable.
* order NPY_KEEPORDER
* - Visit elements in memory order, regardless of strides.
* This is good for performance when the specific order
* elements are visited is unimportant.
* casting NPY_NO_CASTING
* - No casting is required for this operation.
*/
iter = NpyIter_New(self, NPY_ITER_READONLY|
NPY_ITER_EXTERNAL_LOOP|
NPY_ITER_REFS_OK,
NPY_KEEPORDER, NPY_NO_CASTING,
NULL);
if (iter == NULL) {
return -1;
}
/*
* The iternext function gets stored in a local variable
* so it can be called repeatedly in an efficient manner.
*/
iternext = NpyIter_GetIterNext(iter, NULL);
if (iternext == NULL) {
NpyIter_Deallocate(iter);
return -1;
}
/* The location of the data pointer which the iterator may update */
dataptr = NpyIter_GetDataPtrArray(iter);
/* The location of the stride which the iterator may update */
strideptr = NpyIter_GetInnerStrideArray(iter);
/* The location of the inner loop size which the iterator may update */
innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);
/* The iteration loop */
do {
/* Get the inner loop data/stride/count values */
char* data = *dataptr;
npy_intp stride = *strideptr;
npy_intp count = *innersizeptr;
/* This is a typical inner loop for NPY_ITER_EXTERNAL_LOOP */
while (count--) {
if (nonzero(data, self)) {
++nonzero_count;
}
data += stride;
}
/* Increment the iterator to the next inner loop */
} while(iternext(iter));
NpyIter_Deallocate(iter);
return nonzero_count;
}
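The same structure can be sketched at the Python level with ``numpy.nditer``;
this is an illustrative analogue of the C example above, not the actual
``PyArray_CountNonzero`` implementation:

```python
import numpy as np

def count_nonzero(arr):
    """Count nonzero elements, mirroring the C example's structure."""
    # Handle zero-sized arrays specially, as the C code does.
    if arr.size == 0:
        return 0
    count = 0
    # external_loop hands back contiguous 1-D chunks instead of scalars,
    # so the inner loop is done outside the iterator for efficiency.
    for chunk in np.nditer(arr, flags=['external_loop', 'refs_ok'],
                           op_flags=[['readonly']], order='K'):
        count += int((chunk != 0).sum())
    return count

print(count_nonzero(np.array([[1, 0, 2], [0, 3, 0]])))
```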
Simple Multi-Iteration Example
------------------------------
Here is a simple copy function using the iterator. The ``order`` parameter
is used to control the memory layout of the allocated result, typically
:cdata:`NPY_KEEPORDER` is desired.
.. code-block:: c
PyObject *CopyArray(PyObject *arr, NPY_ORDER order)
{
NpyIter *iter;
NpyIter_IterNextFunc *iternext;
PyObject *op[2], *ret;
npy_uint32 flags;
npy_uint32 op_flags[2];
npy_intp itemsize, *innersizeptr, innerstride;
char **dataptrarray;
/*
* No inner iteration - inner loop is handled by CopyArray code
*/
flags = NPY_ITER_EXTERNAL_LOOP;
/*
* Tell the constructor to automatically allocate the output.
* The data type of the output will match that of the input.
*/
op[0] = arr;
op[1] = NULL;
op_flags[0] = NPY_ITER_READONLY;
op_flags[1] = NPY_ITER_WRITEONLY | NPY_ITER_ALLOCATE;
/* Construct the iterator */
iter = NpyIter_MultiNew(2, op, flags, order, NPY_NO_CASTING,
op_flags, NULL);
if (iter == NULL) {
return NULL;
}
/*
* Make a copy of the iternext function pointer and
* a few other variables the inner loop needs.
*/
        iternext = NpyIter_GetIterNext(iter, NULL);
innerstride = NpyIter_GetInnerStrideArray(iter)[0];
itemsize = NpyIter_GetDescrArray(iter)[0]->elsize;
/*
* The inner loop size and data pointers may change during the
* loop, so just cache the addresses.
*/
innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);
dataptrarray = NpyIter_GetDataPtrArray(iter);
/*
* Note that because the iterator allocated the output,
* it matches the iteration order and is packed tightly,
* so we don't need to check it like the input.
*/
if (innerstride == itemsize) {
do {
memcpy(dataptrarray[1], dataptrarray[0],
itemsize * (*innersizeptr));
} while (iternext(iter));
} else {
/* For efficiency, should specialize this based on item size... */
npy_intp i;
do {
npy_intp size = *innersizeptr;
                char *src = dataptrarray[0], *dst = dataptrarray[1];
for(i = 0; i < size; i++, src += innerstride, dst += itemsize) {
memcpy(dst, src, itemsize);
}
} while (iternext(iter));
}
/* Get the result from the iterator object array */
ret = NpyIter_GetOperandArray(iter)[1];
Py_INCREF(ret);
if (NpyIter_Deallocate(iter) != NPY_SUCCEED) {
Py_DECREF(ret);
return NULL;
}
return ret;
}
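A Python-level sketch of the same copy pattern, using ``numpy.nditer`` with a
``None`` operand so the iterator allocates the output (an analogue, not the C
code itself):

```python
import numpy as np

def copy_array(arr, order='K'):
    # The second operand is None: writeonly + allocate makes the
    # iterator create the output with a layout matching `order`.
    it = np.nditer([arr, None], flags=['external_loop'],
                   op_flags=[['readonly'], ['writeonly', 'allocate']],
                   order=order)
    with it:
        for src, dst in it:
            dst[...] = src
        return it.operands[1]

b = copy_array(np.arange(6).reshape(2, 3))
print(b)
```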
Iterator Data Types
---------------------
The iterator layout is an internal detail, and user code only sees
an incomplete struct.
.. ctype:: NpyIter
This is an opaque pointer type for the iterator. Access to its contents
can only be done through the iterator API.
.. ctype:: NpyIter_Type
This is the type which exposes the iterator to Python. Currently, no
API is exposed which provides access to the values of a Python-created
iterator. If an iterator is created in Python, it must be used in Python
and vice versa. Such an API will likely be created in a future version.
.. ctype:: NpyIter_IterNextFunc
This is a function pointer for the iteration loop, returned by
:cfunc:`NpyIter_GetIterNext`.
.. ctype:: NpyIter_GetMultiIndexFunc
This is a function pointer for getting the current iterator multi-index,
returned by :cfunc:`NpyIter_GetGetMultiIndex`.
Construction and Destruction
----------------------------
.. cfunction:: NpyIter* NpyIter_New(PyArrayObject* op, npy_uint32 flags, NPY_ORDER order, NPY_CASTING casting, PyArray_Descr* dtype)
Creates an iterator for the given numpy array object ``op``.
Flags that may be passed in ``flags`` are any combination
of the global and per-operand flags documented in
:cfunc:`NpyIter_MultiNew`, except for :cdata:`NPY_ITER_ALLOCATE`.
Any of the :ctype:`NPY_ORDER` enum values may be passed to ``order``. For
efficient iteration, :ctype:`NPY_KEEPORDER` is the best option, and
the other orders enforce the particular iteration pattern.
Any of the :ctype:`NPY_CASTING` enum values may be passed to ``casting``.
The values include :cdata:`NPY_NO_CASTING`, :cdata:`NPY_EQUIV_CASTING`,
:cdata:`NPY_SAFE_CASTING`, :cdata:`NPY_SAME_KIND_CASTING`, and
:cdata:`NPY_UNSAFE_CASTING`. To allow the casts to occur, copying or
buffering must also be enabled.
If ``dtype`` isn't ``NULL``, then it requires that data type.
If copying is allowed, it will make a temporary copy if the data
is castable. If :cdata:`NPY_ITER_UPDATEIFCOPY` is enabled, it will
also copy the data back with another cast upon iterator destruction.
Returns NULL if there is an error, otherwise returns the allocated
iterator.
To make an iterator similar to the old iterator, this should work.
.. code-block:: c
iter = NpyIter_New(op, NPY_ITER_READWRITE,
NPY_CORDER, NPY_NO_CASTING, NULL);
If you want to edit an array with aligned ``double`` code,
but the order doesn't matter, you would use this.
.. code-block:: c
dtype = PyArray_DescrFromType(NPY_DOUBLE);
iter = NpyIter_New(op, NPY_ITER_READWRITE|
NPY_ITER_BUFFERED|
NPY_ITER_NBO|
NPY_ITER_ALIGNED,
NPY_KEEPORDER,
NPY_SAME_KIND_CASTING,
dtype);
Py_DECREF(dtype);
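The buffered, type-converting pattern above has a direct Python-level analogue
in ``numpy.nditer`` (``op_dtypes`` plus ``casting``); a minimal sketch:

```python
import numpy as np

a = np.arange(3, dtype=np.float32)

# Buffering lets the iterator present float64 values for float32 data;
# 'same_kind' casting permits the write-back conversion on flush.
it = np.nditer(a, flags=['buffered'],
               op_flags=[['readwrite']],
               op_dtypes=[np.dtype('float64')],
               casting='same_kind')
with it:
    for x in it:
        x[...] = x * 2
print(a)
```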
.. cfunction:: NpyIter* NpyIter_MultiNew(npy_intp nop, PyArrayObject** op, npy_uint32 flags, NPY_ORDER order, NPY_CASTING casting, npy_uint32* op_flags, PyArray_Descr** op_dtypes)
Creates an iterator for broadcasting the ``nop`` array objects provided
in ``op``, using regular NumPy broadcasting rules.
Any of the :ctype:`NPY_ORDER` enum values may be passed to ``order``. For
efficient iteration, :cdata:`NPY_KEEPORDER` is the best option, and the
other orders enforce the particular iteration pattern. When using
:cdata:`NPY_KEEPORDER`, if you also want to ensure that the iteration is
not reversed along an axis, you should pass the flag
:cdata:`NPY_ITER_DONT_NEGATE_STRIDES`.
Any of the :ctype:`NPY_CASTING` enum values may be passed to ``casting``.
The values include :cdata:`NPY_NO_CASTING`, :cdata:`NPY_EQUIV_CASTING`,
:cdata:`NPY_SAFE_CASTING`, :cdata:`NPY_SAME_KIND_CASTING`, and
:cdata:`NPY_UNSAFE_CASTING`. To allow the casts to occur, copying or
buffering must also be enabled.
If ``op_dtypes`` isn't ``NULL``, it specifies a data type or ``NULL``
for each ``op[i]``.
Returns NULL if there is an error, otherwise returns the allocated
iterator.
Flags that may be passed in ``flags``, applying to the whole
iterator, are:
.. cvar:: NPY_ITER_C_INDEX
Causes the iterator to track a raveled flat index matching C
order. This option cannot be used with :cdata:`NPY_ITER_F_INDEX`.
.. cvar:: NPY_ITER_F_INDEX
Causes the iterator to track a raveled flat index matching Fortran
order. This option cannot be used with :cdata:`NPY_ITER_C_INDEX`.
.. cvar:: NPY_ITER_MULTI_INDEX
Causes the iterator to track a multi-index.
This prevents the iterator from coalescing axes to
produce bigger inner loops.
.. cvar:: NPY_ITER_EXTERNAL_LOOP
Causes the iterator to skip iteration of the innermost
loop, requiring the user of the iterator to handle it.
This flag is incompatible with :cdata:`NPY_ITER_C_INDEX`,
:cdata:`NPY_ITER_F_INDEX`, and :cdata:`NPY_ITER_MULTI_INDEX`.
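Both flags are exposed through ``numpy.nditer``, which makes their effect easy
to see; a short illustrative sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# NPY_ITER_MULTI_INDEX: track a multi-index for each element.
it = np.nditer(a, flags=['multi_index'])
indices = [it.multi_index for _ in it]

# NPY_ITER_EXTERNAL_LOOP: the caller handles the inner loop, so the
# iterator yields 1-D chunks; a contiguous array coalesces to one chunk.
chunks = [c.copy() for c in np.nditer(a, flags=['external_loop'])]
print(indices, [c.size for c in chunks])
```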
.. cvar:: NPY_ITER_DONT_NEGATE_STRIDES
This only affects the iterator when :ctype:`NPY_KEEPORDER` is
specified for the order parameter. By default with
:ctype:`NPY_KEEPORDER`, the iterator reverses axes which have
negative strides, so that memory is traversed in a forward
direction. This disables this step. Use this flag if you
want to use the underlying memory-ordering of the axes,
but don't want an axis reversed. This is the behavior of
``numpy.ravel(a, order='K')``, for instance.
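The difference between negating and keeping negative strides can be seen from
Python by comparing default ``order='K'`` iteration with ``numpy.ravel(a,
order='K')`` (a sketch; ``ravel`` with ``'K'`` corresponds to the
don't-negate behavior described above):

```python
import numpy as np

base = np.arange(6).reshape(2, 3)
a = base[::-1]          # negative stride along axis 0

# Default order='K' reverses the negative-stride axis, so iteration
# follows memory order:
mem_order = [int(x) for x in np.nditer(a, order='K')]

# np.ravel(a, order='K') keeps the axis orientation, which is the
# behavior NPY_ITER_DONT_NEGATE_STRIDES selects:
kept = np.ravel(a, order='K')
print(mem_order, kept)
```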
.. cvar:: NPY_ITER_COMMON_DTYPE
Causes the iterator to convert all the operands to a common
data type, calculated based on the ufunc type promotion rules.
Copying or buffering must be enabled.
If the common data type is known ahead of time, don't use this
flag. Instead, set the requested dtype for all the operands.
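At the Python level, the same flag is available as ``common_dtype`` on
``numpy.nditer``; a small sketch showing the promotion (note copying must be
allowed, as described above):

```python
import numpy as np

a = np.arange(3, dtype=np.int32)
b = np.linspace(0.0, 1.0, 3)          # float64

# common_dtype promotes both operands to one type.
it = np.nditer([a, b], flags=['common_dtype'],
               op_flags=[['readonly', 'copy'], ['readonly', 'copy']])
print(it.dtypes)
```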
.. cvar:: NPY_ITER_REFS_OK
Indicates that arrays with reference types (object
arrays or structured arrays containing an object type)
may be accepted and used in the iterator. If this flag
is enabled, the caller must be sure to check whether
:cfunc:`NpyIter_IterationNeedsAPI`(iter) is true, in which case
it may not release the GIL during iteration.
.. cvar:: NPY_ITER_ZEROSIZE_OK
Indicates that arrays with a size of zero should be permitted.
Since the typical iteration loop does not naturally work with
zero-sized arrays, you must check that :cfunc:`NpyIter_GetIterSize` is non-zero
before entering the iteration loop.
.. cvar:: NPY_ITER_REDUCE_OK
Permits writeable operands with a dimension with zero
stride and size greater than one. Note that such operands
must be read/write.
When buffering is enabled, this also switches to a special
buffering mode which reduces the loop length as necessary to
not trample on values being reduced.
Note that if you want to do a reduction on an automatically
allocated output, you must use :cfunc:`NpyIter_GetOperandArray`
to get its reference, then set every value to the reduction
unit before doing the iteration loop. In the case of a
buffered reduction, this means you must also specify the
flag :cdata:`NPY_ITER_DELAY_BUFALLOC`, then reset the iterator
after initializing the allocated operand to prepare the
buffers.
.. cvar:: NPY_ITER_RANGED
Enables support for iteration of sub-ranges of the full
``iterindex`` range ``[0, NpyIter_GetIterSize(iter))``. Use
the function :cfunc:`NpyIter_ResetToIterIndexRange` to specify
a range for iteration.
This flag can only be used with :cdata:`NPY_ITER_EXTERNAL_LOOP`
when :cdata:`NPY_ITER_BUFFERED` is enabled. This is because
without buffering, the inner loop is always the size of the
innermost iteration dimension, and allowing it to get cut up
would require special handling, effectively making it more
like the buffered version.
.. cvar:: NPY_ITER_BUFFERED
Causes the iterator to store buffering data, and use buffering
to satisfy data type, alignment, and byte-order requirements.
To buffer an operand, do not specify the :cdata:`NPY_ITER_COPY`
or :cdata:`NPY_ITER_UPDATEIFCOPY` flags, because they will
override buffering. Buffering is especially useful for Python
code using the iterator, allowing for larger chunks
of data at once to amortize the Python interpreter overhead.
If used with :cdata:`NPY_ITER_EXTERNAL_LOOP`, the inner loop
for the caller may get larger chunks than would be possible
without buffering, because of how the strides are laid out.
Note that if an operand is given the flag :cdata:`NPY_ITER_COPY`
or :cdata:`NPY_ITER_UPDATEIFCOPY`, a copy will be made in preference
to buffering. Buffering will still occur when the array was
broadcast so elements need to be duplicated to get a constant
stride.
In normal buffering, the size of each inner loop is equal
to the buffer size, or possibly larger if
:cdata:`NPY_ITER_GROWINNER` is specified. If
:cdata:`NPY_ITER_REDUCE_OK` is enabled and a reduction occurs,
the inner loops may become smaller depending
on the structure of the reduction.
.. cvar:: NPY_ITER_GROWINNER
When buffering is enabled, this allows the size of the inner
loop to grow when buffering isn't necessary. This option
is best used if you're doing a straight pass through all the
data, rather than anything with small cache-friendly arrays
of temporary values for each inner loop.
.. cvar:: NPY_ITER_DELAY_BUFALLOC
When buffering is enabled, this delays allocation of the
buffers until :cfunc:`NpyIter_Reset` or another reset function is
called. This flag exists to avoid wasteful copying of
buffer data when making multiple copies of a buffered
iterator for multi-threaded iteration.
Another use of this flag is for setting up reduction operations.
After the iterator is created, and a reduction output
is allocated automatically by the iterator (be sure to use
READWRITE access), its value may be initialized to the reduction
unit. Use :cfunc:`NpyIter_GetOperandArray` to get the object.
Then, call :cfunc:`NpyIter_Reset` to allocate and fill the buffers
with their initial values.
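The buffered-reduction recipe described above (allocate the output, set it to
the reduction unit, then reset to fill the delayed buffers) can be sketched
with ``numpy.nditer``, which exposes the same flags:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Sum over axes 1 and 2; the output operand is allocated by the iterator.
it = np.nditer([a, None],
               flags=['reduce_ok', 'buffered', 'delay_bufalloc'],
               op_flags=[['readonly'], ['readwrite', 'allocate']],
               op_axes=[None, [0, -1, -1]])
with it:
    it.operands[1][...] = 0   # initialize to the reduction unit
    it.reset()                # now allocate and fill the buffers
    for x, y in it:
        y[...] += x
    result = it.operands[1]
print(result)
```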
Flags that may be passed in ``op_flags[i]``, where ``0 <= i < nop``:
.. cvar:: NPY_ITER_READWRITE
.. cvar:: NPY_ITER_READONLY
.. cvar:: NPY_ITER_WRITEONLY
Indicate how the user of the iterator will read or write
to ``op[i]``. Exactly one of these flags must be specified
per operand.
.. cvar:: NPY_ITER_COPY
Allow a copy of ``op[i]`` to be made if it does not
meet the data type or alignment requirements as specified
by the constructor flags and parameters.
.. cvar:: NPY_ITER_UPDATEIFCOPY
Triggers :cdata:`NPY_ITER_COPY`, and when an array operand
is flagged for writing and is copied, causes the data
in a copy to be copied back to ``op[i]`` when the iterator
is destroyed.
If the operand is flagged as write-only and a copy is needed,
an uninitialized temporary array will be created and then copied
back to ``op[i]`` on destruction, instead of doing
the unnecessary copy operation.
.. cvar:: NPY_ITER_NBO
.. cvar:: NPY_ITER_ALIGNED
.. cvar:: NPY_ITER_CONTIG
Causes the iterator to provide data for ``op[i]``
that is in native byte order, aligned according to
the dtype requirements, contiguous, or any combination.
By default, the iterator produces pointers into the
arrays provided, which may be aligned or unaligned, and
with any byte order. If copying or buffering is not
enabled and the operand data doesn't satisfy the constraints,
an error will be raised.
The contiguous constraint applies only to the inner loop;
successive inner loops may have arbitrary pointer changes.
If the requested data type is in non-native byte order,
the NBO flag overrides it and the requested data type is
converted to be in native byte order.
.. cvar:: NPY_ITER_ALLOCATE
This is for output arrays, and requires that the flag
:cdata:`NPY_ITER_WRITEONLY` be set. If ``op[i]`` is NULL,
creates a new array with the final broadcast dimensions,
and a layout matching the iteration order of the iterator.
When ``op[i]`` is NULL, the requested data type
``op_dtypes[i]`` may be NULL as well, in which case it is
automatically generated from the dtypes of the arrays which
are flagged as readable. The rules for generating the dtype
are the same as for UFuncs. Of special note is the handling
of byte order in the selected dtype. If there is exactly
one input, the input's dtype is used as is. Otherwise,
if the dtypes of more than one input are combined, the
output will be in native byte order.
After being allocated with this flag, the caller may retrieve
the new array by calling :cfunc:`NpyIter_GetOperandArray` and
getting the i-th object in the returned C array. The caller
must call Py_INCREF on it to claim a reference to the array.
.. cvar:: NPY_ITER_NO_SUBTYPE
For use with :cdata:`NPY_ITER_ALLOCATE`, this flag disables
allocating an array subtype for the output, forcing
it to be a straight ndarray.
TODO: Maybe it would be better to introduce a function
``NpyIter_GetWrappedOutput`` and remove this flag?
.. cvar:: NPY_ITER_NO_BROADCAST
Ensures that the input or output matches the iteration
dimensions exactly.
.. cfunction:: NpyIter* NpyIter_AdvancedNew(npy_intp nop, PyArrayObject** op, npy_uint32 flags, NPY_ORDER order, NPY_CASTING casting, npy_uint32* op_flags, PyArray_Descr** op_dtypes, int oa_ndim, int** op_axes, npy_intp* itershape, npy_intp buffersize)
Extends :cfunc:`NpyIter_MultiNew` with several advanced options providing
more control over broadcasting and buffering.
If 0/NULL values are passed to ``oa_ndim``, ``op_axes``, ``itershape``,
and ``buffersize``, it is equivalent to :cfunc:`NpyIter_MultiNew`.
The parameter ``oa_ndim``, when non-zero, specifies the number of
dimensions that will be iterated with customized broadcasting.
If it is provided, ``op_axes`` and/or ``itershape`` must also be provided.
The ``op_axes`` parameter let you control in detail how the
axes of the operand arrays get matched together and iterated.
In ``op_axes``, you must provide an array of ``nop`` pointers
to ``oa_ndim``-sized arrays of type ``npy_intp``. If an entry
in ``op_axes`` is NULL, normal broadcasting rules will apply.
``op_axes[j][i]`` stores either a valid axis of ``op[j]``, or
-1, which means ``newaxis``. Within each ``op_axes[j]`` array, axes
may not be repeated. The following example is how normal broadcasting
applies to a 3-D array, a 2-D array, a 1-D array and a scalar.
.. code-block:: c
int oa_ndim = 3; /* # iteration axes */
int op0_axes[] = {0, 1, 2}; /* 3-D operand */
int op1_axes[] = {-1, 0, 1}; /* 2-D operand */
int op2_axes[] = {-1, -1, 0}; /* 1-D operand */
        int op3_axes[] = {-1, -1, -1}; /* 0-D (scalar) operand */
int* op_axes[] = {op0_axes, op1_axes, op2_axes, op3_axes};
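The same axis-mapping is exposed through the ``op_axes`` parameter of
``numpy.nditer``. As an illustrative sketch, an outer product built by mapping
each operand into a 3-D iteration space (analogous to the C arrays above):

```python
import numpy as np

a = np.arange(3)
b = np.arange(8).reshape(2, 4)

# a supplies iteration axis 0, b supplies axes 1 and 2, and the
# allocated output (None) spans all three.
it = np.nditer([a, b, None],
               op_axes=[[0, -1, -1], [-1, 0, 1], None])
with it:
    for x, y, z in it:
        z[...] = x * y
    result = it.operands[2]
print(result.shape)
```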
The ``itershape`` parameter allows you to force the iterator
to have a specific iteration shape. It is an array of length
``oa_ndim``. When an entry is negative, its value is determined
from the operands. This parameter allows automatically allocated
outputs to get additional dimensions which don't match up with
any dimension of an input.
If ``buffersize`` is zero, a default buffer size is used,
otherwise it specifies how big of a buffer to use. Buffers
which are powers of 2 such as 4096 or 8192 are recommended.
Returns NULL if there is an error, otherwise returns the allocated
iterator.
.. cfunction:: NpyIter* NpyIter_Copy(NpyIter* iter)
Makes a copy of the given iterator. This function is provided
primarily to enable multi-threaded iteration of the data.
*TODO*: Move this to a section about multithreaded iteration.
The recommended approach to multithreaded iteration is to
first create an iterator with the flags
:cdata:`NPY_ITER_EXTERNAL_LOOP`, :cdata:`NPY_ITER_RANGED`,
:cdata:`NPY_ITER_BUFFERED`, :cdata:`NPY_ITER_DELAY_BUFALLOC`, and
possibly :cdata:`NPY_ITER_GROWINNER`. Create a copy of this iterator
for each thread (minus one for the first iterator). Then, take
the iteration index range ``[0, NpyIter_GetIterSize(iter))`` and
split it up into tasks, for example using a TBB parallel_for loop.
When a thread gets a task to execute, it then uses its copy of
the iterator by calling :cfunc:`NpyIter_ResetToIterIndexRange` and
iterating over the full range.
When using the iterator in multi-threaded code or in code not
holding the Python GIL, care must be taken to only call functions
which are safe in that context. :cfunc:`NpyIter_Copy` cannot be safely
called without the Python GIL, because it increments Python
references. The ``Reset*`` and some other functions may be safely
called by passing in the ``errmsg`` parameter as non-NULL, so that
the functions will pass back errors through it instead of setting
a Python exception.
.. cfunction:: int NpyIter_RemoveAxis(NpyIter* iter, int axis)
Removes an axis from iteration. This requires that
:cdata:`NPY_ITER_MULTI_INDEX` was set for iterator creation, and does
not work if buffering is enabled or an index is being tracked. This
function also resets the iterator to its initial state.
This is useful for setting up an accumulation loop, for example.
The iterator can first be created with all the dimensions, including
the accumulation axis, so that the output gets created correctly.
Then, the accumulation axis can be removed, and the calculation
done in a nested fashion.
**WARNING**: This function may change the internal memory layout of
the iterator. Any cached functions or pointers from the iterator
must be retrieved again!
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
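``numpy.nditer`` exposes this as the ``remove_axis`` method, which likewise
requires multi-index tracking; a minimal sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
it = np.nditer(a, flags=['multi_index'])  # multi-index required
it.remove_axis(1)                         # stop iterating axis 1
vals = [int(x) for x in it]               # one element per remaining index
print(vals)
```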
.. cfunction:: int NpyIter_RemoveMultiIndex(NpyIter* iter)
If the iterator is tracking a multi-index, this strips support for them,
and does further iterator optimizations that are possible if multi-indices
are not needed. This function also resets the iterator to its initial
state.
**WARNING**: This function may change the internal memory layout of
the iterator. Any cached functions or pointers from the iterator
must be retrieved again!
After calling this function, :cfunc:`NpyIter_HasMultiIndex`(iter) will
return false.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
.. cfunction:: int NpyIter_EnableExternalLoop(NpyIter* iter)
If :cfunc:`NpyIter_RemoveMultiIndex` was called, you may want to enable the
flag :cdata:`NPY_ITER_EXTERNAL_LOOP`. This flag is not permitted
together with :cdata:`NPY_ITER_MULTI_INDEX`, so this function is provided
to enable the feature after :cfunc:`NpyIter_RemoveMultiIndex` is called.
This function also resets the iterator to its initial state.
**WARNING**: This function changes the internal logic of the iterator.
Any cached functions or pointers from the iterator must be retrieved
again!
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
.. cfunction:: int NpyIter_Deallocate(NpyIter* iter)
Deallocates the iterator object. This additionally frees any
copies made, triggering UPDATEIFCOPY behavior where necessary.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
.. cfunction:: int NpyIter_Reset(NpyIter* iter, char** errmsg)
Resets the iterator back to its initial state, at the beginning
of the iteration range.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``. If errmsg is non-NULL,
no Python exception is set when ``NPY_FAIL`` is returned.
Instead, \*errmsg is set to an error message. When errmsg is
non-NULL, the function may be safely called without holding
the Python GIL.
.. cfunction:: int NpyIter_ResetToIterIndexRange(NpyIter* iter, npy_intp istart, npy_intp iend, char** errmsg)
Resets the iterator and restricts it to the ``iterindex`` range
``[istart, iend)``. See :cfunc:`NpyIter_Copy` for an explanation of
how to use this for multi-threaded iteration. This requires that
the flag :cdata:`NPY_ITER_RANGED` was passed to the iterator constructor.
If you want to reset both the ``iterindex`` range and the base
pointers at the same time, you can do the following to avoid
extra buffer copying (be sure to add the return code error checks
when you copy this code).
.. code-block:: c
/* Set to a trivial empty range */
NpyIter_ResetToIterIndexRange(iter, 0, 0);
/* Set the base pointers */
NpyIter_ResetBasePointers(iter, baseptrs);
/* Set to the desired range */
NpyIter_ResetToIterIndexRange(iter, istart, iend);
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``. If errmsg is non-NULL,
no Python exception is set when ``NPY_FAIL`` is returned.
Instead, \*errmsg is set to an error message. When errmsg is
non-NULL, the function may be safely called without holding
the Python GIL.
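From Python, ranged iteration is available through the ``ranged`` flag and the
``iterrange`` attribute of ``numpy.nditer``; a small sketch:

```python
import numpy as np

a = np.arange(10)
it = np.nditer(a, flags=['ranged'])
it.iterrange = (2, 5)       # restrict to iterindex range [2, 5)
vals = [int(x) for x in it]
print(vals)
```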
.. cfunction:: int NpyIter_ResetBasePointers(NpyIter *iter, char** baseptrs, char** errmsg)
Resets the iterator back to its initial state, but using the values
in ``baseptrs`` for the data instead of the pointers from the arrays
being iterated. This function is intended to be used, together with
the ``op_axes`` parameter, by nested iteration code with two or more
iterators.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``. If errmsg is non-NULL,
no Python exception is set when ``NPY_FAIL`` is returned.
Instead, \*errmsg is set to an error message. When errmsg is
non-NULL, the function may be safely called without holding
the Python GIL.
*TODO*: Move the following into a special section on nested iterators.
Creating iterators for nested iteration requires some care. All
the iterator operands must match exactly, or the calls to
:cfunc:`NpyIter_ResetBasePointers` will be invalid. This means that
automatic copies and output allocation should not be used haphazardly.
It is possible to still use the automatic data conversion and casting
features of the iterator by creating one of the iterators with
all the conversion parameters enabled, then grabbing the allocated
operands with the :cfunc:`NpyIter_GetOperandArray` function and passing
them into the constructors for the rest of the iterators.
**WARNING**: When creating iterators for nested iteration,
the code must not use a dimension more than once in the different
iterators. If this is done, nested iteration will produce
out-of-bounds pointers during iteration.
**WARNING**: When creating iterators for nested iteration, buffering
can only be applied to the innermost iterator. If a buffered iterator
is used as the source for ``baseptrs``, it will point into a small buffer
instead of the array and the inner iteration will be invalid.
The pattern for using nested iterators is as follows.
.. code-block:: c
        NpyIter *iter1, *iter2;
NpyIter_IterNextFunc *iternext1, *iternext2;
char **dataptrs1;
/*
* With the exact same operands, no copies allowed, and
* no axis in op_axes used both in iter1 and iter2.
* Buffering may be enabled for iter2, but not for iter1.
*/
iter1 = ...; iter2 = ...;
        iternext1 = NpyIter_GetIterNext(iter1, NULL);
        iternext2 = NpyIter_GetIterNext(iter2, NULL);
        dataptrs1 = NpyIter_GetDataPtrArray(iter1);
        do {
            NpyIter_ResetBasePointers(iter2, dataptrs1, NULL);
do {
/* Use the iter2 values */
} while (iternext2(iter2));
} while (iternext1(iter1));
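The nested pattern above is exposed to Python as ``numpy.nested_iters``, which
constructs the iterators and handles the re-basing automatically; an
illustrative sketch summing over the inner axes for each outer position:

```python
import numpy as np

a = np.arange(12).reshape(2, 2, 3)

# Outer iterator over axis 1, inner iterator over axes 0 and 2;
# advancing the outer iterator re-bases the inner one.
outer, inner = np.nested_iters(a, [[1], [0, 2]])
sums = [sum(int(v) for v in inner) for _ in outer]
print(sums)
```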
.. cfunction:: int NpyIter_GotoMultiIndex(NpyIter* iter, npy_intp* multi_index)
Adjusts the iterator to point to the ``ndim`` indices
pointed to by ``multi_index``. Returns an error if a multi-index
is not being tracked, the indices are out of bounds,
or inner loop iteration is disabled.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
.. cfunction:: int NpyIter_GotoIndex(NpyIter* iter, npy_intp index)
Adjusts the iterator to point to the ``index`` specified.
If the iterator was constructed with the flag
:cdata:`NPY_ITER_C_INDEX`, ``index`` is the C-order index,
and if the iterator was constructed with the flag
:cdata:`NPY_ITER_F_INDEX`, ``index`` is the Fortran-order
index. Returns an error if there is no index being tracked,
the index is out of bounds, or inner loop iteration is disabled.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
.. cfunction:: npy_intp NpyIter_GetIterSize(NpyIter* iter)
Returns the number of elements being iterated. This is the product
of all the dimensions in the shape.
.. cfunction:: npy_intp NpyIter_GetIterIndex(NpyIter* iter)
Gets the ``iterindex`` of the iterator, which is an index matching
the iteration order of the iterator.
.. cfunction:: void NpyIter_GetIterIndexRange(NpyIter* iter, npy_intp* istart, npy_intp* iend)
Gets the ``iterindex`` sub-range that is being iterated. If
:cdata:`NPY_ITER_RANGED` was not specified, this always returns the
range ``[0, NpyIter_GetIterSize(iter))``.
.. cfunction:: int NpyIter_GotoIterIndex(NpyIter* iter, npy_intp iterindex)
Adjusts the iterator to point to the ``iterindex`` specified.
The IterIndex is an index matching the iteration order of the iterator.
Returns an error if the ``iterindex`` is out of bounds,
buffering is enabled, or inner loop iteration is disabled.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
.. cfunction:: npy_bool NpyIter_HasDelayedBufAlloc(NpyIter* iter)
Returns 1 if the flag :cdata:`NPY_ITER_DELAY_BUFALLOC` was passed
to the iterator constructor, and no call to one of the Reset
functions has been done yet, 0 otherwise.
.. cfunction:: npy_bool NpyIter_HasExternalLoop(NpyIter* iter)
Returns 1 if the caller needs to handle the inner-most 1-dimensional
loop, or 0 if the iterator handles all looping. This is controlled
by the constructor flag :cdata:`NPY_ITER_EXTERNAL_LOOP` or
:cfunc:`NpyIter_EnableExternalLoop`.
.. cfunction:: npy_bool NpyIter_HasMultiIndex(NpyIter* iter)
Returns 1 if the iterator was created with the
:cdata:`NPY_ITER_MULTI_INDEX` flag, 0 otherwise.
.. cfunction:: npy_bool NpyIter_HasIndex(NpyIter* iter)
Returns 1 if the iterator was created with the
:cdata:`NPY_ITER_C_INDEX` or :cdata:`NPY_ITER_F_INDEX`
flag, 0 otherwise.
.. cfunction:: npy_bool NpyIter_RequiresBuffering(NpyIter* iter)
Returns 1 if the iterator requires buffering, which occurs
when an operand needs conversion or alignment and so cannot
be used directly.
.. cfunction:: npy_bool NpyIter_IsBuffered(NpyIter* iter)
Returns 1 if the iterator was created with the
:cdata:`NPY_ITER_BUFFERED` flag, 0 otherwise.
.. cfunction:: npy_bool NpyIter_IsGrowInner(NpyIter* iter)
Returns 1 if the iterator was created with the
:cdata:`NPY_ITER_GROWINNER` flag, 0 otherwise.
.. cfunction:: npy_intp NpyIter_GetBufferSize(NpyIter* iter)
If the iterator is buffered, returns the size of the buffer
being used, otherwise returns 0.
.. cfunction:: int NpyIter_GetNDim(NpyIter* iter)
Returns the number of dimensions being iterated. If a multi-index
was not requested in the iterator constructor, this value
may be smaller than the number of dimensions in the original
objects.
.. cfunction:: int NpyIter_GetNOp(NpyIter* iter)
Returns the number of operands in the iterator.
.. cfunction:: npy_intp* NpyIter_GetAxisStrideArray(NpyIter* iter, int axis)
Gets the array of strides for the specified axis. Requires that
the iterator be tracking a multi-index, and that buffering not
be enabled.
This may be used when you want to match up operand axes in
some fashion, then remove them with :cfunc:`NpyIter_RemoveAxis` to
handle their processing manually. By calling this function
before removing the axes, you can get the strides for the
manual processing.
Returns ``NULL`` on error.
.. cfunction:: int NpyIter_GetShape(NpyIter* iter, npy_intp* outshape)
Returns the broadcast shape of the iterator in ``outshape``.
This can only be called on an iterator which is tracking a multi-index.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
.. cfunction:: PyArray_Descr** NpyIter_GetDescrArray(NpyIter* iter)
This gives back a pointer to the ``nop`` data type Descrs for
the objects being iterated. The result points into ``iter``,
so the caller does not gain any references to the Descrs.
This pointer may be cached before the iteration loop, calling
``iternext`` will not change it.
.. cfunction:: PyObject** NpyIter_GetOperandArray(NpyIter* iter)
This gives back a pointer to the ``nop`` operand PyObjects
that are being iterated. The result points into ``iter``,
so the caller does not gain any references to the PyObjects.
.. cfunction:: PyObject* NpyIter_GetIterView(NpyIter* iter, npy_intp i)
This gives back a reference to a new ndarray view, which is a view
into the i-th object in the array :cfunc:`NpyIter_GetOperandArray`(),
whose dimensions and strides match the internal optimized
iteration pattern. A C-order iteration of this view is equivalent
to the iterator's iteration order.
For example, if an iterator was created with a single array as its
input, and it was possible to rearrange all its axes and then
collapse it into a single strided iteration, this would return
a view that is a one-dimensional array.
.. cfunction:: void NpyIter_GetReadFlags(NpyIter* iter, char* outreadflags)
Fills ``nop`` flags. Sets ``outreadflags[i]`` to 1 if
``op[i]`` can be read from, and to 0 if not.
.. cfunction:: void NpyIter_GetWriteFlags(NpyIter* iter, char* outwriteflags)
Fills ``nop`` flags. Sets ``outwriteflags[i]`` to 1 if
``op[i]`` can be written to, and to 0 if not.
.. cfunction:: int NpyIter_CreateCompatibleStrides(NpyIter* iter, npy_intp itemsize, npy_intp* outstrides)
Builds a set of strides which are the same as the strides of an
output array created using the :cdata:`NPY_ITER_ALLOCATE` flag, where NULL
was passed for op_axes. This is for data packed contiguously,
but not necessarily in C or Fortran order. This should be used
together with :cfunc:`NpyIter_GetShape` and :cfunc:`NpyIter_GetNDim`
with the flag :cdata:`NPY_ITER_MULTI_INDEX` passed into the constructor.
A use case for this function is to match the shape and layout of
the iterator and tack on one or more dimensions. For example,
in order to generate a vector per input value for a numerical gradient,
you pass in ndim*itemsize for itemsize, then add another dimension to
the end with size ndim and stride itemsize. To do the Hessian matrix,
you do the same thing but add two dimensions, or take advantage of
the symmetry and pack it into 1 dimension with a particular encoding.
This function may only be called if the iterator is tracking a multi-index
and if :cdata:`NPY_ITER_DONT_NEGATE_STRIDES` was used to prevent an axis
from being iterated in reverse order.
If an array is created with this method, simply adding 'itemsize'
for each iteration will traverse the new array matching the
iterator.
Returns ``NPY_SUCCEED`` or ``NPY_FAIL``.
Functions For Iteration
-----------------------
.. cfunction:: NpyIter_IterNextFunc* NpyIter_GetIterNext(NpyIter* iter, char** errmsg)
Returns a function pointer for iteration. A specialized version
of the function pointer may be calculated by this function
instead of being stored in the iterator structure. Thus, to
get good performance, it is required that the function pointer
be saved in a variable rather than retrieved for each loop iteration.
Returns NULL if there is an error. If errmsg is non-NULL,
no Python exception is set when ``NPY_FAIL`` is returned.
Instead, \*errmsg is set to an error message. When errmsg is
non-NULL, the function may be safely called without holding
the Python GIL.
The typical looping construct is as follows.
.. code-block:: c
NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
char** dataptr = NpyIter_GetDataPtrArray(iter);
do {
/* use the addresses dataptr[0], ... dataptr[nop-1] */
} while(iternext(iter));
When :cdata:`NPY_ITER_EXTERNAL_LOOP` is specified, the typical
inner loop construct is as follows.
.. code-block:: c
NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
char** dataptr = NpyIter_GetDataPtrArray(iter);
npy_intp* stride = NpyIter_GetInnerStrideArray(iter);
npy_intp* size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
npy_intp iop, nop = NpyIter_GetNOp(iter);
do {
size = *size_ptr;
while (size--) {
/* use the addresses dataptr[0], ... dataptr[nop-1] */
for (iop = 0; iop < nop; ++iop) {
dataptr[iop] += stride[iop];
}
}
} while (iternext(iter));
Observe that we are using the dataptr array inside the iterator, not
copying the values to a local temporary. This is possible because
when ``iternext()`` is called, these pointers will be overwritten
with fresh values, not incrementally updated.
If a compile-time fixed buffer is being used (both flags
:cdata:`NPY_ITER_BUFFERED` and :cdata:`NPY_ITER_EXTERNAL_LOOP`), the
inner size may be used as a signal as well. The size is guaranteed
to become zero when ``iternext()`` returns false, enabling the
following loop construct. Note that if you use this construct,
you should not pass :cdata:`NPY_ITER_GROWINNER` as a flag, because it
will cause larger sizes under some circumstances.
.. code-block:: c
/* The constructor should have buffersize passed as this value */
#define FIXED_BUFFER_SIZE 1024
NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
char **dataptr = NpyIter_GetDataPtrArray(iter);
npy_intp *stride = NpyIter_GetInnerStrideArray(iter);
npy_intp *size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
npy_intp i, iop, nop = NpyIter_GetNOp(iter);
/* One loop with a fixed inner size */
size = *size_ptr;
while (size == FIXED_BUFFER_SIZE) {
/*
* This loop could be manually unrolled by a factor
* which divides into FIXED_BUFFER_SIZE
*/
for (i = 0; i < FIXED_BUFFER_SIZE; ++i) {
/* use the addresses dataptr[0], ... dataptr[nop-1] */
for (iop = 0; iop < nop; ++iop) {
dataptr[iop] += stride[iop];
}
}
iternext(iter);
size = *size_ptr;
}
/* Finish-up loop with variable inner size */
if (size > 0) do {
size = *size_ptr;
while (size--) {
/* use the addresses dataptr[0], ... dataptr[nop-1] */
for (iop = 0; iop < nop; ++iop) {
dataptr[iop] += stride[iop];
}
}
} while (iternext(iter));
.. cfunction:: NpyIter_GetMultiIndexFunc *NpyIter_GetGetMultiIndex(NpyIter* iter, char** errmsg)
Returns a function pointer for getting the current multi-index
of the iterator. Returns NULL if the iterator is not tracking
a multi-index. It is recommended that this function
pointer be cached in a local variable before the iteration
loop.
Returns NULL if there is an error. If errmsg is non-NULL,
no Python exception is set when ``NPY_FAIL`` is returned.
Instead, \*errmsg is set to an error message. When errmsg is
non-NULL, the function may be safely called without holding
the Python GIL.
.. cfunction:: char** NpyIter_GetDataPtrArray(NpyIter* iter)
This gives back a pointer to the ``nop`` data pointers. If
:cdata:`NPY_ITER_EXTERNAL_LOOP` was not specified, each data
pointer points to the current data item of the iterator. If
no inner iteration was specified, it points to the first data
item of the inner loop.
This pointer may be cached before the iteration loop, calling
``iternext`` will not change it. This function may be safely
called without holding the Python GIL.
.. cfunction:: char** NpyIter_GetInitialDataPtrArray(NpyIter* iter)
Gets the array of data pointers directly into the arrays (never
into the buffers), corresponding to iteration index 0.
These pointers are different from the pointers accepted by
``NpyIter_ResetBasePointers``, because the direction along
some axes may have been reversed.
This function may be safely called without holding the Python GIL.
.. cfunction:: npy_intp* NpyIter_GetIndexPtr(NpyIter* iter)
This gives back a pointer to the index being tracked, or NULL
if no index is being tracked. It is only usable if one of
the flags :cdata:`NPY_ITER_C_INDEX` or :cdata:`NPY_ITER_F_INDEX`
was specified during construction.
When the flag :cdata:`NPY_ITER_EXTERNAL_LOOP` is used, the code
needs to know the parameters for doing the inner loop. These
functions provide that information.
.. cfunction:: npy_intp* NpyIter_GetInnerStrideArray(NpyIter* iter)
Returns a pointer to an array of the ``nop`` strides,
one for each iterated object, to be used by the inner loop.
This pointer may be cached before the iteration loop, calling
``iternext`` will not change it. This function may be safely
called without holding the Python GIL.
.. cfunction:: npy_intp* NpyIter_GetInnerLoopSizePtr(NpyIter* iter)
Returns a pointer to the number of iterations the
inner loop should execute.
This address may be cached before the iteration loop, calling
``iternext`` will not change it. The value itself may change during
iteration, in particular if buffering is enabled. This function
may be safely called without holding the Python GIL.
.. cfunction:: void NpyIter_GetInnerFixedStrideArray(NpyIter* iter, npy_intp* out_strides)
Gets an array of strides which are fixed, or will not change during
the entire iteration. For strides that may change, the value
NPY_MAX_INTP is placed in the stride.
Once the iterator is prepared for iteration (after a reset if
:cdata:`NPY_ITER_DELAY_BUFALLOC` was used), call this to get the strides
which may be used to select a fast inner loop function. For example,
if the stride is 0, that means the inner loop can always load its
value into a variable once, then use the variable throughout the loop,
or if the stride equals the itemsize, a contiguous version for that
operand may be used.
This function may be safely called without holding the Python GIL.
.. index::
pair: iterator; C-API

.. _c-api:
###########
Numpy C-API
###########
.. sectionauthor:: Travis E. Oliphant
| Beware of the man who won't be bothered with details.
| --- *William Feather, Sr.*
| The truth is out there.
| --- *Chris Carter, The X Files*
NumPy provides a C-API to enable users to extend the system and get
access to the array object for use in other routines. The best way to
truly understand the C-API is to read the source code. If you are
unfamiliar with (C) source code, however, this can be a daunting
experience at first. Be assured that the task becomes easier with
practice, and you may be surprised at how simple the C-code can be to
understand. Even if you don't think you can write C-code from scratch,
it is much easier to understand and modify already-written source code
than to create it *de novo*.
Python extensions are especially straightforward to understand because
they all have a very similar structure. Admittedly, NumPy is not a
trivial extension to Python, and may take a little more snooping to
grasp. This is especially true because of the code-generation
techniques, which simplify maintenance of very similar code, but can
make the code a little less readable to beginners. Still, with a
little persistence, the code can be opened to your understanding. It
is my hope that this guide to the C-API can assist in the process of
becoming familiar with the compiled-level work that can be done with
NumPy in order to squeeze that last bit of necessary speed out of your
code.
.. currentmodule:: numpy-c-api
.. toctree::
:maxdepth: 2
c-api.types-and-structures
c-api.config
c-api.dtype
c-api.array
c-api.iterator
c-api.ufunc
c-api.generalized-ufuncs
c-api.coremath

*****************************
Python Types and C-Structures
*****************************
.. sectionauthor:: Travis E. Oliphant
Several new types are defined in the C-code. Most of these are
accessible from Python, but a few are not exposed due to their limited
use. Every new Python type has an associated :ctype:`PyObject *` with an
internal structure that includes a pointer to a "method table" that
defines how the new object behaves in Python. When you receive a
Python object into C code, you always get a pointer to a
:ctype:`PyObject` structure. Because a :ctype:`PyObject` structure is
very generic and defines only :cmacro:`PyObject_HEAD`, by itself it
is not very interesting. However, different objects contain more
details after the :cmacro:`PyObject_HEAD` (but you have to cast to the
correct type to access them --- or use accessor functions or macros).
New Python Types Defined
========================
Python types are the functional equivalent in C of classes in Python.
By constructing a new Python type you make available a new object for
Python. The ndarray object is an example of a new type defined in C.
New types are defined in C by two basic steps:
1. creating a C-structure (usually named :ctype:`Py{Name}Object`) that is
binary-compatible with the :ctype:`PyObject` structure itself but holds
the additional information needed for that particular object;
2. populating the :ctype:`PyTypeObject` table (pointed to by the ob_type
member of the :ctype:`PyObject` structure) with pointers to functions
that implement the desired behavior for the type.
Instead of special method names which define behavior for Python
classes, there are "function tables" which point to functions that
implement the desired results. Since Python 2.2, the PyTypeObject
itself has become dynamic, which allows C types to be "sub-typed"
from other C types in C and sub-classed in Python. The children
types inherit the attributes and methods from their parent(s).
There are two major new types: the ndarray ( :cdata:`PyArray_Type` )
and the ufunc ( :cdata:`PyUFunc_Type` ). Additional types play a
supportive role: the :cdata:`PyArrayIter_Type`, the
:cdata:`PyArrayMultiIter_Type`, and the :cdata:`PyArrayDescr_Type`
. The :cdata:`PyArrayIter_Type` is the type for a flat iterator for an
ndarray (the object that is returned when getting the flat
attribute). The :cdata:`PyArrayMultiIter_Type` is the type of the
object returned when calling ``broadcast`` (). It handles iteration
and broadcasting over a collection of nested sequences. Also, the
:cdata:`PyArrayDescr_Type` is the data-type-descriptor type whose
instances describe the data. Finally, there are 21 new scalar-array
types which are new Python scalars corresponding to each of the
fundamental data types available for arrays. An additional 10 other
types are placeholders that allow the array scalars to fit into a
hierarchy of actual Python types.
PyArray_Type
------------
.. cvar:: PyArray_Type
The Python type of the ndarray is :cdata:`PyArray_Type`. In C, every
ndarray is a pointer to a :ctype:`PyArrayObject` structure. The ob_type
member of this structure contains a pointer to the :cdata:`PyArray_Type`
typeobject.
.. ctype:: PyArrayObject
The :ctype:`PyArrayObject` C-structure contains all of the required
information for an array. All instances of an ndarray (and its
subclasses) will have this structure. For future compatibility,
these structure members should normally be accessed using the
provided macros. If you need a shorter name, then you can make use
of :ctype:`NPY_AO` which is defined to be equivalent to
:ctype:`PyArrayObject`.
.. code-block:: c
typedef struct PyArrayObject {
PyObject_HEAD
char *data;
int nd;
npy_intp *dimensions;
npy_intp *strides;
PyObject *base;
PyArray_Descr *descr;
int flags;
PyObject *weakreflist;
} PyArrayObject;
.. cmacro:: PyArrayObject.PyObject_HEAD
This is needed by all Python objects. It consists of (at least)
a reference count member ( ``ob_refcnt`` ) and a pointer to the
typeobject ( ``ob_type`` ). (Other elements may also be present
if Python was compiled with special options; see
Include/object.h in the Python source tree for more
information). The ob_type member points to a Python type
object.
.. cmember:: char *PyArrayObject.data
A pointer to the first element of the array. This pointer can
(and normally should) be recast to the data type of the array.
.. cmember:: int PyArrayObject.nd
An integer providing the number of dimensions for this
array. When nd is 0, the array is sometimes called a rank-0
array. Such arrays have undefined dimensions and strides and
cannot be accessed. :cdata:`NPY_MAXDIMS` is the largest number of
dimensions for any array.
.. cmember:: npy_intp *PyArrayObject.dimensions
An array of integers providing the shape in each dimension as
long as nd :math:`\geq` 1. The integer is always large enough
to hold a pointer on the platform, so the dimension size is
only limited by memory.
.. cmember:: npy_intp *PyArrayObject.strides
An array of integers providing for each dimension the number of
bytes that must be skipped to get to the next element in that
dimension.
.. cmember:: PyObject *PyArrayObject.base
This member is used to hold a pointer to another Python object
that is related to this array. There are two use cases: 1) If
this array does not own its own memory, then base points to the
Python object that owns it (perhaps another array object), 2)
If this array has the :cdata:`NPY_UPDATEIFCOPY` flag set, then this
array is a working copy of a "misbehaved" array. As soon as
this array is deleted, the array pointed to by base will be
updated with the contents of this array.
.. cmember:: PyArray_Descr *PyArrayObject.descr
A pointer to a data-type descriptor object (see below). The
data-type descriptor object is an instance of a new built-in
type which allows a generic description of memory. There is a
descriptor structure for each data type supported. This
descriptor structure contains useful information about the type
as well as a pointer to a table of function pointers to
implement specific functionality.
.. cmember:: int PyArrayObject.flags
Flags indicating how the memory pointed to by data is to be
interpreted. Possible flags are :cdata:`NPY_C_CONTIGUOUS`,
:cdata:`NPY_F_CONTIGUOUS`, :cdata:`NPY_OWNDATA`, :cdata:`NPY_ALIGNED`,
:cdata:`NPY_WRITEABLE`, and :cdata:`NPY_UPDATEIFCOPY`.
.. cmember:: PyObject *PyArrayObject.weakreflist
This member allows array objects to have weak references (using the
weakref module).
PyArrayDescr_Type
-----------------
.. cvar:: PyArrayDescr_Type
The :cdata:`PyArrayDescr_Type` is the built-in type of the
data-type-descriptor objects used to describe how the bytes comprising
the array are to be interpreted. There are 21 statically-defined
:ctype:`PyArray_Descr` objects for the built-in data-types. While these
participate in reference counting, their reference count should never
reach zero. There is also a dynamic table of user-defined
:ctype:`PyArray_Descr` objects that is also maintained. Once a
data-type-descriptor object is "registered" it should never be
deallocated either. The function :cfunc:`PyArray_DescrFromType` (...) can
be used to retrieve a :ctype:`PyArray_Descr` object from an enumerated
type-number (either built-in or user-defined).
.. ctype:: PyArray_Descr
The format of the :ctype:`PyArray_Descr` structure that lies at the
heart of the :cdata:`PyArrayDescr_Type` is
.. code-block:: c
typedef struct {
PyObject_HEAD
PyTypeObject *typeobj;
char kind;
char type;
char byteorder;
char unused;
int flags;
int type_num;
int elsize;
int alignment;
PyArray_ArrayDescr *subarray;
PyObject *fields;
PyArray_ArrFuncs *f;
} PyArray_Descr;
.. cmember:: PyTypeObject *PyArray_Descr.typeobj
Pointer to a typeobject that is the corresponding Python type for
the elements of this array. For the builtin types, this points to
the corresponding array scalar. For user-defined types, this
should point to a user-defined typeobject. This typeobject can
either inherit from array scalars or not. If it does not inherit
from array scalars, then the :cdata:`NPY_USE_GETITEM` and
:cdata:`NPY_USE_SETITEM` flags should be set in the ``flags`` member.
.. cmember:: char PyArray_Descr.kind
A character code indicating the kind of array (using the array
interface typestring notation). A 'b' represents Boolean, an 'i'
represents signed integer, a 'u' represents unsigned integer, 'f'
represents floating point, 'c' represents complex floating point, 'S'
represents 8-bit character string, 'U' represents 32-bit/character
unicode string, and 'V' represents arbitrary.
.. cmember:: char PyArray_Descr.type
A traditional character code indicating the data type.
.. cmember:: char PyArray_Descr.byteorder
A character indicating the byte-order: '>' (big-endian), '<'
(little-endian), '=' (native), '\|' (irrelevant, ignore). All builtin
data-types have byteorder '='.
.. cmember:: int PyArray_Descr.flags
A data-type bit-flag that determines if the data-type exhibits
object-array like behavior. Each bit in this member is a flag, named
as follows:
.. cvar:: NPY_ITEM_REFCOUNT
.. cvar:: NPY_ITEM_HASOBJECT
Indicates that items of this data-type must be reference
counted (using :cfunc:`Py_INCREF` and :cfunc:`Py_DECREF` ).
.. cvar:: NPY_LIST_PICKLE
Indicates arrays of this data-type must be converted to a list
before pickling.
.. cvar:: NPY_ITEM_IS_POINTER
Indicates the item is a pointer to some other data-type
.. cvar:: NPY_NEEDS_INIT
Indicates memory for this data-type must be initialized (set
to 0) on creation.
.. cvar:: NPY_NEEDS_PYAPI
Indicates this data-type requires the Python C-API during
access (so don't give up the GIL if array access is going to
be needed).
.. cvar:: NPY_USE_GETITEM
On array access use the ``f->getitem`` function pointer
instead of the standard conversion to an array scalar. Must
use if you don't define an array scalar to go along with
the data-type.
.. cvar:: NPY_USE_SETITEM
When creating a 0-d array from an array scalar use
``f->setitem`` instead of the standard copy from an array
scalar. Must use if you don't define an array scalar to go
along with the data-type.
.. cvar:: NPY_FROM_FIELDS
The bits that are inherited for the parent data-type if these
bits are set in any field of the data-type. Currently (
:cdata:`NPY_NEEDS_INIT` \| :cdata:`NPY_LIST_PICKLE` \|
:cdata:`NPY_ITEM_REFCOUNT` \| :cdata:`NPY_NEEDS_PYAPI` ).
.. cvar:: NPY_OBJECT_DTYPE_FLAGS
Bits set for the object data-type: ( :cdata:`NPY_LIST_PICKLE`
\| :cdata:`NPY_USE_GETITEM` \| :cdata:`NPY_ITEM_IS_POINTER` \|
:cdata:`NPY_ITEM_REFCOUNT` \| :cdata:`NPY_NEEDS_INIT` \|
:cdata:`NPY_NEEDS_PYAPI`).
.. cfunction:: PyDataType_FLAGCHK(PyArray_Descr *dtype, int flags)
Return true if all the given flags are set for the data-type
object.
.. cfunction:: PyDataType_REFCHK(PyArray_Descr *dtype)
Equivalent to :cfunc:`PyDataType_FLAGCHK` (*dtype*,
:cdata:`NPY_ITEM_REFCOUNT`).
.. cmember:: int PyArray_Descr.type_num
A number that uniquely identifies the data type. For new data-types,
this number is assigned when the data-type is registered.
.. cmember:: int PyArray_Descr.elsize
For data types that are always the same size (such as long), this
holds the size of the data type. For flexible data types where
different arrays can have a different elementsize, this should be
0.
.. cmember:: int PyArray_Descr.alignment
A number providing alignment information for this data type.
Specifically, it shows how far from the start of a 2-element
structure (whose first element is a ``char`` ), the compiler
places an item of this type: ``offsetof(struct {char c; type v;},
v)``
.. cmember:: PyArray_ArrayDescr *PyArray_Descr.subarray
If this is non- ``NULL``, then this data-type descriptor is a
C-style contiguous array of another data-type descriptor. In
other-words, each element that this descriptor describes is
actually an array of some other base descriptor. This is most
useful as the data-type descriptor for a field in another
data-type descriptor. The fields member should be ``NULL`` if this
is non- ``NULL`` (the fields member of the base descriptor can be
non- ``NULL`` however). The :ctype:`PyArray_ArrayDescr` structure is
defined using
.. code-block:: c
typedef struct {
PyArray_Descr *base;
PyObject *shape;
} PyArray_ArrayDescr;
The elements of this structure are:
.. cmember:: PyArray_Descr *PyArray_ArrayDescr.base
The data-type-descriptor object of the base-type.
.. cmember:: PyObject *PyArray_ArrayDescr.shape
The shape (always C-style contiguous) of the sub-array as a Python
tuple.
.. cmember:: PyObject *PyArray_Descr.fields
If this is non-NULL, then this data-type-descriptor has fields
described by a Python dictionary whose keys are names (and also
titles if given) and whose values are tuples that describe the
fields. Recall that a data-type-descriptor always describes a
fixed-length set of bytes. A field is a named sub-region of that
total, fixed-length collection. A field is described by a tuple
composed of another data-type-descriptor and a byte
offset. Optionally, the tuple may contain a title which is
normally a Python string. These tuples are placed in this
dictionary keyed by name (and also title if given).
.. cmember:: PyArray_ArrFuncs *PyArray_Descr.f
A pointer to a structure containing functions that the type needs
to implement internal features. These functions are not the same
thing as the universal functions (ufuncs) described later. Their
signatures can vary arbitrarily.
.. ctype:: PyArray_ArrFuncs
Functions implementing internal features. Not all of these
function pointers must be defined for a given type. The required
members are ``nonzero``, ``copyswap``, ``copyswapn``, ``setitem``,
``getitem``, and ``cast``. These are assumed to be non- ``NULL``
and ``NULL`` entries will cause a program crash. The other
functions may be ``NULL`` which will just mean reduced
functionality for that data-type. (Also, the nonzero function will
be filled in with a default function if it is ``NULL`` when you
register a user-defined data-type).
.. code-block:: c
typedef struct {
PyArray_VectorUnaryFunc *cast[PyArray_NTYPES];
PyArray_GetItemFunc *getitem;
PyArray_SetItemFunc *setitem;
PyArray_CopySwapNFunc *copyswapn;
PyArray_CopySwapFunc *copyswap;
PyArray_CompareFunc *compare;
PyArray_ArgFunc *argmax;
PyArray_DotFunc *dotfunc;
PyArray_ScanFunc *scanfunc;
PyArray_FromStrFunc *fromstr;
PyArray_NonzeroFunc *nonzero;
PyArray_FillFunc *fill;
PyArray_FillWithScalarFunc *fillwithscalar;
PyArray_SortFunc *sort[PyArray_NSORTS];
PyArray_ArgSortFunc *argsort[PyArray_NSORTS];
PyObject *castdict;
PyArray_ScalarKindFunc *scalarkind;
int **cancastscalarkindto;
int *cancastto;
int listpickle;
} PyArray_ArrFuncs;
The concept of a behaved segment is used in the description of the
function pointers. A behaved segment is one that is aligned and in
native machine byte-order for the data-type. The ``nonzero``,
``copyswap``, ``copyswapn``, ``getitem``, and ``setitem``
functions can (and must) deal with mis-behaved arrays. The other
functions require behaved memory segments.
.. cmember:: void cast(void *from, void *to, npy_intp n, void *fromarr,
void *toarr)
An array of function pointers to cast from the current type to
all of the other builtin types. Each function casts a
contiguous, aligned, and notswapped buffer pointed at by
*from* to a contiguous, aligned, and notswapped buffer pointed
at by *to* The number of items to cast is given by *n*, and
the arguments *fromarr* and *toarr* are interpreted as
PyArrayObjects for flexible arrays to get itemsize
information.
.. cmember:: PyObject *getitem(void *data, void *arr)
A pointer to a function that returns a standard Python object
from a single element of the array object *arr* pointed to by
*data*. This function must be able to deal with "misbehaved"
(misaligned and/or swapped) arrays correctly.
.. cmember:: int setitem(PyObject *item, void *data, void *arr)
A pointer to a function that sets the Python object *item*
into the array, *arr*, at the position pointed to by *data*
. This function deals with "misbehaved" arrays. If successful,
a zero is returned, otherwise, a negative one is returned (and
a Python error set).
.. cmember:: void copyswapn(void *dest, npy_intp dstride, void *src,
npy_intp sstride, npy_intp n, int swap, void *arr)
.. cmember:: void copyswap(void *dest, void *src, int swap, void *arr)
These members are both pointers to functions to copy data from
*src* to *dest* and *swap* if indicated. The value of arr is
only used for flexible ( :cdata:`NPY_STRING`, :cdata:`NPY_UNICODE`,
and :cdata:`NPY_VOID` ) arrays (and is obtained from
``arr->descr->elsize`` ). The second function copies a single
value, while the first loops over n values with the provided
strides. These functions can deal with misbehaved *src*
data. If *src* is NULL then no copy is performed. If *swap* is
0, then no byteswapping occurs. It is assumed that *dest* and
*src* do not overlap. If they overlap, then use ``memmove``
(...) first followed by ``copyswap(n)`` with NULL valued
``src``.
.. cmember:: int compare(const void* d1, const void* d2, void* arr)
A pointer to a function that compares two elements of the
array, ``arr``, pointed to by ``d1`` and ``d2``. This
function requires behaved arrays. The return value is 1 if
``*d1`` > ``*d2``, 0 if ``*d1`` == ``*d2``, and -1 if
``*d1`` < ``*d2``. The array object arr is used to retrieve
itemsize and field information for flexible arrays.
.. cmember:: int argmax(void* data, npy_intp n, npy_intp* max_ind,
void* arr)
A pointer to a function that retrieves the index of the
largest of ``n`` elements in ``arr`` beginning at the element
pointed to by ``data``. This function requires that the
memory segment be contiguous and behaved. The return value is
always 0. The index of the largest element is returned in
``max_ind``.
.. cmember:: void dotfunc(void* ip1, npy_intp is1, void* ip2, npy_intp is2,
void* op, npy_intp n, void* arr)
A pointer to a function that multiplies two ``n`` -length
sequences together, adds them, and places the result in
element pointed to by ``op`` of ``arr``. The start of the two
sequences are pointed to by ``ip1`` and ``ip2``. To get to
the next element in each sequence requires a jump of ``is1``
and ``is2`` *bytes*, respectively. This function requires
behaved (though not necessarily contiguous) memory.
.. cmember:: int scanfunc(FILE* fd, void* ip , void* sep , void* arr)
A pointer to a function that scans (scanf style) one element
of the corresponding type from the file descriptor ``fd`` into
the array memory pointed to by ``ip``. The array is assumed
to be behaved. If ``sep`` is not NULL, then a separator string
is also scanned from the file before returning. The last
argument ``arr`` is the array to be scanned into. A 0 is
returned if the scan is successful. A negative number
indicates something went wrong: -1 means the end of file was
reached before the separator string could be scanned, -4 means
that the end of file was reached before the element could be
scanned, and -3 means that the element could not be
interpreted from the format string. Requires a behaved array.
.. cmember:: int fromstr(char* str, void* ip, char** endptr, void* arr)
A pointer to a function that converts the string pointed to by
``str`` to one element of the corresponding type and places it
in the memory location pointed to by ``ip``. After the
conversion is completed, ``*endptr`` points to the rest of the
string. The last argument ``arr`` is the array into which ip
points (needed for variable-size data-types). Returns 0 on
success or -1 on failure. Requires a behaved array.
.. cmember:: Bool nonzero(void* data, void* arr)
A pointer to a function that returns TRUE if the item of
``arr`` pointed to by ``data`` is nonzero. This function can
deal with misbehaved arrays.
.. cmember:: void fill(void* data, npy_intp length, void* arr)
A pointer to a function that fills a contiguous array of given
length with data. The first two elements of the array must
already be filled in. From these two values, a delta will be
computed and the values from item 3 to the end will be
computed by repeatedly adding this computed delta. The data
buffer must be well-behaved.
.. cmember:: void fillwithscalar(void* buffer, npy_intp length,
void* value, void* arr)
A pointer to a function that fills a contiguous ``buffer`` of
the given ``length`` with a single scalar ``value`` whose
address is given. The final argument is the array which is
needed to get the itemsize for variable-length arrays.
.. cmember:: int sort(void* start, npy_intp length, void* arr)
An array of function pointers to particular sorting
algorithms. A particular sorting algorithm is obtained using a
key (so far :cdata:`PyArray_QUICKSORT`, :cdata:`PyArray_HEAPSORT`, and
:cdata:`PyArray_MERGESORT` are defined). These sorts are done
in-place assuming contiguous and aligned data.
.. cmember:: int argsort(void* start, npy_intp* result, npy_intp length,
void* arr)
An array of function pointers to sorting algorithms for this
data type. The same sorting algorithms as for sort are
available. The indices producing the sort are returned in
result (which must be initialized with indices 0 to length-1
inclusive).
.. cmember:: PyObject *castdict
Either ``NULL`` or a dictionary containing low-level casting
functions for user-defined data-types. Each function is
wrapped in a :ctype:`PyCObject *` and keyed by the data-type number.
.. cmember:: PyArray_SCALARKIND scalarkind(PyArrayObject* arr)
A function to determine how scalars of this type should be
interpreted. The argument is ``NULL`` or a 0-dimensional array
containing the data (if that is needed to determine the kind
of scalar). The return value must be of type
:ctype:`PyArray_SCALARKIND`.
.. cmember:: int **cancastscalarkindto
Either ``NULL`` or an array of :ctype:`PyArray_NSCALARKINDS`
pointers. These pointers should each be either ``NULL`` or a
pointer to an array of integers (terminated by
:cdata:`PyArray_NOTYPE`) indicating data-types that a scalar of
this data-type of the specified kind can be cast to safely
(this usually means without losing precision).
.. cmember:: int *cancastto
Either ``NULL`` or an array of integers (terminated by
:cdata:`PyArray_NOTYPE`) indicating data-types that this data-type
can be cast to safely (this usually means without losing
precision).
.. cmember:: int listpickle
Unused.
The :cdata:`PyArray_Type` typeobject implements many of the features of
Python objects including the tp_as_number, tp_as_sequence,
tp_as_mapping, and tp_as_buffer interfaces. The rich comparison
(tp_richcompare) is also used along with new-style attribute lookup
for methods (tp_methods) and properties (tp_getset). The
:cdata:`PyArray_Type` can also be sub-typed.
.. tip::
The tp_as_number methods use a generic approach to call whatever
function has been registered for handling the operation. The
function PyNumeric_SetOps(..) can be used to register functions to
handle particular mathematical operations (for all arrays). When
the umath module is imported, it sets the numeric operations for
all arrays to the corresponding ufuncs. The tp_str and tp_repr
methods can also be altered using PyString_SetStringFunction(...).
PyUFunc_Type
------------
.. cvar:: PyUFunc_Type
The ufunc object is implemented by creation of the
:cdata:`PyUFunc_Type`. It is a very simple type that implements only
basic getattribute behavior, printing behavior, and has call
behavior which allows these objects to act like functions. The
basic idea behind the ufunc is to hold a reference to fast
1-dimensional (vector) loops for each data type that supports the
operation. These one-dimensional loops all have the same signature
and are the key to creating a new ufunc. They are called by the
generic looping code as appropriate to implement the N-dimensional
function. There are also some generic 1-d loops defined for
floating and complexfloating arrays that allow you to define a
ufunc using a single scalar function (*e.g.* atanh).
.. ctype:: PyUFuncObject
The core of the ufunc is the :ctype:`PyUFuncObject` which contains all
the information needed to call the underlying C-code loops that
perform the actual work. It has the following structure:
.. code-block:: c
typedef struct {
PyObject_HEAD
int nin;
int nout;
int nargs;
int identity;
PyUFuncGenericFunction *functions;
void **data;
int ntypes;
int check_return;
char *name;
char *types;
char *doc;
void *ptr;
PyObject *obj;
PyObject *userloops;
} PyUFuncObject;
.. cmacro:: PyUFuncObject.PyObject_HEAD
required for all Python objects.
.. cmember:: int PyUFuncObject.nin
The number of input arguments.
.. cmember:: int PyUFuncObject.nout
The number of output arguments.
.. cmember:: int PyUFuncObject.nargs
The total number of arguments (*nin* + *nout*). This must be
less than :cdata:`NPY_MAXARGS`.
.. cmember:: int PyUFuncObject.identity
Either :cdata:`PyUFunc_One`, :cdata:`PyUFunc_Zero`, or
:cdata:`PyUFunc_None` to indicate the identity for this operation.
It is only used for a reduce-like call on an empty array.
.. cmember:: void PyUFuncObject.functions(char** args, npy_intp* dims,
npy_intp* steps, void* extradata)
An array of function pointers --- one for each data type
supported by the ufunc. This is the vector loop that is called
to implement the underlying function *dims* [0] times. The
first argument, *args*, is an array of *nargs* pointers to
behaved memory. Pointers to the data for the input arguments
are first, followed by the pointers to the data for the output
arguments. How many bytes must be skipped to get to the next
element in the sequence is specified by the corresponding entry
in the *steps* array. The last argument allows the loop to
receive extra information. This is commonly used so that a
single, generic vector loop can be used for multiple
functions. In this case, the actual scalar function to call is
passed in as *extradata*. The size of this function pointer
array is ntypes.
.. cmember:: void **PyUFuncObject.data
Extra data to be passed to the 1-d vector loops, or ``NULL`` if
no extra data is needed. This C-array must be the same size
(*i.e.* ntypes) as the functions array. Several C-API calls for
UFuncs are just 1-d vector loops that make use of this extra
data to receive a pointer to the actual function to call.
.. cmember:: int PyUFuncObject.ntypes
The number of supported data types for the ufunc. This number
specifies how many different 1-d loops (of the builtin data types) are
available.
.. cmember:: int PyUFuncObject.check_return
Obsolete and unused. However, it is set by the corresponding entry in
the main ufunc creation routine: :cfunc:`PyUFunc_FromFuncAndData` (...).
.. cmember:: char *PyUFuncObject.name
A string name for the ufunc. This is used dynamically to build
the __doc\__ attribute of ufuncs.
.. cmember:: char *PyUFuncObject.types
An array of *nargs* :math:`\times` *ntypes* 8-bit type_numbers
which contains the type signature for the function for each of
the supported (builtin) data types. For each of the *ntypes*
functions, the corresponding set of type numbers in this array
shows how the *args* argument should be interpreted in the 1-d
vector loop. These type numbers do not have to be the same type
and mixed-type ufuncs are supported.
.. cmember:: char *PyUFuncObject.doc
Documentation for the ufunc. Should not contain the function
signature as this is generated dynamically when __doc\__ is
retrieved.
.. cmember:: void *PyUFuncObject.ptr
Any dynamically allocated memory. Currently, this is used for dynamic
ufuncs created from a python function to store room for the types,
data, and name members.
.. cmember:: PyObject *PyUFuncObject.obj
For ufuncs dynamically created from python functions, this member
holds a reference to the underlying Python function.
.. cmember:: PyObject *PyUFuncObject.userloops
A dictionary of user-defined 1-d vector loops (stored as CObject ptrs)
for user-defined types. A loop may be registered by the user for any
user-defined type. It is retrieved by type number. User defined type
numbers are always larger than :cdata:`NPY_USERDEF`.
PyArrayIter_Type
----------------
.. cvar:: PyArrayIter_Type
This is an iterator object that makes it easy to loop over an N-dimensional
array. It is the object returned from the flat attribute of an
ndarray. It is also used extensively throughout the implementation
internals to loop over an N-dimensional array. The tp_as_mapping
interface is implemented so that the iterator object can be indexed
(using 1-d indexing), and a few methods are implemented through the
tp_methods table. This object implements the next method and can be
used anywhere an iterator can be used in Python.
.. ctype:: PyArrayIterObject
The C-structure corresponding to an object of :cdata:`PyArrayIter_Type` is
the :ctype:`PyArrayIterObject`. The :ctype:`PyArrayIterObject` is used to
keep track of a pointer into an N-dimensional array. It contains associated
information used to quickly march through the array. The pointer can
be adjusted in three basic ways: 1) advance to the "next" position in
the array in a C-style contiguous fashion, 2) advance to an arbitrary
N-dimensional coordinate in the array, and 3) advance to an arbitrary
one-dimensional index into the array. The members of the
:ctype:`PyArrayIterObject` structure are used in these
calculations. Iterator objects keep their own dimension and strides
information about an array. This can be adjusted as needed for
"broadcasting," or to loop over only specific dimensions.
.. code-block:: c
typedef struct {
PyObject_HEAD
int nd_m1;
npy_intp index;
npy_intp size;
npy_intp coordinates[NPY_MAXDIMS];
npy_intp dims_m1[NPY_MAXDIMS];
npy_intp strides[NPY_MAXDIMS];
npy_intp backstrides[NPY_MAXDIMS];
npy_intp factors[NPY_MAXDIMS];
PyArrayObject *ao;
char *dataptr;
Bool contiguous;
} PyArrayIterObject;
.. cmember:: int PyArrayIterObject.nd_m1
:math:`N-1` where :math:`N` is the number of dimensions in the
underlying array.
.. cmember:: npy_intp PyArrayIterObject.index
The current 1-d index into the array.
.. cmember:: npy_intp PyArrayIterObject.size
The total size of the underlying array.
.. cmember:: npy_intp *PyArrayIterObject.coordinates
An :math:`N` -dimensional index into the array.
.. cmember:: npy_intp *PyArrayIterObject.dims_m1
The size of the array minus 1 in each dimension.
.. cmember:: npy_intp *PyArrayIterObject.strides
The strides of the array. How many bytes needed to jump to the next
element in each dimension.
.. cmember:: npy_intp *PyArrayIterObject.backstrides
How many bytes needed to jump from the end of a dimension back
to its beginning. Note that ``backstrides[k] == strides[k] * dims_m1[k]``,
but it is stored here as an optimization.
.. cmember:: npy_intp *PyArrayIterObject.factors
This array is used in computing an N-d index from a 1-d index. It
contains needed products of the dimensions.
.. cmember:: PyArrayObject *PyArrayIterObject.ao
A pointer to the underlying ndarray this iterator was created to
represent.
.. cmember:: char *PyArrayIterObject.dataptr
This member points to an element in the ndarray indicated by the
index.
.. cmember:: Bool PyArrayIterObject.contiguous
This flag is true if the underlying array is
:cdata:`NPY_C_CONTIGUOUS`. It is used to simplify calculations when
possible.
How to use an array iterator on a C-level is explained more fully in
later sections. Typically, you do not need to concern yourself with
the internal structure of the iterator object, and merely interact
with it through the use of the macros :cfunc:`PyArray_ITER_NEXT` (it),
:cfunc:`PyArray_ITER_GOTO` (it, dest), or :cfunc:`PyArray_ITER_GOTO1D` (it,
index). All of these macros require the argument *it* to be a
:ctype:`PyArrayIterObject *`.
PyArrayMultiIter_Type
---------------------
.. cvar:: PyArrayMultiIter_Type
This type provides an iterator that encapsulates the concept of
broadcasting. It allows :math:`N` arrays to be broadcast together
so that the loop progresses in C-style contiguous fashion over the
broadcasted array. The corresponding C-structure is the
:ctype:`PyArrayMultiIterObject` whose memory layout must begin any
object, *obj*, passed in to the :cfunc:`PyArray_Broadcast` (obj)
function. Broadcasting is performed by adjusting array iterators so
that each iterator represents the broadcasted shape and size, but
has its strides adjusted so that the correct element from the array
is used at each iteration.
.. ctype:: PyArrayMultiIterObject
.. code-block:: c
typedef struct {
PyObject_HEAD
int numiter;
npy_intp size;
npy_intp index;
int nd;
npy_intp dimensions[NPY_MAXDIMS];
PyArrayIterObject *iters[NPY_MAXDIMS];
} PyArrayMultiIterObject;
.. cmacro:: PyArrayMultiIterObject.PyObject_HEAD
Needed at the start of every Python object (holds reference count and
type identification).
.. cmember:: int PyArrayMultiIterObject.numiter
The number of arrays that need to be broadcast to the same shape.
.. cmember:: npy_intp PyArrayMultiIterObject.size
The total broadcasted size.
.. cmember:: npy_intp PyArrayMultiIterObject.index
The current (1-d) index into the broadcasted result.
.. cmember:: int PyArrayMultiIterObject.nd
The number of dimensions in the broadcasted result.
.. cmember:: npy_intp *PyArrayMultiIterObject.dimensions
The shape of the broadcasted result (only ``nd`` slots are used).
.. cmember:: PyArrayIterObject **PyArrayMultiIterObject.iters
An array of iterator objects that holds the iterators for the arrays
to be broadcast together. On return, the iterators are adjusted for
broadcasting.
PyArrayNeighborhoodIter_Type
----------------------------
.. cvar:: PyArrayNeighborhoodIter_Type
This is an iterator object that makes it easy to loop over an N-dimensional
neighborhood.
.. ctype:: PyArrayNeighborhoodIterObject
The C-structure corresponding to an object of
:cdata:`PyArrayNeighborhoodIter_Type` is the
:ctype:`PyArrayNeighborhoodIterObject`.
PyArrayFlags_Type
-----------------
.. cvar:: PyArrayFlags_Type
When the flags attribute is retrieved from Python, a special
builtin object of this type is constructed. This special type makes
it easier to work with the different flags by accessing them as
attributes or by accessing them as if the object were a dictionary
with the flag names as entries.
ScalarArrayTypes
----------------
There is a Python type for each of the different built-in data types
that can be present in the array. Most of these are simple wrappers
around the corresponding data type in C. The C-names for these types
are :cdata:`Py{TYPE}ArrType_Type` where ``{TYPE}`` can be
**Bool**, **Byte**, **Short**, **Int**, **Long**, **LongLong**,
**UByte**, **UShort**, **UInt**, **ULong**, **ULongLong**,
**Half**, **Float**, **Double**, **LongDouble**, **CFloat**, **CDouble**,
**CLongDouble**, **String**, **Unicode**, **Void**, and
**Object**.
These type names are part of the C-API and can therefore be created in
extension C-code. There is also a :cdata:`PyIntpArrType_Type` and a
:cdata:`PyUIntpArrType_Type` that are simple substitutes for one of the
integer types that can hold a pointer on the platform. The structure
of these scalar objects is not exposed to C-code. The function
:cfunc:`PyArray_ScalarAsCtype` (..) can be used to extract the C-type value
from the array scalar and the function :cfunc:`PyArray_Scalar` (...) can be
used to construct an array scalar from a C-value.
Other C-Structures
==================
A few new C-structures were found to be useful in the development of
NumPy. These C-structures are used in at least one C-API call and are
therefore documented here. The main reason these structures were
defined is to make it easy to use the Python ParseTuple C-API to
convert from Python objects to a useful C-Object.
PyArray_Dims
------------
.. ctype:: PyArray_Dims
This structure is very useful when shape and/or strides information is
supposed to be interpreted. The structure is:
.. code-block:: c
typedef struct {
npy_intp *ptr;
int len;
} PyArray_Dims;
The members of this structure are
.. cmember:: npy_intp *PyArray_Dims.ptr
A pointer to a list of (:ctype:`npy_intp`) integers which usually
represent array shape or array strides.
.. cmember:: int PyArray_Dims.len
The length of the list of integers. It is assumed safe to
access ``ptr[0]`` to ``ptr[len-1]``.
PyArray_Chunk
-------------
.. ctype:: PyArray_Chunk
This is equivalent to the buffer object structure in Python up to
the ptr member. On 32-bit platforms (*i.e.* if :cdata:`NPY_SIZEOF_INT`
== :cdata:`NPY_SIZEOF_INTP` ) or in Python 2.5, the len member also
matches an equivalent member of the buffer object. It is useful to
represent a generic single-segment chunk of memory.
.. code-block:: c
typedef struct {
PyObject_HEAD
PyObject *base;
void *ptr;
npy_intp len;
int flags;
} PyArray_Chunk;
The members are
.. cmacro:: PyArray_Chunk.PyObject_HEAD
Necessary for all Python objects. Included here so that the
:ctype:`PyArray_Chunk` structure matches that of the buffer object
(at least to the len member).
.. cmember:: PyObject *PyArray_Chunk.base
The Python object this chunk of memory comes from. Needed so that
memory can be accounted for properly.
.. cmember:: void *PyArray_Chunk.ptr
A pointer to the start of the single-segment chunk of memory.
.. cmember:: npy_intp PyArray_Chunk.len
The length of the segment in bytes.
.. cmember:: int PyArray_Chunk.flags
Any data flags (*e.g.* :cdata:`NPY_WRITEABLE` ) that should be used
to interpret the memory.
PyArrayInterface
----------------
.. seealso:: :ref:`arrays.interface`
.. ctype:: PyArrayInterface
The :ctype:`PyArrayInterface` structure is defined so that NumPy and
other extension modules can use the rapid array interface
protocol. The :obj:`__array_struct__` method of an object that
supports the rapid array interface protocol should return a
:ctype:`PyCObject` that contains a pointer to a :ctype:`PyArrayInterface`
structure with the relevant details of the array. After the new
array is created, the attribute should be ``DECREF``'d which will
free the :ctype:`PyArrayInterface` structure. Remember to ``INCREF`` the
object (whose :obj:`__array_struct__` attribute was retrieved) and
point the base member of the new :ctype:`PyArrayObject` to this same
object. In this way the memory for the array will be managed
correctly.
.. code-block:: c
typedef struct {
int two;
int nd;
char typekind;
int itemsize;
int flags;
npy_intp *shape;
npy_intp *strides;
void *data;
PyObject *descr;
} PyArrayInterface;
.. cmember:: int PyArrayInterface.two
the integer 2 as a sanity check.
.. cmember:: int PyArrayInterface.nd
the number of dimensions in the array.
.. cmember:: char PyArrayInterface.typekind
A character indicating what kind of array is present according to the
typestring convention with 't' -> bitfield, 'b' -> Boolean, 'i' ->
signed integer, 'u' -> unsigned integer, 'f' -> floating point, 'c' ->
complex floating point, 'O' -> object, 'S' -> string, 'U' -> unicode,
'V' -> void.
.. cmember:: int PyArrayInterface.itemsize
The number of bytes each item in the array requires.
.. cmember:: int PyArrayInterface.flags
Any of the bits :cdata:`NPY_C_CONTIGUOUS` (1),
:cdata:`NPY_F_CONTIGUOUS` (2), :cdata:`NPY_ALIGNED` (0x100),
:cdata:`NPY_NOTSWAPPED` (0x200), or :cdata:`NPY_WRITEABLE`
(0x400) to indicate something about the data. The
:cdata:`NPY_ALIGNED`, :cdata:`NPY_C_CONTIGUOUS`, and
:cdata:`NPY_F_CONTIGUOUS` flags can actually be determined from
the other parameters. The flag :cdata:`NPY_ARR_HAS_DESCR`
(0x800) can also be set to indicate to objects consuming the
version 3 array interface that the descr member of the
structure is present (it will be ignored by objects consuming
version 2 of the array interface).
.. cmember:: npy_intp *PyArrayInterface.shape
An array containing the size of the array in each dimension.
.. cmember:: npy_intp *PyArrayInterface.strides
An array containing the number of bytes to jump to get to the next
element in each dimension.
.. cmember:: void *PyArrayInterface.data
A pointer to the first element of the array.
.. cmember:: PyObject *PyArrayInterface.descr
A Python object describing the data-type in more detail (same
as the *descr* key in :obj:`__array_interface__`). This can be
``NULL`` if *typekind* and *itemsize* provide enough
information. This field is also ignored unless the
:cdata:`NPY_ARR_HAS_DESCR` flag is set in *flags*.
Internally used structures
--------------------------
Internally, the code uses some additional Python objects primarily for
memory management. These types are not accessible directly from
Python, and are not exposed to the C-API. They are included here only
for completeness and assistance in understanding the code.
.. ctype:: PyUFuncLoopObject
A loose wrapper for a C-structure that contains the information
needed for looping. This is useful if you are trying to understand
the ufunc looping code. The :ctype:`PyUFuncLoopObject` is the associated
C-structure. It is defined in the ``ufuncobject.h`` header.
.. ctype:: PyUFuncReduceObject
A loose wrapper for the C-structure that contains the information
needed for reduce-like methods of ufuncs. This is useful if you are
trying to understand the reduce, accumulate, and reduce-at
code. The :ctype:`PyUFuncReduceObject` is the associated C-structure. It
is defined in the ``ufuncobject.h`` header.
.. ctype:: PyUFunc_Loop1d
A simple linked-list of C-structures containing the information needed
to define a 1-d loop for a ufunc for every defined signature of a
user-defined data-type.
.. cvar:: PyArrayMapIter_Type
Advanced indexing is handled with this Python type. It is simply a
loose wrapper around the C-structure containing the variables
needed for advanced array indexing. The associated C-structure,
:ctype:`PyArrayMapIterObject`, is useful if you are trying to
understand the advanced-index mapping code. It is defined in the
``arrayobject.h`` header. This type is not exposed to Python and
could be replaced with a C-structure. As a Python type it takes
advantage of reference-counted memory management.
UFunc API
=========
.. sectionauthor:: Travis E. Oliphant
.. index::
pair: ufunc; C-API
Constants
---------
.. cvar:: UFUNC_ERR_{HANDLER}
``{HANDLER}`` can be **IGNORE**, **WARN**, **RAISE**, or **CALL**
.. cvar:: UFUNC_{THING}_{ERR}
``{THING}`` can be **MASK**, **SHIFT**, or **FPE**, and ``{ERR}`` can
be **DIVIDEBYZERO**, **OVERFLOW**, **UNDERFLOW**, and **INVALID**.
.. cvar:: PyUFunc_{VALUE}
``{VALUE}`` can be **One** (1), **Zero** (0), or **None** (-1)
Macros
------
.. cmacro:: NPY_LOOP_BEGIN_THREADS
Used in universal function code to only release the Python GIL if
loop->obj is not true (*i.e.* this is not an OBJECT array
loop). Requires use of :cmacro:`NPY_BEGIN_THREADS_DEF` in variable
declaration area.
.. cmacro:: NPY_LOOP_END_THREADS
Used in universal function code to re-acquire the Python GIL if it
was released (because loop->obj was not true).
.. cfunction:: UFUNC_CHECK_ERROR(loop)
A macro used internally to check for errors and goto fail if
found. This macro requires a fail label in the current code
block. The *loop* variable must have at least members (obj,
errormask, and errorobj). If *loop* ->obj is nonzero, then
:cfunc:`PyErr_Occurred` () is called (meaning the GIL must be held). If
*loop* ->obj is zero, then if *loop* ->errormask is nonzero,
:cfunc:`PyUFunc_checkfperr` is called with arguments *loop* ->errormask
and *loop* ->errobj. If the result of this check of the IEEE
floating point registers is true then the code redirects to the
fail label which must be defined.
.. cfunction:: UFUNC_CHECK_STATUS(ret)
A macro that expands to platform-dependent code. The *ret*
variable can be any integer. The :cdata:`UFUNC_FPE_{ERR}` bits are
set in *ret* according to the status of the corresponding error
flags of the floating point processor.
Functions
---------
.. cfunction:: PyObject* PyUFunc_FromFuncAndData(PyUFuncGenericFunction* func,
void** data, char* types, int ntypes, int nin, int nout, int identity,
char* name, char* doc, int check_return)
Create a new broadcasting universal function from required variables.
Each ufunc builds around the notion of an element-by-element
operation. Each ufunc object contains pointers to 1-d loops
implementing the basic functionality for each supported type.
.. note::
The *func*, *data*, *types*, *name*, and *doc* arguments are not
copied by :cfunc:`PyUFunc_FromFuncAndData`. The caller must ensure
that the memory used by these arrays is not freed as long as the
ufunc object is alive.
:param func:
Must point to an array of length *ntypes* containing
:ctype:`PyUFuncGenericFunction` items. These items are pointers to
functions that actually implement the underlying
(element-by-element) function :math:`N` times.
:param data:
Should be ``NULL`` or a pointer to an array of size *ntypes*.
This array may contain arbitrary extra data to be passed to
the corresponding 1-d loop function in the func array.
:param types:
Must be of length (*nin* + *nout*) \* *ntypes*, and it
contains the data-types (built-in only) that the corresponding
function in the *func* array can deal with.
:param ntypes:
How many different data-type "signatures" the ufunc has implemented.
:param nin:
The number of inputs to this operation.
:param nout:
The number of outputs.
:param name:
The name for the ufunc. Specifying a name of 'add' or
'multiply' enables a special behavior for integer-typed
reductions when no dtype is given. If the input type is an
integer (or boolean) data type smaller than the size of the int_
data type, it will be internally upcast to the int_ (or uint)
data type.
:param doc:
Allows passing in a documentation string to be stored with the
ufunc. The documentation string should not contain the name
of the function or the calling signature as that will be
dynamically determined from the object and available when
accessing the **__doc__** attribute of the ufunc.
:param check_return:
Unused and present for backwards compatibility of the C-API. A
corresponding *check_return* integer does exist in the ufunc
structure and it does get set with this value when the ufunc
object is created.
.. cfunction:: int PyUFunc_RegisterLoopForType(PyUFuncObject* ufunc,
int usertype, PyUFuncGenericFunction function, int* arg_types, void* data)
This function allows the user to register a 1-d loop with an
already-created ufunc to be used whenever the ufunc is called
with any of its input arguments as the user-defined
data-type. This is needed in order to make ufuncs work with
user-defined data-types. The data-type must have been previously
registered with the numpy system. The loop is passed in as
*function*. This loop can take arbitrary data which should be
passed in as *data*. The data-types the loop requires are passed
in as *arg_types* which must be a pointer to memory at least as
large as ufunc->nargs.
.. cfunction:: int PyUFunc_ReplaceLoopBySignature(PyUFuncObject* ufunc,
PyUFuncGenericFunction newfunc, int* signature,
PyUFuncGenericFunction* oldfunc)
Replace a 1-d loop matching the given *signature* in the
already-created *ufunc* with the new 1-d loop newfunc. Return the
old 1-d loop function in *oldfunc*. Return 0 on success and -1 on
failure. This function works only with built-in types (use
:cfunc:`PyUFunc_RegisterLoopForType` for user-defined types). A
signature is an array of data-type numbers indicating the inputs
followed by the outputs assumed by the 1-d loop.
.. cfunction:: int PyUFunc_GenericFunction(PyUFuncObject* self,
PyObject* args, PyObject* kwds, PyArrayObject** mps)
A generic ufunc call. The ufunc is passed in as *self*, the arguments
to the ufunc as *args* and *kwds*. The *mps* argument is an array of
:ctype:`PyArrayObject` pointers whose values are discarded and which
receive the converted input arguments as well as the ufunc outputs
when success is returned. The user is responsible for managing this
array and receives a new reference for each array in *mps*. The total
number of arrays in *mps* is given by *self* ->nin + *self* ->nout.
Returns 0 on success, -1 on error.
.. cfunction:: int PyUFunc_checkfperr(int errmask, PyObject* errobj)
A simple interface to the IEEE error-flag checking support. The
*errmask* argument is a mask of :cdata:`UFUNC_MASK_{ERR}` bitmasks
indicating which errors to check for (and how to check for
them). The *errobj* must be a Python tuple with two elements: a
string containing the name which will be used in any communication
of error and either a callable Python object (call-back function)
or :cdata:`Py_None`. The callable object will only be used if
:cdata:`UFUNC_ERR_CALL` is set as the desired error checking
method. This routine manages the GIL and is safe to call even
after releasing the GIL. If an error in the IEEE-compatible
hardware is detected, -1 is returned; otherwise 0 is
returned.
.. cfunction:: void PyUFunc_clearfperr()
Clear the IEEE error flags.
.. cfunction:: void PyUFunc_GetPyValues(char* name, int* bufsize,
int* errmask, PyObject** errobj)
Get the Python values used for ufunc processing from the
thread-local storage area unless the defaults have been set in
which case the name lookup is bypassed. The name is placed as a
string in the first element of *\*errobj*. The second element is
the looked-up function to call on error callback. The value of the
looked-up buffer-size to use is passed into *bufsize*, and the
value of the error mask is placed into *errmask*.
Generic functions
-----------------
At the core of every ufunc is a collection of type-specific functions
that defines the basic functionality for each of the supported types.
These functions must evaluate the underlying function :math:`N\geq1`
times. Extra-data may be passed in that may be used during the
calculation. This feature allows some general functions to be used as
these basic looping functions. The general function has all the code
needed to point variables to the right place and set up a function
call. The general function assumes that the actual function to call is
passed in as the extra data and calls it with the correct values. All
of these functions are suitable for placing directly in the array of
functions stored in the functions member of the PyUFuncObject
structure.
.. cfunction:: void PyUFunc_f_f_As_d_d(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_d_d(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_f_f(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_g_g(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_F_F_As_D_D(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_F_F(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_D_D(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_G_G(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_e_e(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_e_e_As_f_f(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_e_e_As_d_d(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
Type specific, core 1-d functions for ufuncs where each
calculation is obtained by calling a function taking one input
argument and returning one output. This function is passed in
``func``. The letters correspond to the dtype characters of the
supported data types (``e`` - half, ``f`` - float, ``d`` - double,
``g`` - long double, ``F`` - cfloat, ``D`` - cdouble,
``G`` - clongdouble). The argument *func* must support the same
signature. The ``_As_X_X`` variants assume ndarrays of one data type
but cast the values to use an underlying function that takes a
different data type. Thus, :cfunc:`PyUFunc_f_f_As_d_d` uses
ndarrays of data type :cdata:`NPY_FLOAT` but calls out to a
C-function that takes double and returns double.
.. cfunction:: void PyUFunc_ff_f_As_dd_d(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_ff_f(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_dd_d(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_gg_g(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_FF_F_As_DD_D(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_DD_D(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_FF_F(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_GG_G(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_ee_e(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_ee_e_As_ff_f(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_ee_e_As_dd_d(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
Type specific, core 1-d functions for ufuncs where each
calculation is obtained by calling a function taking two input
arguments and returning one output. The underlying function to
call is passed in as *func*. The letters correspond to
dtype characters of the specific data type supported by the
general-purpose function. The argument ``func`` must support the
corresponding signature. The ``_As_XX_X`` variants assume ndarrays
of one data type but cast the values at each iteration of the loop
to use the underlying function that takes a different data type.
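The shape of these inner loops can be sketched in pure Python (hypothetical names, not part of the NumPy API): ``args`` holds the raw data buffers, ``dimensions[0]`` the element count, and ``steps`` the per-argument byte strides, exactly as in the C signature above.

```python
import struct

def ufunc_dd_d(args, dimensions, steps, func):
    """Sketch of a PyUFunc_dd_d-style inner loop in pure Python.

    args       -- (in1_buffer, in2_buffer, out_buffer): bytearrays
    dimensions -- (n,): number of elements to process
    steps      -- per-argument byte strides, as in the C API
    func       -- scalar function taking two doubles, returning one
    """
    in1, in2, out = args
    n = dimensions[0]
    s1, s2, s3 = steps
    for i in range(n):
        a = struct.unpack_from('d', in1, i * s1)[0]
        b = struct.unpack_from('d', in2, i * s2)[0]
        struct.pack_into('d', out, i * s3, func(a, b))

# contiguous double arrays: the step for each argument is 8 bytes
x = bytearray(struct.pack('3d', 1.0, 2.0, 3.0))
y = bytearray(struct.pack('3d', 10.0, 20.0, 30.0))
z = bytearray(24)
ufunc_dd_d((x, y, z), (3,), (8, 8, 8), lambda a, b: a + b)
print(struct.unpack('3d', z))  # (11.0, 22.0, 33.0)
```

Non-unit steps would let the same loop walk strided (non-contiguous) data without change, which is the point of passing byte strides rather than element indices.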
.. cfunction:: void PyUFunc_O_O(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
.. cfunction:: void PyUFunc_OO_O(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
One-input, one-output, and two-input, one-output core 1-d functions
for the :cdata:`NPY_OBJECT` data type. These functions handle reference
count issues and return early on error. The actual function to call is
*func* and it must accept calls with the signature ``(PyObject*)
(PyObject*)`` for :cfunc:`PyUFunc_O_O` or ``(PyObject*)(PyObject *,
PyObject *)`` for :cfunc:`PyUFunc_OO_O`.
.. cfunction:: void PyUFunc_O_O_method(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
This general purpose 1-d core function assumes that *func* is a string
representing a method of the input object. For each
iteration of the loop, the Python object is extracted from the array
and its *func* method is called returning the result to the output array.
.. cfunction:: void PyUFunc_OO_O_method(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
This general purpose 1-d core function assumes that *func* is a
string representing a method of the input object that takes one
argument. The first entry of *args* is the object whose method is
called, the second entry of *args* is the argument passed to that
method, and the method's output is stored in the third entry
of *args*.
.. cfunction:: void PyUFunc_On_Om(char** args, npy_intp* dimensions,
npy_intp* steps, void* func)
This is the 1-d core function used by the dynamic ufuncs created
by umath.frompyfunc(function, nin, nout). In this case *func* is a
pointer to a :ctype:`PyUFunc_PyFuncData` structure which has definition
.. ctype:: PyUFunc_PyFuncData
.. code-block:: c
typedef struct {
int nin;
int nout;
PyObject *callable;
} PyUFunc_PyFuncData;
At each iteration of the loop, the *nin* input objects are extracted
from their object arrays and placed into an argument tuple, the Python
*callable* is called with the input arguments, and the nout
outputs are placed into their object arrays.
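The object loop can be sketched in pure Python (a hypothetical illustration; the real loop is C and also manages reference counts), with the tuple ``(nin, nout, callable)`` standing in for :ctype:`PyUFunc_PyFuncData`:

```python
def pyfunc_loop(args, dimensions, steps, data):
    """Sketch of the PyUFunc_On_Om object loop in pure Python.

    data mirrors PyUFunc_PyFuncData: (nin, nout, callable).
    args holds nin input sequences followed by nout output sequences;
    steps is unused here because plain Python lists are not strided.
    """
    nin, nout, callable_ = data
    n = dimensions[0]
    for i in range(n):
        inputs = tuple(args[k][i] for k in range(nin))
        results = callable_(*inputs)
        if nout == 1:
            results = (results,)
        for k in range(nout):
            args[nin + k][i] = results[k]

a = [1, 2, 3]
b = [4, 5, 6]
out = [None] * 3
pyfunc_loop((a, b, out), (3,), None, (2, 1, lambda x, y: x * y))
print(out)  # [4, 10, 18]
```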
Importing the API
-----------------
.. cvar:: PY_UFUNC_UNIQUE_SYMBOL
.. cvar:: NO_IMPORT_UFUNC
.. cfunction:: void import_ufunc(void)
These are the constants and functions for accessing the ufunc
C-API from extension modules in precisely the same way as the
array C-API can be accessed. The ``import_ufunc()`` function must
always be called (in the initialization subroutine of the
extension module). If your extension module is in one file then
that is all that is required. The other two constants are useful
if your extension module makes use of multiple files. In that
case, define :cdata:`PY_UFUNC_UNIQUE_SYMBOL` to something unique to
your code and then in source files that do not contain the module
initialization function but still need access to the UFUNC API,
define :cdata:`PY_UFUNC_UNIQUE_SYMBOL` to the same name used previously
and also define :cdata:`NO_IMPORT_UFUNC`.
The C-API is actually an array of function pointers. This array is
created (and pointed to by a global variable) by import_ufunc. The
global variable is either statically defined or allowed to be seen
by other files depending on the state of
:cdata:`PY_UFUNC_UNIQUE_SYMBOL` and :cdata:`NO_IMPORT_UFUNC`.
.. index::
pair: ufunc; C-API

**********************************
Packaging (:mod:`numpy.distutils`)
**********************************
.. module:: numpy.distutils
NumPy provides enhanced distutils functionality to make it easier to
build and install sub-packages, auto-generate code, and create extension
modules that use Fortran-compiled libraries. To use features of NumPy
distutils, use the :func:`setup <core.setup>` command from
:mod:`numpy.distutils.core`. A useful :class:`Configuration
<misc_util.Configuration>` class is also provided in
:mod:`numpy.distutils.misc_util` that can make it easier to construct
keyword arguments to pass to the setup function (by passing the
dictionary obtained from the todict() method of the class). More
information is available in the NumPy Distutils Users Guide in
``<site-packages>/numpy/doc/DISTUTILS.txt``.
.. index::
single: distutils
Modules in :mod:`numpy.distutils`
=================================
misc_util
---------
.. module:: numpy.distutils.misc_util
.. autosummary::
:toctree: generated/
get_numpy_include_dirs
dict_append
appendpath
allpath
dot_join
generate_config_py
get_cmd
terminal_has_colors
red_text
green_text
yellow_text
blue_text
cyan_text
cyg2win32
all_strings
has_f_sources
has_cxx_sources
filter_sources
get_dependencies
is_local_src_dir
get_ext_source_files
get_script_files
.. class:: Configuration(package_name=None, parent_name=None, top_path=None, package_path=None, **attrs)
Construct a configuration instance for the given package name. If
*parent_name* is not None, then construct the package as a
sub-package of the *parent_name* package. If *top_path* and
*package_path* are None then they are assumed equal to
the path of the file this instance was created in. The setup.py
files in the numpy distribution are good examples of how to use
the :class:`Configuration` instance.
.. automethod:: todict
.. automethod:: get_distribution
.. automethod:: get_subpackage
.. automethod:: add_subpackage
.. automethod:: add_data_files
.. automethod:: add_data_dir
.. automethod:: add_include_dirs
.. automethod:: add_headers
.. automethod:: add_extension
.. automethod:: add_library
.. automethod:: add_scripts
.. automethod:: add_installed_library
.. automethod:: add_npy_pkg_config
.. automethod:: paths
.. automethod:: get_config_cmd
.. automethod:: get_build_temp_dir
.. automethod:: have_f77c
.. automethod:: have_f90c
.. automethod:: get_version
.. automethod:: make_svn_version_py
.. automethod:: make_config_py
.. automethod:: get_info
Other modules
-------------
.. currentmodule:: numpy.distutils
.. autosummary::
:toctree: generated/
system_info.get_info
system_info.get_standard_file
cpuinfo.cpu
log.set_verbosity
exec_command
Building Installable C libraries
================================
Conventional C libraries (added through `add_library`) are used only
during the build and statically linked; they are never installed. An
installable C library is a pure C library, which does not depend on the
Python C runtime, and is installed such that it may be used by
third-party packages. To build and install the C library, use the method
`add_installed_library` instead of `add_library`; it takes the same
arguments plus an additional ``install_dir`` argument::
>>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib')
npy-pkg-config files
--------------------
To make the necessary build options available to third parties, you could use
the `npy-pkg-config` mechanism implemented in `numpy.distutils`. This mechanism is
based on a .ini file which contains all the options. A .ini file is very
similar to .pc files as used by the pkg-config unix utility::
[meta]
Name: foo
Version: 1.0
Description: foo library
[variables]
prefix = /home/user/local
libdir = ${prefix}/lib
includedir = ${prefix}/include
[default]
cflags = -I${includedir}
libs = -L${libdir} -lfoo
Generally, the file needs to be generated during the build, since it needs some
information known at build time only (e.g. prefix). This is mostly automatic if
one uses the `Configuration` method `add_npy_pkg_config`. Assuming we have a
template file foo.ini.in as follows::
[meta]
Name: foo
Version: @version@
Description: foo library
[variables]
prefix = @prefix@
libdir = ${prefix}/lib
includedir = ${prefix}/include
[default]
cflags = -I${includedir}
libs = -L${libdir} -lfoo
and the following code in setup.py::
>>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib')
>>> subst = {'version': '1.0'}
>>> config.add_npy_pkg_config('foo.ini.in', 'lib', subst_dict=subst)
This will install the file foo.ini into the directory package_dir/lib, and the
foo.ini file will be generated from foo.ini.in, where each ``@version@`` will be
replaced by ``subst_dict['version']``. The dictionary has an additional prefix
substitution rule automatically added, which contains the install prefix (since
this is not easy to get from setup.py). npy-pkg-config files can also be
installed at the same location as used for numpy, using the path returned from
`get_npy_pkg_dir` function.
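The ``@version@``-style template substitution can be sketched as follows (a simplified, hypothetical stand-in for what `add_npy_pkg_config` performs; the real implementation also injects the ``prefix`` variable automatically):

```python
import re

def subst_vars(template, subst_dict):
    """Replace each @name@ marker in the template with subst_dict['name'].

    A simplified sketch of the substitution applied to .ini.in
    templates; unknown names raise KeyError rather than being ignored.
    """
    return re.sub(r'@([A-Za-z_][A-Za-z0-9_]*)@',
                  lambda m: str(subst_dict[m.group(1)]), template)

template = "Name: foo\nVersion: @version@\nprefix = @prefix@\n"
print(subst_vars(template, {'version': '1.0', 'prefix': '/usr/local'}))
```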
Reusing a C library from another package
----------------------------------------
Build information is easily retrieved with the `get_info` function in
`numpy.distutils.misc_util`::
>>> info = get_info('npymath')
>>> config.add_extension('foo', sources=['foo.c'], extra_info=info)
An additional list of paths to look for .ini files can be given to `get_info`.
Conversion of ``.src`` files
============================
NumPy distutils supports automatic conversion of source files named
<somefile>.src. This facility can be used to maintain very similar
code blocks requiring only simple changes between blocks. During the
build phase of setup, if a template file named <somefile>.src is
encountered, a new file named <somefile> is constructed from the
template and placed in the build directory to be used instead. Two
forms of template conversion are supported. The first form occurs for
files named <file>.ext.src where ext is a recognized Fortran
extension (f, f90, f95, f77, for, ftn, pyf). The second form is used
for all other cases.
.. index::
single: code generation
Fortran files
-------------
This template converter will replicate all **function** and
**subroutine** blocks in the file with names that contain '<...>'
according to the rules in '<...>'. The number of comma-separated words
in '<...>' determines the number of times the block is repeated, and
the words themselves give the text that the repeat rule '<...>' is
replaced with in each copy. All of the repeat rules in a block must
contain the same number of comma-separated words indicating the number
of times that block should be repeated. If the word in the repeat rule
needs a comma, leftarrow, or rightarrow, then prepend it with a
backslash ' \'. If a word in the repeat rule matches ' \\<index>' then
it will be replaced with the <index>-th word in the same repeat
specification. There are two forms for the repeat rule: named and
short.
Named repeat rule
^^^^^^^^^^^^^^^^^
A named repeat rule is useful when the same set of repeats must be
used several times in a block. It is specified using <rule1=item1,
item2, item3,..., itemN>, where N is the number of times the block
should be repeated. On each repeat of the block, the entire
expression, '<...>' will be replaced first with item1, and then with
item2, and so forth until N repeats are accomplished. Once a named
repeat specification has been introduced, the same repeat rule may be
used **in the current block** by referring only to the name
(i.e., <rule1>).
Short repeat rule
^^^^^^^^^^^^^^^^^
A short repeat rule looks like <item1, item2, item3, ..., itemN>. The
rule specifies that the entire expression, '<...>' should be replaced
first with item1, and then with item2, and so forth until N repeats
are accomplished.
Pre-defined names
^^^^^^^^^^^^^^^^^
The following predefined named repeat rules are available:
- <prefix=s,d,c,z>
- <_c=s,d,c,z>
- <_t=real, double precision, complex, double complex>
- <ftype=real, double precision, complex, double complex>
- <ctype=float, double, complex_float, complex_double>
- <ftypereal=float, double precision, \\0, \\1>
- <ctypereal=float, double, \\0, \\1>
Other files
-----------
Non-Fortran files use a separate syntax for defining template blocks
that should be repeated using a variable expansion similar to the
named repeat rules of the Fortran-specific repeats. The template rules
for these files are:
1. "/\**begin repeat "on a line by itself marks the beginning of
a segment that should be repeated.
2. Named variable expansions are defined using #name=item1, item2, item3,
..., itemN# and placed on successive lines. These variables are
replaced in each repeat block with corresponding word. All named
variables in the same repeat block must define the same number of
words.
3. In specifying the repeat rule for a named variable, item*N is short-
hand for item, item, ..., item repeated N times. In addition,
parentheses in combination with \*N can be used to group several
items that should be repeated. Thus, #name=(item1, item2)*4# is
equivalent to #name=item1, item2, item1, item2, item1, item2, item1,
item2#
4. "\*/ "on a line by itself marks the end of the variable expansion
naming. The next line is the first line that will be repeated using
the named rules.
5. Inside the block to be repeated, the variables that should be expanded
are specified as @name@.
6. "/\**end repeat**/ "on a line by itself marks the previous line
as the last line of the block to be repeated.
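The rules above can be sketched with a small, hypothetical expander for a single repeat block (simplified: the ``item*N`` shorthand and parenthesized grouping are omitted):

```python
import re

def expand_repeat_block(names_spec, body):
    """Expand one /**begin repeat ... **/ block (simplified sketch).

    names_spec -- named variable expansions, e.g.
                  {'type': ['float', 'double'], 'c': ['f', 'd']}
    body       -- the template text containing @name@ markers
    Returns the body repeated once per item, markers substituted.
    """
    counts = {len(v) for v in names_spec.values()}
    # all named variables in a repeat block must define the same
    # number of words
    assert len(counts) == 1, "variables define differing word counts"
    n = counts.pop()
    out = []
    for i in range(n):
        out.append(re.sub(r'@(\w+)@',
                          lambda m, i=i: names_spec[m.group(1)][i], body))
    return ''.join(out)

body = "static @type@ add_@c@(@type@ a, @type@ b) { return a + b; }\n"
expanded = expand_repeat_block(
    {'type': ['float', 'double'], 'c': ['f', 'd']}, body)
print(expanded)
```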

.. (xfig figure source removed with this commit; judging from its text
   labels, it diagrammed an ``ndarray`` as a ``header`` plus data
   cells, with an associated ``data-type`` object and an ``array
   scalar`` with its own ``head``.)

.. _reference:
###############
NumPy Reference
###############
:Release: |version|
:Date: |today|
.. module:: numpy
This reference manual details functions, modules, and objects
included in Numpy, describing what they are and what they do.
For learning how to use NumPy, see also :ref:`user`.
.. toctree::
:maxdepth: 2
arrays
ufuncs
routines
ctypes
distutils
c-api
internals
swig
Acknowledgements
================
Large parts of this manual originate from Travis E. Oliphant's book
`Guide to Numpy <http://www.tramy.us/>`__ (which generously entered
Public Domain in August 2008). The reference documentation for many of
the functions was written by numerous contributors and developers of
Numpy, both prior to and during the
`Numpy Documentation Marathon
<http://scipy.org/Developer_Zone/DocMarathon2008>`__.
Please help to improve NumPy's documentation! Instructions on how to
join the ongoing documentation marathon can be found
`on the scipy.org website <http://scipy.org/Developer_Zone/DocMarathon2008>`__.

.. currentmodule:: numpy
*************************
Numpy C Code Explanations
*************************
Fanaticism consists of redoubling your efforts when you have forgotten
your aim.
--- *George Santayana*
An authority is a person who can tell you more about something than
you really care to know.
--- *Unknown*
This Chapter attempts to explain the logic behind some of the new
pieces of code. The purpose behind these explanations is to enable
somebody to be able to understand the ideas behind the implementation
somewhat more easily than just staring at the code. Perhaps in this
way, the algorithms can be improved on, borrowed from, and/or
optimized.
Memory model
============
.. index::
pair: ndarray; memory model
One fundamental aspect of the ndarray is that an array is seen as a
"chunk" of memory starting at some location. The interpretation of
this memory depends on the stride information. For each dimension in
an :math:`N` -dimensional array, an integer (stride) dictates how many
bytes must be skipped to get to the next element in that dimension.
Unless you have a single-segment array, this stride information must
be consulted when traversing through an array. It is not difficult to
write code that accepts strides; you just have to use (char \*)
pointers because strides are in units of bytes. Keep in mind also that
strides do not have to be unit-multiples of the element size. Also,
remember that if the number of dimensions of the array is 0 (sometimes
called a rank-0 array), then the strides and dimensions variables are
NULL.
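The stride arithmetic described above amounts to the following (an illustrative pure-Python helper, not NumPy code):

```python
def byte_offset(indices, strides):
    """Byte offset of an element from the start of the data buffer.

    Each dimension contributes index * stride bytes, which is why
    strided code walks the buffer with (char *) pointers: strides are
    in units of bytes, not elements.
    """
    return sum(i * s for i, s in zip(indices, strides))

# a C-contiguous 3x4 array of 8-byte doubles:
# strides are (4 * 8, 8) = (32, 8) bytes
assert byte_offset((2, 1), (32, 8)) == 72   # element [2, 1]
# the transposed view of the same buffer just swaps the strides
assert byte_offset((1, 2), (8, 32)) == 72   # same byte, new indices
```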
Besides the structural information contained in the strides and
dimensions members of the :ctype:`PyArrayObject`, the flags contain important
information about how the data may be accessed. In particular, the
:cdata:`NPY_ALIGNED` flag is set when the memory is on a suitable boundary
according to the array's data-type. Even if you have a contiguous chunk
of memory, you cannot just assume it is safe to dereference a data-
type-specific pointer to an element. Only if the :cdata:`NPY_ALIGNED` flag is
set is this a safe operation (on some platforms it will work but on
others, like Solaris, it will cause a bus error). The :cdata:`NPY_WRITEABLE`
flag should also be ensured if you plan on writing to the memory area of
the array. It is also possible to obtain a pointer to an unwriteable
memory area. Sometimes, writing to the memory area when the
:cdata:`NPY_WRITEABLE` flag is not set will just be rude. Other times it can
cause program crashes ( *e.g.* a data-area that is a read-only
memory-mapped file).
Data-type encapsulation
=======================
.. index::
single: dtype
The data-type is an important abstraction of the ndarray. Operations
will look to the data-type to provide the key functionality that is
needed to operate on the array. This functionality is provided in the
list of function pointers pointed to by the 'f' member of the
:ctype:`PyArray_Descr` structure. In this way, the number of data-types can be
extended simply by providing a :ctype:`PyArray_Descr` structure with suitable
function pointers in the 'f' member. For built-in types there are some
optimizations that by-pass this mechanism, but the point of the data-
type abstraction is to allow new data-types to be added.
One of the built-in data-types, the void data-type allows for
arbitrary records containing 1 or more fields as elements of the
array. A field is simply another data-type object along with an offset
into the current record. In order to support arbitrarily nested
fields, several recursive implementations of data-type access are
implemented for the void type. A common idiom is to cycle through the
elements of the dictionary and perform a specific operation based on
the data-type object stored at the given offset. These offsets can be
arbitrary numbers. Therefore, the possibility of encountering mis-
aligned data must be recognized and taken into account if necessary.
N-D Iterators
=============
.. index::
single: array iterator
A very common operation in much of NumPy code is the need to iterate
over all the elements of a general, strided, N-dimensional array. This
operation of a general-purpose N-dimensional loop is abstracted in the
notion of an iterator object. To write an N-dimensional loop, you only
have to create an iterator object from an ndarray, work with the
dataptr member of the iterator object structure and call the macro
:cfunc:`PyArray_ITER_NEXT` (it) on the iterator object to move to the next
element. The "next" element is always in C-contiguous order. The macro
works by first special casing the C-contiguous, 1-D, and 2-D cases
which work very simply.
For the general case, the iteration works by keeping track of a list
of coordinate counters in the iterator object. At each iteration, the
last coordinate counter is increased (starting from 0). If this
counter is smaller than one less than the size of the array in that
dimension (a pre-computed and stored value), then the counter is
increased and the dataptr member is increased by the strides in that
dimension and the macro ends. If the end of a dimension is reached,
the counter for the last dimension is reset to zero and the dataptr is
moved back to the beginning of that dimension by subtracting the
strides value times one less than the number of elements in that
dimension (this is also pre-computed and stored in the backstrides
member of the iterator object). In this case, the macro does not end,
but a local dimension counter is decremented so that the next-to-last
dimension replaces the role that the last dimension played and the
previously-described tests are executed again on the next-to-last
dimension. In this way, the dataptr is adjusted appropriately for
arbitrary striding.
The coordinates member of the :ctype:`PyArrayIterObject` structure maintains
the current N-d counter unless the underlying array is C-contiguous in
which case the coordinate counting is by-passed. The index member of
the :ctype:`PyArrayIterObject` keeps track of the current flat index of the
iterator. It is updated by the :cfunc:`PyArray_ITER_NEXT` macro.
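The counter/backstride algorithm can be sketched in Python (an illustration of the macro's logic, not the actual implementation):

```python
def iter_offsets(shape, strides):
    """Yield byte offsets in C-contiguous order over a strided array.

    Mimics PyArray_ITER_NEXT for the general case: increment the last
    coordinate counter; when a dimension overflows, reset it, move the
    data pointer back by its backstride (stride * (length - 1)), and
    carry into the next-to-last dimension.
    """
    ndim = len(shape)
    coords = [0] * ndim
    backstrides = [s * (n - 1) for s, n in zip(strides, shape)]
    offset = 0
    total = 1
    for n in shape:
        total *= n
    for _ in range(total):
        yield offset
        for d in range(ndim - 1, -1, -1):
            if coords[d] < shape[d] - 1:
                coords[d] += 1
                offset += strides[d]
                break
            coords[d] = 0
            offset -= backstrides[d]

# 2x3 C-contiguous array of 8-byte elements: strides (24, 8)
print(list(iter_offsets((2, 3), (24, 8))))  # [0, 8, 16, 24, 32, 40]
```

With transposed strides the same generator visits the buffer out of storage order, which is exactly how the iterator handles arbitrary striding.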
Broadcasting
============
.. index::
single: broadcasting
In Numeric, broadcasting was implemented in several lines of code
buried deep in ufuncobject.c. In NumPy, the notion of broadcasting has
been abstracted so that it can be performed in multiple places.
Broadcasting is handled by the function :cfunc:`PyArray_Broadcast`. This
function requires a :ctype:`PyArrayMultiIterObject` (or something that is a
binary equivalent) to be passed in. The :ctype:`PyArrayMultiIterObject` keeps
track of the broadcasted number of dimensions and size in each
dimension along with the total size of the broadcasted result. It also
keeps track of the number of arrays being broadcast and a pointer to
an iterator for each of the arrays being broadcasted.
The :cfunc:`PyArray_Broadcast` function takes the iterators that have already
been defined and uses them to determine the broadcast shape in each
dimension (to create the iterators at the same time that broadcasting
occurs, use the :cfunc:`PyMultiIter_New` function). Then, the iterators are
adjusted so that each iterator thinks it is iterating over an array
with the broadcasted size. This is done by adjusting the iterators
number of dimensions, and the shape in each dimension. This works
because the iterator strides are also adjusted. Broadcasting only
adjusts (or adds) length-1 dimensions. For these dimensions, the
strides variable is simply set to 0 so that the data-pointer for the
iterator over that array doesn't move as the broadcasting operation
operates over the extended dimension.
Broadcasting was always implemented in Numeric using 0-valued strides
for the extended dimensions. It is done in exactly the same way in
NumPy. The big difference is that now the array of strides is kept
track of in a :ctype:`PyArrayIterObject`, the iterators involved in a
broadcasted result are kept track of in a :ctype:`PyArrayMultiIterObject`,
and the :cfunc:`PyArray_Broadcast` call implements the broadcasting rules.
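The zero-stride trick can be sketched in Python (a hypothetical helper, not a NumPy function): dimensions are aligned on the right, and any length-1 or missing dimension gets a stride of 0 so the data pointer does not move along it.

```python
def broadcast(shapes_and_strides):
    """Compute the broadcast shape and adjusted per-array strides.

    shapes_and_strides -- list of (shape, strides) tuples, strides in
    bytes. Mirrors what PyArray_Broadcast does to the iterators.
    """
    ndim = max(len(shape) for shape, _ in shapes_and_strides)
    out_shape = [1] * ndim
    for shape, _ in shapes_and_strides:
        for i, n in enumerate(shape):
            d = ndim - len(shape) + i
            if n != 1:
                if out_shape[d] != 1 and out_shape[d] != n:
                    raise ValueError("shapes are not broadcastable")
                out_shape[d] = n
    adjusted = []
    for shape, strides in shapes_and_strides:
        pad = ndim - len(shape)
        new = [0] * ndim  # padded dimensions get stride 0
        for i, (n, s) in enumerate(zip(shape, strides)):
            new[pad + i] = s if n != 1 else 0
        adjusted.append(new)
    return tuple(out_shape), adjusted

# a (3, 1) array of doubles against a (4,) array of doubles
shape, strides = broadcast([((3, 1), (8, 8)), ((4,), (8,))])
print(shape, strides)  # (3, 4) [[8, 0], [0, 8]]
```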
Array Scalars
=============
.. index::
single: array scalars
The array scalars offer a hierarchy of Python types that allow a one-
to-one correspondence between the data-type stored in an array and the
Python-type that is returned when an element is extracted from the
array. An exception to this rule was made with object arrays. Object
arrays are heterogeneous collections of arbitrary Python objects. When
you select an item from an object array, you get back the original
Python object (and not an object array scalar which does exist but is
rarely used for practical purposes).
The array scalars also offer the same methods and attributes as arrays
with the intent that the same code can be used to support arbitrary
dimensions (including 0-dimensions). The array scalars are read-only
(immutable) with the exception of the void scalar which can also be
written to so that record-array field setting works more naturally
(a[0]['f1'] = ``value`` ).
Advanced ("Fancy") Indexing
=============================
.. index::
single: indexing
The implementation of advanced indexing represents some of the most
difficult code to write and explain. In fact, there are two
implementations of advanced indexing. The first works only with 1-D
arrays and is implemented to handle expressions involving a.flat[obj].
The second is general-purpose that works for arrays of "arbitrary
dimension" (up to a fixed maximum). The one-dimensional indexing
approaches were implemented in a rather straightforward fashion, and
so it is the general-purpose indexing code that will be the focus of
this section.
There is a multi-layer approach to indexing because the indexing code
can at times return an array scalar and at other times return an
array. The functions with "_nice" appended to their name do this
special handling, while the functions without the _nice appendage always
return an array (perhaps a 0-dimensional array). Some special-case
optimizations (the index being an integer scalar, and the index being
a tuple with as many dimensions as the array) are handled in
array_subscript_nice function which is what Python calls when
presented with the code "a[obj]." These optimizations allow fast
single-integer indexing, and also ensure that a 0-dimensional array is
not created only to be discarded as the array scalar is returned
instead. This provides significant speed-up for code that is selecting
many scalars out of an array (such as in a loop). However, it is still
not faster than simply using a list to store standard Python scalars,
because that is optimized by the Python interpreter itself.
After these optimizations, the array_subscript function itself is
called. This function first checks for field selection which occurs
when a string is passed as the indexing object. Then, 0-D arrays are
given special-case consideration. Finally, the code determines whether
or not advanced, or fancy, indexing needs to be performed. If fancy
indexing is not needed, then standard view-based indexing is performed
using code borrowed from Numeric which parses the indexing object and
returns the offset into the data-buffer and the dimensions necessary
to create a new view of the array. The strides are also changed by
multiplying each stride by the step-size requested along the
corresponding dimension.
Fancy-indexing check
--------------------
The fancy_indexing_check routine determines whether or not to use
standard view-based indexing or new copy-based indexing. If the
indexing object is a tuple, then view-based indexing is assumed by
default. Only if the tuple contains an array object or a sequence
object is fancy-indexing assumed. If the indexing object is an array,
then fancy indexing is automatically assumed. If the indexing object
is any other kind of sequence, then fancy-indexing is assumed by
default. This is overridden to simple indexing if the sequence
contains any slice, newaxis, or Ellipsis objects, and no arrays or
additional sequences are also contained in the sequence. The purpose
of this is to allow the construction of "slicing" sequences which is a
common technique for building up code that works in arbitrary numbers
of dimensions.
Fancy-indexing implementation
-----------------------------
The concept of indexing was also abstracted using the idea of an
iterator. If fancy indexing is performed, then a :ctype:`PyArrayMapIterObject`
is created. This internal object is not exposed to Python. It is
created in order to handle the fancy-indexing at a high-level. Both
get and set fancy-indexing operations are implemented using this
object. Fancy indexing is abstracted into three separate operations:
(1) creating the :ctype:`PyArrayMapIterObject` from the indexing object, (2)
binding the :ctype:`PyArrayMapIterObject` to the array being indexed, and (3)
getting (or setting) the items determined by the indexing object.
There is an optimization implemented so that the :ctype:`PyArrayIterObject`
(which has its own less complicated fancy-indexing) is used for
indexing when possible.
Creating the mapping object
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The first step is to convert the indexing objects into a standard form
where iterators are created for all of the index array inputs and all
Boolean arrays are converted to equivalent integer index arrays (as if
nonzero(arr) had been called). Finally, all integer arrays are
replaced with the integer 0 in the indexing object and all of the
index-array iterators are "broadcast" to the same shape.
Binding the mapping object
^^^^^^^^^^^^^^^^^^^^^^^^^^
When the mapping object is created it does not know which array it
will be used with so once the index iterators are constructed during
mapping-object creation, the next step is to associate these iterators
with a particular ndarray. This process interprets any ellipsis and
slice objects so that the index arrays are associated with the
appropriate axis (the axis indicated by the iteraxis entry
corresponding to the iterator for the integer index array). This
information is then used to check the indices to be sure they are
within range of the shape of the array being indexed. The presence of
ellipsis and/or slice objects implies a sub-space iteration that is
accomplished by extracting a sub-space view of the array (using the
index object resulting from replacing all the integer index arrays
with 0) and storing the information about where this sub-space starts
in the mapping object. This is used later during mapping-object
iteration to select the correct elements from the underlying array.
Getting (or Setting)
^^^^^^^^^^^^^^^^^^^^
After the mapping object is successfully bound to a particular array,
the mapping object contains the shape of the resulting item as well as
iterator objects that will walk through the currently-bound array and
either get or set its elements as needed. The walk is implemented
using the :cfunc:`PyArray_MapIterNext` function. This function sets the
coordinates of an iterator object into the current array to be the
next coordinate location indicated by all of the indexing-object
iterators while adjusting, if necessary, for the presence of a sub-
space. The result of this function is that the dataptr member of the
mapping object structure is pointed to the next position in the array
that needs to be copied out or set to some value.
When advanced indexing is used to extract an array, an iterator for
the new array is constructed and advanced in phase with the mapping
object iterator. When advanced indexing is used to place values in an
array, a special "broadcasted" iterator is constructed from the object
being placed into the array so that it will only work if the values
used for setting have a shape that is "broadcastable" to the shape
implied by the indexing object.
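As an illustrative sketch (not part of the original text), the getting and setting behavior described above can be observed from the Python level:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# An integer index array combined with a slice: the slice produces a
# sub-space, so the result shape combines the index-array shape with
# the sub-space shape.
rows = np.array([0, 2])
sub = a[rows, 1:3]
assert sub.shape == (2, 2)

# Setting with advanced indexing: the value being assigned must be
# "broadcastable" to the shape implied by the indexing object.
b = np.zeros((3, 3), dtype=int)
b[[0, 1, 2], [0, 1, 2]] = 7      # scalar broadcasts to shape (3,)
assert (b.diagonal() == 7).all()
```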
Universal Functions
===================
.. index::
single: ufunc
Universal functions are callable objects that take :math:`N` inputs
and produce :math:`M` outputs by wrapping basic 1-D loops that work
element-by-element into full, easy-to-use functions that seamlessly
implement broadcasting, type-checking and buffered coercion, and
output-argument handling. New universal functions are normally created
in C, although there is a mechanism for creating ufuncs from Python
functions (:func:`frompyfunc`). The user must supply a 1-D loop that
implements the basic function taking the input scalar values and
placing the resulting scalars into the appropriate output slots as
explained in the implementation.
Setup
-----
Every ufunc calculation involves some overhead related to setting up
the calculation. The practical significance of this overhead is that
even though the actual calculation of the ufunc is very fast, you will
be able to write array and type-specific code that will work faster
for small arrays than the ufunc. In particular, using ufuncs to
perform many calculations on 0-D arrays will be slower than other
Python-based solutions (the silently-imported scalarmath module exists
precisely to give array scalars the look-and-feel of ufunc-based
calculations with significantly reduced overhead).
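A small sketch (assumed, not from the original text) of the overhead claim: the ufunc result on a 0-D array is identical to plain Python arithmetic, but each ufunc call pays the setup cost described above.

```python
import numpy as np
import timeit

a = np.array(3)                    # a 0-D array
# The computed value is the same either way.
assert np.add(a, 1) == 3 + 1

# Illustrative timing comparison; exact numbers vary by machine, but
# the ufunc path carries per-call setup overhead the scalar path avoids.
t_ufunc = timeit.timeit(lambda: np.add(a, 1), number=1000)
t_python = timeit.timeit(lambda: 3 + 1, number=1000)
```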
When a ufunc is called, many things must be done. The information
collected from these setup operations is stored in a loop-object. This
loop object is a C-structure (that could become a Python object but is
not initialized as such because it is only used internally). This loop
object has the layout needed to be used with PyArray_Broadcast so that
the broadcasting can be handled in the same way as it is handled in
other sections of code.
The first thing done is to look up in the thread-specific global
dictionary the current values for the buffer-size, the error mask, and
the associated error object. The state of the error mask controls what
happens when an error condition is found. It should be noted that
checking of the hardware error flags is only performed after each 1-D
loop is executed. This means that if the input and output arrays are
contiguous and of the correct type so that a single 1-D loop is
performed, then the flags may not be checked until all elements of the
array have been calculated. Looking up these values in a thread-
specific dictionary takes time which is easily ignored for all but
very small arrays.
After checking the thread-specific global variables, the inputs are
evaluated to determine how the ufunc should proceed and the input and
output arrays are constructed if necessary. Any inputs which are not
arrays are converted to arrays (using context if necessary). Which of
the inputs are scalars (and therefore converted to 0-D arrays) is
noted.
Next, an appropriate 1-D loop is selected from the 1-D loops available
to the ufunc based on the input array types. This 1-D loop is selected
by trying to match the signature of the data-types of the inputs
against the available signatures. The signatures corresponding to
built-in types are stored in the types member of the ufunc structure.
The signatures corresponding to user-defined types are stored in a
linked-list of function-information with the head element stored as a
``CObject`` in the userloops dictionary keyed by the data-type number
(the first user-defined type in the argument list is used as the key).
The signatures are searched until a signature is found to which the
input arrays can all be cast safely (ignoring any scalar arguments
which are not allowed to determine the type of the result). The
implication of this search procedure is that "lesser types" should be
placed below "larger types" when the signatures are stored. If no 1-D
loop is found, then an error is reported. Otherwise, the argument_list
is updated with the stored signature --- in case casting is necessary
and to fix the output types assumed by the 1-D loop.
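The effect of this signature search is visible in the result dtype; a small sketch (not part of the original text):

```python
import numpy as np

# Two int8 arrays match an int8 loop signature directly, so no
# upcasting occurs.
i8 = np.ones(3, dtype=np.int8)
assert (i8 + i8).dtype == np.int8

# Mixing int8 with float64 selects the first signature both inputs can
# be safely cast to, giving a float64 loop and a float64 result.
f8 = np.ones(3, dtype=np.float64)
assert (i8 + f8).dtype == np.float64

# The built-in signatures searched are exposed in the types member.
assert 'dd->d' in np.add.types     # 'd' is the float64 type character
```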
If the ufunc has 2 inputs and 1 output and the second input is an
Object array then a special-case check is performed so that
NotImplemented is returned if the second input is not an ndarray, has
the __array_priority\__ attribute, and has an __r{op}\__ special
method. In this way, Python is signaled to give the other object a
chance to complete the operation instead of using generic object-array
calculations. This allows (for example) sparse matrices to override
the multiplication operator 1-D loop.
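A hypothetical minimal stand-in for a sparse-matrix-like class illustrates this mechanism (the class name and return value are invented for the sketch):

```python
import numpy as np

# Not an ndarray, has __array_priority__, and defines __rmul__: the
# ndarray operand returns NotImplemented, so Python falls back to
# calling Sparseish.__rmul__ instead of generic object-array math.
class Sparseish:
    __array_priority__ = 100.0
    def __rmul__(self, other):
        return "handled by Sparseish"

result = np.arange(3) * Sparseish()
assert result == "handled by Sparseish"
```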
For input arrays that are smaller than the specified buffer size,
copies are made of all non-contiguous, mis-aligned, or out-of-
byteorder arrays to ensure that for small arrays, a single-loop is
used. Then, array iterators are created for all the input arrays and
the resulting collection of iterators is broadcast to a single shape.
The output arguments (if any) are then processed and any missing
return arrays are constructed. If any provided output array doesn't
have the correct type (or is mis-aligned) and is smaller than the
buffer size, then a new output array is constructed with the special
UPDATEIFCOPY flag set so that when it is DECREF'd on completion of the
function, its contents will be copied back into the output array.
Iterators for the output arguments are then processed.
Finally, the decision is made about how to execute the looping
mechanism to ensure that all elements of the input arrays are combined
to produce the output arrays of the correct type. The options for loop
execution are one-loop (for contiguous, aligned, and correct data-
type), strided-loop (for non-contiguous but still aligned and correct
data-type), and a buffered loop (for mis-aligned or incorrect data-
type situations). Depending on which execution method is called for,
the loop is then setup and computed.
Function call
-------------
This section describes how the basic universal function computation
loop is setup and executed for each of the three different kinds of
execution possibilities. If :cdata:`NPY_ALLOW_THREADS` is defined during
compilation, then the Python Global Interpreter Lock (GIL) is released
prior to calling all of these loops (as long as they don't involve
object arrays). It is re-acquired if necessary to handle error
conditions. The hardware error flags are checked only after the 1-D
loop is calculated.
One Loop
^^^^^^^^
This is the simplest case of all. The ufunc is executed by calling the
underlying 1-D loop exactly once. This is possible only when we have
aligned data of the correct type (including byte-order) for both input
and output and all arrays have uniform strides (either contiguous,
0-D, or 1-D). In this case, the 1-D computational loop is called once
to compute the calculation for the entire array. Note that the
hardware error flags are only checked after the entire calculation is
complete.
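Whether an array qualifies for the one-loop case can be checked from Python through its flags; a sketch (not part of the original text):

```python
import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)

# Contiguous, aligned, native byte order: eligible for the one-loop case.
assert a.flags['C_CONTIGUOUS'] and a.flags['ALIGNED']

# A strided view of the same data is no longer contiguous, so the
# strided-loop machinery would be used instead.
v = a[:, ::2]
assert not v.flags['C_CONTIGUOUS']
```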
Strided Loop
^^^^^^^^^^^^
When the input and output arrays are aligned and of the correct type,
but the striding is not uniform (non-contiguous and 2-D or larger),
then a second looping structure is employed for the calculation. This
approach converts all of the iterators for the input and output
arguments to iterate over all but the largest dimension. The inner
loop is then handled by the underlying 1-D computational loop. The
outer loop is a standard iterator loop on the converted iterators. The
hardware error flags are checked after each 1-D loop is completed.
Buffered Loop
^^^^^^^^^^^^^
This is the code that handles the situation whenever the input and/or
output arrays are either misaligned or of the wrong data-type
(including being byte-swapped) from what the underlying 1-D loop
expects. The arrays are also assumed to be non-contiguous. The code
works very much like the strided loop except that the inner 1-D loop is
modified so that pre-processing is performed on the inputs and post-
processing is performed on the outputs in bufsize chunks (where
bufsize is a user-settable parameter). The underlying 1-D
computational loop is called on data that is copied over (if it needs
to be). The setup code and the loop code are considerably more
complicated in this case because it has to handle:
- memory allocation of the temporary buffers
- deciding whether or not to use buffers on the input and output data
(mis-aligned and/or wrong data-type)
- copying and possibly casting data for any inputs or outputs for which
buffers are necessary.
- special-casing Object arrays so that reference counts are properly
handled when copies and/or casts are necessary.
- breaking up the inner 1-D loop into bufsize chunks (with a possible
remainder).
Again, the hardware error flags are checked at the end of each 1-D
loop.
Final output manipulation
-------------------------
Ufuncs allow other array-like classes to be passed seamlessly through
the interface in that inputs of a particular class will induce the
outputs to be of that same class. The mechanism by which this works is
the following. If any of the inputs are not ndarrays and define the
:obj:`__array_wrap__` method, then the class with the largest
:obj:`__array_priority__` attribute determines the type of all the
outputs (with the exception of any output arrays passed in). The
:obj:`__array_wrap__` method of the input array will be called with the
ndarray being returned from the ufunc as its input. There are two
calling styles of the :obj:`__array_wrap__` function supported. The first
takes the ndarray as the first argument and a tuple of "context" as
the second argument. The context is (ufunc, arguments, output argument
number). This is the first call tried. If a TypeError occurs, then the
function is called with just the ndarray as the first argument.
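The two-stage dispatch can be sketched in pure Python (this is an illustration of the logic described above, not NumPy's actual C code; the helper and wrapper names are invented):

```python
# Try the two-argument calling style first; fall back to the
# one-argument style when a TypeError is raised.
def call_array_wrap(wrap, result, context):
    try:
        return wrap(result, context)
    except TypeError:
        return wrap(result)

def new_style(arr, context):
    return ('wrapped', arr, context)

def old_style(arr):
    return ('wrapped', arr)

ctx = ('ufunc', ('arguments',), 0)   # (ufunc, arguments, output number)
assert call_array_wrap(new_style, 5, ctx) == ('wrapped', 5, ctx)
assert call_array_wrap(old_style, 5, ctx) == ('wrapped', 5)
```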
Methods
-------
There are three ufunc methods that require calculation similar to
the general-purpose ufuncs. These are reduce, accumulate, and
reduceat. Each of these methods requires a setup command followed by a
loop. There are four loop styles possible for the methods
corresponding to no-elements, one-element, strided-loop, and buffered-
loop. These are the same basic loop styles as implemented for the
general purpose function call except for the no-element and one-
element cases which are special-cases occurring when the input array
objects have 0 and 1 elements respectively.
Setup
^^^^^
The setup function for all three methods is ``construct_reduce``.
This function creates a reducing loop object and fills it with
parameters needed to complete the loop. All of the methods only work
on ufuncs that take 2-inputs and return 1 output. Therefore, the
underlying 1-D loop is selected assuming a signature of [ ``otype``,
``otype``, ``otype`` ] where ``otype`` is the requested reduction
data-type. The buffer size and error handling are then retrieved from
(per-thread) global storage. For small arrays that are mis-aligned or
have incorrect data-type, a copy is made so that the un-buffered
section of code is used. Then, the looping strategy is selected. If
there is 1 element or 0 elements in the array, then a simple looping
method is selected. If the array is not mis-aligned and has the
correct data-type, then strided looping is selected. Otherwise,
buffered looping must be performed. Looping parameters are then
established, and the return array is constructed. The output array is
of a different shape depending on whether the method is reduce,
accumulate, or reduceat. If an output array is already provided, then
its shape is checked. If the output array is not C-contiguous,
aligned, and of the correct data type, then a temporary copy is made
with the UPDATEIFCOPY flag set. In this way, the methods will be able
to work with a well-behaved output array but the result will be copied
back into the true output array when the method computation is
complete. Finally, iterators are set up to loop over the correct axis
(depending on the value of axis provided to the method) and the setup
routine returns to the actual computation routine.
Reduce
^^^^^^
.. index::
triple: ufunc; methods; reduce
All of the ufunc methods use the same underlying 1-D computational
loops with input and output arguments adjusted so that the appropriate
reduction takes place. For example, the key to the functioning of
reduce is that the 1-D loop is called with the output and the second
input pointing to the same position in memory and both having a step-
size of 0. The first input is pointing to the input array with a step-
size given by the appropriate stride for the selected axis. In this
way, the operation performed is
.. math::
:nowrap:
\begin{align*}
o & = & i[0] \\
o & = & i[k]\textrm{<op>}o\quad k=1\ldots N
\end{align*}
where :math:`N+1` is the number of elements in the input, :math:`i`,
:math:`o` is the output, and :math:`i[k]` is the
:math:`k^{\textrm{th}}` element of :math:`i` along the selected axis.
This basic operation is repeated for arrays with greater than 1
dimension so that the reduction takes place for every 1-D sub-array
along the selected axis. An iterator with the selected dimension
removed handles this looping.
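The recurrence above can be checked directly against :obj:`numpy.add.reduce`; a sketch (not part of the original text):

```python
import numpy as np

i = np.array([2, 3, 4, 5])

# o = i[0]; o = i[k] <op> o for k = 1..N, here with <op> = add:
o = i[0]
for k in range(1, len(i)):
    o = i[k] + o
assert np.add.reduce(i) == o == 14

# With an axis argument, the reduction runs over every 1-D sub-array
# along the selected axis.
m = np.arange(6).reshape(2, 3)
assert (np.add.reduce(m, axis=0) == m.sum(axis=0)).all()
```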
For buffered loops, care must be taken to copy and cast data before
the loop function is called because the underlying loop expects
aligned data of the correct data-type (including byte-order). The
buffered loop must handle this copying and casting prior to calling
the loop function on chunks no greater than the user-specified
bufsize.
Accumulate
^^^^^^^^^^
.. index::
triple: ufunc; methods; accumulate
The accumulate function is very similar to the reduce function in that
the output and the second input both point to the output. The
difference is that the second input points to memory one stride behind
the current output pointer. Thus, the operation performed is
.. math::
:nowrap:
\begin{align*}
o[0] & = & i[0] \\
o[k] & = & i[k]\textrm{<op>}o[k-1]\quad k=1\ldots N.
\end{align*}
The output has the same shape as the input and each 1-D loop operates
over :math:`N` elements when the shape in the selected axis is :math:`N+1`.
Again, buffered loops take care to copy and cast the data before
calling the underlying 1-D computational loop.
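As a quick sketch of the accumulate recurrence (not part of the original text):

```python
import numpy as np

i = np.array([1, 2, 3, 4])
out = np.add.accumulate(i)

# o[0] = i[0]; o[k] = i[k] <op> o[k-1], giving a running sum here.
assert (out == np.array([1, 3, 6, 10])).all()
```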
Reduceat
^^^^^^^^
.. index::
triple: ufunc; methods; reduceat
single: ufunc
The reduceat function is a generalization of both the reduce and
accumulate functions. It implements a reduce over ranges of the input
array specified by indices. The extra indices argument is checked to
be sure that every input is not too large for the input array along
the selected dimension before the loop calculations take place. The
loop implementation is handled using code that is very similar to the
reduce code repeated as many times as there are elements in the
indices input. In particular: the first input pointer passed to the
underlying 1-D computational loop points to the input array at the
correct location indicated by the index array. In addition, the output
pointer and the second input pointer passed to the underlying 1-D loop
point to the same position in memory. The size of the 1-D
computational loop is fixed to be the difference between the current
index and the next index (when the current index is the last index,
then the next index is assumed to be the length of the array along the
selected dimension). In this way, the 1-D loop will implement a reduce
over the specified indices.
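A sketch of the index-range behavior described above (not part of the original text):

```python
import numpy as np

x = np.arange(8)            # [0 1 2 3 4 5 6 7]
idx = [0, 4, 6]

# Each output element is a reduce over x[idx[k]:idx[k+1]]; the last
# range runs to the end of the axis.
out = np.add.reduceat(x, idx)
assert (out == np.array([6, 9, 13])).all()
```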
Mis-aligned data, or a loop data-type that does not match the input and/or
output data-type is handled using buffered code where-in data is
copied to a temporary buffer and cast to the correct data-type if
necessary prior to calling the underlying 1-D function. The temporary
buffers are created in (element) sizes no bigger than the
user-settable buffer-size value. Thus, the loop must be flexible enough to
call the underlying 1-D computational loop enough times to complete
the total calculation in chunks no bigger than the buffer-size.
***************
Numpy internals
***************
.. toctree::
internals.code-explanations
.. automodule:: numpy.doc.internals
.. currentmodule:: numpy.ma
.. _numpy.ma.constants:
Constants of the :mod:`numpy.ma` module
=======================================
In addition to the :class:`MaskedArray` class, the :mod:`numpy.ma` module
defines several constants.
.. data:: masked
The :attr:`masked` constant is a special case of :class:`MaskedArray`,
with a float datatype and a null shape. It is used to test whether a
specific entry of a masked array is masked, or to mask one or several
entries of a masked array::
>>> x = ma.array([1, 2, 3], mask=[0, 1, 0])
>>> x[1] is ma.masked
True
>>> x[-1] = ma.masked
>>> x
masked_array(data = [1 -- --],
mask = [False True True],
fill_value = 999999)
.. data:: nomask
Value indicating that a masked array has no invalid entry.
:attr:`nomask` is used internally to speed up computations when the mask
is not needed.
.. data:: masked_print_options
String used in lieu of missing data when a masked array is printed.
By default, this string is ``'--'``.
.. _maskedarray.baseclass:
The :class:`MaskedArray` class
==============================
.. class:: MaskedArray
A subclass of :class:`~numpy.ndarray` designed to manipulate numerical arrays with missing data.
An instance of :class:`MaskedArray` can be thought as the combination of several elements:
* The :attr:`~MaskedArray.data`, as a regular :class:`numpy.ndarray` of any shape or datatype (the data).
* A boolean :attr:`~numpy.ma.MaskedArray.mask` with the same shape as the data, where a ``True`` value indicates that the corresponding element of the data is invalid.
The special value :const:`nomask` is also acceptable for arrays without named fields, and indicates that no data is invalid.
* A :attr:`~numpy.ma.MaskedArray.fill_value`, a value that may be used to replace the invalid entries in order to return a standard :class:`numpy.ndarray`.
Attributes and properties of masked arrays
------------------------------------------
.. seealso:: :ref:`Array Attributes <arrays.ndarray.attributes>`
.. attribute:: MaskedArray.data
Returns the underlying data, as a view of the masked array.
If the underlying data is a subclass of :class:`numpy.ndarray`, it is
returned as such.
>>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]])
>>> x.data
matrix([[1, 2],
[3, 4]])
The type of the data can be accessed through the :attr:`baseclass`
attribute.
.. attribute:: MaskedArray.mask
Returns the underlying mask, as an array with the same shape and structure
as the data, but where all fields are atomically booleans.
A value of ``True`` indicates an invalid entry.
.. attribute:: MaskedArray.recordmask
Returns the mask of the array if it has no named fields. For structured
arrays, returns an ndarray of booleans where entries are ``True`` if **all**
the fields are masked, ``False`` otherwise::
>>> x = ma.array([(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)],
... mask=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)],
... dtype=[('a', int), ('b', int)])
>>> x.recordmask
array([False, False, True, False, False], dtype=bool)
.. attribute:: MaskedArray.fill_value
Returns the value used to fill the invalid entries of a masked array.
The value is either a scalar (if the masked array has no named fields),
or a 0-D ndarray with the same :attr:`dtype` as the masked array if it has
named fields.
The default filling value depends on the datatype of the array:
======== ========
datatype default
======== ========
bool True
int 999999
float 1.e20
complex 1.e20+0j
object '?'
string 'N/A'
======== ========
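The defaults in the table above can be verified directly (an illustrative sketch, not part of the original text):

```python
import numpy.ma as ma

# The default fill_value follows the datatype of the array.
assert ma.array([1, 2, 3]).fill_value == 999999     # int
assert ma.array([1.0, 2.0]).fill_value == 1e20      # float
assert ma.array([True]).fill_value == True          # bool
```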
.. attribute:: MaskedArray.baseclass
Returns the class of the underlying data.
>>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 0], [1, 0]])
>>> x.baseclass
<class 'numpy.matrixlib.defmatrix.matrix'>
.. attribute:: MaskedArray.sharedmask
Returns whether the mask of the array is shared between several masked arrays.
If this is the case, any modification to the mask of one array will be
propagated to the others.
.. attribute:: MaskedArray.hardmask
Returns whether the mask is hard (``True``) or soft (``False``).
When the mask is hard, masked entries cannot be unmasked.
As :class:`MaskedArray` is a subclass of :class:`~numpy.ndarray`, a masked array also inherits all the attributes and properties of a :class:`~numpy.ndarray` instance.
.. autosummary::
:toctree: generated/
MaskedArray.base
MaskedArray.ctypes
MaskedArray.dtype
MaskedArray.flags
MaskedArray.itemsize
MaskedArray.nbytes
MaskedArray.ndim
MaskedArray.shape
MaskedArray.size
MaskedArray.strides
MaskedArray.imag
MaskedArray.real
MaskedArray.flat
MaskedArray.__array_priority__
:class:`MaskedArray` methods
============================
.. seealso:: :ref:`Array methods <array.ndarray.methods>`
Conversion
----------
.. autosummary::
:toctree: generated/
MaskedArray.__float__
MaskedArray.__hex__
MaskedArray.__int__
MaskedArray.__long__
MaskedArray.__oct__
MaskedArray.view
MaskedArray.astype
MaskedArray.byteswap
MaskedArray.compressed
MaskedArray.filled
MaskedArray.tofile
MaskedArray.toflex
MaskedArray.tolist
MaskedArray.torecords
MaskedArray.tostring
Shape manipulation
------------------
For reshape, resize, and transpose, the single tuple argument may be
replaced with ``n`` integers which will be interpreted as an n-tuple.
.. autosummary::
:toctree: generated/
MaskedArray.flatten
MaskedArray.ravel
MaskedArray.reshape
MaskedArray.resize
MaskedArray.squeeze
MaskedArray.swapaxes
MaskedArray.transpose
MaskedArray.T
Item selection and manipulation
-------------------------------
For array methods that take an *axis* keyword, it defaults to `None`.
If axis is *None*, then the array is treated as a 1-D array.
Any other value for *axis* represents the dimension along which
the operation should proceed.
.. autosummary::
:toctree: generated/
MaskedArray.argmax
MaskedArray.argmin
MaskedArray.argsort
MaskedArray.choose
MaskedArray.compress
MaskedArray.diagonal
MaskedArray.fill
MaskedArray.item
MaskedArray.nonzero
MaskedArray.put
MaskedArray.repeat
MaskedArray.searchsorted
MaskedArray.sort
MaskedArray.take
Pickling and copy
-----------------
.. autosummary::
:toctree: generated/
MaskedArray.copy
MaskedArray.dump
MaskedArray.dumps
Calculations
------------
.. autosummary::
:toctree: generated/
MaskedArray.all
MaskedArray.anom
MaskedArray.any
MaskedArray.clip
MaskedArray.conj
MaskedArray.conjugate
MaskedArray.cumprod
MaskedArray.cumsum
MaskedArray.max
MaskedArray.mean
MaskedArray.min
MaskedArray.prod
MaskedArray.product
MaskedArray.ptp
MaskedArray.round
MaskedArray.std
MaskedArray.sum
MaskedArray.trace
MaskedArray.var
Arithmetic and comparison operations
------------------------------------
.. index:: comparison, arithmetic, operation, operator
Comparison operators:
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated/
MaskedArray.__lt__
MaskedArray.__le__
MaskedArray.__gt__
MaskedArray.__ge__
MaskedArray.__eq__
MaskedArray.__ne__
Truth value of an array (:func:`bool()`):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated/
MaskedArray.__nonzero__
Arithmetic:
~~~~~~~~~~~
.. autosummary::
:toctree: generated/
MaskedArray.__abs__
MaskedArray.__add__
MaskedArray.__radd__
MaskedArray.__sub__
MaskedArray.__rsub__
MaskedArray.__mul__
MaskedArray.__rmul__
MaskedArray.__div__
MaskedArray.__rdiv__
MaskedArray.__truediv__
MaskedArray.__rtruediv__
MaskedArray.__floordiv__
MaskedArray.__rfloordiv__
MaskedArray.__mod__
MaskedArray.__rmod__
MaskedArray.__divmod__
MaskedArray.__rdivmod__
MaskedArray.__pow__
MaskedArray.__rpow__
MaskedArray.__lshift__
MaskedArray.__rlshift__
MaskedArray.__rshift__
MaskedArray.__rrshift__
MaskedArray.__and__
MaskedArray.__rand__
MaskedArray.__or__
MaskedArray.__ror__
MaskedArray.__xor__
MaskedArray.__rxor__
Arithmetic, in-place:
~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated/
MaskedArray.__iadd__
MaskedArray.__isub__
MaskedArray.__imul__
MaskedArray.__idiv__
MaskedArray.__itruediv__
MaskedArray.__ifloordiv__
MaskedArray.__imod__
MaskedArray.__ipow__
MaskedArray.__ilshift__
MaskedArray.__irshift__
MaskedArray.__iand__
MaskedArray.__ior__
MaskedArray.__ixor__
Representation
--------------
.. autosummary::
:toctree: generated/
MaskedArray.__repr__
MaskedArray.__str__
MaskedArray.ids
MaskedArray.iscontiguous
Special methods
---------------
For standard library functions:
.. autosummary::
:toctree: generated/
MaskedArray.__copy__
MaskedArray.__deepcopy__
MaskedArray.__getstate__
MaskedArray.__reduce__
MaskedArray.__setstate__
Basic customization:
.. autosummary::
:toctree: generated/
MaskedArray.__new__
MaskedArray.__array__
MaskedArray.__array_wrap__
Container customization: (see :ref:`Indexing <arrays.indexing>`)
.. autosummary::
:toctree: generated/
MaskedArray.__len__
MaskedArray.__getitem__
MaskedArray.__setitem__
MaskedArray.__delitem__
MaskedArray.__getslice__
MaskedArray.__setslice__
MaskedArray.__contains__
Specific methods
----------------
Handling the mask
~~~~~~~~~~~~~~~~~
The following methods can be used to access information about the mask or to
manipulate the mask.
.. autosummary::
:toctree: generated/
MaskedArray.__setmask__
MaskedArray.harden_mask
MaskedArray.soften_mask
MaskedArray.unshare_mask
MaskedArray.shrink_mask
Handling the `fill_value`
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated/
MaskedArray.get_fill_value
MaskedArray.set_fill_value
Counting the missing elements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated/
MaskedArray.count
.. currentmodule:: numpy.ma
.. _maskedarray.generic:
The :mod:`numpy.ma` module
==========================
Rationale
---------
Masked arrays are arrays that may have missing or invalid entries.
The :mod:`numpy.ma` module provides a nearly work-alike replacement for numpy
that supports data arrays with masks.
What is a masked array?
-----------------------
In many circumstances, datasets can be incomplete or tainted by the presence
of invalid data. For example, a sensor may have failed to record a value, or
recorded an invalid value. The :mod:`numpy.ma` module provides a convenient
way to address this issue, by introducing masked arrays.
A masked array is the combination of a standard :class:`numpy.ndarray` and a
mask. A mask is either :attr:`nomask`, indicating that no value of the
associated array is invalid, or an array of booleans that determines for each
element of the associated array whether the value is valid or not. When an
element of the mask is ``False``, the corresponding element of the associated
array is valid and is said to be unmasked. When an element of the mask is
``True``, the corresponding element of the associated array is said to be
masked (invalid).
The package ensures that masked entries are not used in computations.
As an illustration, let's consider the following dataset::
>>> import numpy as np
>>> import numpy.ma as ma
>>> x = np.array([1, 2, 3, -1, 5])
We wish to mark the fourth entry as invalid. The easiest is to create a masked
array::
>>> mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0])
We can now compute the mean of the dataset, without taking the invalid data
into account::
>>> mx.mean()
2.75
The :mod:`numpy.ma` module
--------------------------
The main feature of the :mod:`numpy.ma` module is the :class:`MaskedArray`
class, which is a subclass of :class:`numpy.ndarray`. The class, its
attributes and methods are described in more details in the
:ref:`MaskedArray class <maskedarray.baseclass>` section.
The :mod:`numpy.ma` module can be used as an addition to :mod:`numpy`: ::
>>> import numpy as np
>>> import numpy.ma as ma
To create an array with the second element invalid, we would do::
>>> y = ma.array([1, 2, 3], mask = [0, 1, 0])
To create a masked array where all values close to 1.e20 are invalid, we would
do::
>>> z = ma.masked_values([1.0, 1.e20, 3.0, 4.0], 1.e20)
For a complete discussion of creation methods for masked arrays please see
section :ref:`Constructing masked arrays <maskedarray.generic.constructing>`.
Using numpy.ma
==============
.. _maskedarray.generic.constructing:
Constructing masked arrays
--------------------------
There are several ways to construct a masked array.
* A first possibility is to directly invoke the :class:`MaskedArray` class.
* A second possibility is to use the two masked array constructors,
:func:`array` and :func:`masked_array`.
.. autosummary::
:toctree: generated/
array
masked_array
* A third option is to take the view of an existing array. In that case, the
mask of the view is set to :attr:`nomask` if the array has no named fields,
or an array of booleans with the same structure as the array otherwise.
>>> x = np.array([1, 2, 3])
>>> x.view(ma.MaskedArray)
masked_array(data = [1 2 3],
mask = False,
fill_value = 999999)
>>> x = np.array([(1, 1.), (2, 2.)], dtype=[('a',int), ('b', float)])
>>> x.view(ma.MaskedArray)
masked_array(data = [(1, 1.0) (2, 2.0)],
mask = [(False, False) (False, False)],
fill_value = (999999, 1e+20),
dtype = [('a', '<i4'), ('b', '<f8')])
* Yet another possibility is to use any of the following functions:
.. autosummary::
:toctree: generated/
asarray
asanyarray
fix_invalid
masked_equal
masked_greater
masked_greater_equal
masked_inside
masked_invalid
masked_less
masked_less_equal
masked_not_equal
masked_object
masked_outside
masked_values
masked_where
Accessing the data
------------------
The underlying data of a masked array can be accessed in several ways:
* through the :attr:`~MaskedArray.data` attribute. The output is a view of the
array as a :class:`numpy.ndarray` or one of its subclasses, depending on the
type of the underlying data at the masked array creation.
* through the :meth:`~MaskedArray.__array__` method. The output is then a
:class:`numpy.ndarray`.
* by directly taking a view of the masked array as a :class:`numpy.ndarray`
or one of its subclasses (which is actually what using the
:attr:`~MaskedArray.data` attribute does).
* by using the :func:`getdata` function.
None of these methods is completely satisfactory if some entries have been
marked as invalid. As a general rule, where a representation of the array is
required without any masked entries, it is recommended to fill the array with
the :meth:`filled` method.
Accessing the mask
------------------
The mask of a masked array is accessible through its :attr:`~MaskedArray.mask`
attribute. We must keep in mind that a ``True`` entry in the mask indicates an
*invalid* entry.
Another possibility is to use the :func:`getmask` and :func:`getmaskarray`
functions. :func:`getmask(x)` outputs the mask of ``x`` if ``x`` is a masked
array, and the special value :data:`nomask` otherwise. :func:`getmaskarray(x)`
outputs the mask of ``x`` if ``x`` is a masked array. If ``x`` has no invalid
entry or is not a masked array, the function outputs a boolean array of
``False`` with as many elements as ``x``.
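An illustrative sketch of the difference between the two functions (not part of the original text):

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 1, 0])
assert ma.getmask(x)[1] == True

# For a plain ndarray, getmask returns the special nomask value...
y = np.array([1, 2, 3])
assert ma.getmask(y) is ma.nomask

# ...while getmaskarray always returns a full boolean array.
assert (ma.getmaskarray(y) == np.array([False, False, False])).all()
```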
Accessing only the valid entries
---------------------------------
To retrieve only the valid entries, we can use the inverse of the mask as an
index. The inverse of the mask can be calculated with the
:func:`numpy.logical_not` function or simply with the ``~`` operator::
>>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]])
>>> x[~x.mask]
masked_array(data = [1 4],
mask = [False False],
fill_value = 999999)
Another way to retrieve the valid data is to use the :meth:`compressed`
method, which returns a one-dimensional :class:`~numpy.ndarray` (or one of its
subclasses, depending on the value of the :attr:`~MaskedArray.baseclass`
attribute)::
>>> x.compressed()
array([1, 4])
Note that the output of :meth:`compressed` is always 1D.
Modifying the mask
------------------
Masking an entry
~~~~~~~~~~~~~~~~
The recommended way to mark one or several specific entries of a masked array
as invalid is to assign the special value :attr:`masked` to them::
>>> x = ma.array([1, 2, 3])
>>> x[0] = ma.masked
>>> x
masked_array(data = [-- 2 3],
mask = [ True False False],
fill_value = 999999)
>>> y = ma.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> y[(0, 1, 2), (1, 2, 0)] = ma.masked
>>> y
masked_array(data =
[[1 -- 3]
[4 5 --]
[-- 8 9]],
mask =
[[False True False]
[False False True]
[ True False False]],
fill_value = 999999)
>>> z = ma.array([1, 2, 3, 4])
>>> z[:-2] = ma.masked
>>> z
masked_array(data = [-- -- 3 4],
mask = [ True True False False],
fill_value = 999999)
A second possibility is to modify the :attr:`~MaskedArray.mask` directly,
but this usage is discouraged.
.. note::
When creating a new masked array with a simple, non-structured datatype,
the mask is initially set to the special value :attr:`nomask`, which
roughly corresponds to the boolean ``False``. Trying to set an element of
:attr:`nomask` will fail with a :exc:`TypeError` exception, as a boolean
does not support item assignment.
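The failure mode described in the note, and the supported alternative, can be sketched as:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3])          # simple dtype, no mask specified

print(x.mask is ma.nomask)       # True: the mask starts out as nomask

try:
    x.mask[0] = True             # nomask is a boolean scalar, not an array...
except TypeError:
    print("TypeError: nomask does not support item assignment")

x[0] = ma.masked                 # ...so mark entries invalid this way instead
print(x.mask.tolist())
```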
All the entries of an array can be masked at once by assigning ``True`` to the
mask::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x.mask = True
>>> x
masked_array(data = [-- -- --],
mask = [ True True True],
fill_value = 999999)
Finally, specific entries can be masked and/or unmasked by assigning to the
mask a sequence of booleans::
>>> x = ma.array([1, 2, 3])
>>> x.mask = [0, 1, 0]
>>> x
masked_array(data = [1 -- 3],
mask = [False True False],
fill_value = 999999)
Unmasking an entry
~~~~~~~~~~~~~~~~~~
To unmask one or several specific entries, we can just assign one or several
new valid values to them::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x
masked_array(data = [1 2 --],
mask = [False False True],
fill_value = 999999)
>>> x[-1] = 5
>>> x
masked_array(data = [1 2 5],
mask = [False False False],
fill_value = 999999)
.. note::
Unmasking an entry by direct assignment will silently fail if the masked
array has a *hard* mask, as shown by the :attr:`hardmask` attribute. This
feature was introduced to prevent overwriting the mask. To force the
unmasking of an entry where the array has a hard mask, the mask must first
be softened using the :meth:`soften_mask` method before the assignment.
It can be re-hardened with :meth:`harden_mask`::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1], hard_mask=True)
>>> x
masked_array(data = [1 2 --],
mask = [False False True],
fill_value = 999999)
>>> x[-1] = 5
>>> x
masked_array(data = [1 2 --],
mask = [False False True],
fill_value = 999999)
>>> x.soften_mask()
>>> x[-1] = 5
>>> x
masked_array(data = [1 2 5],
mask = [False False False],
fill_value = 999999)
>>> x.harden_mask()
To unmask all masked entries of a masked array (provided the mask isn't a hard
mask), the simplest solution is to assign the constant :attr:`nomask` to the
mask::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x
masked_array(data = [1 2 --],
mask = [False False True],
fill_value = 999999)
>>> x.mask = ma.nomask
>>> x
masked_array(data = [1 2 3],
mask = [False False False],
fill_value = 999999)
Indexing and slicing
--------------------
As a :class:`MaskedArray` is a subclass of :class:`numpy.ndarray`, it inherits
its mechanisms for indexing and slicing.
When accessing a single entry of a masked array with no named fields, the
output is either a scalar (if the corresponding entry of the mask is
``False``) or the special value :attr:`masked` (if the corresponding entry of
the mask is ``True``)::
>>> x = ma.array([1, 2, 3], mask=[0, 0, 1])
>>> x[0]
1
>>> x[-1]
masked_array(data = --,
mask = True,
fill_value = 1e+20)
>>> x[-1] is ma.masked
True
If the masked array has named fields, accessing a single entry returns a
:class:`numpy.void` object if none of the fields are masked, or a 0d masked
array with the same dtype as the initial array if at least one of the fields
is masked::
>>> y = ma.masked_array([(1,2), (3, 4)],
... mask=[(0, 0), (0, 1)],
... dtype=[('a', int), ('b', int)])
>>> y[0]
(1, 2)
>>> y[-1]
masked_array(data = (3, --),
mask = (False, True),
fill_value = (999999, 999999),
dtype = [('a', '<i4'), ('b', '<i4')])
When accessing a slice, the output is a masked array whose
:attr:`~MaskedArray.data` attribute is a view of the original data, and whose
mask is either :attr:`nomask` (if there were no invalid entries in the original
array) or a copy of the corresponding slice of the original mask. The copy is
required to avoid propagating any modification of the mask to the original::
>>> x = ma.array([1, 2, 3, 4, 5], mask=[0, 1, 0, 0, 1])
>>> mx = x[:3]
>>> mx
masked_array(data = [1 -- 3],
mask = [False True False],
fill_value = 999999)
>>> mx[1] = -1
>>> mx
masked_array(data = [1 -1 3],
mask = [False True False],
fill_value = 999999)
>>> x.mask
array([False, True, False, False, True], dtype=bool)
>>> x.data
array([ 1, -1, 3, 4, 5])
Accessing a field of a masked array with structured datatype returns a
:class:`MaskedArray`.
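A minimal sketch of field access on a structured masked array:

```python
import numpy.ma as ma

y = ma.masked_array([(1, 2), (3, 4)],
                    mask=[(0, 0), (0, 1)],
                    dtype=[('a', int), ('b', int)])

# Each field comes back as a masked array carrying that field's
# slice of the record mask.
b = y['b']
print(type(b).__name__, b.mask.tolist())
```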
Operations on masked arrays
---------------------------
Arithmetic and comparison operations are supported by masked arrays.
As much as possible, invalid entries of a masked array are not processed,
meaning that the corresponding :attr:`data` entries *should* be the same
before and after the operation.
.. warning::
We need to stress that this behavior may not be systematic: masked data
may be affected by the operation in some cases, so users should not rely
on this data remaining unchanged.
The :mod:`numpy.ma` module comes with a specific implementation of most
ufuncs. Unary and binary functions that have a validity domain (such as
:func:`~numpy.log` or :func:`~numpy.divide`) return the :data:`masked`
constant whenever the input is masked or falls outside the validity domain::
>>> ma.log([-1, 0, 1, 2])
masked_array(data = [-- -- 0.0 0.69314718056],
mask = [ True True False False],
fill_value = 1e+20)
Masked arrays also support standard numpy ufuncs. The output is then a masked
array. The result of a unary ufunc is masked wherever the input is masked. The
result of a binary ufunc is masked wherever any of the inputs is masked. If the
ufunc also returns the optional context output (a 3-element tuple containing
the name of the ufunc, its arguments and its domain), the context is processed
and entries of the output masked array are masked wherever the corresponding
inputs fall outside the validity domain::
>>> x = ma.array([-1, 1, 0, 2, 3], mask=[0, 0, 0, 0, 1])
>>> np.log(x)
masked_array(data = [-- -- 0.0 0.69314718056 --],
mask = [ True True False False True],
fill_value = 1e+20)
Examples
========
Data with a given value representing missing data
-------------------------------------------------
Let's consider a list of elements, ``x``, where values of -9999. represent
missing data. We wish to compute the average value of the data and the vector
of anomalies (deviations from the average)::
>>> import numpy.ma as ma
>>> x = [0.,1.,-9999.,3.,4.]
>>> mx = ma.masked_values(x, -9999.)
>>> print(mx.mean())
2.0
>>> print(mx - mx.mean())
[-2.0 -1.0 -- 1.0 2.0]
>>> print(mx.anom())
[-2.0 -1.0 -- 1.0 2.0]
Filling in the missing data
---------------------------
Suppose now that we wish to print that same data, but with the missing values
replaced by the average value::
>>> print(mx.filled(mx.mean()))
[ 0. 1. 2. 3. 4.]
Numerical operations
--------------------
Numerical operations can be easily performed without worrying about missing
values, dividing by zero, square roots of negative numbers, etc.::
>>> import numpy as np, numpy.ma as ma
>>> x = ma.array([1., -1., 3., 4., 5., 6.], mask=[0,0,0,0,1,0])
>>> y = ma.array([1., 2., 0., 4., 5., 6.], mask=[0,0,0,0,0,1])
>>> print(np.sqrt(x/y))
[1.0 -- -- 1.0 -- --]
Four values of the output are invalid: the first one comes from taking the
square root of a negative number, the second from the division by zero, and
the last two where the inputs were masked.
Ignoring extreme values
-----------------------
Let's consider an array ``d`` of random floats between 0 and 1. We wish to
compute the average of the values of ``d`` while ignoring any data outside
the range ``[0.1, 0.9]``::
>>> d = np.random.uniform(low=0, high=1, size=50)
>>> print(ma.masked_outside(d, 0.1, 0.9).mean())
.. _maskedarray:
*************
Masked arrays
*************
Masked arrays are arrays that may have missing or invalid entries.
The :mod:`numpy.ma` module provides a nearly work-alike replacement for numpy
that supports data arrays with masks.
.. index::
single: masked arrays
.. toctree::
:maxdepth: 2
maskedarray.generic
maskedarray.baseclass
routines.ma
.. _routines.array-creation:
Array creation routines
=======================
.. seealso:: :ref:`Array creation <arrays.creation>`
.. currentmodule:: numpy
Ones and zeros
--------------
.. autosummary::
:toctree: generated/
empty
empty_like
eye
identity
ones
ones_like
zeros
zeros_like
From existing data
------------------
.. autosummary::
:toctree: generated/
array
asarray
asanyarray
ascontiguousarray
asmatrix
copy
frombuffer
fromfile
fromfunction
fromiter
fromstring
loadtxt
.. _routines.array-creation.rec:
Creating record arrays (:mod:`numpy.rec`)
-----------------------------------------
.. note:: :mod:`numpy.rec` is the preferred alias for
:mod:`numpy.core.records`.
.. autosummary::
:toctree: generated/
core.records.array
core.records.fromarrays
core.records.fromrecords
core.records.fromstring
core.records.fromfile
.. _routines.array-creation.char:
Creating character arrays (:mod:`numpy.char`)
---------------------------------------------
.. note:: :mod:`numpy.char` is the preferred alias for
:mod:`numpy.core.defchararray`.
.. autosummary::
:toctree: generated/
core.defchararray.array
core.defchararray.asarray
Numerical ranges
----------------
.. autosummary::
:toctree: generated/
arange
linspace
logspace
meshgrid
mgrid
ogrid
Building matrices
-----------------
.. autosummary::
:toctree: generated/
diag
diagflat
tri
tril
triu
vander
The Matrix class
----------------
.. autosummary::
:toctree: generated/
mat
bmat
Array manipulation routines
***************************
.. currentmodule:: numpy
Changing array shape
====================
.. autosummary::
:toctree: generated/
reshape
ravel
ndarray.flat
ndarray.flatten
Transpose-like operations
=========================
.. autosummary::
:toctree: generated/
rollaxis
swapaxes
ndarray.T
transpose
Changing number of dimensions
=============================
.. autosummary::
:toctree: generated/
atleast_1d
atleast_2d
atleast_3d
broadcast
broadcast_arrays
expand_dims
squeeze
Changing kind of array
======================
.. autosummary::
:toctree: generated/
asarray
asanyarray
asmatrix
asfarray
asfortranarray
asscalar
require
Joining arrays
==============
.. autosummary::
:toctree: generated/
column_stack
concatenate
dstack
hstack
vstack
Splitting arrays
================
.. autosummary::
:toctree: generated/
array_split
dsplit
hsplit
split
vsplit
Tiling arrays
=============
.. autosummary::
:toctree: generated/
tile
repeat
Adding and removing elements
============================
.. autosummary::
:toctree: generated/
delete
insert
append
resize
trim_zeros
unique
Rearranging elements
====================
.. autosummary::
:toctree: generated/
fliplr
flipud
reshape
roll
rot90