@@ -664,6 +664,57 @@ <h2>What is xtensor?</h2>
         </div>
     </div>
 </section>
+
+</section>
+<section>
+    <section>
+        <div class="splitting"></div>
+        <p>Iteration</p>
+        <div style="margin-top: 2%;">
+            <div class="left-panel">
+                Row-major iteration over the array <code>for x in np.nditer(a)</code>
+            </div>
+            <div class="right-panel">
+                <pre class="panel"><code class="cpp panel" data-trim>
+for(auto it=a.xbegin(); it!=a.xend(); ++it)
+                </code></pre>
+            </div>
+        </div>
+        <div style="margin-top: 2%;">
+            <div class="left-panel fragment">
+                Iterating over <code>a</code> with a prescribed broadcasting shape
+            </div>
+            <div class="right-panel fragment">
+                <pre class="panel"><code class="cpp panel" data-trim>
+a.xbegin({3, 4})
+a.xend({3, 4})
+                </code></pre>
+            </div>
+        </div>
+        <div style="margin-top: 2%;">
+            <div class="left-panel fragment">
+                Iterating over <code>a</code> in a column-major fashion
+            </div>
+            <div class="right-panel fragment">
+                <pre class="panel"><code class="cpp panel" data-trim>
+a.template xbegin&lt;layout_type::column_major&gt;()
+a.template xend&lt;layout_type::column_major&gt;()
+                </code></pre>
+            </div>
+        </div>
+        <div style="margin-top: 2%;">
+            <div class="left-panel fragment">
+                Iterating over <code>a</code> in a column-major fashion with a prescribed broadcasting shape
+            </div>
+            <div class="right-panel fragment">
+                <pre class="panel"><code class="cpp panel" data-trim>
+a.template xbegin&lt;layout_type::column_major&gt;({3, 4})
+a.template xend&lt;layout_type::column_major&gt;({3, 4})
+                </code></pre>
+            </div>
+        </div>
+    </section>
+</section>
 </section>
 <section>
     <section>
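The row-major vs. column-major traversal that the layout-parameterized iterators above expose can be pictured with a plain-C++ stand-in over a contiguous buffer (illustrative only; the function names `traverse_row_major` and `traverse_column_major` are hypothetical, not xtensor API):

```cpp
#include <cstddef>
#include <vector>

// Walk the same contiguous row-major buffer in two orders, mimicking what
// xtensor's layout-aware iterators do over a 2-D array.
std::vector<double> traverse_row_major(const std::vector<double>& buf,
                                       std::size_t rows, std::size_t cols) {
    std::vector<double> out;
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            out.push_back(buf[i * cols + j]);  // last axis varies fastest
    return out;
}

std::vector<double> traverse_column_major(const std::vector<double>& buf,
                                          std::size_t rows, std::size_t cols) {
    std::vector<double> out;
    for (std::size_t j = 0; j < cols; ++j)
        for (std::size_t i = 0; i < rows; ++i)
            out.push_back(buf[i * cols + j]);  // first axis varies fastest
    return out;
}
```

For a 2x3 buffer `{1,2,3,4,5,6}`, the row-major walk visits the values in storage order, while the column-major walk visits `1, 4, 2, 5, 3, 6`.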
@@ -1083,12 +1134,27 @@ <h3>A simple R extension</h3>
     <li>Packaging boilerplate (setup.py, build.jl)</li>
 </ul>
 </section>
+
+<section>
+    <img alt="xtensor-robot" src="robot.png" width="20%"/>
+    <p>Scientific computing in a polyglot world with xtensor</p>
+    <ul>
+        <li>Write a native extension once, operating on xtensor expressions</li>
+        <li>Get free bindings to R, Julia and Python</li>
+        <li>For more details on best practices, see
+            <p>
+                <a href="http://quantstack.net/c++/2017/05/30/polyglot-scientific-computing-with-xtensor.html">http://quantstack.net/c++/2017/05/30/polyglot-scientific-computing-with-xtensor.html</a>
+            </p>
+        </li>
+    </ul>
+</section>
+
 <section>
     <p>Bindings with BLAS libraries</p>
     <img alt="xtensor-blas" src="xtensor-blas.svg" width="55%"/>
-    <p>BLAS-based implementation of <code>numpy.linalg</code> module are implemented in xtensor-blas. </p>
+    <p>BLAS-based implementation of <code>numpy.linalg</code></p>
     <ul>
-        <li>ISO results with numpy.linalg is main goal</li>
+        <li>Identical results with <code>numpy.linalg</code> are the main goal</li>
         <li>Seeking adoption by Python to C++ compilers (Pythran, Jet)</li>
         <li>Works with any BLAS implementation (openblas, mkl, netlib)</li>
         <li>See the <a href="http://xtensor.readthedocs.io/en/latest/numpy.html">numpy to xtensor cheat sheet</a></li>
@@ -1097,17 +1163,17 @@ <h3>A simple R extension</h3>
 
 <section>
     <p>SIMD acceleration</p>
-    <img alt="xsimd" src="xsimd.svg" width="55 %"/>
+    <img alt="xsimd" src="xsimd.svg" width="35%"/>
     <ul>
         <li>Unified high-level API to AVX, SSE, Neon with arithmetic operations</li>
         <li>Accelerated mathematical functions</li>
         <li>Aligned memory allocation</li>
     </ul>
 </section>
 <section>
-    <p>Assuming that AVX is supported</p>
+    <p>xsimd example (assuming AVX support)</p>
     <pre class="panel">
-    <code class="julia">
+    <code class="cpp">
 #include &lt;iostream&gt;
 #include "xsimd/xsimd.hpp"
 
@@ -1122,11 +1188,11 @@ <h3>A simple R extension</h3>
     return 0;
 }
     </code>
-    <pre>
+    </pre>
 </section>
 <section>
-    <p>Accelerating loops</p>
-    <p>This applies to processing including basic math functions from cmath.</p>
+    <p>xsimd: accelerating loops</p>
+    <p>(This includes loops using the basic math functions from cmath.)</p>
     <pre class="panel">
     <code class="julia">
 #include &lt;cstddef&gt;
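The loop structure that xsimd automates can be sketched in plain scalar C++: process the bulk of the data in fixed-size batches, then finish the tail with a scalar remainder loop. This is a hedged illustration only; `BATCH` here stands in for an AVX register of 4 doubles, and real xsimd code would replace the inner loop with `xsimd` load/arithmetic/store operations:

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t BATCH = 4;  // stand-in for an AVX batch of 4 doubles

// Element-wise mean of two vectors, written with the batch + remainder
// structure that SIMD loops use.
void mean(const std::vector<double>& a, const std::vector<double>& b,
          std::vector<double>& res) {
    std::size_t size = a.size();
    std::size_t vec_size = size - size % BATCH;  // largest multiple of BATCH
    for (std::size_t i = 0; i < vec_size; i += BATCH) {
        // with xsimd, this whole batch would be one SIMD add and multiply
        for (std::size_t k = 0; k < BATCH; ++k)
            res[i + k] = (a[i + k] + b[i + k]) / 2.0;
    }
    for (std::size_t i = vec_size; i < size; ++i)  // scalar remainder
        res[i] = (a[i] + b[i]) / 2.0;
}
```

Splitting the trip count this way keeps the hot loop free of per-element branching; only the last `size % BATCH` elements fall through to the scalar tail.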
@@ -1156,17 +1222,18 @@ <h3>A simple R extension</h3>
     }
 }
     </code>
-    <pre>
+    </pre>
 </section>
 
 <section>
     <p>What is coming?</p>
     <div style="margin-top: 5%; margin-bottom: 10%;">
         <ul>
-            <li>Pluging xsimd into xtensor for SIMD acceleration</li>
-            <li class="fragment" style="margin-top: 5%;">Bindings with Apache Arrow</li>
+            <li>Plugging xsimd into xtensor for SIMD acceleration</li>
+            <li class="fragment" style="margin-top: 5%;">Bindings with Apache Arrow, HDF5, FITS, NetCDF</li>
             <li class="fragment" style="margin-top: 5%;">GPU acceleration</li>
-            <li class="fragment" style="margin-top: 5%;">ODBC-databases backed expressions</li>
+            <li class="fragment" style="margin-top: 5%;">Expressions backed by ODBC databases</li>
+            <li class="fragment" style="margin-top: 5%;">Arrays with labeled dimensions</li>
             <li class="fragment" style="margin-top: 5%;">World domination</li>
         </ul>
     </div>
@@ -1240,6 +1307,48 @@ <h3>A simple R extension</h3>
         </div>
     </div>
 </section>
+<section>
+    <h2>The End</h2>
+</section>
+<section>
+    <h3>Aliasing (Appendix)</h3>
+    <div style="width: 70%; margin-left: auto; margin-right: auto; text-align: left;">
+        <p>In an assignment, aliasing occurs when the LHS refers to the same memory as a member of the RHS expression.</p>
+        <pre class="panel">
+        <code class="cpp">
+a = b + c.dot(a);
+        </code>
+        </pre>
+        <p>By default, xtensor will solve that issue by assigning the result to a temporary variable that is then <em>moved</em> to the left-hand side.</p>
+        <p>If the user <em>knows</em> that no aliasing occurs in an assignment, they can save this extra allocation with the <code>noalias</code> function.</p>
+        <pre class="panel">
+        <code class="cpp">
+noalias(a) = b + c.dot(d);
+        </code>
+        </pre>
+    </div>
+</section>
+<section>
+    <h3>Static and dynamic dimensionality (Appendix)</h3>
+    <div style="width: 70%; margin-left: auto; margin-right: auto; text-align: left; font-size: 30px;">
+        <p>The xtensor library provides two container types:</p>
+        <ul>
+            <li><code>xarray&lt;double&gt;</code></li>
+            <li><code>xtensor&lt;double, N&gt;</code></li>
+        </ul>
+        <p>The latter encodes the number of dimensions in the type:</p>
+        <ul>
+            <li>Stack-allocated shapes and strides.</li>
+            <li>Faster iteration.</li>
+        </ul>
+        <p>What about expressions?</p>
+        <ul>
+            <li>The <code>promote_shape</code> mechanism selects the optimal shape type for an expression at compile time.</li>
+        </ul>
+    </div>
+</section>
 </div>
 </div>
 
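The aliasing hazard described in the appendix can be shown concretely in plain C++: when the destination overlaps an operand, writing results in place corrupts inputs that are still needed. This is an illustrative stand-in, not xtensor code; the "reverse-add" operation and function names are hypothetical. The temporary-based version models xtensor's default behavior, and the in-place version models what `noalias` would do, which is only safe when no aliasing exists:

```cpp
#include <cstddef>
#include <vector>

// out[i] = in[i] + in[n-1-i], computed into a temporary first:
// safe even if the caller assigns the result back to `in`.
std::vector<int> reverse_add_with_temporary(const std::vector<int>& in) {
    std::vector<int> tmp(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        tmp[i] = in[i] + in[in.size() - 1 - i];
    return tmp;  // moved to the LHS, like xtensor's default assignment
}

// Same operation written in place ("noalias"-style, no extra allocation):
// WRONG here, because later iterations read values earlier ones overwrote.
void reverse_add_in_place(std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = v[i] + v[v.size() - 1 - i];
}
```

On `{1, 2, 3}`, the temporary version yields `{4, 4, 4}`, while the in-place version yields `{4, 4, 7}`: the last element adds the already-overwritten first element. Skipping the temporary is an optimization the user may apply only when the operands do not share memory with the destination.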