I have a vector of std::vector<SomeType>. The precise number of vectors containing SomeType is unknown at runtime. How best to merge the set into a single std::vector<SomeType> with the SomeTypes arranged one after another, i.e. such that the "columns" of the first vector are transposed into a single row?


12/22/2016 12:33:33 AM

On 22.12.2016 01:33, bitrex wrote:
> I have a vector of std::vector<SomeType>. The precise number of
> vectors containing SomeType is unknown at runtime.
>
> How best to merge the set into a single std::vector<SomeType> with
> the SomeTypes arranged one after another,

Define “best”.

> i.e. such that the "columns" of the first vector are transposed into
> a single row?

I don't understand this requirement.

This is simple amortized linear time code to concatenate some vectors:

    vector<int> a;
    vector<int> b;
    vector<int> c;

    // ...
    vector<int> all;
    for( vector<int>* pv : {&a, &b, &c} )
    {
        all.insert( all.end(), pv->begin(), pv->end() );
    }

To optimize this slightly you can `reserve` the requisite capacity for
the `all` vector before the loop.

Cheers & hth.,

- Alf


12/22/2016 1:09:46 AM

On 12/21/2016 08:09 PM, Alf P. Steinbach wrote:
> On 22.12.2016 01:33, bitrex wrote:
>> I have a vector of std::vector<SomeType>. The precise number of
>> vectors containing SomeType is unknown at runtime.
>>
>> How best to merge the set into a single std::vector<SomeType> with
>> the SomeTypes arranged one after another,
>
> Define “best”.
>
>> i.e. such that the "columns" of the first vector are transposed into
>> a single row?
>
> I don't understand this requirement

To clarify, I mean that if I have a vector of vector<string> and the
two vectors within contain respectively, front to back:

    {"H", "e", "l", "l", "o"}, {"W", "o", "r", "l", "d"}

I would like the output vector to contain, front to back:

    {"H", "e", "l", "l", "o", "W", "o", "r", "l", "d"}

> This is simple amortized linear time code to concatenate some vectors:
>
>     vector<int> a;
>     vector<int> b;
>     vector<int> c;
>
>     // ...
>     vector<int> all;
>     for( vector<int>* pv : {&a, &b, &c} )
>     {
>         all.insert( all.end(), pv->begin(), pv->end() );
>     }
>
> To optimize this slightly you can `reserve` the requisite capacity for
> the `all` vector before the loop.
>
> Cheers & hth.,
>
> - Alf

Thanks. The only thing is I don't know exactly how many will end up
needing to be merged when writing the code, as the number the first
vector will contain is a run-time decision based on input data.


12/22/2016 1:28:00 AM

bitrex <bitrex@de.lete.earthlink.net> writes:
> I have a vector of std::vector<SomeType>. The precise number of vectors
> containing SomeType is unknown at runtime.

... is only known as late as runtime.

> How best to merge the set into a single std::vector<SomeType> with the
> SomeTypes arranged one after another, i.e. such that the "columns" of
> the first vector are transposed into a single row?

    #include <initializer_list>
    #include <iostream>
    #include <ostream>
    #include <vector>
    #include <iterator>
    #include <algorithm>

    template< class T >
    ::std::vector< T > flatten
    ( ::std::vector< ::std::vector< T >> const & s )
    { ::std::vector< T > t;
      auto bt = back_inserter( t );
      for( auto const & v: s )
        copy( begin( v ), end( v ), bt );
      return t; }

    int main()
    { using T = int;
      ::std::vector< ::std::vector< T >> s { { 1, 2 }, { 3 }};
      ::std::vector< T > t = flatten( s ); }

Additional exercise: Generalize this to the case where the source
vector contains “entries”, each of which at runtime (polymorphically)
can be either a number or a list of numbers, nested to an arbitrary
depth, and then flatten this!

Hint:

    ( SETQ FLATTEN ( LAMBDA( X )
      ( COND
        ( ( NULLP X ) '() )
        ( ( ATOMP X ) X )
        ( T ( APPEND
          ( COND ( ( ATOMP( CAR X ))( LIST( CAR X )))
                 ( T ( FLATTEN( CAR X ))))
          ( FLATTEN( CDR X )))))))


12/22/2016 1:38:05 AM

On Wednesday, December 21, 2016 at 8:28:11 PM UTC-5, bitrex wrote:
> On 12/21/2016 08:09 PM, Alf P. Steinbach wrote:
>
>> This is simple amortized linear time code to concatenate some vectors:
>>
>>     vector<int> a;
>>     vector<int> b;
>>     vector<int> c;
>>
>>     // ...
>>     vector<int> all;
>>     for( vector<int>* pv : {&a, &b, &c} )
>>     {
>>         all.insert( all.end(), pv->begin(), pv->end() );
>>     }
>
> The only thing is I don't know exactly how many will end up
> needing to be merged when writing the code, as the number the first
> vector will contain is a run-time decision based on input data.

It doesn't matter. Given your input vector

    std::vector<std::vector<std::string>> input;

however populated, the same algorithm applies:

    size_t size = 0;
    for (const auto& v : input)
    {
        size += v.size();
    }

    std::vector<std::string> output;
    output.reserve(size);
    for (const auto& v : input)
    {
        output.insert(output.end(), v.begin(), v.end());
    }

Regards,
Daniel


12/22/2016 3:10:35 AM

ram@zedat.fu-berlin.de (Stefan Ram) writes:
> template< class T >
> ::std::vector< T > flatten
> ( ::std::vector< ::std::vector< T >> const & s )
> { ::std::vector< T > t;
>   auto bt = back_inserter( t );
>   for( auto const & v: s )
>     copy( begin( v ), end( v ), bt );
>   return t; }

Another approach might be

    ...
    using list = ::std::vector< T >;
    auto product = accumulate
    ( begin( source ), end( source ), list {},
      []( list & l, list & r )
      { copy( begin( r ), end( r ), back_inserter( l ));
        return l; } );
    ...

I hope that the temporary “list {}” lives long enough here, before the
product is being copied into “product”. Also, I hope that in spite of
the header “numeric”, accumulate can cope with data which is not
::std::is_arithmetic.

Of course, in practice, one should always measure whether a “reserve”
will make things faster (probably yes), but above I wanted to keep the
algorithm simple.


12/22/2016 12:55:00 PM

ram@zedat.fu-berlin.de (Stefan Ram) writes:
> auto product = accumulate
> ( begin( source ), end( source ), list {},
>   []( list & l, list & r )
>   { copy( begin( r ), end( r ), back_inserter( l ));
>     return l; } );

And in C++17, one can then possibly use “::std::reduce” instead of
“::std::accumulate”, which means that the algorithm then possibly can
be parallelized. But here, parallelization would mean many more copies
and might actually slow down the process, like, totally.


12/22/2016 3:02:12 PM