Quicksort code

Hi all,

Does anyone have C++ MPI code for the parallel quicksort algorithm that I
could have a look at, to help me implement a parallel sorting step in my
program?

Thanks,

Burnce
Burnce
2/15/2005 5:18:35 PM

Burnce wrote:
> Hi all,
> 
> Does anyone have C++ MPI code for the parallel quicksort algorithm that I
> could have a look at, to help me implement a parallel sorting step in my
> program?
> 
> Thanks,
> 
> Burnce

I think that a parallel quicksort can only be done efficiently with
shared memory. A common way to parallelize sorting with MPI is to give
each process a chunk of the array, sort the chunks locally with whatever
algorithm fits, and then merge them, either in a tree-like fashion or in
one place.
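
Very roughly, that approach looks like the sketch below. It is untested
and purely an illustration: the element count, the MPI_Scatter/MPI_Gather
collectives and the final merge on rank 0 are just assumptions of mine,
not code from any existing library.

#include <mpi.h>
#include <algorithm>
#include <cstdlib>
#include <vector>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int id, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    const int n = 1 << 20;     /* total element count (assumed); must divide evenly by p */
    const int chunk = n / p;

    std::vector<int> data;
    if (id == 0) {             /* rank 0 holds the full array */
        data.resize(n);
        std::generate(data.begin(), data.end(), std::rand);
    }

    /* distribute one chunk to every process */
    std::vector<int> local(chunk);
    MPI_Scatter(data.data(), chunk, MPI_INT,
                local.data(), chunk, MPI_INT, 0, MPI_COMM_WORLD);

    /* local sort -- std::sort is typically an introsort, i.e. quicksort-based */
    std::sort(local.begin(), local.end());

    /* collect the sorted chunks and merge them "in one place" on rank 0 */
    MPI_Gather(local.data(), chunk, MPI_INT,
               data.data(), chunk, MPI_INT, 0, MPI_COMM_WORLD);

    if (id == 0) {
        for (int k = 1; k < p; ++k)   /* fold chunk k into the already sorted prefix */
            std::inplace_merge(data.begin(),
                               data.begin() + k * chunk,
                               data.begin() + (k + 1) * chunk);
    }

    MPI_Finalize();
    return 0;
}

The local sorts run fully in parallel, but the final merge here is serial
on rank 0. For larger process counts the tree-like variant, where pairs of
processes exchange and merge their chunks at each level, scales better and
avoids holding the whole array on one node.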

Georg
2/16/2005 2:01:10 PM
On 2005-02-15 11:18:35 -0600, Burnce <Burnce@gmx.net> said:

> Hi all,
> 
> Does anyone have C++ MPI code for the parallel quicksort algorithm that I
> could have a look at, to help me implement a parallel sorting step in my
> program?
> 
> Thanks,
> 
> Burnce

I wrote a parallel quicksort implementation in MPI using C a few
semesters ago. Send me an email if you want to discuss it.

Sam

Sam
2/18/2005 4:09:38 AM
Hello,

Did you find a solution to the problem? I'd like to see that code too,
and then port it to Python.

Feel free to send it to my email if you can.

Thanks,
Valerio

Burnce wrote:

> Does anyone have C++ MPI code for the parallel quicksort algorithm that I
> could have a look at, to help me implement a parallel sorting step in my
> program?
valerio
6/2/2005 11:14:02 AM
Hi all,

The original post and most replies have already expired on my news server,
now that the thread has caught my eye. I should have paid more attention.

What is the complexity of the parallel algorithm?
Is it possible to obtain superlinear speedup?
Have these questions been discussed previously in the thread?

Thanks in advance

-javier
ISO
6/2/2005 12:44:55 PM