Parallel Processing

Hi

I have developed a code which takes a couple of hours to run, and I am aware that IDL automatically parallelizes some vector operations, so one should prefer those instead of looping through arrays.

I have done all that, but I know I could still speed things up by a factor of 2 by doing certain things on 2 cores.

For instance, somewhere in the program I pass some arrays to a function, and this function then returns an equally large array with some calculated values. This is all done on one core, since the operations in the function are not parallelized.

However, I could split the input arrays into two equally large parts and perform the calculations for each of those two on one core. In the end, when both are finished, I could just concatenate the result arrays.

Is this possible in some easy way?

thanks for your help :)
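(For later readers: the split-and-concatenate part itself is plain array indexing; a serial sketch, where my_func and input are hypothetical names standing in for the poster's function and data:)

  n = n_elements(input)
  half1 = my_func(input[0 : n/2 - 1])    ; first half
  half2 = my_func(input[n/2 : *])        ; second half
  result = [half1, half2]                ; concatenate back to full size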
6/28/2012 2:05:24 PM
comp.lang.idl-pvwave

On Thursday, June 28, 2012 10:05:24 AM UTC-4, stefan....@gmail.com wrote:
> However, I could split up the input arrays into two equally large parts
> and perform the calculations for each of those two on one core. In the
> end, when both are finished I could just concatenate the result-arrays.
>
> Is this possible in some easy way?


Hi Stefan,
I know of at least three ways to skin this cat, and they depend on the details of your problem and how much money you're willing to spend.

(1) Simply break the problem into several smaller bits, and run each bit in a different IDL session.  The OS will naturally divvy up the computations accordingly.  Of course, this is the easiest, but it may not be practical for your problem.

(2) Employ the IDL_IDLBridge architecture built by ITT for IDL.  This adds a new level of programming, but it's a way of running multiple IDL commands simultaneously.  At first glance, this sounds tedious --- but I've done it 1000s of times and it's not too bad.  This works best for problems where you want to perform some long task on one set of data and then go on to the next set --- and the sets are unrelated.  For example, you have 100 objects and each object must be processed identically.

(3) There is some aftermarket software (I believe it's called FastDL) that you can use to do this.  I'm not too familiar with it, but you can google it.

Russell
rryan
6/28/2012 2:27:47 PM
On Thursday, June 28, 2012 4:27:47 PM UTC+2, Russell wrote:
> (2) Employ the IDL_IDLBridge architecture built by ITT for IDL.  This
> adds a new level of programming, but it's a way of running multiple IDL
> commands simultaneously.

Thanks for your fast reply. I was thinking of splitting everything up into two parts by just starting two sessions separately with different job assignments and then merging the results at the end. For me this approach is not attractive, since a) I would have to set up two different initial sets each time I run something, and b) I wouldn't learn anything about parallel processing.

I think I will look into the IDL_IDLBridge and see what's in there.

thanks
cheers
6/28/2012 2:35:05 PM
On Thursday, June 28, 2012 10:05:24 AM UTC-4, stefan....@gmail.com wrote:
> Is this possible in some easy way?

Yeah, that sounds like it's what you want.  Post back if it's not clear how to proceed.  I use this stuff all the time for several big pipelines, but it only works if you've got a relatively small amount of data to process (which may take a long time).  Then you want to repeat this procedure for many similar units of data.

Russell
rryan
6/28/2012 4:40:15 PM
On Thursday, June 28, 2012 6:40:15 PM UTC+2, Russell wrote:
> Yeah, that sounds like it's what you want.  Post back if it's not clear
> how to proceed.

Hey

It works, but I had to take a step back since I ran into a quite ridiculous problem...

It actually takes longer for IDL to set up, say, two child processes and then run them on separate cores than to run through 2 iterations on one core.

So I implemented the IDL_IDLBridge right at the beginning and separated the whole process into 2 substeps, not just this one step where I have to do one calculation.

If someone else should read this in the future:
It took me about 10 minutes to set up my first code to run on 2 cores without knowing ANYTHING about that stuff.

Here is what guided me through:
http://slugidl.pbworks.com/w/page/29199259/Child%20Processes

very easy
cheers
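(A minimal sketch of that kind of two-bridge split, for future readers; my_calc, input and n are hypothetical names, and my_calc must be compiled or on the path in the child sessions:)

  bridges = objarr(2)
  for i = 0, 1 do begin
    bridges[i] = obj_new('IDL_IDLBridge')
    bridges[i]->SetVar, 'x', input[i*n/2 : (i+1)*n/2 - 1]  ; copy one half to the child
    bridges[i]->Execute, 'y = my_calc(x)', /NOWAIT         ; run asynchronously
  endfor
  ; Status() eq 1 means a child is still executing
  while (bridges[0]->Status() eq 1) || (bridges[1]->Status() eq 1) do wait, 0.5
  result = [bridges[0]->GetVar('y'), bridges[1]->GetVar('y')]
  obj_destroy, bridges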
6/28/2012 5:02:51 PM
Hi

Now that I have successfully implemented multi-threading, another problem occurred:

To invoke multiple processes I start in a loop with

bridges[i]->EXECUTE, 'program,par1,par2', /NOWAIT

where 'bridges' is an object array which holds the different child processes.
Upon the execution of the last process I do

bridges[i]->EXECUTE, 'program,par3,par4'

And after that I destroy my bridges in a loop.

Now I have a problem if the last process finishes before one of the previous processes, since upon its completion the program will directly move to the part where all bridges are destroyed, and that kills my program...

Is there an easy way to tell IDL to wait for all my processes to finish and then destroy the bridges?

thanks
Stefan
7/5/2012 2:50:38 PM
On Thursday, July 5, 2012 4:52:23 PM UTC+2, Stefan wrote:
> Is there an easy way to tell IDL to wait for all my processes to finish and then destroy the bridges?

Use IDL_IDLBridge::Status to determine whether a job is finished. Sketched in IDL:

ndone = 0
running = replicate(1, njobs)
while ndone lt njobs do begin
    for j = 0, njobs-1 do begin
        if running[j] then begin
            ; Status() eq 1 means the j-th job is still executing
            if bridges[j]->Status() ne 1 then begin
                obj_destroy, bridges[j]
                running[j] = 0
                ndone++
            endif
        endif
    endfor
    wait, 1
endwhile


regards,
Lajos
7/5/2012 3:13:02 PM
On Thursday, July 5, 2012 5:13:02 PM UTC+2, fawltyl...@gmail.com wrote:
> Use IDL_IDLBridge::Status to determine whether the job is finished.

Ah, I see you put a WAIT in there if it's not finished. I had thought of using this in a loop, but without a WAIT you spend too many resources.

Thanks! :)
7/5/2012 3:16:27 PM
On Thursday, July 5, 2012 11:16:27 AM UTC-4, Stefan wrote:
> Ah, I see you put WAIT there if it's not finished. I thought of using
> this in a loop but without putting WAIT you spend too many resources.

Yeah, I put a wait in there too.  I've thought about "calibrating" the wait time.  It seems to me that if your process takes, say, 30 minutes, then polling the bridges every 1 second seems like overkill.  My gut feeling is that this wait time should be roughly 1/3 or 1/4 the expected time per process, but of course you'd want to ensure that it's always waiting a minimum of ~0.5 s.

Russell
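(That rule of thumb fits in one line with IDL's maximum operator ">"; t_expected is a hypothetical estimate of the per-process run time in seconds:)

  poll_interval = (t_expected / 4.0) > 0.5  ; a quarter of the expected time, never below 0.5 s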
rryan
7/5/2012 3:29:21 PM
On July 5, 5:16 PM, Stefan <stefan.meing...@gmail.com> wrote:
> Ah, I see you put WAIT there if it's not finished.
My way to manage that is to use /NOWAIT, but wait inside a loop, as follows:

  repeat begin
    wait, 1
    test = 1B
    foreach b, bridges do test AND= (b.Status() ne 1)
  endrep until test

alain.
7/5/2012 3:53:26 PM
Reply:

Similar Artilces:

Parallel Processing in IDL
What's an easy way to use multiple processors in IDL? I have a large program but I want to start with a simple program first. Let's say I have an array of a million integers and I have four processors. I want to add all the elements in this array and output it. I could split the array into four parts and give it to each processor right? Is this possible? How would I do this? Thanks! On 10/11/10 10:35 AM, Ammar Yusuf wrote: > What's an easy way to use multiple processors in IDL? I have a large > program but I want to start with a simple program first. Let's say I &g...

[ANN] parallel 1.3: simple parallel processing.
Version 1.3 of ssh_parallel and parallel have been released. This release features much better error handling in ssh_parallel, so network errors are less likely to cause jobs to go missing. Additionally, it allows you to specify a nice level for processes. Same as the last time, I'm interested in any comments/critiques anyone may have. Original release notes (with updated URLs): If you ever write a shell script along the lines of: some | pipieline | bash then you may be interested in my program parallel. The above line would be written as: some | pipeline | parallel 4 to perform ...

parallel processing
HI, I have a tricky problem about parallel processing using JavaScript. a script makes use of classes. 2 objects A and B are created at intialisation. The two objects make use of the same function foo(). at runtime, the action: A.foo(); B.foo(); is executed. I noticed that in any browser, the result will be that the browser executes the function foo() related to A. Then stop and execute the function foo() related to B. This leave A unfinished. Is there a possiblity to have the function called by A running at the same time than B's one? I mean having two instance of the function running ...

parallel process
How to make parallel calculations on SKILL ? one way i can see is to launch several external processes using interprocess communication (ipc) "Andrey Orlenko" <eagle@ukr.net> wrote in message news:c4gldj$2mf2$1@news.ntu-kpi.kiev.ua... > How to make parallel calculations on SKILL ? S. Badel wrote: > one way i can see is to launch several external processes > using interprocess communication (ipc) > >>How to make parallel calculations on SKILL ? My situation: I've got program GUI on SKILL, it starts external program / sh("./program &") /. External program is calculating and GUI must at the same time output intermediate data to window from external program (it's clear) AND GUI must respond to user actions (stop program, pause, exit, e.t.c) you probably can do this with IPC like this : instead of launching your program with sh, lauch it with ipcBeginProcess. you can define skill callback functions to synchronously process output from the program. then, execution can continue normally and callbacks will be called whenever data is available. procedure( launchProcess() process = ipcBeginProcess( "command" nil "myDataHandler" "myErrHandler" "myPostExecFunction" ) ) ; procedure procedure( myDataHandler( childId data ) printf( "program outputted %s\n" data ) ) ; myDataHandler procedure( myErrorHandler( childId data ) printf( "program error : %s\n"...

Parallel processing
Hello, I'm trying to implement a parallel process, and I'm not sure how to set it up and I was wondering if someone could help me please? My code resembles, for i=1:N % N approx = 10^6 x_new = my_func(x_old) A(:,i) = x_new; x_old = x_new; end where x_old and x_new are Mx1 vectors and A is a MxN matrix. From one iteration to the next, the only dependence is x_new(i) on x_old(i). I'd like to parallelise this by splitting the elements of x_new and x_old across more than one core. Is there a way I can, say, matlabpool open 5 <use core's ...

parallel processing
Hi, I'm not quite sure what 'feature' i'm looking for ... any input appreciated. I want to parallelize a particular task. #/usr/bin/perl -w use strict; my @target; our @result; for (my $i = 0; $i < @target; $i++) { $result[$i] = &do_some_work($target[$i]); } &report_results; .... &do_some_work requires a minute or so to complete. @target contains several hundred elements. Therefore, total execution time runs in the hundreds of minutes. Also, @target is not ordered ... e.g. there are no dependencies within @target ... if &do_some_work finishes proce...

Parallel processing
Hello, I am running version 7.01 on a quad core CPU. I am doing some heavy calculations but was too lazy to use any special commands for parallelization. However, to my surprise, when I check the performance I see that all 4 kernels a busy calculating. Does anybody have any info about this automatic parallelization of Mathematica? Daniel As far as I know, Mathematica does not paralellize unless you explicitedly tell it to. Did you see 4 Mathematica kernels running or just 4 cores being busy with unknown processes? Cheers -- SjoerdOn May 28, 12:50 pm, dh <...

Simplest way to spawn multiple IDL processes from within IDL?
I have a piece of code that I want to run on multiple data files with the o= nly difference in the input being the date within each filename. Previously for a small number of files I've just done it in a loop somethin= g like this: dates=3D['200901', '200902', '200903'] FOR i=3D0, n_elements(dates)-1 do begin this_date=3Ddates[i] processing_code, this_date ENDFOR The processing_code routine itself can take a while to run and now that the= number of dates I'm processing is getting larger I wondered if there was a= straightforward wait to spaw...

Parallel processes
Is there any good reason to have two almost similar processes, with almost similar sensitivity list? I have a problem with a size of the design. I'm using Lattice LC4128V and currently the design is using 130/128 logic functions. In on vhdl block there is two processes almost similar, like shown below. I got the code from the other designer and I'm just starting with VHDL, so I ask for your help. Can I save in logic elements by combining these two processes and does it affect the functionality or timing in some way? architecture ltr of dio_write is signal IO_WRITE_tmp : std_logic; ...

How to obtain the process ID of the current IDL process in a platform-independent way?
*** Question Is there a platform-independent "IDL way" to obtain the process ID of the current IDL process? *** Background I need the process ID (PID) of the current IDL process. Currently I have a working solution for a specific platform (Solaris 9 and 10), specifically IDLUnix> pid = CALL_EXTERNAL("/lib/sparcv9/libc.so", 'getpid') The reliance on a library from the operating system library limits the applicability to that particular platform and installation, so I consider it only a provisional solution. A recent discovery is the Unix libidl.so library th...

Parallel Computing and "The process cannot access the file because it is being used by another process."
while using parfor loop, I get "The process cannot access the file because it is being used by another process." error generated from an executable function embedded in the parfor loop. The function opens a file "tmp.key" and writes an image file "tmp.pgm" into "tmp.key" file. However, it seems that when this parfor loop runs on parallel processors, the processors are unable to access this "tmp.key" at the same time, and hence, the error is generated. First of all, is my assesment correct? secondly, how to resolve this issue and use parfor loop successfully. Any help is greatly appreciated, please. Irteza "Syed " <irtezaa@gatech.edu> writes: > while using parfor loop, I get "The process cannot access the file > because it is being used by another process." error generated from an > executable function embedded in the parfor loop. The function opens a > file "tmp.key" and writes an image file "tmp.pgm" into "tmp.key" > file. However, it seems that when this parfor loop runs on parallel > processors, the processors are unable to access this "tmp.key" at the > same time, and hence, the error is generated. First of all, is my > assesment correct? secondly, how to resolve this issue and use parfor > loop successfully. > > Any help is greatly appreciated, please. It sounds like your assessment is correct. By ...

CALL FOR PAPERS -- Special Session on Massively Parallel Processing at the 9th Workshop on Advances in Parallel and Distributed Computational Models -- IPDPS
*** CALL FOR PAPERS *** 2007 International Parallel & Distributed Processing Symposium Workshop on Advances in Parallel and Distributed Computational Models Special Session on Massively Parallel Processing *** Submission Deadline December 4th 2006 *** As part of the Workshop on Advances in Parallel and Distributed Computing Models (APDCM), the aim of this special session is to focus on computer systems that can scale to many thousands of processing elements and are used to solve a single problem. The focus is on identifying new and novel ideas rather than proving incremental advances. By concurrently exploring architecture, programming models, algorithms and applications, the session seeks to advance the state-of-the-art of Massively Parallel Processing (MPP) systems. Following the usual IPDPS practice, all MPP papers will be published in the Proceedings of the IEEE/ACM International Parallel and Distributed Processing Symposium (IPDPS). Topics of Interest: The topics of interest to this special session are: (+) Architectures and Experimental Systems: the use of increased parallelism on a single chip (MPP on a chip), the coordination and communication between massive numbers of processing elements, the use of processors in memory (PIMS), and parallelism in emerging technologies. (+) Large-scale Computing: the utilization of MPPs for large-scale computing, achieving peta-scale levels of processing, use of heterogeneous processing capabilities. (+) Paralleli...

CALL FOR PAPERS -- Special Session on Massively Parallel Processing at the 9th Workshop on Advances in Parallel and Distributed Computational Models -- IPDPS
*** CALL FOR PAPERS *** 2007 International Parallel & Distributed Processing Symposium Workshop on Advances in Parallel and Distributed Computational Models Special Session on Massively Parallel Processing *** Submission Deadline December 4th 2006 *** As part of the Workshop on Advances in Parallel and Distributed Computing Models (APDCM), the aim of this special session is to focus on computer systems that can scale to many thousands of processing elements and are used to solve a single problem. The focus is on identifying new and novel ideas rather than proving incremental advances. By concurrently exploring architecture, programming models, algorithms and applications, the session seeks to advance the state-of-the-art of Massively Parallel Processing (MPP) systems. Following the usual IPDPS practice, all MPP papers will be published in the Proceedings of the IEEE/ACM International Parallel and Distributed Processing Symposium (IPDPS). Topics of Interest: The topics of interest to this special session are: (+) Architectures and Experimental Systems: the use of increased parallelism on a single chip (MPP on a chip), the coordination and communication between massive numbers of processing elements, the use of processors in memory (PIMS), and parallelism in emerging technologies. (+) Large-scale Computing: the utilization of MPPs for large-scale computing, achieving peta-scale levels of processing, use of heterogeneous processing capabilities. (+) Parallel...

SAS Parallel processing
Hi, I am relatively new to SAS parallel processing scene and trying to work on the following piece of code using SAS 9 on a PC. The remote server has 4 processors and this is a UNIX based server. On my PC windows based SAS, I execute the following code after connecting to the remote server. options autosignon=yes ; rsubmit; /* Begin simulation */ options symbolgen ls=96; options sascmd='!sascmd -nosyntaxcheck' autosignon=yes cpucount=ACTUAL; %macro StatsModel(startLoop=, endLoop=, inputfn=,nsims=); filename params "/data/&inputfn"; data parameters; infile par...

Parallel Processing in Simulink?
Hi All, Does anyone know if Simulink supports parallel precessing / threading? How can I run two models at the same time? In particular, I have a model that waits for the response of an external device by means of an embedded MATLAB function. This causes the entire simulation process to stall just to wait for a response. However, the model required to trigger that response is elsewhere in the model and therefore doesn't ever execute.. fail. Is there a way of waiting for the response of that block WHILE continuing on with simulation? Thanks! Hi Annie, Are you trying to get two separat...

COMTI
Hi, does somebody knows is COMTI capable of parallel processing of messages between Windows and MCP. If we assure that programs on MCP side would return messages in asynchronius matter, could I use COMTI for communication from Windows site. Or should I use some other tool from Unisys (or anybody else?) A little bit more detailed description..... WIN MCP request.1 work.1 (will take 10 sec) request.2 work.2 (will take 2 sec) answer.2 answer.1 web.page_return.1 web.page_return.2 problem is also that WIN clienc would most likely be web page. In comp.sys.unisys, (RM) wrote in <ebbtpq$otf$1@ss408.t-com.hr>:: >Hi, >does somebody knows is COMTI capable of parallel processing of messages >between Windows and MCP. >If we assure that programs on MCP side would return messages in asynchronius >matter, could I use COMTI for communication from Windows site. > >Or should I use some other tool from Unisys (or anybody else?) > >A little bit more detailed description..... > >WIN MCP >request.1 > work.1 (will take 10 sec) >request.2 > work.2 (will take...

parallel processing #2
Dear all, I have some doubts. Please clarify, 1. Is parallel processing possible in DSP processors and other high level languages like C etc., If so, How?? 2. I am doing image processing on FPGA. I have to write a bitmap file from FPGA output. How can it be done?? 3. How a image can be processed parallely on FPGA. Waiting for replies.. Thanks prash "PrAsHaNtH@IIT" <prashaenator@gmail.com> wrote > > 2. I am doing image processing on FPGA. I have to write a bitmap file > from FPGA output. How can it be done?? > Carefully open up your FPGA. Take a picture ...

Parallel Application Processing
Suggestions appreciated on how to tackle parallel application processing on a single large Oracle table. The PL/SQL application processes need to grab n rows for update, process, update and commit them. Several of these application processes need to run in parallel due to the data volumes. The problem is ensuring that each parallel application process grabs a different set of rows for processing, so that there is no contention, with one app process either waiting for another or raising an exception as its attempt to grab rows hits already locked rows. I've tried the following basic SQL construct: select * from table sample(m) where rownum < n for update nowait skip locked The idea is to grab a random sample of data, and from that attempt to grab at most n rows that are not locked, and lock them for updating. If the sample is not used, each process will/could hit the same rows, which means the 1st one may get n rows to lock and the 2nd process will find nothing as it will simply skip n rows. A method is thus needed to randomly identify a set of m rows and grab n rows for updating from it. Does this method make sense? Are there better methods to consider? Thanks. -- Billy On Mon, 08 Aug 2005 03:09:28 -0700, Billy wrote: > Does this method make sense? Are there better methods to consider? Partitioning it by hash (or range, if you can do that) and then accessing different partitions in parallel would seem like the perfect way. -- http://www.mgogal...
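The hash-partitioning idea in the reply can be sketched outside the database: if every worker claims only the keys that hash to its own slot, the claimed sets are disjoint by construction and no row locking is needed to avoid contention. A minimal Python toy (integer stand-in keys, not real Oracle partitions):

```python
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = 4
rows = list(range(100))  # stand-in for the table's primary keys

def claim_partition(worker_id: int) -> list:
    # Each worker claims only the rows whose key hashes to its slot,
    # so no two workers ever touch the same row.
    return [r for r in rows if r % N_WORKERS == worker_id]

with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    partitions = list(pool.map(claim_partition, range(N_WORKERS)))

# The partitions are disjoint and together cover every row.
claimed = sorted(r for part in partitions for r in part)
print(claimed == rows)  # True
```

In Oracle terms this corresponds to each process filtering on `mod(key, n) = worker_id` (or reading its own hash partition), rather than racing over a shared sample with `skip locked`.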

parallel processing #2
Further to some parallel processing comments here: One of my Geos interfaces (maybe Topdesk) allows 2 programmes to be open simultaneously, although they are not both processing at the same time; useful for transferring data between open programmes. When we use a printer set-up that has a buffer in the interface (Xetec, & Xetec Gold) or in the printer (any modern inkjet), once the data reaches the buffer we can continue another computer task. I have a spooler that allows me to send text to be printed to my disk drive, then disconnect the drive from the computer, & continue to compute. My ...

Parallel Processing Project
hi, i intend to develop a parallel processing environment in a custom-built multiprocessor hardware system with 8086 processors. Could you please send me your suggestions about the topic? Any pointers to references in print or in electronic form will be very helpful. My main motivation has been Liu and Gibson's book, "Microcomputer Systems: The 8086/8088 Family", and a similar project done by Bradford J. Rodriguez. One more thing: I don't have a logic analyzer with me, but I have a DSO. Will that be a serious impairment as far as this project is concerned? Thanks and Regards, Vineeth T. M ...

help on parallel processing
Hi everyone. I need to know: is there any way to execute 2 or more .m files simultaneously? Actually, I'm trying to call two individual files, one with 'wavplay' and another with a 'movie' function, from another .m file, such that on final execution I should view the movie while the wave file plays in the background. Although I know control can be transferred to only one called file at a time, isn't it possible to distribute control to two separate files so that both execute simultaneously? ----- ashutosh srivastava ...
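The desired effect, one task running in the background while another runs in the foreground, is what a worker thread gives you in most environments. A minimal generic sketch in Python (the sleeps stand in for the real wavplay/movie calls, which are assumptions here, not MATLAB's API):

```python
import threading
import time

results = []

def play_wave():
    time.sleep(0.05)          # stand-in for the blocking audio playback
    results.append("wave done")

def show_movie():
    time.sleep(0.05)          # stand-in for the movie playback
    results.append("movie done")

t = threading.Thread(target=play_wave)
t.start()                     # audio "plays" in the background...
show_movie()                  # ...while the movie runs in the foreground
t.join()                      # wait for the background task to finish

print(sorted(results))  # ['movie done', 'wave done']
```

The key point is that `start()` returns immediately, so the foreground call runs concurrently with the background one, and `join()` provides the synchronization point at the end.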

PSE for parallel processing
Does anyone know of a good Problem Solving Environment (PSE) for parallel processing? If so, please let me know. I would also like to know whether such tools are popular with parallel-programming people, i.e. are they really useful or are they still at a research stage? Pushkar Pradhan ...

execute processes in parallel
Hi, Is there any way to execute 2 processes simultaneously in a Tcl script, so that each one completes at more or less the same time and the total execution time of the script is reduced compared with running them sequentially? I saw a few references to fork, exec, and spawn, but I am not able to figure out how to use them; any reference or example would be very helpful. Thanks, Nutty. mp3nut@gmail.com schreef: Are you talking about separate processes in terms of what the computer understands as a process (run a browser and a word processor and a calendar application and ...) or do you want two Tcl procedures that run simultaneously? In the first case, these external processes can be started via [exec command &] (or [open "|command"] to get access to the in/output of the process via a pipe mechanism). The & makes sure the process is running in the background. In the second case you are talking about threads. There is a Thread package available with a very nice API. Whether you will be able to gain time is a matter of you...
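The [exec command &] idea (launch in the background, collect the results later) can be illustrated with Python's subprocess module; this is a generic sketch, not Tcl:

```python
import subprocess
import sys
import time

# Start two worker processes without waiting for either,
# analogous to Tcl's [exec command &].
cmd = [sys.executable, "-c", "import time; time.sleep(0.3)"]
start = time.monotonic()
procs = [subprocess.Popen(cmd) for _ in range(2)]

# Now wait for both; the two 0.3 s jobs overlap instead of
# running back to back, so the wall-clock time is roughly halved.
codes = [p.wait() for p in procs]
elapsed = time.monotonic() - start

print(codes)    # [0, 0]
print(elapsed)  # roughly 0.3 s plus startup overhead, not 0.6 s
```

As the reply notes, this helps only if the work really is in separate processes (or threads); two CPU-bound jobs on one core still take the sequential total.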

Parallel Processing in SAS
I've heard that there are opportunities to do some parallel processing if your server has multiple CPUs. We have 4 CPUs running on Windows 2003 SP1. Anyone have any insight or articles about parallel processing? I heard only a few procs can actually utilize and take advantage of parallel processing, but wasn't sure. Any insight is appreciated! Jer wrote: > I've heard that there are opportunities to do some parallel processing if > your server has multiple CPUs. I think there are also opportunities for parallel processing with single-CPU machines that support threading; on...
