Anyone have insights on the interaction between using the Parallel Computing Toolbox, either on a single local machine or using Distributed Computing on a cluster, and the built-in use of multiple cores through default multithreading? Thanks. -Dick Startz


10/1/2011 6:28:11 PM

SOFT COMPUTING Journal - SPRINGER

Special Issue on Soft Computing for Bioinformatics

CALL FOR PAPERS

The past few years have witnessed phenomenal growth of bioinformatics, an exciting field devoted to the interpretation and analysis of biological data using computational techniques. Among the large number of computational techniques used, soft computing, which incorporates

* neural networks,
* evolutionary computation,
* fuzzy systems, and
* chaos,

stands out because of its demonstrated strength in handling imprecise information and providing novel solutions to hard problems.

This special issue aims at not only showcasing innovative applications of soft computing techniques to bioinformatics, but also clarifying outstanding issues for future progress.

Biological areas of interest include but are not limited to the following:

* protein structure and function,
* genomics,
* proteomics,
* molecular sequence analysis,
* evolution and phylogenetics,
* molecular interactions and structure,
* gene expression,
* metabolic pathways,
* regulatory networks,
* developmental control and systems biology.

Papers should be submitted in PDF format via email to any of the following guest editors by 30 March 2004:

* David Corne (D.W.Corne@exeter.ac....

Dear Colleagues, I would like to call your kind attention to the updated website of the Soft Computing Research Group at the University of Veszprem (Hungary): http://www.fmt.vein.hu/softcomp/

You can download MATLAB toolboxes:
- Fuzzy Clustering MATLAB Toolbox
- Genetic Programming MATLAB Toolbox
- Interactive Evolutionary Strategy (EASy) MATLAB Toolbox
- Constrained Fuzzy Model Identification for the FMID Toolbox

independent MATLAB programs related to:
- Data mining
  * Fuzzy clustering based time-series segmentation
  * Supervised Fuzzy Clustering for the Identification of Fuzzy Classifiers
  * Fuzzy Modeling with Multidimensional Membership Functions: Grey-Box Identification and Control Design
  * Compact TS-Fuzzy Models through Clustering and OLS plus FIS Model Reduction
  * Inconsistency Analysis of Labeled Data
  * Star plots - MATLAB files for Graphical Representation of trace elements of clinkers
- Process control and monitoring
  * Feedback Linearizing Control Using Hybrid Neural Networks Identified by Sensitivity Approach
  * Incorporating Prior Knowledge in Cubic Spline Approximation - Application to the Identification of Reaction Kinetic Models
  * Identification and Control of Nonlinear Systems Using Fuzzy Hammerstein Models
  * A Simple Fuzzy Classifier based on

manuscripts in PDF about:
- fuzzy model based process control and monitoring
- fuzzy clustering and classification
- incorporation of a priori knowledge in the identif...

I am trying to do some computations and I would like to do them in parallel using parfor or by opening the matlabpool, as the current implementation is too slow:

result = zeros(25, 16000);
for i = 1:length(vector1)      % length is 25
    for j = 1:length(vector2)  % length is 16000
        temp1 = vector1(i);
        temp2 = vector2(j);
        t1 = load(matfiles1(temp1).name);  % load image1 from matfile1
        t2 = load(matfiles2(temp2).name);  % load image2 from matfile2
        result(i,j) = t1 .* t2;
    end
end

It works fine, but I would really like to know if there is a way to speed things up. Thanks a lot in advance!
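One way the loop above might be parallelized is a parfor over the outer index, loading each outer file once per iteration instead of inside the inner loop. This is a sketch, not the poster's actual solution: the field name `data` and the assumption that each .mat file holds a value whose product fits in one element of `result` are both guesses from the post.

```matlab
% Sketch: parallelize the outer loop with parfor (Parallel Computing Toolbox).
% Assumes matfiles1/matfiles2 are struct arrays as from dir('*.mat'), and that
% each loaded file exposes a field `data` (hypothetical name) holding the image.
matlabpool open                    % in newer releases: parpool
result = zeros(25, 16000);
parfor i = 1:25
    row = zeros(1, 16000);         % per-worker temporary, assigned as one slice
    t1 = load(matfiles1(vector1(i)).name);   % load the outer file once
    for j = 1:16000
        t2 = load(matfiles2(vector2(j)).name);
        row(j) = t1.data .* t2.data;
    end
    result(i, :) = row;            % sliced output variable
end
matlabpool close
```

Independent of parfor, hoisting the 16000 inner-loop load calls (e.g. caching the loaded files in a cell array before the loops) is likely the larger win, since file I/O dominates this loop.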

Hi, I just bought a six-core desktop (12 threads) and discovered that the maximum number of workers allowed by the Parallel Computing Toolbox is eight. This is really disappointing, and I am just wondering if there is any way to fully utilize the 12 threads and have 12 workers on one local machine. I browsed the help guide for the Distributed Computing Server toolbox and it seems it only works when you'd like to create workers on remote computers. Your help is greatly appreciated. Thank you! richard

"Richard Liu" <richardkailiu@gmail.com> wrote in message news:hrhupd$2v3$1@fred.mathworks.com...
> Hi, I just bought a six-core desktop (12 threads) and discovered that the
> maximum number of workers allowed by the parallel computing toolbox is eight.

I believe that is the correct behavior, assuming you have just Parallel Computing Toolbox. To use more than 8 local workers, or to use workers across multiple machines, you will need MATLAB Distributed Computing Server as well.

> This is really disappointing, and I am just wondering if there is any way
> to fully utilize the 12 threads and have 12 workers on one local
> machine. I browsed the help guide for the Distributed Computing Server
> toolbox and it seems it only works when you'd like to create workers on
> remote computers.

That is not the case. Could you post the URL of the documentation page that gave you that impression, so that our d...

Hi, I've written an open-source C++ framework for Cell Computing. Cell Computing is similar to grid computing, but is modeled on biology. If you are interested please visit http://www.xatlanits.ch. Unfortunately not all documents are available in English yet. All ideas and improvements are welcome :-) -- ...


Hi all, I'm working on a project that involves parallel computing, and I'm trying to think of applications that would benefit from parallel computing. For example, in cubic spline interpolation, I can divide the interval into groups that can each be handled by a processor. However, that simply provides an embarrassingly parallel solution. I would like to know if there are any problems that will benefit from parallel computing, for example problems that are computationally intensive. I'll be using MPI to program the parallel solutions. Thank you. Regards, Rayne [Moderator: see the Grand Challenge panel which should be auto posted in a few more days.] -- ...

Has anyone of you ever used LAM/MPI (parallel computing) under FreeBSD and, if so, how has it performed compared to a Linux machine running it? Thanks -- ...

for i = 1:size(APLocation,1)
    point = APLocation(i,:);
    parfor j = 1:length(RXpoint)
        rssi(i,j) = LOSS(point, RXpoint(:,j)');  %% rssi holds the signal strength from all APs
    end
end

When I'm running the following code, I get the same value for each element of rssi (serially I get different results). Where is my problem?

Creating some fake data and implementing a simple LOSS function, I do not see a difference between the serial and parallel execution. Does LOSS use any persistent or global variables?

function [rssi, rssi2] = foo
APLocation = rand(10);
RXpoint = rand(10) * 5;
for i = 1:size(APLocation,1)
    point = APLocation(i,:);
    parfor j = 1:length(RXpoint)
        rssi(i,j) = LOSS(point, RXpoint(:,j)');
    end
end
for i = 1:size(APLocation,1)
    point = APLocation(i,:);
    for j = 1:length(RXpoint)
        rssi2(i,j) = LOSS(point, RXpoint(:,j)');
    end
end
end

function rv = LOSS(a, b)
rv = sum(a + b);
end

"michael" <bezenchu@gmail.com> wrote in message news:ht0t0k$fce$1@fred.mathworks.com...
> for i=1:size(APLocation,1)
>     point=APLocation(i,:);
>     parfor j=1:length(RXpoint)
>         rssi(i,j) = LOSS(point,RXpoint(:,j)'); %%rssi having the signal
>         strength from all AP's
>     end
> end
>
> when i'm running the following code, i get the same value for each element ...

Hello friends, I am implementing an HMM-based speech recognition system in MATLAB R2009a. I am trying to design the system so that, in the GUI, recording of speech from the user and the HMM computation for the earlier speech run together in parallel. Right now I am using the wavrecord(n,Fs) function to record speech, but it blocks: it cannot wait for speech while other calculations are going on. Please give some suggestions for this problem, or another audio recording function that works as I want. Thanks, ...
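One option worth trying (assuming the installed release supports it, which R2009a does): `audiorecorder` records asynchronously, unlike the blocking `wavrecord`, so the HMM computation can proceed while audio is being captured. A minimal sketch:

```matlab
% Non-blocking recording with audiorecorder: record() returns immediately,
% so other computation can run while audio is captured in the background.
Fs  = 8000;
rec = audiorecorder(Fs, 16, 1);   % 8 kHz, 16-bit, mono
record(rec, 5);                   % start a 5-second recording, return at once

% ... run the HMM computation on the previous utterance here ...

while isrecording(rec)
    pause(0.1);                   % or keep computing until capture finishes
end
speech = getaudiodata(rec);       % retrieve the recorded samples as a vector
```

The same object can be reused for the next utterance, alternating capture and HMM processing in the GUI loop.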

Hi all, I wrote a script and a function intersect(line, cube). The function is called in the script 8000*2000 times, where 8000 is the number of lines and 2000 is the number of cubes. I would like to call the function for each cube and each line, so I did the following:

%=============================
for i = 1:8000
    for j = 1:2000
        output(i,j) = intersect(line(i), cube(j));
    end
end
%=============================

The lines and cubes are totally independent. This takes 10 seconds max for only one cube and one line, but it is taking forever for all 8000 lines and 2000 cubes. I don't know what the best solution is to improve the processing time: 1) compiling? 2) parallel computing? If this or that, please advise. Any help would be very appreciated.

"Timo " <laamhamd@yahoo.fr> wrote in message <ji3sgt$cn$1@newscl01ah.mathworks.com>...
> Hi all,
>
> i wrote a script and a function intersect(line, cube).
> the function is called in the script 8000*2000 times where 8000 is the number of lines and 2000 is the number of cubes.
>
> i would like to call the function for each cube and each line.
> so i did the following
>
> %=============================
> for i=1:8000
>     for j=1:2000
>         output(i,j)= intersect(line (i), cube (j))
>     end
> end
> %=============================
>
> the lines and cubes are totally independent.
>
> This takes a 10 seconds max fo...
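Since the iterations are independent, a parfor over the outer loop is the natural first step. A sketch under the post's assumptions (that `line` and `cube` are indexable arrays); note that `intersect` and `line` both shadow MATLAB built-ins, so the helper is renamed here:

```matlab
% Sketch: preallocate the result and parallelize the independent outer loop.
% intersectLineCube is a hypothetical rename of the poster's intersect()
% to avoid shadowing MATLAB's built-in intersect.
output = zeros(8000, 2000);       % preallocation alone often helps a lot
parfor i = 1:8000
    for j = 1:2000
        output(i,j) = intersectLineCube(lines(i), cubes(j));
    end
end
```

Also note the original loop body has no semicolon, so every call echoes its result to the command window; suppressing that output is often a large speedup by itself, before any parallelism.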

Hi everybody, I have a nonlinear constrained minimization problem and I try to solve it with fmincon, but since I have 238 variables fmincon works very slowly. To speed it up I want to use parallel computing. In the MATLAB help forums I found:

matlabpool open 2
options = optimset('UseParallel','always');
fmincon(...,options)
matlabpool close

but whenever I use this approach it gives an error:

??? Error using ==> parallel_function at 598
Error in ==> objective at 12
Matrix dimensions must agree.
Error in ==> parfinitedifferences at 110
    parfor(gcnt=1:nVar)
Error in ==> nlconst at 355
    [gf,gnc(:,nonlIneqs_idx),gnc(:,nonlEqs_idx),numEvals] = ...
Error in ==> fmincon at 724
    [X,FVAL,LAMBDA,EXITFLAG,OUTPUT,GRAD,HESSIAN]=...

When I remove matlabpool open 2 and matlabpool close, the algorithm works without error, but in that case I noticed it does not run in parallel. Can anybody help figure out why I am getting this error? Regards

I just noticed what the problem is. In my objective function I have multiple variables. I use only the variable that the optimization depends on; I define the others as global variables. I found that in the optimization iterations these global variables first get their real values, but after the second iteration inside fmincon they become zero, and that's why it gives that dimension error. I have no idea why these globals become zero after the first iteration. On 11/20...
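The symptom described (globals valid on the first iteration, empty or zero once parallel finite differencing starts) is consistent with global variables not being replicated onto the pool workers. The standard alternative is to bind the extra data through an anonymous function, so it travels with the objective. A sketch with stand-in names (`objfun`, `params`, `x0`, `lb`, `ub`, `nonlcon` are hypothetical, standing in for the poster's actual functions and data):

```matlab
% Sketch: bind extra data into the objective via an anonymous function,
% instead of globals, so parallel workers see the same values.
params.A = A;                         % stand-ins for the data formerly global
params.b = b;
obj = @(x) objfun(x, params);         % objfun(x, params) replaces the globals

matlabpool open 2
options = optimset('UseParallel', 'always');
x = fmincon(obj, x0, [], [], [], [], lb, ub, @nonlcon, options);
matlabpool close
```

The anonymous function captures `params` at creation time, and that captured copy is serialized to each worker along with the function handle, which is exactly the guarantee globals lack.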

Long story short, here are the criteria I need.

1) I need two modules running in parallel. One function must activate these two. They don't need to START at the same time, but both neither return info nor end; they run pretty much infinitely.
1.1) I would prefer if the One function also ran separately from the above two in its own loop, but I can deal with a third function also not ending.
2) I need a shared variable between two of the modules. One module will place information there while the other just reads it, as they are running in parallel.

That's it. I can do this in pretty much any other language, but with MATLAB I've been having issues. The problem is I absolutely need to use MATLAB for one module. If this is impossible to accomplish all in MATLAB, can I instead run this in C and call the one MATLAB module in a separate thread? Sample code below.

function mainParallelTest()
persistent counter;
counter = 0;
parfor i = 0:2
    if (i == 0)
        ImageProccessorParallelTest();
    elseif (i == 1)
        EventProcessorParallelTest();
    elseif (i == 2)
        prevCounter = 0;
        tempCounter = 1;
        while (true)
            tempCounter = counter;
            if (tempCounter == prevCounter)
                %disp('hi')
            end
            prevCounter = tempCounter;
        end
    end
end

"Alex Cruikshank" <cruiksam@gmail.com> writes:
> Long story short, here is the criteria I need.
>...
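The sample above cannot work as written, because parfor bodies run on separate worker processes that receive copies of variables: the client's `counter` is never shared. In newer MATLAB releases than the post likely used (an assumption: `parfeval` needs R2013b+, `parallel.pool.DataQueue` needs R2017a+) the producer/reader pair can be sketched as message passing instead of shared memory:

```matlab
% Sketch (newer releases): one long-running producer on a worker sends
% values to the client through a DataQueue; the client reads via a callback.
% Workers are separate processes, so data is passed, never shared.
q = parallel.pool.DataQueue;
afterEach(q, @(val) fprintf('read %d\n', val));  % the "reader" module
f = parfeval(@producerModule, 0, q);             % the "writer" module, async

function producerModule(q)
% Hypothetical stand-in for the module that places information in the
% shared location; here it sends instead of writing a shared variable.
for k = 1:10
    send(q, k);
    pause(0.5);
end
end
```

On releases without these features, the practical routes are the ones the thread converges on: files or sockets between processes, or hosting MATLAB from a multithreaded C program via the Engine API.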

I have a computation problem which can be divided into a part that can be performed on GPUs and a part that needs to be run on a CPU. The first part (GPU side) can be divided up between many GPUs to speed computation, but the CPU side has to be single-threaded and processes the result from the GPU part. So right now, the current flow of my code looks something like:

GPU \
GPU ---- (gather) ----> CPU -----> result
GPU /

With this layout, the CPU sits idle while the GPUs work, then the GPUs sit idle while the CPU works. I would like to set this up such that once the GPU side (running on a number of workers) finishes, it gathers the data, submits a new job to the scheduler to begin the CPU processing, then starts over on the GPU processing so that both sides can run at the same time. Is there a way to either submit a task to just one worker, or otherwise segregate the workers like this?

On Wednesday, April 11, 2012 10:22:36 PM UTC-4, Matthew wrote:
> I would like to set this up such that once the GPU (running on a number of workers) finishes, it gathers the data, submits a new job to the scheduler to begin the CPU processing, then starts over on the GPU processing so that both sides can run at the same time.

Hi Matthew, sounds like a typical producer/consumer setup. Implement a work queue and have a thread sitting on each GPU. I'm part of the ArrayFire team at accelereyes.com -- we're working to make ArrayFire ...

Hi, I am writing some code to run my program in parallel using the MATLAB parallel toolbox. My program is all command-line based and no GUI is provided. So I am wondering if I can use the command line to
- detect how many CPUs can be set to be workers,
- set up the local workers,
- and any other commands for dealing with this parallel toolbox.
Thanks very much.

"George " <guanjihou@gmail.com> writes:
> I am writing some code to run my program in parallel using the matlab
> parallel toolbox. My program is all command-line based and no GUI is
> provided. So I am wondering if I can use the command line to
> - detect how many cpus can be set to be a worker
> - set up the local workers,
> - and any other commands that dealing with this parallel toolbox

The local scheduler automatically detects how many cores your machine has, provided you do not modify the local profile (you can specify an explicit number which overrides the automatic value). So, for example, if you issue the command

matlabpool open local

that will start as many workers as you have cores on your machine. The local scheduler keeps track of how many workers are running, and will not exceed the number of cores on your machine. Cheers, Edric.

Edric M Ellis <eellis@mathworks.com> wrote in message <ytwvcmxoqpg.fsf@uk-eellis0l.dhcp.mathworks.com>...
> "George " <guanjihou@gmail.com> writes:
>
> > I am using wring some ...
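For fully scripted control in that era's syntax, the local scheduler object exposes the worker limit programmatically. A sketch using the R2010-vintage API (findResource/matlabpool; newer releases replace these with parcluster/parpool):

```matlab
% Sketch: command-line inspection and startup of the local scheduler,
% R2010-era syntax. No GUI interaction required.
sched    = findResource('scheduler', 'type', 'local');
nWorkers = sched.ClusterSize;          % max workers the local profile allows

matlabpool('open', 'local', nWorkers); % start that many local workers
parfor k = 1:100
    % ... the program's parallel work goes here ...
end
matlabpool close
```

All of this runs from a script or the `-r` startup option, so a batch-mode program never needs the desktop.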

PARALLEL COMPUTING 2005 (ParCo2005) Malaga, Spain 13-16 September 2005 Announcement & Call for Papers URL: http://www.parco.org e-Mail: parco@ac.uma.es GENERAL Parallel computing is quickly becoming the standard computing paradigm in all areas of mainstream computing. This evolution from specialised technologies for handling large and complex applications to standard techniques for solving applied problems on affordable hardware is revolutionising computing strategies in virtually all application areas. The extension of these advanced technologies to networked based systems, such as clusters and grids, makes parallel computing one of the base technologies for e-science on a global scale. ParCo2005 is the continuation of the international conferences on parallel computing started in 1983 in Berlin. This makes it the longest running European based international conference in the area of high performance computing. Over the years the conference established itself as the foremost platform for exchanging know-how on the newest parallel computing strategies, technologies, methods and tools. AIMS AND SCOPE The aim of the conference is to give an overview of the state-of-the-art of the developments, applications and future trends in parallel or high-speed computing for all platforms, including grids and clusters. The conference addresses all aspects of parallel computing, inc...

When a computation is underway, the MATLAB editor locks up, meaning no files can be opened that are not already open. This happens both in Windows 7 (64-bit) and Mac OS. This seems like an obvious flaw to me. Is there a setting that would make MATLAB truly multithreaded? ...

Hi all, as I'm working in a geophysics lab, one of my points of interest regarding Plan 9 would be heavy scientific computation, especially parallel computing. So, first, is there any Fortran compiler on Plan 9? Second, is there any support for parallel computing, like MPI (even libs for that in C would be something)? Regards, Mathieu.
--
GPG key on subkeys.pgp.net: KeyID: | Fingerprint: 683DE5F3 | 4324 5818 39AA 9545 95C6 09AF B0A4 DFEA 683D E5F3
--

On 10/17/07, lejatorn@gmail.com <lejatorn@gmail.com> wrote:
> Hi all,
>
> As I'm working in a geophysics lab, one of my points of interest
> regarding plan9 would be heavy scientific computation, especially
> parallel computing.
>
> So, first is there any fortran compiler on plan9? Second, is there any
> support for parallel computing, like mpi (even libs for that in C would
> be something)?

You have two choices. 1) write it from scratch 2) port gcc and then all of gnubin -- and then all of the various parallel computing software. If you want industrial-strength versions of this type of thing, you won't do (1) better than the scads of people doing it now (well, your code will sure look better than openmpi.org, but it won't do mpi better). So you might want to start with the gcc port. Not because we want gcc, mind, but because we need gcc -- because there are way too many bits that have to be done, and not enough people to do them. ron ...


Hi, I'm looking for a way to run MATLAB in a GCE (Grid Computing Environment) without changing the code of my MATLAB programs or Simulink simulations radically. Has anybody experienced a way to realize such an environment, or can you give me some information about a way to get it working? All that on an MS WinXP platform... Any ideas? Thank you very much!!! Marc -- ...

Hi, I have a question regarding using compiled MATLAB executables. I have a compiled MATLAB code that runs. When the code runs, it creates a folder inside "T:\Temp\mcrCache7.14". For some reason I have to erase that directory every time I need to run the code; otherwise, it fails. So I wrote a Perl script to call the MATLAB executable and to handle the erasing of the directory inside the mcrCache directory. All is well so far. The problem is that now I am running such MATLAB compiled code in parallel, and I guess that creates a problem: one Perl script might erase the mcrCache while another Perl script is still running! Questions: 1) I wonder why we even need that mcrCache in the first place. Why can't MATLAB just compile an executable that contains everything it needs (self-contained) and that runs locally??? 2) If the mcrCache is something that is needed and I can't take it away (see question 1), then how do I handle this problem of running a compiled MATLAB program in parallel? Any help in this matter is appreciated. Thanks, Noel Andres ...

Hi Folks I would like to use the GPU on my PC to run a DOE on a Simulink model. Does anybody know if this is possible? Regards Etienne There are a couple of groups on the Jacket forums that have had some success using GPUs via Embedded MATLAB/Jacket with Simulink. Try searching here: http://forums.accelereyes.com for simulink. ...

Dear MATLAB experts, currently, I use the academic version of MATLAB and I would like to try out the Parallel Computing Toolbox. Is there any way to acquire this toolbox for my academic MATLAB version? Kind Regards, Dima "Dimitri " <dimitrn@g.clemson.edu> wrote in message news:jiqmsn$bcn$1@newscl01ah.mathworks.com... > Dear MATLAB experts, > > currently, I use the academic version of MATLAB and I would like to try > out the Parallel Computing Toolbox. Is there any way to acquire this > toolbox for my academic MATLAB version? If you're using the Student Version of MATLAB, Parallel Computing Toolbox is available as an add-on product. http://www.mathworks.com/academia/student_version/companion.html Note that MATLAB Distributed Computing Server is NOT available as an add-on product, so I don't believe you will be able to connect multiple machines to create a cluster using Student Version. You should be able to use a local cluster, though. If you're using the Professional Version of MATLAB using Clemson University's license, you will need to check with the IT staff that are responsible for maintaining that license to determine if Parallel Computing Toolbox is included in the license or to discuss setting up a trial version of the toolbox if it is not. -- Steve Lord slord@mathworks.com To contact Technical Support use the Contact Us link on http://www.mathworks.com ...

