Parallel Computing Toolbox and FFT

Hello All,

I have just started using the Parallel Computing Toolbox and I am a bit puzzled by some of the results that I am seeing. I have an 8-core Intel Xeon PC with 16 GB of RAM and I'm using MATLAB R2010b. I thought that by using the Parallel Computing Toolbox I could speed up my FFT computations; however, I'm seeing the opposite effect. For example, here is a snippet of code that I've been working on. As you can see, the parallel version takes a lot longer than just performing a normal fft. Any suggestions as to what I'm doing wrong, or what I don't understand about this computation?

Thanks

% Parallel version: submit a single-task job to the local scheduler
tic
sched = findResource('scheduler', 'type', 'local');
j = createJob(sched);
createTask(j, @fft, 1, {X(1,:)});        % one task: fft of one row of X
submit(j);
waitForState(j);                         % block until the job finishes
results = fftshift(getAllOutputArguments(j));
test = results{1};
plot(abs(test))
destroy(j);
toc

% Serial version: a plain call to the built-in fft
tic; output = fft(X(1,:)); toc


Elapsed time is 3.292587 seconds.
Elapsed time is 0.001755 seconds.

size(X(1,:))

ans =

           1       15000

"Shannon " <shannon.fitzpatrick@drdc-rddc.gc.ca> writes:

> I have just started using the Parallel Computing Toolbox and I am a
> bit puzzled by some of the results that I am seeing. I have an 8-core
> Intel Xeon PC with 16 GB of RAM and I'm using MATLAB R2010b. I thought
> that by using the Parallel Computing Toolbox I could speed up my FFT
> computations; however, I'm seeing the opposite effect. For example,
> here is a snippet of code that I've been working on. As you can see,
> the parallel version takes a lot longer than just performing a normal
> fft. Any suggestions as to what I'm doing wrong, or what I don't
> understand about this computation?

The main problem here is that you're submitting a single task to the
local scheduler to perform a single FFT. That has to start up a MATLAB
worker process, execute your code, write the results back to disk, and
so on. Because the job contains only one task, it also uses only a
single core of your machine. In other words, you're submitting a
relatively small amount of work to a single core, and the overall
"wall clock" time you're seeing is dominated by the inevitable overhead
of launching and shutting down a worker to do it.

You can significantly reduce the overheads by opening a MATLABPOOL and
leaving it open, and then using PARFOR to execute work on the pool
workers. In your case,

matlabpool local 8

would be appropriate. Then, you could do something like this:

x = rand( 15000, 8 );
y = zeros( size( x ) );           % preallocate the sliced output
parfor ii = 1:8
  y( :, ii ) = fft( x( :, ii ) );
end

which would at least operate in parallel with relatively low
overheads. However, MATLAB's FFT operation is intrinsically
multithreaded - and explicit parallelism is almost never able to compete
with that (on a single machine). So, the above code snippet is always
going to be slower than the simple equivalent "y = fft(x);". Our GPU
implementation of FFT (available with Parallel Computing Toolbox) can
beat the CPU, but even here the CPU version is so good that you need a
decent amount of data for the GPU to be faster. (Incidentally, if you
are interested in GPU functionality, I'd recommend upgrading to R2011a
where we hope we've made things better and faster).
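
For reference, the GPU route looks something like this. This is only an
untested sketch: it assumes R2010b or later, a supported NVIDIA GPU, and
Parallel Computing Toolbox, and the input size is just a placeholder.

x  = rand( 1, 2^20 );     % needs a fairly large input for the GPU to win
xg = gpuArray( x );       % copy the data to GPU memory
yg = fft( xg );           % FFT executes on the GPU, returns a gpuArray
y  = gather( yg );        % copy the result back to host memory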

PARFOR on a single machine beats ordinary MATLAB in the case where the
execution time of the loop body is not already dominated by operations
which are intrinsically multithreaded. Of course, PARFOR can be run
across multiple machines, so there's a chance that if you had sufficient
resources, you could beat serial MATLAB. In the particular case of FFT
though, that's pretty much a "memory bandwidth" dominated operation, so
transferring the inputs and outputs may take so long that it dominates
the underlying FFT computation. 
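
If you want to compare the two approaches yourself, a rough timing sketch
(assuming a matlabpool is already open, so pool start-up cost is excluded;
X here is just a placeholder matrix) might look like:

X = rand( 8, 15000 );

tic                                    % built-in, multithreaded FFT
Yserial = fft( X, [], 2 );             % FFT of every row in one call
tSerial = toc;

tic                                    % explicit parallelism via PARFOR
Ypar = zeros( size( X ) );
parfor ii = 1:size( X, 1 )
  Ypar( ii, : ) = fft( X( ii, : ) );
end
tPar = toc;

fprintf( 'multithreaded: %.4f s, parfor: %.4f s\n', tSerial, tPar );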

Cheers,

Edric.