Parallel Processing Project

Hi,

I intend to develop a parallel processing environment on a custom-built multiprocessor hardware system using 8086 processors.
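
To give an idea of the kind of low-level coordination I have in mind, here is a rough, hypothetical sketch of a spinlock in 8086 assembly. The shared segment and lock offset below (SHARED_SEG, LOCK_VAR) are placeholders I made up; the sketch assumes all processors can reach a common RAM window and that the board's bus arbitration honours the LOCK# signal the 8086 asserts during a locked bus cycle. It is only an illustration, not part of any specific design.

    ; Hypothetical sketch only: a test-and-set spinlock on shared RAM.
    ; SHARED_SEG and LOCK_VAR are placeholder values, not a real memory map.
    SHARED_SEG  equ 0A000h                      ; assumed segment of the shared RAM window
    LOCK_VAR    equ 0000h                       ; assumed offset of the one-byte lock

    acquire_lock:
            mov     ax, SHARED_SEG
            mov     es, ax
    spin:
            mov     al, 1
            xchg    al, byte ptr es:[LOCK_VAR]  ; XCHG with a memory operand asserts LOCK# on the 8086
            test    al, al
            jnz     spin                        ; non-zero means another CPU already holds the lock
            ret

    release_lock:
            mov     ax, SHARED_SEG
            mov     es, ax
            mov     byte ptr es:[LOCK_VAR], 0   ; a plain store releases the lock
            ret

Each processor would call acquire_lock before touching a shared data structure and release_lock afterwards.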


Could you please send me your suggestions on this topic? Any pointers to references, in print or in electronic form, would be very helpful.

My main motivation has been Liu and Gibson's book "Microcomputer Systems: The 8086/8088 Family" and a similar project done by Bradford J. Rodriguez.
 
One more thing: I don't have a logic analyzer, but I do have a DSO (digital storage oscilloscope). Will that be a serious impairment as far as this project is concerned?


Thanks and Regards,

Vineeth T. M

9/12/2003 9:33:24 PM
comp.parallel
