www.jjj.de/fft/int_fft.c has C source for a fixed-point FFT.


4/5/2004 11:57:31 PM

Hi, I searched the internet to find out the difference between a fixed-point and a floating-point DSP. To my surprise, I did not find a well-defined difference. Please, can anyone differentiate between the two so that a layman can also understand? Thanks in advance... [Sandeep Chikkerur]

A fixed-point DSP does only fixed-point arithmetic, and usually has hardware to facilitate multiplication. A floating-point DSP has extra hardware for floating-point addition and multiplication (addition is a bit more involved than multiplication) and often omits the hardware to facilitate fixed-point multiplication. Expect some difference in the size of a few registers. A matter of curiosity (mine): what difference had you imagined? [Jerry Avins -- Engineering is the art of making what you want from things you can get.]

I'm using a microcontroller (dsPIC30F3013) to do an FFT on some "real" data (from a microphone) and I don't have enough internal RAM to get the frequency resolution I need. I found a forum entry by Rick Lyons as follows:

-quote- Does the dsPIC33 allow you to perform a 512-point FFT on complex-valued input samples? If so, there's a way to perform a 1024-point *real-input* FFT using a 512-point complex FFT routine. [-Rick-] -end quote-

The FFT routine I have available does take a complex input (two sixteen-bit words), and the existing code merely stuffs zeroes into the imaginary part of each complex input. So, can anyone tell me what the trick is for allowing this complex-input FFT to be more efficient with real input data, such that I can effectively cut in half the input record length I need to use? Thanks in advance. [BobW]

Hi all, I am using an FFT on a Motorola DSP (24-bit processor). I am working on an audio encoder project which requires at least 20-bit accuracy. The twiddle factors (in Q21 format) come from a table. The sample application, built to calculate 8-point to 2048-point FFTs in radix-2 DIT, is losing 9 bits of accuracy: if I compare the output of the Windows floating-point code and the DSP fixed-point code, there is a 9-bit accuracy loss. Does anyone have an idea where I am making a mistake? Any data points will be of great help. Regards, Mehul

Check the literature / Google / etc. for block floating point. FFTs generate intermediate results that can overflow, so a divide by two is usually done at each outer loop iteration. But that overcompensates, costing you accuracy; block floating point can prevent that. Also be sure to use the correct accuracy in your factors and lookup tables, rounding vs. truncation, etc. Some homework here will pay real dividends in accuracy and dynamic range... [Jim Horn (tweaked 68k FFT code for HP 70000 series spectrum analyzers)]

First things first: I am totally new to DSP. I just completed an introductory course; I have never coded for a DSP, not in C, not in assembly, though I did use the built-in functions in Matlab. I am trying to learn programming the Motorola DSP (I don't know which one; my prof wants me to code in C first, then in assembly, so it doesn't matter). So, to start with, I was told to try to implement the integer FFT. At first, I thought it must be some kind of new FFT algorithm, but after going through the comp.dsp archives I figured the data types used in an integer FFT must be integers, hence it must be the same as a fixed-point FFT. But then I was given an IEEE Trans. on Signal Processing paper to read ("The Integer Fast Fourier Transform" by Oraintara, Chen, and Nguyen, 2002), and I could not make out much; integer FFT and fixed-point FFT seem to be different. Can anybody help me out of my confusion? What is a fixed-point FFT? What is an integer FFT? Thanks in advance. [Karthik Ravikanti --- Let There Be Sound! http://students.iiit.net/~karthik_ravikanti]

(snip) I think you are right, but there IS an algorithm called ICT, Integer Cosine Transform, a modified DCT designed to minimize the number of '1' bits in the coefficients for fast implementation on machines without a multiply instruction...

Hello, I am sure this is a pretty simple question, but I have some ambiguity and need to confirm what is happening exactly. Any information is really appreciated. This is on a SHARC DSP. The question is pretty simple: Rn = FIX Fx; The FIX description from the manual is as follows: "Converts the floating-point operand in Fx to a twos-complement 32-bit fixed-point integer result. If the MODE1 register TRUNC bit=1, the FIX operation truncates the mantissa towards -Infinity. If the TRUNC bit=0, the FIX operation rounds the mantissa towards the nearest integer...."

Hi all, I am working on a fixed-point FFT based application. The input to the FFT is a real signal, while the FFT core is complex (i.e., it can take real and imaginary inputs). Normally, one would tie the real signal to the real input of the core and the imaginary input to zero. In order to optimize resources, I want to use the complex input of the FFT core efficiently. According to articles I found on the web, you can split the input data into two streams and combine the outputs of the FFT: http://processors.wiki.ti.com/index.php/Efficient_FFT_Computation_of_Real_Input This all seems great for floating point. I am curious what the rounding issues would be if the FFT core is a fixed-point core. The core I am using outputs unscaled fixed-point results. Any insights or suggestions into the problem are welcome. Thanks, rakumarudu

Well described in ...

Hello all, I am new to DSP programming and I am using a fixed-point DSP. As I read about fixed-point arithmetic it seems confusing, so please suggest a link or document with a good practical explanation of the topic, or give some examples to aid understanding. Thanks in advance, Rahul.

www.digitalsignallabs.com/fp.pdf [Paul]

These application manuals give a very clear explanation of DSP arithmetic and the basic algorithms, regardless of the processor: http://www.analog.com/static/imported-files/processor_manuals/2127342adsp2100vol1.zip http://www.analog.com/static/imported-files/processor_manuals/60899921adsp2100vol2.zip Be extremely careful about using any assembly code from there: it is full of bugs. [Vladimir Vassilevsky, DSP and Mixed Signal Design Consultant, http://www.abvolt.com]

Hi, I have to implement the FFT for a vector of 2048 samples on a DSP which does not support floating point. Do you have any link to already-written code that computes the FFT with integer computation? Thank you in advance. [Luca Baradel]

Which DSP? Have you searched the chip manufacturer's web page? Most (maybe all?) have FFT...

Hi, I'm using emlmex to compile a function that uses Matlab Fixed-Point Toolbox fi objects, and I'm trying to speed up the execution of the compiled mex function. Even for the simplest function, c = a*b, the profiler shows that a lot of the mex function's time is spent in embedded.fi.fi, which appears to be the fi object constructor. Is it possible to avoid or prevent this constructor from being called? Even if I define my mex function as function c = simple_func(a,b,c), where a, b, and c are fi objects constructed _before_ the call to simple_func(), it appears as if the mex-compiled ve...

Hi, I have written a program for the FFT in an M-file, and using the Fixed-Point Toolbox I simulated the FFT. The final result's WordLength was 35 and FractionLength 27. I then took the normal fft and compared results; the error was in the range of 10^-3. Just out of curiosity I converted the output of the normal (double-precision) fft into fixed point using fi. The resulting fi object had a WordLength of 16 and a FractionLength of 12, and the error was almost zero. Now how is this possible? A higher FractionLength should have shown less error, shouldn't it? [sudarshan_onkar]

Normalization issue? That would be my guess. [Julius]

Hello everyone, I have implemented a decimation-in-time FFT algorithm. I ran the code first in Visual C++ and verified the results against MATLAB, which turned out to be right. I then ran the same C code on a BF533 using floating-point data types; the results turned out to be the same as in MATLAB. In order to reduce my cycle count I switched to integer data types, and I am now unable to verify my results against MATLAB. I do not know how to interpret my results or how to use MATLAB as a frame of reference. Any suggestions would be of great help. Thanks & Regards, Pramod Jacob

Plots of the data from the two transforms, fixed point and MATLAB, should be nearly identical in shape. If the data are scaled so that the peak values in both sets are equal, all of the other data values should be nearly identical.

Hi, I have a doubt about fixed-point arithmetic. I did a fixed-point simulation of an FFT for N=8k in C using macros. In the first case the input samples were 1.15 and the intermediate precision was also 1.15, and I got an SNR of around 48-50 dB. In the second case the input samples were 1.12 and the intermediate precision was the same 1.15, and the SNR was still the same. I was thinking the SNR should increase. Please share your thoughts. [sudarshan_onkar]

The 1.15 intermediate quantization is limiting your SNR. If you reduce the input SNR and leave the intermediate word width the same, why do you expect the SNR to increase? [John]

Hi. I'm writing code to do an FFT using fixed point (16 bits). The question is: which is better, doing the scaling after or before starting a pass? I think it is better before starting a pass. Is this correct? [Diego]

Processor? [Jerry Avins]

If results are divided by two to avoid overflow at each stage (like the code I use), you had better scale your input buffer before processing the FFT. Of course it depends on the precision you need!

Yes, it depends on the precision, but it depends on the DSP's features too. The problem is that I have to use fixed point, the registers are 16 bits wide, and if I don't scale the data it overflows the registers. Thank you. [Diego]

Hi, it's better to do the scaling inside the FFT, when the intermediate results have grown. Or even better, do a data-dependent sc...

Hi, I am making a fixed-point FFT in an FPGA. My architecture will use one butterfly processor that will be reused in every stage; however, I can have a different scaling in each stage. I am using a radix-2 decimation-in-time structure. The problem is finding the scaling to use in each stage. Note that I am not interested in keeping the rms of the input and output the same; I just want to use my fixed-point operations as well as possible. After browsing the web a bit, I found it stated several times that each radix-2 stage can scale the input by 1+sqrt(2). I think this comes from the butterfly operation X' = X + TW*Y, Y' = X - TW*Y. Hence if, for example, Y = 1 + i and TW is at pi/4, Imag(X') will be scaled by 1+sqrt(2) with respect to X and Y (assuming X and Y equal). However, assuming a scaling of 1+sqrt(2) for each stage looks a bit pessimistic to me, because: - obviously not every butterfly will always use pi/4 for the twiddle, e.g. the first FFT stage will just use 1 as the twiddle, so I think the gain on a branch would be at most 2 instead of 2.414; - when there is a large gain on the imaginary branch, there is a small gain on the real branch, and the other way around. For example, in the above case Real(X') will be scaled by 1 only, I think. The twiddle as such has no gain, so TW*Y has the same amplitude as Y; it can just shift energy from I to Q and so on. Now my question: does my reasoning make sense....

Hi all, can anyone suggest where I can get 1024-point fixed-point real FFT C code? Thanks, SB [Sumeer]

http://www.library.cornell.edu/nr/bookcpdf.html http://www.fftw.org/download.html [Jerry Avins]

I think you missed the first two-thirds of the subject line, "Fixed point FFT". The above links are good and useful, but not for the purpose at hand; they have nothing to do with fixed-point processing. Did you mean http://sourceforge.net/projects/kissfft ? [Mark Borgerding]

No. I was half asleep. It just didn't register. [Jerry Avins]

Assuming that you are using the TMS320C6000 series, there is a free DSPLIB available that can be d...

Is there a fixed-point math library (preferably as inline functions or macros), such as the one defined by the European Telecommunications Standards Institute (ETSI) and used in GSM 06.06 and other standard vocoders, available for: 1. ADI's Blackfin? 2. IBM PowerPC? 3. Intel Pentium? 4. TI DSPs? If so, please provide me with a link for download. Thanks, GM [Greg M.]

TI provides a fixed-point math library with each of their DSPs. I have no idea if it is at all compatible with the ETSI one; do you have any links to that standard? Would I have any competitive advantage in standardizing on it myself? [Tim Wescott, Wescott Design Services, http://www.wescottdesign.com]

I was asked to see if I could find a back-of-the-envelope estimate for the fastest time, in microseconds, that could be expected for execution of a 65536-point real FFT on whatever high-end DSP is available. All the processor literature quotes smaller FFTs, like 256 or 1024 points. Can I scale those numbers by log2 or log4 to get an estimate for 65536? All the data we have internally is for older TI C40s. Any estimates or places to look? Thanks, dave howard

You probably shouldn't just scale them. You'll need to decide what sort of computation you're doing, floating point or fixed point, and you'll also need to look at memory requirements and cache sizes. You can fit a 1k FFT in on-chip memory on, say, a TI C6713, but you can't fit a 64k FFT in there (it only has 64k of on-chip memory, and at least address 0 is not available to you), and the off-chip memory accesses will likely be a significant factor. If, however, you can use a PC as your "DS...

What is the effect of lowered resolution of intermediate results on an FFT? I am considering a number of implementation choices for a fixed-point FFT. The time-series data is inherently 16-bit integers (audio data), and I want to do an 8192-point FFT on it. Because of the need to explicitly handle overflows (in the butterfly operations) in a fixed-point implementation, it is convenient to limit the resolution of the intermediate results. There is a definite tradeoff between speed and how I check for overflow. I am also limited by the C language. Ideally I would like to multiply two 32-bit numbers and retain the high-order 32 bits of the 64-bit result. In previous implementations I have done this nicely in assembly language by reading the high register of the register pair used to hold the result, but this time my development system does not have the option of compiling C code to intermediate assembly and editing it before assembly. (I am targeting the ARM processor in Nokia/Symbian cell phones.) Even though this development system does support 64-bit integers, the performance penalty of promoting all operands to 64 bits and then carrying out a full 64 x 64 multiply in software is prohibitive. Another choice is to limit the intermediate results to 13 bits of resolution by scaling the entire array when necessary. This would allow me to do a full FFT round without worrying about overflow. (My trig tables are 16-bit.) But I am concerned about the effect on ...

Hi, I am in the process of coding a 4096-length fixed-point FFT. The input time-domain samples are supposed to be 16-bit (preferably 16.0 signed format) and the output frequency-domain coefficients should have a 16-bit length. But I have been getting a distorted spectrum with no resemblance to the expected one. I am performing a 2-bit shift (divide by 4) per stage of the FFT algorithm to avoid overflow. Am I pursuing the wrong lane? Should I decrease the length of the FFT? Your help would be appreciated. With regards, K. Bhanu Prasad

I am downscaling all the coefficients by 4 only if one of the coefficients at a particular stage of the FFT calculation exceeds a reference value (here it is half the greatest 16-bit signed integer).

Folks, is there any free source code for a fixed-point FFT in Matlab? Thanks. Rgds, Berry [Mas Bas]

I would like a copy, too. I have been thinking of writing the same, for simulation, for both educational reasons and a possible future project. It need only be the standard Cooley-Tukey radix-2. This is why I have been thinking a lot about FFT quantization lately, as in the other thread. [r b-j, rbj@audioimagination.com -- "Imagination is more important than knowledge."]

I have written radix-2 fixed-point FFT code in C. I think I should be able to convert it into Matlab in a relatively short time; I will post it here once it is ready. By the way, I have three implementations of the radix-2 decimation-in-frequency FFT algorithm: (a) not in-place and recursive, (b) in-place and recursive, (c) in-place and non-recursive. I needed (c) for my application, but it was easy to start with (a) and step towards (c). I tested my code against Matlab's fft function for various test cases and it seems to be bug-free. Since what you need is a Matlab implementation, it makes more sense to implement a fourth version that is not in-place and non-recursive. Let me know if you think otherwise. I will post it as soon as I am done, most likely during this weekend.

I am doing development on an ARM7TDMI. FFTW is too slow on this target; even using the library, the required performance could not be demonstrated. I am processing 16-bit data in blocks of 1024 samples. I would appreciate information if anyone knows of a fast fixed-point FFT library, faster than FFTW on this platform. Thank you.

Hi, I am trying to design an IIR filter using a 16-bit fixed-point DSP. I need to calculate the coefficients in the form of second-order sections, and I would like to do this with MATLAB. Does anyone have some MATLAB code that will do this? I found a routine on the Texas Instruments site, but as well as the coefficients it creates some scaling factors for the inputs. I don't want this, as I have seen other programs, such as Filter Express, that just give the coefficients. Thanks, J (This message was sent using the Comp.DSP web interface on www.DSPRelated.com)

OK. You have several kinds of IIR filter, for instance elliptic (the best), Chebyshev, and Bessel; in the MATLAB help there are some examples. This is one MATLAB example: for data sampled at 1000 Hz, design a sixth-order lowpass elliptic filter with a cutoff frequency of 300 Hz, which corresponds to a normalized value of 0.6, 3 dB of ripple in the passband, and 50 dB of attenuation in the stopband: [b,a] = ellip(6,3,50,300/500); The filter's frequency response is freqz(b,a,512,1000); title('n=6 Lowpass Elliptic Filter'). The coefficients are the arrays b and a. You can look in the MATLAB help by typing IIR in the search box. [Diego]

(Replying to "elliptic (the best)":) Better for some uses, worse for others. [Jerry Avins]

Hi, the text below the dotted line is excerpted from the Matlab help for the Fixed-Point Toolbox. I do not understand the meaning of "single-precision"; in the text I cannot relate single precision to fixed point in any way. Could you explain it to me? Thanks.
..............
Many textbooks also do roundoff error analysis for "single-precision" fixed-point products and sums with fractional numbers. In this example, we simulate 8-bit unsigned fractional numbers (all values between 0 and 1). U8 = numerictype('Signed',0,'WordLength',8,'FractionLength',8)

Hi, I have a question about finding a way to improve my processing. I use a multirate filter chain with an overall decimation of 40: one FIR is used, then decimation; then an IIR with decimation; then a second IIR with decimation; then again one IIR and decimation; and at last one FIR. I use a fixed-point DSP. The filters are designed with 16-bit inputs/outputs (32 bits for computation). It appears that the result is a 12-bit resolution at the output; 4 LSBs are lost through the stages. How can I improve this? Should I change the IIR design, or the decimation? Thank you. [domistep]

This sort of problem is generally caused by an implementation detail that is not easy to track down without looking at the code.

