ee_hob
Joined: 15 May 2013 Posts: 11
MCP3204 response |
Posted: Wed May 15, 2013 7:42 pm |
Hi all,
I'm new to microcontrollers. I've been trying to interface an MCP3204 with a PIC16F877 using the bit-banging method. The reason I don't use the built-in functions is that I want to learn the basics.
Here is my code.
Code: | #include <16F877.h>
#fuses HS,NOWDT,NOPROTECT,NOBROWNOUT,NOLVP,NOPUT,NOWRT,NODEBUG,NOCPD
#use delay(clock=4000000)
#define use_portb_lcd TRUE
#include <LCD420.c>
#define CLK PIN_A0
#define DOUT PIN_A1
#define DIN PIN_A2
#define CS PIN_A3
int16 val_temp,ch3;
float value;
int16 read_ad()
{
int8 i;
output_high(CS);
output_low(CLK); //1. falling edge (Start)
output_high(DIN);
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //2. falling edge (Single/Dif)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //3. falling edge (Don't care bit)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //4. falling edge (D1)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //5. falling edge (D0)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //6. falling edge (Sample&Hold)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //7. falling edge (null)
delay_us(10);
output_high(CLK);
delay_us(10);
for(i=0;i<12;i++)
{
output_low(CLK);
delay_us(10);
output_high(CLK);
delay_us(2);
if (input(DOUT)==1)
{
bit_set(val_temp,i);
}
else
{
bit_clear(val_temp,i);
}
delay_us(8);
}
output_low(CS);
val_temp=65535-val_temp;
ch3=val_temp;
return ch3;
}
void main()
{
while(1)
{
setup_timer_0(RTCC_INTERNAL|RTCC_DIV_1);
setup_timer_1(T1_DISABLED);
setup_timer_2(T2_DISABLED,0,1);
setup_adc_ports(NO_ANALOGS);
setup_adc(ADC_OFF);
lcd_init();
value=read_ad()*7.6295109483482108796826123445487e-5;
printf(lcd_putc,"value=%fV",value);
delay_ms(1000);
}
}
|
I don't know the reason, but DOUT is always zero.
The MSB comes first, so I thought the complement of val_temp would give me exactly what I want, right?
Could you please tell me what I'm missing?
Thanks in advance,
(IDE 4.057, PCB 4.057, PCM 4.057)
Link to datasheet: http://ww1.microchip.com/downloads/en/devicedoc/21298e.pdf (I tried to use Figure 5.1 on page 20) |
PCM programmer
Joined: 06 Sep 2003 Posts: 21708
Posted: Wed May 15, 2013 10:19 pm |
Here you are issuing a positive clock pulse before you set Din to a high
level:
Code: |
output_high(CS);
output_low(CLK); //1. falling edge (Start)
output_high(DIN);
delay_us(10);
|
But the MCP3204 data sheet says:
Quote: | The first clock received with CS low and Din high will constitute a start bit. |
You need to output the Din level first, and then generate the clock.
Also, CS has to be low before the clock occurs.
Also, your program does not initialize the signals going to the mcp3204.
The initial state of all PIC i/o pins, after power-on reset, is to be inputs.
They are not outputting a defined logic level. They are floating. You
should initialize them at the start of main() to the correct idle state for
each signal. |
ee_hob
Joined: 15 May 2013 Posts: 11
Posted: Thu May 16, 2013 12:58 am |
Hi PCM programmer,
Thanks for your quick reply.
I changed the code but nothing changed. I guess I didn't exactly understand the initialization part. I opened the driver file for the MCP3204; initialization is done by adc_init, which is mcp_init in my code. Could you please tell me where I'm wrong?
Code: | #include <16F877.h>
#fuses HS,NOWDT,NOPROTECT,NOBROWNOUT,NOLVP,NOPUT,NOWRT,NODEBUG,NOCPD
#use delay(clock=4000000)
#define use_portb_lcd TRUE
#include <LCD420.c>
#define CLK PIN_A0
#define DOUT PIN_A1
#define DIN PIN_A2
#define CS PIN_A3
int16 val_temp,ch3;
float value;
void mcp_init()
{
output_high(CS);
}
int16 read_ad()
{
int8 i;
output_high(CS);
output_high(DIN);
output_low(CLK); //1. falling edge (Start)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //2. falling edge (Single/Dif)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //3. falling edge (Don't care bit)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //4. falling edge (D1)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //5. falling edge (D0)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //6. falling edge (Sample&Hold)
delay_us(10);
output_high(CLK);
delay_us(10);
output_low(CLK); //7. falling edge (null)
delay_us(10);
output_high(CLK);
delay_us(10);
for(i=0;i<12;i++)
{
output_low(CLK);
delay_us(10);
output_high(CLK);
delay_us(2);
if (input(DOUT)==1)
{
bit_set(val_temp,i);
}
else
{
bit_clear(val_temp,i);
}
delay_us(8);
}
output_low(CS);
val_temp=65535-val_temp;
ch3=val_temp;
return ch3;
}
void main()
{
while(1)
{
setup_timer_0(RTCC_INTERNAL|RTCC_DIV_1);
setup_timer_1(T1_DISABLED);
setup_timer_2(T2_DISABLED,0,1);
setup_adc_ports(NO_ANALOGS);
setup_adc(ADC_OFF);
lcd_init();
mcp_init();
value=read_ad()*7.6295109483482108796826123445487e-5;
printf(lcd_putc,"value=%fV",value);
delay_ms(1000);
}
}
|
PCM programmer
Joined: 06 Sep 2003 Posts: 21708
Posted: Thu May 16, 2013 1:10 am |
Quote: | output_high(CS);
output_high(DIN);
output_low(CLK); //1. falling edge (Start)
delay_us(10);
|
As I said before:
But the MCP3204 data sheet says:
Quote: |
The first clock received with CS low and Din high will constitute a start bit.
|
Look at the signal diagram in the data sheet. It clearly shows CS going
to a low level before any bits are sent to the mcp chip. |
ckielstra
Joined: 18 Mar 2004 Posts: 3680 Location: The Netherlands
Posted: Thu May 16, 2013 1:42 am |
Just another comment on this part of your code: Code: | void main()
{
while(1)
{
setup_timer_0(RTCC_INTERNAL|RTCC_DIV_1);
setup_timer_1(T1_DISABLED);
setup_timer_2(T2_DISABLED,0,1);
setup_adc_ports(NO_ANALOGS);
setup_adc(ADC_OFF);
lcd_init();
mcp_init(); | Please move all these initialization lines outside the while-loop. It is not a big error but now your program is repeating a lot of instructions that only have to be executed once at startup. It is wasting CPU cycles and could have some unwanted side effects.
Code: | value=read_ad()*7.6295109483482108796826123445487e-5; | Here you typed a float with 32 digits. Why so many?
It isn't really wrong, but it suggests a very high accuracy that will never be reached.
The MCP3204 gives a 12-bit value, that is a total range of 4096 values. With 4096 values you get a maximum accuracy of between 3 and 4 digits. Heck, even with the 24-bit mantissa of the compiler's internal float value you will only get a little more than 6 digits of accuracy.
What I mean is that after typing the value 7.6295109, every other digit you type there is thrown away by the compiler, and you are just complicating things unnecessarily. |
ee_hob
Joined: 15 May 2013 Posts: 11
Posted: Thu May 16, 2013 2:29 am |
PCM programmer,
I understood my mistake about CS. I knew it was supposed to be like that, but I didn't look carefully; sorry for that.
ckielstra,
I moved mcp_init(); outside the while loop as you said. I didn't move lcd_init(); outside the while loop, because when I do that the screen becomes messy.
I didn't notice that the digit precision is limited, thanks for that. Being a newbie, I wanted to use fundamental things like value = 5 V / 2^12.
By the way, I changed the code per your instructions but it's still not working. I don't know if you had time to look at the datasheet, but could there be a problem with timing? |
ee_hob
Joined: 15 May 2013 Posts: 11
Posted: Thu May 16, 2013 3:17 am |
I didn't want to open a new topic; if this is wrong, sorry.
PCM programmer,
I saw a driver written by you I guess. (http://www.ccsinfo.com/forum/viewtopic.php?t=41059) If you don't mind, I would like to ask a question about this.
Code: | int32 ad7799_read_data(void)
{
int32 retval;
int8 msb, mid, lsb;
output_low(AD7799_CS);
spi_write(AD7799_READ_DATA_CMD);
msb = spi_read(0); // Data comes out MSB first
mid = spi_read(0);
lsb = spi_read(0);
output_high(AD7799_CS);
// Convert the data bytes into a 32-bit value.
retval = make32(0, msb, mid, lsb);
return(retval);
}
|
Why do you read the data in three parts and then make them into a 32-bit value? I can see that built-in functions are used in this code, but I was really wondering.
Thanks in advance, |
RF_Developer
Joined: 07 Feb 2011 Posts: 839
Posted: Thu May 16, 2013 3:32 am |
As you're wanting to learn, I think it's useful to simplify the process. It's confusing to think of start and stop bits (those are relevant to asynchronous comms such as RS232, not to synchronous comms such as SPI). Also, as others have pointed out, chip selects (CS) are traditionally active low: their inactive state is high, and active, i.e. the chip is selected, is low.
SPI simultaneously sends and receives data, one bit of each for each clock period. However, setting the data bit to send and reading the data from the slave device are done on different clock edges. Also there are four possible combinations, modes 0 to 3, of the polarity of the clock and which edges are used for read and write. The MCP3204 uses probably the commonest mode, mode 0, of SPI where both ends read their incoming data on rising edges of the clock. This means that they must put their outgoing data out BEFORE the rising edge, and that typically means at the previous falling edge, or for the first bit, in advance of the first positive going clock edge.
The process sounds complex, but can be broken down to this, for masters. I'll use the CLK for clock, DI for data in (sometimes called MISO or Master In, Slave Out) and DO for data out (MOSI), and I'll assume we're sending bytes as most hardware implementations do:
Initialisation:
Set CS high (i.e. inactive - the slave ignores us)
Set CLK low
Set DO low.
Set DI to input
To transmit and receive a message/command sequence of bytes:
Set CS low (active - slave now listens and sends to us)
For each byte we want to send:
Send and receive a byte.
Set CS high (inactive - ends the message, slave now ignores us again)
Where Send and receive a byte is:
For each bit of the byte from LSB (Least Significant Bit) to MSB:
Set DO to the bit to be sent
Possibly wait a little depending on processor speed and code efficiency
Read DI bit and put it into right place in the received byte
Set clock high
Possibly wait a little
Set clock low
A problem with any software implementation of SPI is that the data cannot be read exactly on the active edge of the clock. In hardware the clock literally clocks in the data at that moment, with a delay typically measured in nanoseconds. You simply can't do that in firmware; the two cannot be coincident, so we have to read the data an instruction or two (in C, a line, which may be several instructions) before we change the clock. Also, hardware would read in and read out using shift registers, maybe even just one single register that reads into the MSB while reading out from the LSB; after a byte's worth of clocks the byte read will be in the right place in the register. I've done an SPI in an FPGA, and I used two registers instead of one for simplicity. A firmware version would typically also use shifts on the send and receive bytes rather than masking out each bit in turn, i.e.:
Set DO to the LSB bit of the byte to be sent
Shift received byte right one bit (also acts as wait)
Read DI bit into the MSB of the received byte
Set clock high
Shift send byte right one bit (also acts as a wait)
Set clock low
These processes show that we don't have to worry about start and stop bits - there are none - and it leads naturally to a division into logical and neat subroutines that give clean code. It's also simple to alter this for other SPI modes.
RF Developer |
ee_hob
Joined: 15 May 2013 Posts: 11
Posted: Thu May 16, 2013 5:38 am |
RF_Developer,
Thanks a million for your reply.
I guess I'm getting the idea of SPI, but I still don't understand why I need to transfer byte by byte. Forgive me, I'm really having a hard time understanding.
As I wrote above, I guess my problem is with the timing. If I'm wrong please correct me, but if I use hardware SPI instead of software SPI, these timing problems are mostly solved, right? |
dyeatman
Joined: 06 Sep 2003 Posts: 1934 Location: Norman, OK
Posted: Thu May 16, 2013 6:33 am |
ee_hob
Using bit-bang you don't have to go "byte by byte". You can input or output as many bits as you want at one time, based on how you write your custom routines. I have written 24-bit ADC routines like this.
If you use SPI hardware, however, you will have to read or write "byte by byte" due to
limitations of the SPI hardware in the chip.
Over time I have found it much easier (and cleaner) to use the SPI
hardware and read/write as many 8 bit bytes as required so I eventually
rewrote my 24 bit ADC routine to use 8 bit SPI hardware and it works
equally well.
From the ADC's perspective, sending (or receiving) three 8-bit bytes in a row (without changing any of the other signals) looks like 24 continuous bits.
BTW, Section 6.1 and figures 6-1 and 6-2 in the datasheet explain and
show the byte mode really well.
Also, note the min clock speed and sample times in section 6.2.
_________________
Google and Forum Search are some of your best tools!!!! |
ee_hob
Joined: 15 May 2013 Posts: 11
Posted: Thu May 16, 2013 8:18 am |
Thanks dyeatman, I'll be working on it. |
PCM programmer
Joined: 06 Sep 2003 Posts: 21708
Posted: Thu May 16, 2013 10:38 am |
RF_Developer wrote: |
Where Send and receive a byte is:
For each bit of the byte from LSB (Least Significant Bit) to MSB:
Set DO to the bit to be sent
Possibly wait a little depending on processor speed and code efficiency
Read DI bit and put it into right place in the received byte
Set clock high
Possibly wait a little
Set clock low
|
Your FPGA implementation described above, transmits a byte 'LSB bit first'.
But nearly all commercial SPI-compatible chips, including the mcp3204,
transmit a byte 'MSB bit first' (bit D7 is transmitted first, and bit D0 last). |
RF_Developer
Joined: 07 Feb 2011 Posts: 839
Posted: Fri May 17, 2013 1:59 am |
PCM programmer wrote: | RF_Developer wrote: |
Should have been:
Where Send and receive a byte is:
For each bit of the byte from MSB (Most Significant Bit) to LSB:
Set DO to the bit to be sent
Possibly wait a little depending on processor speed and code efficiency
Read DI bit and put it into right place in the received byte
Set clock high
Possibly wait a little
Set clock low
|
Your FPGA implementation described above, transmits a byte 'LSB bit first'.
But nearly all commercial SPI-compatible chips, including the mcp3204,
transmit a byte 'MSB bit first' (bit D7 is transmitted first, and bit D0 last). |
Yes PCM, you are of course right. I was writing from memory and messed that up :-)) It is indeed MSB first, and my FPGA implementation was as well. You can probably tell I haven't had to implement a software SPI for years, which is what I was trying to describe. The principle is correct, I hope, even though I got the bit-endianness wrong.
SPI can be interpreted as two bit streams, send and receive, of arbitrary length transmitted most significant bit first. It can be split up, for convenience of implementation, into groups of any number of bits.
Most hardware implementations split it into bytes. Some newer PICs are capable of 16 bit transfers.
SPI slaves that don't transfer a multiple of eight bits are generally arranged so that they will accept additional clock pulses after the expected end of data, and sometimes before the data, in order to accommodate commonly encountered SPI hardware. Figures 6-1 and 6-2 of the MCP3204 datasheet show how byte-organised SPI data should be interpreted.
In the case of the MCP3204, it ignores zero bits - padding bits - at the start of a transfer until it sees a one - the start bit. That allows the user to pad the data with as many zeros as they need to align the bit stream to bytes in any way they want. Figure 6-1 of the datasheet suggests one way, with five leading zero padding bits, giving a total of 24 bits - 19 "real" bits, including the start bit, with the five padding bits up front. This right-aligns the returned data into the second and third received bytes - the first received byte carries no information and can be ignored. The two received bytes can thus be simply assembled into a 16-bit integer.
This is based on PCM's example using the CCS built-in routines.
Code: |
// Convert single-ended on channel 2 (I think... untested - just an example)
output_low(MCP3204_CS);
spi_xfer(0x06); // Only bottom three bits relevant, upper five MUST be zero and bit 2 MUST be 1, bottom two are Sgl/Diff and D2. Ignore first received byte.
msb = spi_xfer(0x80); // Only upper two bits are relevant - D1 and D0, rest is "don't care" and is ignored.
lsb = spi_xfer(0); // Doesn't matter what we send here - its ignored
output_high(MCP3204_CS);
|
RF Developer |